Dataset columns: code (string, lengths 20 to 4.93k), docstring (string, lengths 33 to 1.27k), source (string, 3 classes).
def get_ignored_files(self): return [os.path.join(self.path, p) for p in self.run('ls-files', '--ignored', '--exclude-standard', '--others').strip().split()]
Returns the list of files being ignored in this repository. Note that file names, not directories, are returned. So, we will get the following: a/b.txt a/c.txt instead of just: a/ Returns: List[str] - list of ignored files. The paths are absolute.
codesearchnet
def MakeZip(self, xar_file, output_file): logging.info('Generating zip template file at %s', output_file) with zipfile.ZipFile(output_file, mode='a') as zf: build_yaml = io.BytesIO() self.WriteBuildYaml(build_yaml) build_yaml.seek(0) zf.writestr('build.yaml', build_yaml.read())
Add a zip to the end of the .xar containing build.yaml. The build.yaml is already inside the .xar file, but we can't easily open this on linux. To make repacking easier we add a zip to the end of the .xar and add in the build.yaml. The repack step will then look at the build.yaml and insert the config.yaml. We end up storing the build.yaml twice but it is tiny, so this doesn't matter. Args: xar_file: the name of the xar file. output_file: the name of the output ZIP archive.
codesearchnet
def IsDecltype(clean_lines, linenum, column): (text, _, start_col) = ReverseCloseExpression(clean_lines, linenum, column) if start_col < 0: return False if Search(r'\bdecltype\s*$', text[0:start_col]): return True return False
Check if the token ending on (linenum, column) is decltype(). Args: clean_lines: A CleansedLines instance containing the file. linenum: the number of the line to check. column: end column of the token to check. Returns: True if this token is decltype() expression, False otherwise.
juraj-google-style
def bsp_new_with_size(x: int, y: int, w: int, h: int) -> tcod.bsp.BSP: return Bsp(x, y, w, h)
Create a new BSP instance with the given rectangle. Args: x (int): Rectangle left coordinate. y (int): Rectangle top coordinate. w (int): Rectangle width. h (int): Rectangle height. Returns: BSP: A new BSP instance. .. deprecated:: 2.0 Call the :any:`BSP` class instead.
codesearchnet
def addColumn(self, columnName, dtype, defaultValue): model = self.tableView.model() if model is not None: model.addDataFrameColumn(columnName, dtype, defaultValue) self.addColumnButton.setChecked(False)
Adds a column with the given parameters to the underlying model This method is also a slot. If no model is set, nothing happens. Args: columnName (str): The name of the new column. dtype (numpy.dtype): The datatype of the new column. defaultValue (object): Fill the column with this value.
juraj-google-style
def send_put(self, mri, attribute_name, value): path = attribute_name + ".value" typ, value = convert_to_type_tuple_value(serialize_object(value)) if isinstance(typ, tuple): _, typeid, fields = typ value = Value(Type(fields, typeid), value) try: self._ctxt.put(mri, {path: value}, path) except RemoteError: if attribute_name == "exports": self._queues[mri].get(timeout=DEFAULT_TIMEOUT) else: raise
Abstract method to dispatch a Put to the server Args: mri (str): The mri of the Block attribute_name (str): The name of the Attribute within the Block value: The value to put
juraj-google-style
def route(self, method, pattern): def decorator(callback): self._router.add(method, pattern, callback) return callback return decorator
Decorator to add route for a request with any HTTP method. Arguments: method (str): HTTP method name, e.g. GET, POST, etc. pattern (str): Routing pattern the path must match. Returns: function: Decorator function to add route.
codesearchnet
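A minimal usage sketch for the route() decorator above; `app` is a hypothetical object exposing this method:

# 'app' is assumed to be an object exposing the route() method defined above.
@app.route('GET', '/hello')
def hello(request):
    return 'Hello, world!'
# The callback is registered with app's internal router and returned
# unchanged, so hello() remains directly callable.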
def get_service_state_object_id(subsystem: str, name: str, version: str) -> str: return '{}:{}:{}'.format(subsystem, name, version)
Return service state data object key. Args: subsystem (str): Subsystem the service belongs to name (str): Name of the Service version (str): Version of the Service Returns: str, Key used to store the service state data object
codesearchnet
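A quick usage sketch of the key format above:

print(get_service_state_object_id('tango', 'master', '1.0.0'))  # 'tango:master:1.0.0'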
def integer_key_convert(dictin, dropfailedkeys=False): return key_value_convert(dictin, keyfn=int, dropfailedkeys=dropfailedkeys)
Convert keys of dictionary to integers Args: dictin (DictUpperBound): Input dictionary dropfailedkeys (bool): Whether to drop dictionary entries where key conversion fails. Defaults to False. Returns: Dict: Dictionary with keys converted to integers
codesearchnet
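A usage sketch, assuming key_value_convert applies keyfn to every key as the docstring describes:

print(integer_key_convert({'1': 'one', '2': 'two'}))
# expected: {1: 'one', 2: 'two'}
print(integer_key_convert({'1': 'one', 'x': 'bad'}, dropfailedkeys=True))
# expected: {1: 'one'}, since 'x' cannot be converted to an integer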
def ParseCode(unformatted_source, filename='<unknown>'): if not unformatted_source.endswith(os.linesep): unformatted_source += os.linesep try: ast_tree = ast.parse(unformatted_source, filename) ast.fix_missing_locations(ast_tree) readline = StringIO(unformatted_source).readline tokens = tokenize.generate_tokens(readline) except Exception: raise logical_lines = _CreateLogicalLines(tokens) split_penalty_visitor.SplitPenalty(logical_lines).visit(ast_tree) return logical_lines
Parse a string of Python code into logical lines. This provides an alternative entry point to YAPF. Arguments: unformatted_source: (unicode) The code to format. filename: (unicode) The name of the file being reformatted. Returns: A list of LogicalLines. Raises: An exception is raised if there's an error during AST parsing.
github-repos
def get_resources(minify=False): all_resources = dict() subclasses = (resource_base.ResourceBase.__subclasses__() + resource_definitions.ResourceAngular.__subclasses__()) for resource in subclasses: obj = resource(minify) all_resources[resource.RESOURCE_NAME] = dict(css=tuple(obj.resources_css), js=tuple(obj.resources_js)) return all_resources
Find all resources which subclass ResourceBase. Keyword arguments: minify -- select minified resources if available. Returns: Dictionary of available resources. Keys are resource names (part of the config variable names), values are dicts with css and js keys, and tuples of resources as values.
codesearchnet
def db_dict(c): db_d = {} c.execute('SELECT * FROM library_spectra') db_d['library_spectra'] = [list(row) for row in c] c.execute('SELECT * FROM library_spectra_meta') db_d['library_spectra_meta'] = [list(row) for row in c] c.execute('SELECT * FROM library_spectra_annotation') db_d['library_spectra_annotations'] = [list(row) for row in c] c.execute('SELECT * FROM library_spectra_source') db_d['library_spectra_source'] = [list(row) for row in c] c.execute('SELECT * FROM metab_compound') db_d['metab_compound'] = [list(row) for row in c] return db_d
Get a dictionary of the library spectra from a database Example: >>> from msp2db.db import get_connection >>> conn = get_connection('sqlite', 'library.db') >>> test_db_d = db_dict(conn.cursor()) If using a large database the resulting dictionary will be very large! Args: c (cursor): SQL database connection cursor Returns: A dictionary with the following keys 'library_spectra', 'library_spectra_meta', 'library_spectra_annotations', 'library_spectra_source' and 'metab_compound'. Where corresponding values for each key are list of list containing all the rows in the database.
codesearchnet
def show(self, view: View, request: Request): return view.render('welcome', { 'app': request.app().make('Application') })
Show the welcome page. Arguments: view {masonite.view.View} -- The Masonite view class. Application {config.application} -- The application config module. Returns: masonite.view.View -- The Masonite view class.
juraj-google-style
def _convert_values_and_partition(cls, values, row_partition, name): if not isinstance(row_partition, RowPartition): raise TypeError(f'Argument `row_partition` must be a RowPartition. Received {row_partition}.') if isinstance(values, RaggedTensor): if values._row_partition.dtype != row_partition.dtype: if not ragged_config.auto_cast_partition_dtype(): raise ValueError(f'Argument `row_partition` of RaggedTensor with name: {name} must have same dtype as Argument `values`. ({row_partition.dtype} vs. {values._row_partition.dtype}).') values = values.with_row_splits_dtype(row_partition.dtype) else: values = _convert_to_ragged_tensor_values(values) return (values, row_partition)
Converts `values` and `partition` to Tensors. If `values` is a `RaggedTensor`, then converts `values` and `partition` to have compatible row-partitioning dtypes. In particular, if any of the row partitioning tensors are `int64`, then all of the other row partitioning tensors will be cast to `int64` (if auto_cast_partition_dtype() is true) or an error will be raised (if auto_cast_partition_dtype() is false). Args: values: The `values` for the `RaggedTensor` being constructed. row_partition: A RowPartition object for the `RaggedTensor` being constructed. name: The name of the RowPartition object. Returns: A tuple (values, partition).
github-repos
def wait_for_transform_job(self, job, poll=5): desc = _wait_until((lambda : _transform_job_status(self.sagemaker_client, job)), poll) self._check_job_status(job, desc, 'TransformJobStatus') return desc
Wait for an Amazon SageMaker transform job to complete. Args: job (str): Name of the transform job to wait for. poll (int): Polling interval in seconds (default: 5). Returns: (dict): Return value from the ``DescribeTransformJob`` API. Raises: ValueError: If the transform job fails.
codesearchnet
def color(self, color): self._data['color'] = color request = self._base_request request['color'] = color return self._tc_requests.update(request, owner=self.owner)
Updates the security labels color. Args: color:
juraj-google-style
def url(self, url): if (url and url.endswith('/')): url = url[:(- 1)] self._url = url
Set API URL endpoint Args: url: the url of the API endpoint
codesearchnet
def generate_version(max_major: int = 1, max_minor: int = 7, max_patch: int = 15) -> str: major = randint(0, max_major) minor = randint(0, max_minor) patch = randint(0, max_patch) return '{:d}.{:d}.{:d}'.format(major, minor, patch)
Select a random version. Args: max_major (int, optional): maximum major version max_minor (int, optional): maximum minor version max_patch (int, optional): maximum patch version Returns: str, Version String
juraj-google-style
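A quick usage sketch; randint is inclusive on both bounds:

print(generate_version())             # e.g. '1.4.9'
print(generate_version(max_major=0))  # e.g. '0.6.2', major pinned to 0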
def EnqueueBreakpointUpdate(self, breakpoint): with self._transmission_thread_startup_lock: if (self._transmission_thread is None): self._transmission_thread = threading.Thread(target=self._TransmissionThreadProc) self._transmission_thread.name = 'Cloud Debugger transmission thread' self._transmission_thread.daemon = True self._transmission_thread.start() self._transmission_queue.append((breakpoint, 0)) self._new_updates.set()
Asynchronously updates the specified breakpoint on the backend. This function returns immediately. The worker thread is actually doing all the work. The worker thread is responsible for retrying the transmission in case of transient errors. Args: breakpoint: breakpoint in either final or non-final state.
codesearchnet
def cmd_ssh(options): import os import subprocess from os.path import expanduser options.inst_state = "running" (i_info, param_str) = gather_data(options) (tar_inst, tar_idx) = determine_inst(i_info, param_str, options.command) home_dir = expanduser("~") if options.user is None: tar_aminame = awsc.get_one_aminame(i_info[tar_idx]['ami']) options.user = cmd_ssh_user(tar_aminame, i_info[tar_idx]['tag']['Name']) else: debg.dprint("LoginUser set by user: ", options.user) os_spec = {"nt": ["powershell plink", "\\", "ppk"]} c_itm = os_spec.get(os.name, ["ssh", "/", "pem"]) cmd_ssh_run = c_itm[0] if not options.nopem: cmd_ssh_run += (" -i {0}{1}.aws{1}{2}.{3}". format(home_dir, c_itm[1], i_info[tar_idx]['ssh_key'], c_itm[2])) else: debg.dprint("Connect string: ", "ssh {}@{}". format(options.user, i_info[tar_idx]['pub_dns_name'])) cmd_ssh_run += " {0}@{1}".format(options.user, i_info[tar_idx]['pub_dns_name']) print(cmd_ssh_run) subprocess.call(cmd_ssh_run, shell=True)
Connect to the specified instance via ssh. Finds instances that match the user specified args that are also in the 'running' state. The target instance is determined, the required connection information is retrieved (IP, key and ssh user-name), then an 'ssh' connection is made to the instance. Args: options (object): contains args and data from parser
juraj-google-style
def GetUserById(self, local_id): user = self.rpc_helper.GetAccountInfoById(local_id) return GitkitUser.FromApiResponse(user)
Gets user info by id. Args: local_id: string, the user id at Gitkit server. Returns: GitkitUser, containing the user info.
juraj-google-style
def create_constructor_args(cls, proto_list: List[american_option_pb2.AmericanEquityOption], config: AmericanOptionConfig=None) -> Dict[str, Any]: am_option_data = proto_utils.from_protos(proto_list, config) res = {} for key in am_option_data: tensor_repr = proto_utils.tensor_repr(am_option_data[key]) res[key] = tensor_repr return res
Creates a dictionary to initialize AmericanEquityOption. The output dictionary is such that the instruments can be initialized as follows: ``` initializer = create_constructor_args(proto_list, config) american_options = [AmericanEquityOption(**data) for data in initializer.values()] ``` The keys of the output dictionary are unique identifiers of the batched instruments. This is useful for identifying an existing graph that could be reused for the instruments without the need of rebuilding the graph. Args: proto_list: A list of protos for which the initialization arguments are constructed. config: An instance of `AmericanOptionConfig`. Returns: A possibly nested dictionary such that each value provides initialization arguments for the AmericanEquityOption.
github-repos
def is_initialised(self): if (not self.lattice): raise AttributeError('Running a simulation needs the lattice to be initialised') if (not self.atoms): raise AttributeError('Running a simulation needs the atoms to be initialised') if ((not self.number_of_jumps) and (not self.for_time)): raise AttributeError('Running a simulation needs number_of_jumps or for_time to be set')
Check whether the simulation has been initialised. Args: None Returns: None
codesearchnet
def _set_all_lims(self, which, lim, d, scale, fontsize=None): setattr(self.general, (which + 'lims'), lim) setattr(self.general, ('d' + which), d) setattr(self.general, (which + 'scale'), scale) if (fontsize is not None): setattr(self.general, (which + '_tick_label_fontsize'), fontsize) return
Set limits and ticks for an axis for whole figure. This will set axis limits and tick marks for the entire figure. It can be overridden in the SinglePlot class. Args: which (str): The indicator of which part of the plots to adjust. This currently handles `x` and `y`. lim (len-2 list of floats): The limits for the axis. d (float): Amount to increment by between the limits. scale (str): Scale of the axis. Either `log` or `lin`. fontsize (int, optional): Set fontsize for associated axis tick marks. Default is None.
codesearchnet
def vflip(img): if (not _is_pil_image(img)): raise TypeError('img should be PIL Image. Got {}'.format(type(img))) return img.transpose(Image.FLIP_TOP_BOTTOM)
Vertically flip the given PIL Image. Args: img (PIL Image): Image to be flipped. Returns: PIL Image: Vertically flipped image.
codesearchnet
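A minimal sketch of calling vflip; the file names are hypothetical:

from PIL import Image

img = Image.open('photo.png')      # hypothetical input file
flipped = vflip(img)               # mirrored top-to-bottom
flipped.save('photo_flipped.png')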
def _resolve_credential(self, credential): if self._credentials_found_in_instance: return elif self._credentials_found_in_envars(): return os.getenv(('PAN_' + credential.upper())) else: return self.storage.fetch_credential(credential=credential, profile=self.profile)
Resolve credential from envars or credentials store. Args: credential (str): Credential to resolve. Returns: str or None: Resolved credential or ``None``.
codesearchnet
def _get_data_by_field(self, field_number): if not self.is_data_loaded: self._import_data() if not 0 <= field_number < self._num_of_fields: raise ValueError("Field number should be between 0-%d" % self._num_of_fields) return self._data[field_number]
Return a data field by field number. This is a useful method to get the values for fields that Ladybug currently doesn't import by default. You can find the list of fields by typing EPWFields.fields Args: field_number: a value between 0 and 34 for different available epw fields. Returns: An annual Ladybug list
juraj-google-style
def add_cohp_dict(self, cohp_dict, key_sort_func=None): if key_sort_func: keys = sorted(cohp_dict.keys(), key=key_sort_func) else: keys = cohp_dict.keys() for label in keys: self.add_cohp(label, cohp_dict[label])
Adds a dictionary of COHPs with an optional sorting function for the keys. Args: cohp_dict: dict of the form {label: Cohp} key_sort_func: function used to sort the cohp_dict keys.
juraj-google-style
def get_files_in_branch(profile, branch_sha): tree_sha = get_commit_tree(profile, branch_sha) files = get_files_in_tree(profile, tree_sha) tree = [prepare(x) for x in files] return tree
Get all files in a branch's tree. Args: profile A profile generated from ``simplygithub.authentication.profile``. Such profiles tell this module (i) the ``repo`` to connect to, and (ii) the ``token`` to connect with. branch_sha The SHA a branch's HEAD points to. Returns: A list of dicts containing info about each blob in the tree.
juraj-google-style
def __init__(self, url, conn=None, user=None, password=None, verify=True, proxies=None): if conn and (user or password): raise InvalidArgumentsError("A connection and user/password may" " not both be provided.") elif conn: self._conn = conn else: self._conn = _HTTPConnection(user, password, verify, proxies) if url[-1] == "/": self.url = url else: self.url = url + "/"
Create a TAXII endpoint. Args: user (str): username for authentication (optional) password (str): password for authentication (optional) verify (bool): validate the entity credentials (default: True) conn (_HTTPConnection): A connection to reuse (optional) proxies (dict): key/value pair for http/https proxy settings. (optional)
juraj-google-style
def DeserializeUnsignedWithoutType(self, reader): self.Version = reader.ReadByte() self.DeserializeExclusiveData(reader) self.Attributes = reader.ReadSerializableArray('neo.Core.TX.TransactionAttribute.TransactionAttribute', max=self.MAX_TX_ATTRIBUTES) self.inputs = reader.ReadSerializableArray('neo.Core.CoinReference.CoinReference') self.outputs = reader.ReadSerializableArray('neo.Core.TX.Transaction.TransactionOutput')
Deserialize object without reading transaction type data. Args: reader (neo.IO.BinaryReader):
juraj-google-style
async def get_data(self, url): logger.debug('making request to %r', url) with aiohttp.ClientSession() as session: async with session.get(url, headers=self.headers) as response: body = json.loads((await response.read()).decode('utf-8')) if response.status == HTTPStatus.OK: if url != self.url_builder('configuration'): await self._update_config() return body elif response.status == HTTPStatus.TOO_MANY_REQUESTS: timeout = self.calculate_timeout( response.headers['Retry-After'], ) logger.warning( 'Request limit exceeded, waiting %s seconds', timeout, ) await asyncio.sleep(timeout) return await self.get_data(url) logger.warning( 'request failed %s: %r', response.status, body.get('status_message', '<no message>') )
Get data from the TMDb API via :py:func:`aiohttp.get`. Notes: Updates configuration (if required) on successful requests. Arguments: url (:py:class:`str`): The endpoint URL and params. Returns: :py:class:`dict`: The parsed JSON result.
juraj-google-style
def get_referenced_object_as_list( prev_obj, obj, dot_separated_name, desired_type=None): res = get_referenced_object(prev_obj, obj, dot_separated_name, desired_type) if res is None: return [] elif type(res) is list: return res else: return [res]
Same as get_referenced_object, but always returns a list. Args: prev_obj: see get_referenced_object obj: see get_referenced_object dot_separated_name: see get_referenced_object desired_type: see get_referenced_object Returns: same as get_referenced_object, but always returns a list
juraj-google-style
def gen_permutations(self, index=0, args=None): if args is None: args = [] try: name = self.layout_json_names[index] display = self.layout_json_params.get(name, {}).get('display') input_type = self.install_json_params().get(name, {}).get('type') if self.validate_layout_display(self.input_table, display): if input_type.lower() == 'boolean': for val in [True, False]: args.append({'name': name, 'value': val}) self.db_update_record(self.input_table, name, val) self.gen_permutations(index + 1, list(args)) args.pop() elif input_type.lower() == 'choice': valid_values = self.expand_valid_values( self.install_json_params().get(name, {}).get('validValues', []) ) for val in valid_values: args.append({'name': name, 'value': val}) self.db_update_record(self.input_table, name, val) self.gen_permutations(index + 1, list(args)) args.pop() else: args.append({'name': name, 'value': None}) self.gen_permutations(index + 1, list(args)) else: self.gen_permutations(index + 1, list(args)) except IndexError: self._input_permutations.append(args) outputs = [] for o_name in self.install_json_output_variables(): if self.layout_json_outputs.get(o_name) is not None: display = self.layout_json_outputs.get(o_name, {}).get('display') valid = self.validate_layout_display(self.input_table, display) if display is None or not valid: continue for ov in self.install_json_output_variables().get(o_name): outputs.append(ov) self._output_permutations.append(outputs)
Iterate recursively over layout.json parameter names. TODO: Add indicator values. Args: index (int, optional): The current index position in the layout names list. args (list, optional): Defaults to None. The current list of args.
juraj-google-style
def save_metadata(self, file_path): data = self.metadata with open(file_path, 'w') as out_file: json.dump(data, out_file)
Saves a json file of the search result metadata. Saves a json file of the search result metadata from :class:`api.results`.metadata. Args: file_path (str): Path to the json file to save metadata to.
juraj-google-style
def zero_add(previous_value, x, name=None, reuse=None): with tf.variable_scope(name, default_name='zero_add', reuse=reuse): gamma = tf.get_variable('gamma', (), initializer=tf.zeros_initializer()) return (previous_value + (gamma * x))
Resnet connection with zero initialization. Another type of resnet connection which returns previous_value + gamma * x. gamma is a trainable scalar and initialized with zero. It is useful when a module is plugged into a trained model and we want to make sure it matches the original model's performance. Args: previous_value: A tensor. x: A tensor. name: name of variable scope; defaults to zero_add. reuse: reuse scope. Returns: previous_value + gamma * x.
codesearchnet
def window_partition(hidden_states, window_size): batch_size, height, width, num_channels = hidden_states.shape hidden_states = hidden_states.view(batch_size, height // window_size, window_size, width // window_size, window_size, num_channels) windows = hidden_states.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, num_channels) return windows
Returns the resized hidden states. The output shape should be `(batch_size * num_windows, window_size, window_size, num_channels)` Args: hidden_states (`torch.FloatTensor` of shape `(batch_size, height, width, num_channels)`): Input hidden states window_size (`int`): Window size
github-repos
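A shape-level sketch of window_partition, assuming height and width are divisible by window_size:

import torch

hidden_states = torch.randn(1, 8, 8, 96)      # (batch_size, height, width, num_channels)
windows = window_partition(hidden_states, 4)  # non-overlapping 4x4 windows
print(windows.shape)  # torch.Size([4, 4, 4, 96]): batch_size * num_windows = 1 * 4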
def get_random_url(ltd="com"): url = [ "https://", RandomInputHelper.get_random_value(8, [string.ascii_lowercase]), ".", ltd ] return "".join(url)
Get a random url with the given ltd. Args: ltd (str): The ltd to use (e.g. com). Returns: str: The random url.
juraj-google-style
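A usage sketch, assuming RandomInputHelper.get_random_value(8, [string.ascii_lowercase]) returns a string of 8 random lowercase letters:

print(get_random_url())       # e.g. 'https://qwhzkfpa.com'
print(get_random_url('net'))  # e.g. 'https://xjrmeudo.net'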
def run_cell(self, cell): globals = self.ipy_shell.user_global_ns locals = self.ipy_shell.user_ns globals.update({ "__ipy_scope__": None, }) try: with redirect_stdout(self.stdout): self.run(cell, globals, locals) except: self.code_error = True if self.options.debug: raise BdbQuit finally: self.finalize()
Run the Cell code using the IPython globals and locals Args: cell (str): Python code to be executed
juraj-google-style
def _get_iam_rest_api_url_from_creds(rest_client, credentials): res = rest_client.make_request(credentials[_IAMConstants.V2_REST_URL]) base = res['streams_self'] end = base.find('/instances') return base[:end] + '/resources'
Retrieves the Streams REST API URL from the provided credentials using iam authentication. Args: rest_client (:py:class:`rest_primitives._IAMStreamsRestClient`): A client for making REST calls using IAM authentication credentials (dict): A dict representation of the credentials. Returns: str: The remote Streams REST API URL.
juraj-google-style
def union(self, second_iterable, selector=identity): if self.closed(): raise ValueError('Attempt to call union() on a closed Queryable.') if (not is_iterable(second_iterable)): raise TypeError('Cannot compute union() with second_iterable of non-iterable {0}'.format(str(type(second_iterable))[7:(- 1)])) return self._create(itertools.chain(self, second_iterable)).distinct(selector)
Returns those elements which are either in the source sequence or in the second_iterable, or in both. Note: This method uses deferred execution. Args: second_iterable: Elements from this sequence are returned if they are not also in the source sequence. selector: An optional single argument function which is used to project the elements in the source and second_iterables prior to comparing them. If omitted the identity function will be used. Returns: A sequence containing all elements in the source sequence and second sequence. Raises: ValueError: If the Queryable has been closed. TypeError: If the second_iterable is not in fact iterable. TypeError: If the selector is not callable.
codesearchnet
def guess_content_type_and_encoding(path): for ext, content_type in _EXTENSION_TO_MIME_TYPE.items(): if path.endswith(ext): return content_type content_type, encoding = mimetypes.guess_type(path) content_type = content_type or "application/binary" return content_type, encoding
Guess the content type of a path, using ``mimetypes``. Falls back to "application/binary" if no content type is found. Args: path (str): the path to guess the mimetype of Returns: str: the content type of the file
juraj-google-style
def calibrate(self, fetch_names, num_runs, feed_dict_fn=None, input_map_fn=None): assert self._converted assert self._need_calibration assert not self._calibration_data_collected if feed_dict_fn and input_map_fn or (not feed_dict_fn and (not input_map_fn)): raise ValueError('Should specify one and only one of feed_dict_fn and input_map_fn.') if input_map_fn: for k, v in input_map_fn().items(): if not isinstance(k, str): raise ValueError('Keys of input_map_fn must be of type str') if not isinstance(v, tensor.Tensor): raise ValueError('Values of input_map_fn must be of type tf.Tensor') self._calibration_graph = ops.Graph() with self._calibration_graph.as_default(): fetches = importer.import_graph_def(self._converted_graph_def, input_map=input_map_fn() if input_map_fn else None, return_elements=fetch_names, name='') calibrate_rewriter_cfg = rewriter_config_pb2.RewriterConfig() if self._test_only_disable_non_trt_optimizers: trt_utils.disable_non_trt_optimizers_in_rewriter_config(calibrate_rewriter_cfg) calibrate_config = config_pb2.ConfigProto(allow_soft_placement=True, graph_options=config_pb2.GraphOptions(rewrite_options=calibrate_rewriter_cfg)) with session.Session(graph=self._calibration_graph, config=calibrate_config) as calibration_sess: for _ in range(num_runs): calibration_sess.run(fetches, feed_dict=feed_dict_fn() if feed_dict_fn else None) device_to_get_resource_op_map = {} with self._calibration_graph.as_default(): resource_name_input = array_ops.placeholder(dtypes.string) for node in self._converted_graph_def.node: if node.op == _TRT_ENGINE_OP_NAME: if node.device not in device_to_get_resource_op_map: with self._calibration_graph.device(node.device): serialized_resources_output = gen_trt_ops.get_calibration_data_op(resource_name_input) device_to_get_resource_op_map[node.device] = serialized_resources_output calibration_result = calibration_sess.run(device_to_get_resource_op_map[node.device], feed_dict={resource_name_input: _get_canonical_engine_name(node.name)}) node.attr['calibration_data'].s = calibration_result self._calibration_data_collected = True return self._converted_graph_def
Run the calibration and return the calibrated GraphDef. Args: fetch_names: a list of output tensor name to fetch during calibration. num_runs: number of runs of the graph during calibration. feed_dict_fn: a function that returns a dictionary mapping input names (as strings) in the GraphDef to be calibrated to values (e.g. Python list, numpy arrays, etc). One and only one of `feed_dict_fn` and `input_map_fn` should be specified. input_map_fn: a function that returns a dictionary mapping input names (as strings) in the GraphDef to be calibrated to Tensor objects. The values of the named input tensors in the GraphDef to be calibrated will be re-mapped to the respective `Tensor` values during calibration. One and only one of `feed_dict_fn` and `input_map_fn` should be specified. Raises: ValueError: if the input combination is invalid. RuntimeError: if this method is called in eager mode. Returns: The GraphDef after the calibration.
github-repos
def copy_table(self, src, dst): self.create_table_from(dst, src) self.execute("INSERT INTO {dst} SELECT * FROM {src}" .format(dst=dst, src=src)) self.commit()
Create a carbon copy of the source table. Arguments: src (str): The name of the table to copy. dst (str): The name of the target duplicate table. Raises: sql.OperationalError: If source table does not exist.
juraj-google-style
def schema(self) -> Schema: return self._schema
Schema of the EventSetNode. The schema defines the name and dtype of the features and the index. Returns: Schema of the EventSetNode.
github-repos
def uid(uid): if uid is None: raise ValueError('UID cannot be None.') def decorate(test_func): @functools.wraps(test_func) def wrapper(*args, **kwargs): return test_func(*args, **kwargs) setattr(wrapper, 'uid', uid) return wrapper return decorate
Decorator specifying the unique identifier (UID) of a test case. The UID will be recorded in the test's record when executed by Mobly. If you use any other decorator for the test method, you may want to use this as the outer-most one. Note a common UID system is the Universally Unique Identifier (UUID), but we are not limiting people to UUIDs, hence the more generic name `UID`. Args: uid: string, the uid for the decorated test function.
github-repos
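A usage sketch of the uid decorator; the test class and method names are hypothetical:

class SampleTest:
    @uid('aa-bb-cc-01')
    def test_wifi_connect(self):
        pass

# The decorator records the identifier on the wrapped function:
print(SampleTest.test_wifi_connect.uid)  # 'aa-bb-cc-01'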
def create_scheduler(self, num_training_steps: int, optimizer: torch.optim.Optimizer=None): if self.lr_scheduler is None: self.lr_scheduler = get_scheduler(self.args.lr_scheduler_type, optimizer=self.optimizer if optimizer is None else optimizer, num_warmup_steps=self.args.get_warmup_steps(num_training_steps), num_training_steps=num_training_steps, scheduler_specific_kwargs=self.args.lr_scheduler_kwargs) self._created_lr_scheduler = True return self.lr_scheduler
Setup the scheduler. The optimizer of the trainer must have been set up either before this method is called or passed as an argument. Args: num_training_steps (int): The number of training steps to do.
github-repos
def get(self, name): name = str(name) if name not in self._properties: raise ArgumentError("Unknown property in DeviceModel", name=name) return self._properties[name]
Get a device model property. Args: name (str): The name of the property to get
juraj-google-style
def log_cdf(self, value, name='log_cdf'): return self._call_log_cdf(value, name)
Log cumulative distribution function. Given random variable `X`, the cumulative distribution function `cdf` is: ```none log_cdf(x) := Log[ P[X <= x] ] ``` Often, a numerical approximation can be used for `log_cdf(x)` that yields a more accurate answer than simply taking the logarithm of the `cdf` when `x << -1`. Args: value: `float` or `double` `Tensor`. name: Python `str` prepended to names of ops created by this function. Returns: logcdf: a `Tensor` of shape `sample_shape(x) + self.batch_shape` with values of type `self.dtype`.
github-repos
def to_json_file(self, json_file_path: Union[str, os.PathLike]): with open(json_file_path, 'w', encoding='utf-8') as writer: writer.write(self.to_json_string())
Save this instance to a JSON file. Args: json_file_path (`str` or `os.PathLike`): Path to the JSON file in which this feature_extractor instance's parameters will be saved.
github-repos
def onTagAdd(self, name, func): if '*' in name: self.ontagaddglobs.add(name, func) else: self.ontagadds[name].append(func)
Register a callback for tag addition. Args: name (str): The name of the tag or tag glob. func (function): The callback func(node, tagname, tagval).
juraj-google-style
def get_clinvar_submission(store, institute_id, case_name, variant_id, submission_id): institute_obj, case_obj = institute_and_case(store, institute_id, case_name) pinned = [store.variant(variant_id) or variant_id for variant_id in case_obj.get('suspects', [])] variant_obj = store.variant(variant_id) clinvar_submission_objs = store.clinvars(submission_id=submission_id) return dict( today = str(date.today()), institute=institute_obj, case=case_obj, variant=variant_obj, pinned_vars=pinned, clinvars = clinvar_submission_objs )
Collects all variants from the clinvar submission collection with a specific submission_id Args: store(scout.adapter.MongoAdapter) institute_id(str): Institute ID case_name(str): case ID variant_id(str): variant._id submission_id(str): clinvar submission id, i.e. SUB76578 Returns: A dictionary with all the data to display the clinvar_update.html template page
juraj-google-style
def _get_name_and_module(full_name): name_segments = full_name.split('.') return ('.'.join(name_segments[:-1]), name_segments[-1])
Split full_name into module and short name. Args: full_name: Full name of symbol that includes module. Returns: Full module name and short symbol name.
github-repos
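A quick sketch of the split performed above:

print(_get_name_and_module('tensorflow.keras.layers.Dense'))
# ('tensorflow.keras.layers', 'Dense')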
def list(cls, session, mailbox): endpoint = '/mailboxes/%d/conversations.json' % mailbox.id return super(Conversations, cls).list(session, endpoint)
Return conversations in a mailbox. Args: session (requests.sessions.Session): Authenticated session. mailbox (helpscout.models.Mailbox): Mailbox to list. Returns: RequestPaginator(output_type=helpscout.models.Conversation): Conversations iterator.
juraj-google-style
def put(self, filename, encoding=None): from . import LocalFile if os.path.isdir(filename) and self.source is None: raise ValueError("Cannot write this object to " "directory %s without an explicit filename." % filename) target = get_target_path(filename, self.source) if encoding is None: encoding = self.encoding if self._isbytes: kwargs = {'mode': 'wb'} else: kwargs = {'mode': 'w', 'encoding': encoding} with open(target, **kwargs) as outfile: outfile.write(self._contents) return LocalFile(target, encoded_with=encoding)
Write the file to the given path Args: filename (str): path to write this file to encoding (str): file encoding (default: system default) Returns: LocalFile: reference to the copy of the file stored at ``filename``
juraj-google-style
def print_colored_columns(printer, rows, padding=2): rows_ = [x[:-1] for x in rows] colors = [x[-1] for x in rows] for col, line in zip(colors, columnise(rows_, padding=padding)): printer(line, col)
Like `columnise`, but with colored rows. Args: printer (`colorize.Printer`): Printer object. Note: The last entry in each row is the row color, or None for no coloring.
juraj-google-style
def bit_to_int(x_bit, num_bits, base=2): x_l = tf.stop_gradient(tf.to_int32(tf.reshape(x_bit, [-1, num_bits]))) x_labels = [ x_l[:, i] * tf.to_int32(base)**tf.to_int32(i) for i in range(num_bits)] res = sum(x_labels) return tf.to_int32(tf.reshape(res, common_layers.shape_list(x_bit)[:-1]))
Turn x_bit representing numbers bitwise (lower-endian) to int tensor. Args: x_bit: Tensor containing numbers in a particular base to be converted to int. num_bits: Number of bits in the representation. base: Base of the representation. Returns: Integer representation of this number.
juraj-google-style
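A worked sketch in TF1-style code (matching the tf.to_int32 calls above); bits are lower-endian, so [1, 0, 1] encodes 1*2**0 + 0*2**1 + 1*2**2 = 5:

import tensorflow as tf

x_bit = tf.constant([[1.0, 0.0, 1.0]])  # one number, 3 bits, lower-endian
res = bit_to_int(x_bit, num_bits=3)     # int32 tensor holding [5]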
def merge(self, obj): if obj.id in self.cache: self.cache[obj.id].merge(obj) else: self.cache[obj.id] = obj return self.cache[obj.id]
Add a given object to the cache, or update an existing entry to include more fields. Args: obj (SkypeObj): object to add to the cache
juraj-google-style
def GetFileEntryByPathSpec(self, path_spec): tsk_file = None inode = getattr(path_spec, 'inode', None) location = getattr(path_spec, 'location', None) root_inode = self.GetRootInode() if (location == self.LOCATION_ROOT or (inode is not None and root_inode is not None and inode == root_inode)): tsk_file = self._tsk_file_system.open(self.LOCATION_ROOT) return tsk_file_entry.TSKFileEntry( self._resolver_context, self, path_spec, tsk_file=tsk_file, is_root=True) try: if inode is not None: tsk_file = self._tsk_file_system.open_meta(inode=inode) elif location is not None: tsk_file = self._tsk_file_system.open(location) except IOError: pass if tsk_file is None: return None return tsk_file_entry.TSKFileEntry( self._resolver_context, self, path_spec, tsk_file=tsk_file)
Retrieves a file entry for a path specification. Args: path_spec (PathSpec): path specification. Returns: TSKFileEntry: a file entry or None if not available.
juraj-google-style
def supply(self, issuer): issuer_uri_config = self._issuer_uri_configs.get(issuer) if not issuer_uri_config: return jwks_uri = issuer_uri_config.jwks_uri if jwks_uri: return jwks_uri open_id_valid = issuer_uri_config.open_id_valid if open_id_valid: discovered_jwks_uri = _discover_jwks_uri(issuer) self._issuer_uri_configs[issuer] = IssuerUriConfig(False, discovered_jwks_uri) return discovered_jwks_uri
Supplies the `jwks_uri` for the given issuer. Args: issuer: the issuer. Returns: The `jwks_uri` that is either statically configured or retrieved via OpenId discovery. None is returned when the issuer is unknown or the OpenId discovery fails.
juraj-google-style
def cancelTickByTickData(self, contract: Contract, tickType: str): ticker = self.ticker(contract) reqId = self.wrapper.endTicker(ticker, tickType) if reqId: self.client.cancelTickByTickData(reqId) else: self._logger.error( f'cancelMktData: No reqId found for contract {contract}')
Unsubscribe from tick-by-tick data Args: contract: The exact contract object that was used to subscribe with.
juraj-google-style
def convertData(self, contents, def_buf, kwh_scale=ScaleKWH.EmptyScale): log_str = "" count = 0 if kwh_scale == ScaleKWH.EmptyScale: scale_offset = int(def_buf.keys().index(Field.kWh_Scale)) self.m_kwh_precision = kwh_scale = int(contents[scale_offset]) for fld in def_buf: if def_buf[fld][MeterData.CalculatedFlag]: count += 1 continue if len(contents) == 0: count += 1 continue try: raw_data = contents[count] fld_type = def_buf[fld][MeterData.TypeValue] fld_scale = def_buf[fld][MeterData.ScaleValue] if fld_type == FieldType.Float: float_data = float(str(raw_data)) divisor = 1 if fld_scale == ScaleType.KWH: divisor = 1 if kwh_scale == ScaleKWH.Scale10: divisor = 10 elif kwh_scale == ScaleKWH.Scale100: divisor = 100 elif (kwh_scale != ScaleKWH.NoScale) and (kwh_scale != ScaleKWH.EmptyScale): ekm_log("Unrecognized kwh scale.") elif fld_scale == ScaleType.Div10: divisor = 10 elif fld_scale == ScaleType.Div100: divisor = 100 elif fld_scale != ScaleType.No: ekm_log("Unrecognized float scale.") float_data /= divisor float_data_str = str(float_data) def_buf[fld][MeterData.StringValue] = float_data_str def_buf[fld][MeterData.NativeValue] = float_data elif fld_type == FieldType.Hex: hex_data = raw_data.encode('hex') def_buf[fld][MeterData.StringValue] = hex_data def_buf[fld][MeterData.NativeValue] = hex_data elif fld_type == FieldType.Int: integer_data = int(raw_data) integer_data_str = str(integer_data) if len(integer_data_str) == 0: integer_data_str = str(0) def_buf[fld][MeterData.StringValue] = integer_data_str def_buf[fld][MeterData.NativeValue] = integer_data elif fld_type == FieldType.String: string_data = str(raw_data) def_buf[fld][MeterData.StringValue] = string_data def_buf[fld][MeterData.NativeValue] = string_data elif fld_type == FieldType.PowerFactor: def_buf[fld][MeterData.StringValue] = str(raw_data) def_buf[fld][MeterData.NativeValue] = str(raw_data) else: ekm_log("Unrecognized field type") log_str = log_str + '"' + fld + '": "' + def_buf[fld][MeterData.StringValue] + '"\n' except: ekm_log("Exception on Field:" + str(fld)) ekm_log(traceback.format_exc(sys.exc_info())) self.writeCmdMsg("Exception on Field:" + str(fld)) count += 1 return True
Move data from raw tuple into scaled and converted values. Args: contents (tuple): Breakout of passed block from unpackStruct(). def_buf (): Read buffer destination. kwh_scale (int): :class:`~ekmmeters.ScaleKWH` as int, from `Field.kWh_Scale`. Returns: bool: True on completion.
juraj-google-style
def _setup_mock_socket_file(mock_socket_create_conn, resp): fake_file = mock.Mock() fake_file.readline.side_effect = resp fake_conn = mock.Mock() fake_conn.makefile.return_value = fake_file mock_socket_create_conn.return_value = fake_conn return fake_file
Sets up a mock socket file from the mock connection. Args: mock_socket_create_conn: The mock method for creating a socket connection. resp: iterable, the side effect of the `readline` function of the mock socket file. Returns: The mock socket file that will be injected into the code.
github-repos
def get_paginated_catalog_courses(self, catalog_id, querystring=None): return self._load_data(self.CATALOGS_COURSES_ENDPOINT.format(catalog_id), default=[], querystring=querystring, traverse_pagination=False, many=False)
Return paginated response for all catalog courses. Returns: dict: API response with links to next and previous pages.
codesearchnet
def price( self, instrument, **kwargs ): request = Request( 'GET', '/v3/instruments/{instrument}/price' ) request.set_path_param( 'instrument', instrument ) request.set_param( 'time', kwargs.get('time') ) response = self.ctx.request(request) if response.content_type is None: return response if not response.content_type.startswith("application/json"): return response jbody = json.loads(response.raw_body) parsed_body = {} if str(response.status) == "200": if jbody.get('price') is not None: parsed_body['price'] = \ self.ctx.pricing_common.Price.from_dict( jbody['price'], self.ctx ) elif str(response.status) == "400": if jbody.get('errorCode') is not None: parsed_body['errorCode'] = \ jbody.get('errorCode') if jbody.get('errorMessage') is not None: parsed_body['errorMessage'] = \ jbody.get('errorMessage') elif str(response.status) == "401": if jbody.get('errorCode') is not None: parsed_body['errorCode'] = \ jbody.get('errorCode') if jbody.get('errorMessage') is not None: parsed_body['errorMessage'] = \ jbody.get('errorMessage') elif str(response.status) == "404": if jbody.get('errorCode') is not None: parsed_body['errorCode'] = \ jbody.get('errorCode') if jbody.get('errorMessage') is not None: parsed_body['errorMessage'] = \ jbody.get('errorMessage') elif str(response.status) == "405": if jbody.get('errorCode') is not None: parsed_body['errorCode'] = \ jbody.get('errorCode') if jbody.get('errorMessage') is not None: parsed_body['errorMessage'] = \ jbody.get('errorMessage') else: parsed_body = jbody response.body = parsed_body return response
Fetch a price for an instrument. Accounts are not associated in any way with this endpoint. Args: instrument: Name of the Instrument time: The time at which the desired price is in effect. The current price is returned if no time is provided. Returns: v20.response.Response containing the results from submitting the request
juraj-google-style
def peek(self, iroute: 'InstanceRoute') -> Optional[Value]: val = self.value sn = self.schema_node for sel in iroute: (val, sn) = sel.peek_step(val, sn) if (val is None): return None return val
Return a value within the receiver's subtree. Args: iroute: Instance route (relative to the receiver).
codesearchnet
def add_payload(self, key, val, append=True): if append: self._params.setdefault(key, []).append(val) else: self._params[key] = val
Add a key value pair to payload for this request. .. Note:: For ``_search`` you can pass a search argument. (e.g. _search?summary=1.1.1.1). Args: key (string): The payload key val (string): The payload value append (bool): Indicates whether the value should be appended or overwritten.
juraj-google-style
def segments(seg_type=None): for index in xrange(idaapi.get_segm_qty()): seg = Segment(index=index) if (seg_type is None) or (seg.type == seg_type): yield Segment(index=index)
Iterate segments based on type Args: seg_type: type of segment e.g. SEG_CODE Returns: iterator of `Segment` objects. if seg_type is None , returns all segments otherwise returns only the relevant ones
juraj-google-style
def _placement_points_generator(self, skyline, width): skyline_r = skyline[-1].right skyline_l = skyline[0].left ppointsl = (s.left for s in skyline if s.left+width <= skyline_r) ppointsr = (s.right-width for s in skyline if s.right-width >= skyline_l) return heapq.merge(ppointsl, ppointsr)
Returns a generator for the x coordinates of all the placement points on the skyline for a given rectangle. WARNING: In some cases there could be duplicated points, but it is faster to compute them twice than to remove them. Arguments: skyline (list): Skyline HSegment list width (int, float): Rectangle width Returns: generator
juraj-google-style
def getTraitCovar(self, term_i=None): if term_i is None: RV = sp.zeros((self.P,self.P)) for term_i in range(self.n_randEffs): RV += self.getTraitCovarFun(term_i).K() else: assert term_i < self.n_randEffs, 'VarianceDecomposition:: specified term out of range' RV = self.getTraitCovarFun(term_i).K() return RV
Return the estimated trait covariance matrix for term_i (or the total if term_i is None). To retrieve the matrix of correlation coefficients, see getTraitCorrCoef. Args: term_i: index of the random effect term we want to retrieve the covariance matrix for Returns: estimated trait covariance
juraj-google-style
def get_repo_config(self, repo='default'): for repo_config in self.repositories: if repo_config.name == repo or repo_config.url in RepositoryURL(repo): return repo_config return None
Retrieve configuration for a given repository. Args: repo (str): a repository "realm" (alias) or its URL Returns: RepositoryConfig: if there is configuration for that repository None: otherwise
juraj-google-style
def to_dict(ramons, flatten=False): if type(ramons) is not list: ramons = [ramons] out_ramons = {} for r in ramons: out_ramons[r.id] = { "id": r.id, "type": _reverse_ramon_types[type(r)], "metadata": vars(r) } return out_ramons
Converts a RAMON object list to a JSON-style dictionary. Useful for going from an array of RAMONs to a dictionary, indexed by ID. Arguments: ramons (RAMON[]): A list of RAMON objects flatten (boolean: False): Not implemented Returns: dict: A python dictionary of RAMON objects.
juraj-google-style
def add(self, predicted, target): predicted = predicted.cpu().numpy() target = target.cpu().numpy() assert (predicted.shape[0] == target.shape[0]), 'number of targets and predicted outputs do not match' if (np.ndim(predicted) != 1): assert (predicted.shape[1] == self.k), 'number of predictions does not match size of confusion matrix' predicted = np.argmax(predicted, 1) else: assert ((predicted.max() < self.k) and (predicted.min() >= 0)), 'predicted values are not between 1 and k' onehot_target = (np.ndim(target) != 1) if onehot_target: assert (target.shape[1] == self.k), 'Onehot target does not match size of confusion matrix' assert ((target >= 0).all() and (target <= 1).all()), 'in one-hot encoding, target values should be 0 or 1' assert (target.sum(1) == 1).all(), 'multi-label setting is not supported' target = np.argmax(target, 1) else: assert ((predicted.max() < self.k) and (predicted.min() >= 0)), 'predicted values are not between 0 and k-1' x = (predicted + (self.k * target)) bincount_2d = np.bincount(x.astype(np.int32), minlength=(self.k ** 2)) assert (bincount_2d.size == (self.k ** 2)) conf = bincount_2d.reshape((self.k, self.k)) self.conf += conf
Computes the confusion matrix of size K x K, where K is the number of classes. Args: predicted (tensor): Can be an N x K tensor of predicted scores obtained from the model for N examples and K classes or an N-tensor of integer values between 0 and K-1. target (tensor): Can be an N-tensor of integer values assumed to be between 0 and K-1 or an N x K tensor, where targets are assumed to be provided as one-hot vectors
codesearchnet
def get_attribute(self, main_type, sub_type, unique_id, attribute_id, owner=None, params=None): return self.attribute( main_type, sub_type, unique_id, attribute_id, action='GET', owner=owner, params=params )
Args: owner: main_type: sub_type: unique_id: attribute_id: params: Return:
juraj-google-style
def update_panel(store, panel_name, csv_lines, option): new_genes = [] panel_obj = store.gene_panel(panel_name) if (panel_obj is None): return None try: new_genes = parse_genes(csv_lines) except SyntaxError as error: flash(error.args[0], 'danger') return None if (option == 'replace'): for gene in panel_obj['genes']: gene['hgnc_symbol'] = gene['symbol'] store.add_pending(panel_obj, gene, action='delete', info=None) for new_gene in new_genes: if (not new_gene['hgnc_id']): flash('gene missing hgnc id: {}'.format(new_gene['hgnc_symbol']), 'danger') continue gene_obj = store.hgnc_gene(new_gene['hgnc_id']) if (gene_obj is None): flash('gene not found: {} - {}'.format(new_gene['hgnc_id'], new_gene['hgnc_symbol']), 'danger') continue if (new_gene['hgnc_symbol'] and (gene_obj['hgnc_symbol'] != new_gene['hgnc_symbol'])): flash('symbol mis-match: {0} | {1}'.format(gene_obj['hgnc_symbol'], new_gene['hgnc_symbol']), 'warning') info_data = {'disease_associated_transcripts': new_gene['transcripts'], 'reduced_penetrance': new_gene['reduced_penetrance'], 'mosaicism': new_gene['mosaicism'], 'inheritance_models': new_gene['inheritance_models'], 'database_entry_version': new_gene['database_entry_version']} if (option == 'replace'): action = 'add' else: existing_genes = {gene['hgnc_id'] for gene in panel_obj['genes']} action = ('edit' if (gene_obj['hgnc_id'] in existing_genes) else 'add') store.add_pending(panel_obj, gene_obj, action=action, info=info_data) return panel_obj
Update an existing gene panel with genes. Args: store(scout.adapter.MongoAdapter) panel_name(str) csv_lines(iterable(str)): Stream with genes option(str): 'add' or 'replace' Returns: panel_obj(dict)
codesearchnet
def add_group_member(self, grp_name, user): self.project_service.set_auth(self._token_project) self.project_service.add_group_member(grp_name, user)
Add the given user to the named group. Both group and user must already exist for this to succeed. Args: name (string): Name of group. user_name (string): User to add to group. Raises: requests.HTTPError on failure.
juraj-google-style
def execute(self, command, data={}): (method, uri) = command try: path = self._formatter.format_map(uri, data) body = self._formatter.get_unused_kwargs() url = '{0}{1}'.format(self._url, path) return self._request(method, url, body) except KeyError as err: LOGGER.debug('Endpoint {0} is missing argument {1}'.format(uri, err)) raise
Format the endpoint url by data and then request the remote server. Args: command(Command): WebDriver command to be executed. data(dict): Data fulfill the uri template and json body. Returns: A dict represent the json body from server response. Raises: KeyError: Data cannot fulfill the variable which command needed. ConnectionError: Meet network problem (e.g. DNS failure, refused connection, etc). Timeout: A request times out. HTTPError: HTTP request returned an unsuccessful status code.
codesearchnet
def __contains__(self, item): if self is item: return True elif self.package is item and self.name == '__init__': return True return False
Whether given item is contained inside this module. Args: item (Package/Module): a package or module. Returns: bool: True if self is item, or if item is self's package and self is an ``__init__`` module.
juraj-google-style
def reduce_and_verify(self, inputs, expect, options): def replica_fn(): CollectiveReplicaLauncher._prefer_unique_instance_key = options.prefer_unique_instance_key collective, devices, pid = self.make_collective(options.num_processes, options.gpus_per_process) def reduce_fn(): value_fn = lambda device_idx: inputs[pid * len(devices) + device_idx] per_replica_value = make_per_replica_value(value_fn, devices) reduced_values = collective.reduce(options.reduce_op, per_replica_value, per_replica_value, options.communication_options) if options.gpus_per_process > 1: self.assertIsInstance(reduced_values, value_lib.Mirrored) reduced_values = self.as_list(reduced_values) self.assertAllEqual(devices, [v.device for v in reduced_values]) return [ops.convert_to_tensor(v) for v in reduced_values] per_replica_expect = [ops.convert_to_tensor(expect)] * len(devices) if 'eager' in options.mode: got = reduce_fn() self.assertAllClose(got, per_replica_expect) if 'func_graph' in options.mode: got = def_function.function(reduce_fn)() self.assertAllClose(got, per_replica_expect) get_global_mpr(options.num_processes).run(replica_fn)
Reduce the given `inputs` and verify the output matches `expect`. Args: inputs: a list of `Tensor` or `IndexedSlices`, where i-th value will be fed to i-th replica. expect: a `Tensor` or `IndexedSlices`. This should be the expected value for one replica. options: a `RunOptions` instance.
github-repos
def shift(self, time: int) -> 'Interval': return Interval((self._begin + time), (self._end + time))
Return a new interval shifted by `time` from self Args: time: time to be shifted Returns: Interval: interval shifted by `time`
codesearchnet
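A small sketch; the two-argument Interval(begin, end) constructor is taken from the return statement above:

iv = Interval(0, 5)
shifted = iv.shift(3)  # a new Interval spanning 3..8; the original is unchanged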
def _create_grad_func(ys, xs, grads, cond_graph, body_graph, name, while_op, maximum_iterations): assert len(ys) == len(grads) total_iters = while_op.outputs[0] counter = constant_op.constant(0, dtype=total_iters.dtype, name='grad_counter') body_graph_inputs = object_identity.ObjectIdentitySet(body_graph.inputs) body_graph_outputs = object_identity.ObjectIdentitySet(body_graph.outputs) args = [counter, maximum_iterations, total_iters] + list(grads) grad_func_graph = func_graph_module.func_graph_from_py_func(name, lambda *args: _grad_fn(ys, xs, args, body_graph), args, {}, func_graph=_WhileBodyGradFuncGraph(name, cond_graph, body_graph, maximum_iterations, while_op, body_graph_inputs, body_graph_outputs)) for external_capture, internal_capture in grad_func_graph.captures: if ops.tensor_id(internal_capture) in grad_func_graph.internal_capture_to_output: new_output = grad_func_graph.internal_capture_to_output[ops.tensor_id(internal_capture)] else: raise ValueError(f'Tensor {str(internal_capture)} which captures {str(external_capture)} is in list of internal_captures but not in internal_capture_to_output.') grad_func_graph.outputs.append(new_output) grad_func_graph.structured_outputs.append(new_output) return (grad_func_graph, args)
Builds and returns the gradient FuncGraph of `func_graph` and its args. The returned grad_func_graph must be called with the returned args + grad_func_graph.captures. Args: ys: A `Tensor` or list of tensors to be differentiated. xs: A `Tensor` or list of tensors to be used for differentiation. grads: The incoming grads for `ys`. cond_graph: FuncGraph for the forward cond function. body_graph: FuncGraph for the forward body function. name: Name of the returned gradient function. while_op: The forward While op. maximum_iterations: Tensor. The maximum number of iterations. Returns: 2-tuple of (grad_func_graph, args).
github-repos
def update(self, data): if data.state['Name'] == 'terminated': self.delete(auto_commit=False) return True updated = self.set_property('launch_date', to_utc_date(data.launch_time).isoformat()) updated |= self.set_property('state', data.state['Name']) updated |= self.set_property('instance_type', data.instance_type) updated |= self.set_property('public_ip', data.public_ip_address or None) updated |= self.set_property('public_dns', data.public_dns_name or None) tags = {x['Key']: x['Value'] for x in data.tags or {}} existing_tags = {x.key: x for x in self.tags} for key, value in list(tags.items()): updated |= self.set_tag(key, value) for key in list(existing_tags.keys()): if key not in tags: updated |= self.delete_tag(key) return updated
Updates the object information based on live data, if there were any changes made. Any changes will be automatically applied to the object, but will not be automatically persisted. You must manually call `db.session.add(instance)` on the object. Args: data (:obj:): AWS API Resource object fetched from AWS API Returns: True if there were any changes to the object, else false
juraj-google-style
def create_seq(character, action_metadata, direction, length=8, start=0): sprite_start = ((action_metadata[0] + direction) * FRAME_SIZE) sprite_end = (((action_metadata[0] + direction) + 1) * FRAME_SIZE) sprite_line = character[sprite_start:sprite_end, ...] frames = tf.stack(tf.split(sprite_line, 13, axis=1)) frames = frames[0:action_metadata[1]] frames = tf.roll(frames, shift=(- start), axis=0) frames = tf.tile(frames, [2, 1, 1, 1]) frames = frames[:length] frames = tf.cast(frames, dtype=tf.float32) frames.set_shape([length, FRAME_SIZE, FRAME_SIZE, CHANNELS]) return frames
Creates a sequence.

Args:
    character: A character sprite tensor.
    action_metadata: An action metadata tuple.
    direction: An integer representing the direction, i.e., the row offset
        within each action group corresponding to a particular direction.
    length: Desired length of the sequence. If this is longer than the
        number of available frames, it will roll over to the beginning.
    start: Index of possible frames at which to start the sequence.

Returns:
    A sequence tensor.
codesearchnet
def from_filename(filename, require=None):
    with io.open(filename, 'r', encoding='utf-8') as json_file:
        data = json.load(json_file)
        return data, from_dict(data, require=require)
Reads a Google service account JSON file and returns its parsed info.

Args:
    filename (str): The path to the service account .json file.
    require (Sequence[str]): List of keys required to be present in the
        info.

Returns:
    Tuple[Mapping[str, str], google.auth.crypt.Signer]: The verified info
    and a signer instance.
codesearchnet
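A small usage sketch; the file path is illustrative, and `require` lists two keys commonly present in service account files:

info, signer = from_filename(
    'service-account.json',                 # illustrative path
    require=['client_email', 'token_uri'])
print(info['client_email'])
signature = signer.sign(b'payload')         # Signer instances support sign()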
def get_nowait(self, name, default=_MISSING, autoremove=False):
    self._ensure_declared(name)
    try:
        future = self._data[name]
        if future.done():
            return future.result()
        if default is _MISSING:
            raise KeyError(
                'Key {} has not been assigned a value and no default '
                'given'.format(name))
        return default
    finally:
        if autoremove:
            self._data[name].cancel()
            del self._data[name]
Get the value of a key if it is already set.

This method allows you to check if a key has already been set without
blocking. If the key has not been set you will get the default value you
pass in, or KeyError() if no default is passed.

The key is removed when this method returns only if you pass
``autoremove=True``; by default it is left in place.

This method is not a coroutine and does not block.

Args:
    name (str): The name of the key to wait on.
    default (object): The default value to return if the key has not
        yet been set. Defaults to raising KeyError().
    autoremove (bool): Whether to automatically remove the key when
        get_nowait() returns.

Returns:
    object: Whatever was set in the key by :meth:`set`.
codesearchnet
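Usage sketch, assuming `store` is an instance of the surrounding class with a declared key 'status':

# Non-blocking read with a default; the key stays in place because
# autoremove defaults to False.
status = store.get_nowait('status', default=None)

# Read-and-consume variant; raises KeyError if 'status' was never set.
try:
    status = store.get_nowait('status', autoremove=True)
except KeyError:
    status = 'unknown'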
def __init__(self, env):
    self._env = env
    self._observation_space = self._env.observation_space
    self._action_space = self._env.action_space
Cache the observation and action spaces so they are not recomputed repeatedly.

Args:
    env: OpenAI Gym environment.
juraj-google-style
def __init__(self, prefs, g, divPressureValues, kappa=2.0, omega=0.5,
             beta=1.0, mu=1.0, omega2=0.0,
             freeparams=['kappa', 'omega', 'beta', 'mu', 'omega2']):
    _checkParam('omega2', omega2, self.PARAMLIMITS, self.PARAMTYPES)
    self.omega2 = omega2
    self.deltar = scipy.array(divPressureValues.copy())
    assert max(scipy.absolute(self.deltar)) <= 1, (
        'A scaled deltar value is > 1 or < -1.')
    super(ExpCM_empirical_phi_divpressure, self).__init__(
        prefs, g, kappa=kappa, omega=omega, beta=beta, mu=mu,
        freeparams=freeparams)
Initialize an `ExpCM_empirical_phi_divpressure` object.

Args:
    `prefs`, `kappa`, `omega`, `beta`, `mu`, `g`, `freeparams`
        Same meaning as for an `ExpCM_empirical_phi`
    `divPressureValues`, `omega2`
        Meaning described in the main class doc string.
juraj-google-style
def unwrap(data_type):
    unwrapped_nullable = False
    unwrapped_alias = False
    while is_alias(data_type) or is_nullable_type(data_type):
        if is_nullable_type(data_type):
            unwrapped_nullable = True
        if is_alias(data_type):
            unwrapped_alias = True
        data_type = data_type.data_type
    return data_type, unwrapped_nullable, unwrapped_alias
Convenience method to unwrap all Aliases and Nullables from around a
DataType. This checks for nullable wrapping aliases, as well as aliases
wrapping nullables.

Args:
    data_type (DataType): The target to unwrap.

Returns:
    Tuple[DataType, bool, bool]: The underlying data type; a bool that
    is set if a nullable was present; a bool that is set if an alias
    was present.
juraj-google-style
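A self-contained analogue of the unwrapping loop, using stand-in wrapper classes instead of stone's real is_alias/is_nullable_type helpers (an illustration of the control flow, not the library's API):

class Nullable:
    def __init__(self, data_type):
        self.data_type = data_type

class Alias:
    def __init__(self, data_type):
        self.data_type = data_type

def unwrap_demo(data_type):
    saw_nullable = saw_alias = False
    while isinstance(data_type, (Alias, Nullable)):
        if isinstance(data_type, Nullable):
            saw_nullable = True
        else:
            saw_alias = True
        data_type = data_type.data_type
    return data_type, saw_nullable, saw_alias

# An Alias wrapping a Nullable unwraps to the core type with both flags set.
print(unwrap_demo(Alias(Nullable(int))))  # (<class 'int'>, True, True)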
def add(self, command, *args):
    cmd = Command(command, args)
    self.commands.append(cmd)
Add a command to this command file.

Args:
    command (str): The command to add
    *args (str): The parameters to call the command with
codesearchnet
def create(self, ip_dest, next_hop, **kwargs):
    return self._set_route(ip_dest, next_hop, **kwargs)
Create a static route

Args:
    ip_dest (string): The ip address of the destination in the
        form of A.B.C.D/E
    next_hop (string): The next hop interface or ip address
    **kwargs['next_hop_ip'] (string): The next hop address on
        destination interface
    **kwargs['distance'] (string): Administrative distance for this route
    **kwargs['tag'] (string): Route tag
    **kwargs['route_name'] (string): Route name

Returns:
    True if the operation succeeds, otherwise False.
codesearchnet
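Usage sketch in the pyeapi style the snippet suggests; the connection profile and route values are illustrative, and the 'staticroute' API name is an assumption:

import pyeapi

node = pyeapi.connect_to('veos01')   # assumes an eapi.conf profile exists
routes = node.api('staticroute')     # assumed API module name
ok = routes.create('192.0.2.0/24', 'Ethernet1',
                   distance='10', tag='50', route_name='lab-net')
print(ok)  # True if the switch accepted the configuration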
def spawn(self, function, *args, **kwargs):
    assert self.state != STOPPED, "Can't spawn when process stopped"
    spawned = Spawned(function, args, kwargs)
    self._spawned.append(spawned)
    self._spawn_count += 1
    # Periodically prune finished entries so the spawn list doesn't grow
    # without bound.
    if self._spawn_count > SPAWN_CLEAR_COUNT:
        self._clear_spawn_list()
    return spawned
Runs the function in a worker thread, returning a Spawned object

Args:
    function: Function to run
    args: Positional arguments to run the function with
    kwargs: Keyword arguments to run the function with

Returns:
    Spawned: Something you can call wait(timeout) on to see when it's
    finished executing
juraj-google-style
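Usage sketch, assuming `proc` is an instance of the class the method belongs to:

import time

def slow_add(a, b, delay=0.1):
    time.sleep(delay)  # stand-in for blocking work
    return a + b

spawned = proc.spawn(slow_add, 1, 2, delay=0.5)
spawned.wait(timeout=5)  # per the docstring, wait() blocks until done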
def GetSizeHint(self, context=None, **unused_kwargs):
    context_state = getattr(context, 'state', {})
    elements_data_size = self.GetByteSize()
    if elements_data_size:
        return elements_data_size
    try:
        elements_data_size = self._CalculateElementsDataSize(context)
    except errors.MappingError:
        pass
    if elements_data_size is None and self._HasElementsTerminator():
        size_hints = context_state.get('size_hints', {})
        size_hint = size_hints.get(self._data_type_definition.name, None)
        elements_data_size = 0
        if size_hint:
            elements_data_size = size_hint.byte_size
        if not size_hint or not size_hint.is_complete:
            elements_data_size += (
                self._element_data_type_definition.GetByteSize())
    return elements_data_size
Retrieves a hint about the size.

Args:
    context (Optional[DataTypeMapContext]): data type map context, used
        to determine the size hint.

Returns:
    int: hint of the number of bytes needed from the byte stream or None.
juraj-google-style
def course_blocks(self, course_id, username):
    resp = self.requester.get(
        urljoin(self.base_url, '/api/courses/v1/blocks/'),
        params={
            'depth': 'all',
            'username': username,
            'course_id': course_id,
            'requested_fields': 'children,display_name,id,type,visible_to_staff_only',
        })
    resp.raise_for_status()
    return Structure(resp.json())
Fetches course blocks.

Args:
    course_id (str): An edx course id.
    username (str): username of the user to query for (can reveal
        hidden modules)

Returns:
    Structure
codesearchnet
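Usage sketch; `client` stands for an instance of the API class this method belongs to, and the course id and username are illustrative:

structure = client.course_blocks(
    course_id='course-v1:edX+DemoX+Demo_Course',  # illustrative id
    username='staff',                             # may reveal hidden blocks
)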
def tuplesorted(items, *keys):
    # Lift each element-wise key into a key over whole tuples by binding
    # the element index; the defaults (i=index, k=key) freeze the loop
    # variables at definition time.
    tuple_keys = [
        Key(func=lambda t, i=index, k=key: k.func(t[i]), reverse=key.reverse)
        for index, key in enumerate(keys)
    ]
    return multisorted(items, *tuple_keys)
Sort by tuples with a different key for each item.

Args:
    items: An iterable series of sequences (typically tuples)

    *keys: Key objects which transform individual elements of each
        tuple into sort keys. The zeroth object transforms the zeroth
        element of each tuple, the first key object transforms the
        first element of each tuple, and so on.

Returns:
    A list of items sorted according to keys.
codesearchnet
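Usage sketch relying only on the Key(func=..., reverse=...) shape the snippet itself constructs: sort (name, score) pairs case-insensitively by name, then by score descending:

records = [('alice', 3), ('Bob', 5), ('alice', 9)]
ordered = tuplesorted(
    records,
    Key(func=str.lower, reverse=False),   # element 0: case-insensitive name
    Key(func=lambda x: x, reverse=True),  # element 1: score, descending
)
# Expected order: ('alice', 9), ('alice', 3), ('Bob', 5)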
def GetKeyByPath(self, key_path):
    key_path_upper = key_path.upper()
    if key_path_upper.startswith(self._key_path_prefix_upper):
        relative_key_path = key_path[self._key_path_prefix_length:]
    elif key_path.startswith(definitions.KEY_PATH_SEPARATOR):
        relative_key_path = key_path
        key_path = ''.join([self._key_path_prefix, key_path])
    else:
        return None
    path_segments = key_paths.SplitKeyPath(relative_key_path)
    registry_key = self._root_key
    if not registry_key:
        return None
    for path_segment in path_segments:
        registry_key = registry_key.GetSubkeyByName(path_segment)
        if not registry_key:
            return None
    return registry_key
Retrieves the key for a specific path.

Args:
    key_path (str): Windows Registry key path.

Returns:
    WinRegistryKey: Windows Registry key or None if not available.
juraj-google-style
def message_factory(msg_type, msg_types=MESSAGE_TYPES, *args, **kwargs):
    try:
        return msg_types[msg_type.lower()](*args, **kwargs)
    except (UnknownProfileError, InvalidMessageInputError) as e:
        err_exit('Unable to send message: ', e)
    except KeyError:
        raise UnsupportedMessageTypeError(msg_type, msg_types)
Factory function to return the specified message instance.

Args:
    :msg_type: (str) the type of message to send, i.e. 'Email'
    :msg_types: (str, list, or set) the supported message types
    :kwargs: (dict) keyword arguments that are required for the
        various message types. See docstrings for each type, i.e.
        help(messages.Email), help(messages.Twilio), etc.
juraj-google-style
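Usage sketch; the keyword arguments are illustrative and depend on the chosen message type (the real required fields live in each class's docstring):

msg = message_factory(
    'email',                   # case-insensitive lookup in MESSAGE_TYPES
    from_='me@example.com',    # illustrative kwargs, not verified names
    to='you@example.com',
    subject='Status',
    body='All systems nominal.',
)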
def getPixmap(page, matrix=None, colorspace=csRGB, clip=None, alpha=True):
    CheckParent(page)
    # Resolve string colorspace names to Colorspace objects.
    cs = colorspace
    if type(colorspace) is str:
        if colorspace.upper() == 'GRAY':
            cs = csGRAY
        elif colorspace.upper() == 'CMYK':
            cs = csCMYK
        else:
            cs = csRGB
    if cs.n not in (1, 3, 4):
        raise ValueError('unsupported colorspace')
    dl = page.getDisplayList()
    if clip:
        scissor = Rect(clip)
    else:
        scissor = None
    pix = dl.getPixmap(matrix=matrix, colorspace=cs, alpha=alpha,
                       clip=scissor)
    del dl
    return pix
Create pixmap of page.

Args:
    matrix: Matrix for transformation (default: Identity).
    colorspace: (str/Colorspace) rgb, cmyk, gray - case ignored,
        default csRGB.
    clip: (irect-like) restrict rendering to this area.
    alpha: (bool) include alpha channel
juraj-google-style
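Usage sketch with the legacy PyMuPDF (fitz) names this snippet targets; the file name is illustrative:

import fitz  # PyMuPDF

doc = fitz.open('example.pdf')
pix = getPixmap(doc[0],
                matrix=fitz.Matrix(2, 2),  # render at 2x zoom
                colorspace='gray',         # resolved to csGRAY above
                alpha=False)
pix.writePNG('page0.png')                  # old-style name; newer: pix.save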
def _log_effective_mass_data(data, is_spin_polarized, mass_type='m_e'):
    s = ' ({})'.format(data['spin'].name) if is_spin_polarized else ''
    band_str = 'band {}{}'.format(data['band_id'] + 1, s)
    start_kpoint = data['start_kpoint']
    end_kpoint = data['end_kpoint']
    eff_mass = data['effective_mass']
    kpoint_str = kpt_str.format(k=start_kpoint.frac_coords)
    if start_kpoint.label:
        kpoint_str += ' ({})'.format(start_kpoint.label)
    kpoint_str += ' -> '
    kpoint_str += kpt_str.format(k=end_kpoint.frac_coords)
    if end_kpoint.label:
        kpoint_str += ' ({})'.format(end_kpoint.label)
    logging.info('  {}: {:.3f} | {} | {}'.format(
        mass_type, eff_mass, band_str, kpoint_str))
Log data about the effective masses and their directions.

Args:
    data (dict): The effective mass data. Formatted as a :obj:`dict`
        with the keys:

        'effective_mass' (:obj:`float`)
            The effective mass in units of electron rest mass,
            :math:`m_0`.

        'energies' (:obj:`numpy.ndarray`)
            Band eigenvalues in eV.

        'band_id' (:obj:`int`)
            The index of the band.

        'spin' (:obj:`~pymatgen.electronic_structure.core.Spin`)
            The spin channel.

        'start_kpoint' (:obj:`int`)
            The index of the k-point at which the band extrema occurs.

        'end_kpoint' (:obj:`int`)
            The k-point towards which the data has been sampled.

    is_spin_polarized (bool): Whether the system is spin polarized.
codesearchnet
def stations(self, station, limit=10):
    query = {
        'start': 1,
        'S': station + '?',
        'REQ0JourneyStopsB': limit
    }
    # The endpoint URL is truncated in the source; 'http://...' below is a
    # placeholder, not the real endpoint, and passing `query` as the
    # request params is the presumed intent.
    rsp = requests.get('http://...', params=query)
    return parse_stations(rsp.text)
Find stations for given queries

Args:
    station (str): search query
    limit (int): limit number of results
juraj-google-style
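Usage sketch; `Schiene` is assumed to be the client class this method belongs to:

s = Schiene()  # assumed client class wrapping the station-lookup endpoint
for station in s.stations('Frankfurt', limit=3):
    print(station)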
def generate_srt_from_sjson(sjson_subs):
    output = ''
    # All three parallel lists must be the same length for valid cues.
    equal_len = (len(sjson_subs['start'])
                 == len(sjson_subs['end'])
                 == len(sjson_subs['text']))
    if not equal_len:
        return output
    for i in range(len(sjson_subs['start'])):
        item = SubRipItem(
            index=i,
            start=SubRipTime(milliseconds=sjson_subs['start'][i]),
            end=SubRipTime(milliseconds=sjson_subs['end'][i]),
            text=sjson_subs['text'][i],
        )
        output += six.text_type(item)
        output += '\n'
    return output
Generate transcripts from sjson to SubRip (*.srt).

Arguments:
    sjson_subs (dict): `sjson` subs.

Returns:
    Subtitles in SRT format.
juraj-google-style
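Usage sketch with a minimal two-cue sjson payload (times in milliseconds):

sjson = {
    'start': [0, 2000],
    'end':   [1500, 3500],
    'text':  ['First line', 'Second line'],
}
print(generate_srt_from_sjson(sjson))
# Produces two numbered SRT cues separated by blank lines.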