Dataset columns: code (string, 20 to 4.93k chars), docstring (string, 33 to 1.27k chars), source (3 classes)
def get_next(self): raise NotImplementedError('Iterator.get_next()')
Returns the next element. >>> dataset = tf.data.Dataset.from_tensors(42) >>> iterator = iter(dataset) >>> print(iterator.get_next()) tf.Tensor(42, shape=(), dtype=int32) Returns: A (nested) structure of values matching `tf.data.Iterator.element_spec`. Raises: `tf.errors.OutOfRangeError`: If the end of the iterator has been reached.
github-repos
async def get_json(self, url, json_callback=None, **kwargs): if not json_callback: json_callback = json.loads response = await self.request(method='get', url=url, **kwargs) return json_callback(response)
Get a URL and return its JSON response. Args: url (str): URL to be requested. json_callback (func): Custom JSON loader function. Defaults to :meth:`json.loads`. kwargs (dict): Additional arguments to pass through to the request. Returns: response body returned by :func:`json_callback` function.
juraj-google-style
def _controller_buffer(self, port): address = _LIB.Controller(self._env, port) buffer_ = ctypes.cast(address, ctypes.POINTER(CONTROLLER_VECTOR)).contents return np.frombuffer(buffer_, dtype='uint8')
Find the pointer to a controller and setup a NumPy buffer. Args: port: the port of the controller to setup Returns: a NumPy buffer with the controller's binary data
codesearchnet
def _flatten_dict(original_dict): flat_dict = {} for key, value in original_dict.items(): if isinstance(value, dict): for name, tensor in value.items(): if isinstance(tensor, dict): raise ValueError("flatten_dict only handles 2 levels of nesting.") flat_key = "__" + key + "_" + name flat_dict[flat_key] = tensor else: flat_dict[key] = value return flat_dict
Flatten dict of dicts into a single dict with appropriate prefixes. Handles only 2 levels of nesting in the original dict. Args: original_dict: Dict which may contain one or more dicts. Returns: flat_dict: Dict without any nesting. Any dicts in the original dict have their keys as prefixes in the new dict. Raises: ValueError if the original dict has more than two levels of nesting.
juraj-google-style
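A minimal usage sketch for the `_flatten_dict` entry above; the sample dictionary and printed output are illustrative and not from the original source, and the function is assumed to be in scope as defined.

# Assumes _flatten_dict is defined as in the snippet above.
nested = {"inputs": {"tokens": [1, 2, 3], "mask": [1, 1, 0]}, "label": 7}
flat = _flatten_dict(nested)
print(flat)
# {'__inputs_tokens': [1, 2, 3], '__inputs_mask': [1, 1, 0], 'label': 7}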
def stats_per_utterance(self): all_stats = {} for utterance in self.utterances.values(): data = utterance.read_samples() all_stats[utterance.idx] = stats.DataStats(float(np.mean(data)), float(np.var(data)), np.min(data), np.max(data), data.size) return all_stats
Return statistics calculated for all samples of each utterance in the corpus. Returns: dict: A dictionary containing a DataStats object for each utt.
codesearchnet
def __init__(self, runner_results): super(DataflowJob, self).__init__(runner_results._job.name) self._runner_results = runner_results
Initializes an instance of a DataFlow Job. Args: runner_results: a DataflowPipelineResult returned from Pipeline.run().
juraj-google-style
def _pull_out_perm_lhs(lhs, rest, out_port, in_port): out_inv, lhs_red = lhs._factor_lhs(out_port) return lhs_red << Feedback.create(SeriesProduct.create(*rest), out_port=out_inv, in_port=in_port)
Pull out a permutation from the Feedback of a SeriesProduct with itself. Args: lhs (CPermutation): The permutation circuit rest (tuple): The other SeriesProduct operands out_port (int): The feedback output port index in_port (int): The feedback input port index Returns: Circuit: The simplified circuit
juraj-google-style
def __init__(self, tpu_hardware_feature_proto): self.tpu_hardware_feature_proto = tpu_hardware_feature_proto
Store TPU hardware feature info. Args: tpu_hardware_feature_proto: protobuf which describes the TPU hardware feature.
github-repos
def unsafe_peek(init): def peek(store, container, _stack=None): return init(*[ store.peek(attr, container, _stack=_stack) for attr in container ]) return peek
Deserialize all the attributes available in the container and pass them in the same order as they come in the container. This is a factory function; returns the actual `peek` routine. Arguments: init: type constructor. Returns: callable: deserializer (`peek` routine).
juraj-google-style
def cross_entropy_loss(weights: Array, x: Array, y: Array) -> Array: pred = 1 / (1 + jnp.exp(-x.dot(weights))) return -jnp.mean(y * jnp.log(pred) + (1 - y) * jnp.log(1 - pred))
Calculates the cross-entropy loss for a sigmoid prediction. Args: weights: A weight vector. x: An input array. y: A target output array. Returns: A cross entropy loss.
github-repos
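A small runnable check of the `cross_entropy_loss` entry above; the toy arrays are made up for illustration and the function is assumed to be in scope as defined.

import jax.numpy as jnp

# Hypothetical logistic-regression data, for illustration only.
weights = jnp.array([0.5, -0.25])
x = jnp.array([[1.0, 2.0], [3.0, 0.5], [-1.0, 1.0]])
y = jnp.array([1.0, 0.0, 1.0])

# pred = sigmoid(x @ weights); loss = mean binary cross-entropy.
loss = cross_entropy_loss(weights, x, y)
print(float(loss))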
def _load_yaml_credentials(filename=None, yaml_key=None): try: with open(os.path.expanduser(filename)) as f: search_creds = yaml.safe_load(f)[yaml_key] except FileNotFoundError: logger.error('cannot read file {}'.format(filename)) search_creds = {} except KeyError: logger.error('{} is missing the provided key: {}'.format(filename, yaml_key)) search_creds = {} return search_creds
Loads and parses credentials in a YAML file. Catches common exceptions and returns an empty dict on error, which will be handled downstream. Returns: dict: parsed credentials or {}
codesearchnet
def dbmin05years(self, value=None): if (value is not None): try: value = float(value) except ValueError: raise ValueError('value {} need to be of type float for field `dbmin05years`'.format(value)) self._dbmin05years = value
Corresponds to IDD Field `dbmin05years` 5-year return period values for minimum extreme dry-bulb temperature Args: value (float): value for IDD Field `dbmin05years` Unit: C if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
codesearchnet
def trace(self, predicate): self._handler = predicate if ((self.threading_support is None) or self.threading_support): self._threading_previous = getattr(threading, '_trace_hook', None) threading.settrace(self) self._previous = sys.gettrace() sys.settrace(self) return self
Starts tracing with the given callable. Args: predicate (callable that accepts a single :obj:`hunter.Event` argument): Return: self
codesearchnet
def _parse_package(cls, package_string): pkg, arch = rsplit(package_string, cls._arch_sep(package_string)) if arch not in KNOWN_ARCHITECTURES: pkg, arch = (package_string, None) pkg, release = rsplit(pkg, '-') name, version = rsplit(pkg, '-') epoch, version = version.split(':', 1) if ":" in version else ['0', version] if name.startswith('oracleasm') and name.endswith('.el5'): name, version2 = name.split('-', 1) version = version2 + '-' + version return { 'name': name, 'version': version, 'release': release, 'arch': arch, 'epoch': epoch }
Helper method for parsing package string. Args: package_string (str): dash separated package string such as 'bash-4.2.39-3.el7' Returns: dict: dictionary containing 'name', 'version', 'release' and 'arch' keys
juraj-google-style
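The `_parse_package` entry above relies on helpers (`rsplit`, `KNOWN_ARCHITECTURES`, `_arch_sep`) that are not shown; the following standalone sketch illustrates the same NEVRA-style splitting on a plain string and is not the insights-core implementation.

def parse_nevra(package_string, known_arches=("x86_64", "i686", "noarch", "s390x")):
    # Split off the architecture if the trailing dot-segment is a known arch.
    pkg, _, arch = package_string.rpartition(".")
    if arch not in known_arches:
        pkg, arch = package_string, None
    # name-version-release: split from the right so dashes in the name survive.
    pkg, release = pkg.rsplit("-", 1)
    name, version = pkg.rsplit("-", 1)
    epoch, _, rest = version.partition(":")
    epoch, version = (epoch, rest) if rest else ("0", version)
    return {"name": name, "epoch": epoch, "version": version,
            "release": release, "arch": arch}

print(parse_nevra("bash-4.2.39-3.el7.x86_64"))
# {'name': 'bash', 'epoch': '0', 'version': '4.2.39', 'release': '3.el7', 'arch': 'x86_64'}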
def _old_init(cls, fields, shape, nrows, row_partitions, internal=False): assert isinstance(fields, dict), fields assert isinstance(shape, tensor_shape.TensorShape), shape assert nrows is None or isinstance(nrows, tensor.Tensor), nrows assert row_partitions is None or isinstance(row_partitions, tuple), row_partitions return StructuredTensor(fields=fields, ragged_shape=_dynamic_ragged_shape_init(fields, shape, nrows, row_partitions))
Private constructor -- use factory methods to create StructuredTensors. This constructor builds a `StructuredTensor` from the given attributes, performing minimal validation. Args: fields: A dictionary mapping from string to `Tensor`, `RaggedTensor`, or `StructuredTensor`. (This dict is not copied, so the caller must ensure that it does not get mutated via leaked references.) shape: `tf.TensorShape` with statically known rank. nrows: scalar integer `tf.Tensor`, or `None` if `shape.rank==0`. row_partitions: tuple of `RowPartition`s, with length `shape.rank-1`. internal: ignored argument. Returns: a StructuredTensor.
github-repos
def mme_match(case_obj, match_type, mme_base_url, mme_token, nodes=None, mme_accepts=None): query_patients = [] server_responses = [] url = None query_patients = case_obj['mme_submission']['patients'] if match_type=='internal': url = ''.join([mme_base_url,'/match']) for patient in query_patients: json_resp = matchmaker_request(url=url, token=mme_token, method='POST', content_type=mme_accepts, accept=mme_accepts, data={'patient':patient}) resp_obj = { 'server' : 'Local MatchMaker node', 'patient_id' : patient['id'], 'results' : json_resp.get('results'), 'status_code' : json_resp.get('status_code'), 'message' : json_resp.get('message') } server_responses.append(resp_obj) else: query_patients = [ patient['id'] for patient in query_patients] node_ids = [ node['id'] for node in nodes ] if match_type in node_ids: node_ids = [match_type] for patient in query_patients: for node in node_ids: url = ''.join([mme_base_url,'/match/external/', patient, '?node=', node]) json_resp = matchmaker_request(url=url, token=mme_token, method='POST') resp_obj = { 'server' : node, 'patient_id' : patient, 'results' : json_resp.get('results'), 'status_code' : json_resp.get('status_code'), 'message' : json_resp.get('message') } server_responses.append(resp_obj) return server_responses
Initiate a MatchMaker match against either other Scout patients or external nodes Args: case_obj(dict): a scout case object already submitted to MME match_type(str): 'internal' or 'external' mme_base_url(str): base url of the MME server mme_token(str): auth token of the MME server mme_accepts(str): request content accepted by MME server (only for internal matches) Returns: matches(list): a list of eventual matches
juraj-google-style
def extract(self, tokens: List[Token]) -> List[Extraction]: results = list() if len(tokens) > 0: if self._case_sensitive: new_tokens = [x.orth_ if isinstance(x, Token) else x for x in tokens] else: new_tokens = [x.lower_ if isinstance(x, Token) else x.lower() for x in tokens] else: return results try: ngrams_iter = self._generate_ngrams_with_context(new_tokens) results.extend(map(lambda term: self._wrap_value_with_context(tokens, term[1], term[2]), filter(lambda term: isinstance(term[0], str), map(lambda term: (self._glossary.get(term[0]), term[1], term[2]), map(lambda term: ( self._combine_ngrams(term[0], self._joiner), term[1], term[2]), ngrams_iter))))) except Exception as e: raise ExtractorError('GlossaryExtractor: Failed to extract with ' + self.name + '. Catch ' + str(e) + '. ') return results
Extracts information from a string (TEXT) with the GlossaryExtractor instance. Args: tokens (List[Token]): list of spaCy tokens to be processed. Returns: List[Extraction]: the list of extractions, or the empty list if there are no matches.
juraj-google-style
def backup(self, backup_name, folder_key=None, folder_name=None): folder = self._find_or_create_folder(folder_key, folder_name) drive_service = self.drive_service try: source_rsrc = drive_service.files().get(fileId=self.document_key).execute() except Exception as e: logger.exception("Google API error. %s", e) raise e backup = self._create_new_or_copy(source_doc=source_rsrc, target_name=backup_name, folder=folder, sheet_description="backup") backup_key = backup['id'] return backup_key
Copies the google spreadsheet to the backup_name and folder specified. Args: backup_name (str): The name of the backup document to create. folder_key (Optional) (str): The key of a folder that the new copy will be moved to. folder_name (Optional) (str): Like folder_key, references the folder to move a backup to. If the folder can't be found, sheetsync will create it.
juraj-google-style
def __call__(self, state: Sequence[tf.Tensor], timestep: tf.Tensor) -> Sequence[tf.Tensor]: action, _, _ = self._sample_actions(state) return action
Returns sampled action fluents for the current `state` and `timestep`. Args: state (Sequence[tf.Tensor]): The current state fluents. timestep (tf.Tensor): The current timestep. Returns: Sequence[tf.Tensor]: A tuple of action fluents.
juraj-google-style
def connect(self, **kwargs): self.app = self._app.connect(**kwargs) try: self._top_window = self.app.top_window().wrapper_object() self.set_foreground() except RuntimeError: self._top_window = None
Connect to window and set it foreground Args: **kwargs: optional arguments Returns: None
juraj-google-style
def _find_methods(cls, *names, **kwds): reverse = kwds.pop('reverse', False) assert (not kwds), repr(kwds) cache = cls.__dict__.get('_find_methods_cache') if cache: hit = cache.get(names) if (hit is not None): return hit else: cls._find_methods_cache = cache = {} methods = [] for c in cls.__mro__: for name in names: method = c.__dict__.get(name) if (method is not None): methods.append(method) if reverse: methods.reverse() cache[names] = methods return methods
Compute a list of composable methods. Because this is a common operation and the class hierarchy is static, the outcome is cached (assuming that for a particular list of names the reversed flag is either always on, or always off). Args: *names: One or more method names. reverse: Optional flag, default False; if True, the list is reversed. Returns: A list of callable class method objects.
codesearchnet
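A standalone sketch of the MRO walk that `_find_methods` performs; the toy classes are hypothetical and the caching layer is omitted.

class Base:
    def _pre_hook(self):
        print("Base pre-hook")

class Child(Base):
    def _pre_hook(self):
        print("Child pre-hook")

def find_methods(cls, *names, reverse=False):
    # Walk the MRO and collect every definition of the requested names.
    methods = [c.__dict__[name] for c in cls.__mro__ for name in names if name in c.__dict__]
    if reverse:
        methods.reverse()
    return methods

for method in find_methods(Child, "_pre_hook"):
    method(Child())   # calls Child's hook first, then Base's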
def getAsGrassAsciiGrid(self, session): if self.raster is not None: converter = RasterConverter(sqlAlchemyEngineOrSession=session) return converter.getAsGrassAsciiRaster(tableName=self.tableName, rasterIdFieldName='id', rasterId=self.id, rasterFieldName=self.rasterColumnName)
Retrieve the raster in the GRASS ASCII Grid format. Args: session (:mod:`sqlalchemy.orm.session.Session`): SQLAlchemy session object bound to PostGIS enabled database. Returns: str: GRASS ASCII string.
codesearchnet
def Match(self, registry_key): key_path = registry_key.path.upper() if self._key_path_prefix and self._key_path_suffix: if (key_path.startswith(self._key_path_prefix) and key_path.endswith(self._key_path_suffix)): key_path_segment = key_path[ len(self._key_path_prefix):-len(self._key_path_suffix)] if key_path_segment.startswith('ControlSet'.upper()): try: control_set = int(key_path_segment[10:], 10) except ValueError: control_set = None return control_set is not None return key_path in (self._key_path_upper, self._wow64_key_path_upper)
Determines if a Windows Registry key matches the filter. Args: registry_key (dfwinreg.WinRegistryKey): Windows Registry key. Returns: bool: True if the keys match.
juraj-google-style
def _add_weight(self, name, initial_value, dtype=None): variable = variable_v1.VariableV1(initial_value=initial_value, name=name, dtype=dtype, trainable=False, use_resource=True, synchronization=variables.VariableSynchronization.AUTO, aggregation=variables.VariableAggregation.NONE) if context.executing_eagerly(): graph_key = None else: graph = ops.get_default_graph() graph_key = graph._graph_key key = (name, graph_key) self._weights[key] = variable self._handle_deferred_dependencies(name=name, trackable=variable) backend.track_variable(variable) return variable
Adds a weight to this loss scale. Args: name: Variable name. initial_value: The variable's initial value. dtype: The type of the variable. Returns: A variable. Raises: RuntimeError: If a weight with `name` has already been added.
github-repos
def find(self, collection, query): obj = getattr(self.db, collection) result = obj.find(query) return result
Search a collection for the query provided. Just a raw interface to mongo to do any query you want. Args: collection: The db collection. See main class documentation. query: A mongo find query. Returns: pymongo Cursor object with the results.
codesearchnet
def get_help(sakefile): full_string = "You can 'sake' one of the following...\n\n" errmes = "target '{}' is not allowed to not have help message\n" outerlines = [] for target in sakefile: if target == "all": continue middle_lines = [] if "formula" not in sakefile[target]: innerstr = "{}:\n - {}\n\n".format(escp(target), sakefile[target]["help"]) inner = [] for atom_target in sakefile[target]: if atom_target == "help": continue inner.append(" {}:\n - {}\n\n".format(escp(atom_target), sakefile[target][atom_target]["help"])) if inner: innerstr += '\n'.join(sorted(inner)) middle_lines.append(innerstr) else: middle_lines.append("{}:\n - {}\n\n".format(escp(target), sakefile[target]["help"])) if middle_lines: outerlines.append('\n'.join(sorted(middle_lines))) if outerlines: full_string += '\n'.join(sorted(outerlines)) what_clean_does = "remove all targets' outputs and start from scratch" full_string += "\nclean:\n - {}\n\n".format(what_clean_does) what_visual_does = "output visual representation of project's dependencies" full_string += "visual:\n - {}\n".format(what_visual_does) full_string = re.sub("\n{3,}", "\n\n", full_string) return full_string
Returns the prettily formatted help strings (for printing) Args: A dictionary that is the parsed Sakefile (from sake.py) NOTE: the list sorting in this function is required for this function to be deterministic
juraj-google-style
def asdim(dimension): if isinstance(dimension, Dimension): return dimension elif isinstance(dimension, (tuple, dict, basestring)): return Dimension(dimension) else: raise ValueError('%s type could not be interpreted as Dimension. ' 'Dimensions must be declared as a string, tuple, ' 'dictionary or Dimension type.' % type(dimension).__name__)
Convert the input to a Dimension. Args: dimension: tuple, dict or string type to convert to Dimension Returns: A Dimension object constructed from the dimension spec. No copy is performed if the input is already a Dimension.
juraj-google-style
def _query(queue_name=None, build_id=None, release_id=None, run_id=None, count=None): assert queue_name or build_id or release_id or run_id q = WorkQueue.query if queue_name: q = q.filter_by(queue_name=queue_name) if build_id: q = q.filter_by(build_id=build_id) if release_id: q = q.filter_by(release_id=release_id) if run_id: q = q.filter_by(run_id=run_id) q = q.order_by(WorkQueue.created.desc()) if count is not None: q = q.limit(count) return q.all()
Queries for work items based on their criteria. Args: queue_name: Optional queue name to restrict to. build_id: Optional build ID to restrict to. release_id: Optional release ID to restrict to. run_id: Optional run ID to restrict to. count: How many tasks to fetch. Defaults to None, which means all tasks that match the query are fetched. Returns: List of WorkQueue items.
juraj-google-style
def sanitize_git_path(self, uri, ref=None): if uri.endswith('.git'): dir_name = uri[:-4] else: dir_name = uri dir_name = self.sanitize_uri_path(dir_name) if ref is not None: dir_name += "-%s" % ref return dir_name
Take a git URI and ref and converts it to a directory safe path. Args: uri (string): git URI (e.g. git@github.com:foo/bar.git) ref (string): optional git ref to be appended to the path Returns: str: Directory name for the supplied uri
juraj-google-style
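A standalone sketch of the URI-to-directory-name idea used by `sanitize_git_path`; the character substitution below is a guess at what a `sanitize_uri_path` helper might do, not stacker's actual implementation.

import re

def sanitize_git_path(uri, ref=None):
    dir_name = uri[:-4] if uri.endswith(".git") else uri
    # Replace anything that is unsafe in a directory name.
    dir_name = re.sub(r"[^A-Za-z0-9._-]", "_", dir_name)
    if ref is not None:
        dir_name += "-%s" % ref
    return dir_name

print(sanitize_git_path("git@github.com:foo/bar.git", ref="v1.2"))
# git_github.com_foo_bar-v1.2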
def get_fba_flux(self, objective): flux_result = self.solve_fba(objective) fba_fluxes = {} for key in self._model.reactions: fba_fluxes[key] = flux_result.get_value(self._v_wt[key]) return fba_fluxes
Return a dictionary of all the fluxes solved by FBA. Dictionary of fluxes is used in :meth:`.lin_moma` and :meth:`.moma` to minimize changes in the flux distributions following model perturbation. Args: objective: The objective reaction that is maximized. Returns: Dictionary of fluxes for each reaction in the model.
codesearchnet
def trailing_stop_loss_replace(self, accountID, orderID, **kwargs): return self.replace( accountID, orderID, order=TrailingStopLossOrderRequest(**kwargs) )
Shortcut to replace a pending Trailing Stop Loss Order in an Account Args: accountID : The ID of the Account orderID : The ID of the Trailing Stop Loss Order to replace kwargs : The arguments to create a TrailingStopLossOrderRequest Returns: v20.response.Response containing the results from submitting the request
juraj-google-style
def __init__(self, host_url, username, password): self.host_url = host_url self.api_base_url = '{0:s}/api/v1'.format(self.host_url) self.username = username self.session = self._create_session(username, password)
Initialize the Timesketch API client object. Args: host_url (str): URL of Timesketch instance username (str): Timesketch username password (str): Timesketch password
juraj-google-style
def download_software_version(version=None, synch=False): if (not version): raise CommandExecutionError('Version option must not be none.') if (not isinstance(synch, bool)): raise CommandExecutionError('Synch option must be boolean..') if (synch is True): query = {'type': 'op', 'cmd': '<request><system><software><download><version>{0}</version></download></software></system></request>'.format(version)} else: query = {'type': 'op', 'cmd': '<request><system><software><download><sync-to-peer>yes</sync-to-peer><version>{0}</version></download></software></system></request>'.format(version)} return _get_job_results(query)
Download software packages by version number. Args: version(str): The version of the PANOS file to download. synch (bool): If true then the file will synch to the peer unit. CLI Example: .. code-block:: bash salt '*' panos.download_software_version 8.0.0 salt '*' panos.download_software_version 8.0.0 True
codesearchnet
def _covert_to_hashable(data): r if isinstance(data, six.binary_type): hashable = data prefix = b'TXT' elif util_type.HAVE_NUMPY and isinstance(data, np.ndarray): if data.dtype.kind == 'O': msg = '[ut] hashing ndarrays with dtype=object is unstable' warnings.warn(msg, RuntimeWarning) hashable = data.dumps() else: hashable = data.tobytes() prefix = b'NDARR' elif isinstance(data, six.text_type): hashable = data.encode('utf-8') prefix = b'TXT' elif isinstance(data, uuid.UUID): hashable = data.bytes prefix = b'UUID' elif isinstance(data, int): hashable = _int_to_bytes(data) prefix = b'INT' elif util_type.HAVE_NUMPY and isinstance(data, np.int64): return _covert_to_hashable(int(data)) elif util_type.HAVE_NUMPY and isinstance(data, np.float64): a, b = float(data).as_integer_ratio() hashable = (a.to_bytes(8, byteorder='big') + b.to_bytes(8, byteorder='big')) prefix = b'FLOAT' else: raise TypeError('unknown hashable type=%r' % (type(data))) prefix = b'' return prefix, hashable
r""" Args: data (?): Returns: ?: CommandLine: python -m utool.util_hash _covert_to_hashable Example: >>> # DISABLE_DOCTEST >>> from utool.util_hash import * # NOQA >>> from utool.util_hash import _covert_to_hashable # NOQA >>> import utool as ut >>> data = np.array([1], dtype=np.int64) >>> result = _covert_to_hashable(data) >>> print(result)
juraj-google-style
def get_users_by_email(cls, emails): users = User.objects.filter(email__in=emails) present_emails = users.values_list('email', flat=True) missing_emails = list((set(emails) - set(present_emails))) return (users, missing_emails)
Accept a list of emails, and separate them into users that exist on OpenEdX and users who don't. Args: emails: An iterable of email addresses to split between existing and nonexisting Returns: users: Queryset of users who exist in the OpenEdX platform and who were in the list of email addresses missing_emails: List of unique emails which were in the original list, but do not yet exist as users
codesearchnet
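The split between present and missing emails in `get_users_by_email` is plain set arithmetic; a standalone sketch without the Django ORM, using a hypothetical in-memory user list.

# Hypothetical stand-in for User.objects.filter(email__in=emails).
existing_users = [{"email": "a@example.com"}, {"email": "b@example.com"}]

def split_users_by_email(emails):
    present_emails = {u["email"] for u in existing_users if u["email"] in emails}
    users = [u for u in existing_users if u["email"] in present_emails]
    missing_emails = list(set(emails) - present_emails)
    return users, missing_emails

users, missing = split_users_by_email(["a@example.com", "c@example.com"])
print(users)    # [{'email': 'a@example.com'}]
print(missing)  # ['c@example.com']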
def sk_log_loss(y_true: Union[(List[List[float]], List[List[int]], np.ndarray)], y_predicted: Union[(List[List[float]], List[List[int]], np.ndarray)]) -> float: return log_loss(y_true, y_predicted)
Calculates log loss. Args: y_true: list or array of true values y_predicted: list or array of predicted values Returns: Log loss
codesearchnet
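`sk_log_loss` is a thin wrapper around `sklearn.metrics.log_loss`; a direct usage sketch with made-up labels and probabilities.

from sklearn.metrics import log_loss

y_true = [1, 0, 1, 1]
y_predicted = [0.9, 0.2, 0.7, 0.6]   # predicted probabilities of the positive class
print(log_loss(y_true, y_predicted))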
def _create_pseudo_names(tensors, prefix): def one_index(ele): if isinstance(ele, int): return ele + 1 return ele flat_paths = list(nest.yield_flat_paths(tensors)) flat_paths = nest.map_structure(one_index, flat_paths) names = [] for path in flat_paths: if not path: name = prefix + '1' else: name = '_'.join((str(p) for p in path)) if isinstance(path[0], int): name = prefix + name names.append(name) return names
Creates pseudo {input | output} names for subclassed Models. Warning: this function should only be used to define default names for `Metrics` and `SavedModel`. No other use cases should rely on a `Model`'s input or output names. Example with dict: `{'a': [x1, x2], 'b': x3}` becomes: `['a_1', 'a_2', 'b']` Example with list: `[x, y]` becomes: `['output_1', 'output_2']` Args: tensors: `Model`'s outputs or inputs. prefix: 'output_' for outputs, 'input_' for inputs. Returns: Flattened list of pseudo names.
github-repos
def variants_export_header(case_obj): header = [] header = (header + EXPORT_HEADER) for individual in case_obj['individuals']: display_name = str(individual['display_name']) header.append(('AD_reference_' + display_name)) header.append(('AD_alternate_' + display_name)) header.append(('GT_quality_' + display_name)) return header
Returns a header for the CSV file with the filtered variants to be exported. Args: case_obj(scout.models.Case) Returns: header: includes the fields defined in scout.constants.variants_export EXPORT_HEADER + AD_reference, AD_alternate, GT_quality for each sample analysed for a case
codesearchnet
def group_entities(self, entities: List[dict]) -> List[dict]: entity_groups = [] entity_group_disagg = [] for entity in entities: if not entity_group_disagg: entity_group_disagg.append(entity) continue bi, tag = self.get_tag(entity['entity']) last_bi, last_tag = self.get_tag(entity_group_disagg[-1]['entity']) if tag == last_tag and bi != 'B': entity_group_disagg.append(entity) else: entity_groups.append(self.group_sub_entities(entity_group_disagg)) entity_group_disagg = [entity] if entity_group_disagg: entity_groups.append(self.group_sub_entities(entity_group_disagg)) return entity_groups
Find and group together the adjacent tokens with the same entity predicted. Args: entities (`dict`): The entities predicted by the pipeline.
github-repos
def clear_redis(self, variable, clear_type): if (variable is None): return if (variable in self._clear_redis_tracker): return if (not re.match(self._vars_match, variable)): return self.log.info('[{}] Deleting redis variable: {}.'.format(clear_type, variable)) print('Clearing Variables: {}{}{}'.format(c.Style.BRIGHT, c.Fore.MAGENTA, variable)) self.tcex.playbook.delete(variable) self._clear_redis_tracker.append(variable)
Delete Redis data for provided variable. Args: variable (str): The Redis variable to delete. clear_type (str): The type of clear action.
codesearchnet
def _bfd_multiplier(self, **kwargs): int_type = kwargs['int_type'] method_name = 'interface_%s_bfd_interval_multiplier' % int_type bfd_multiplier = getattr(self._interface, method_name) config = bfd_multiplier(**kwargs) if kwargs['delete']: tag = 'multiplier' config.find('.//*%s' % tag).set('operation', 'delete') return config
Return the BFD multiplier XML. You should not use this method. You probably want `BGP.bfd`. Args: min_tx (str): BFD transmit interval in milliseconds (300, 500, etc) delete (bool): Remove the configuration if ``True``. Returns: XML to be passed to the switch. Raises: None
juraj-google-style
def get_stream_action_type(stream_arn): stream_type_map = { "kinesis": awacs.kinesis.Action, "dynamodb": awacs.dynamodb.Action, } stream_type = stream_arn.split(":")[2] try: return stream_type_map[stream_type] except KeyError: raise ValueError( "Invalid stream type '%s' in arn '%s'" % (stream_type, stream_arn) )
Returns the awacs Action for a stream type given an arn Args: stream_arn (str): The Arn of the stream. Returns: :class:`awacs.aws.Action`: The appropriate stream type awacs Action class Raises: ValueError: If the stream type doesn't match kinesis or dynamodb.
juraj-google-style
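`get_stream_action_type` keys off the service field of the ARN (index 2 after splitting on ':'); a sketch of the same dispatch without the awacs dependency, mapping to plain strings instead of awacs Action classes.

def get_stream_service(stream_arn):
    # ARN layout: arn:partition:service:region:account-id:resource
    stream_type = stream_arn.split(":")[2]
    allowed = {"kinesis", "dynamodb"}
    if stream_type not in allowed:
        raise ValueError("Invalid stream type '%s' in arn '%s'" % (stream_type, stream_arn))
    return stream_type

print(get_stream_service("arn:aws:kinesis:us-east-1:123456789012:stream/example"))
# kinesis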
def status(self, workflow_id): self.logger.debug('Get status of workflow: ' + workflow_id) url = '%(wf_url)s/%(wf_id)s' % { 'wf_url': self.workflows_url, 'wf_id': workflow_id } r = self.gbdx_connection.get(url) r.raise_for_status() return r.json()['state']
Checks workflow status. Args: workflow_id (str): Workflow id. Returns: Workflow status (str).
juraj-google-style
def is_placeholder(x): try: if ops.executing_eagerly_outside_functions(): return hasattr(x, '_is_backend_placeholder') from tensorflow.python.keras.utils import tf_utils if tf_utils.is_extension_type(x): flat_components = nest.flatten(x, expand_composites=True) return py_any((is_placeholder(c) for c in flat_components)) else: return x.op.type == 'Placeholder' except AttributeError: return False
Returns whether `x` is a placeholder. Args: x: A candidate placeholder. Returns: Boolean.
github-repos
def isloaded(self, name): if name is None: return True if isinstance(name, str): return (name in [x.__module__ for x in self]) if isinstance(name, Iterable): return set(name).issubset([x.__module__ for x in self]) return False
Checks if given hook module has been loaded Args: name (str): The name of the module to check Returns: bool. The return code:: True -- Loaded False -- Not Loaded
juraj-google-style
def get_key(self, key, request_only=False): values = {} requested_names = [x.name for x in self._package_requests if not x.conflict] for pkg in self.resolved_packages: if (not request_only) or (pkg.name in requested_names): value = getattr(pkg, key) if value is not None: values[pkg.name] = (pkg, value) return values
Get a data key value for each resolved package. Args: key (str): String key of property, eg 'tools'. request_only (bool): If True, only return the key from resolved packages that were also present in the request. Returns: Dict of {pkg-name: (variant, value)}.
juraj-google-style
def Get(self, key): if (key not in self._hash): raise KeyError(key) node = self._hash[key] self._age.Unlink(node) self._age.AppendNode(node) return node.data
Fetch the object from cache. Objects may be flushed from cache at any time. Callers must always handle the possibility of KeyError raised here. Args: key: The key used to access the object. Returns: Cached object. Raises: KeyError: If the object is not present in the cache.
codesearchnet
def require(self, entity_type, attribute_name=None): if not attribute_name: attribute_name = entity_type self.requires += [(entity_type, attribute_name)] return self
The intent parser should require an entity of the provided type. Args: entity_type(str): an entity type attribute_name(str): the name of the attribute on the parsed intent. Defaults to match entity_type. Returns: self: to continue modifications.
juraj-google-style
def unique_bitstrings_with_counts(bitstrings, out_idx=tf.dtypes.int32): y, idx, count = tf.raw_ops.UniqueWithCountsV2(x=bitstrings, axis=[0], out_idx=out_idx) return (y, idx, count)
Extract the unique bitstrings in the given bitstring tensor. Args: bitstrings: 2-D `tf.Tensor`, interpreted as a list of bitstrings. out_idx: An optional `tf.DType` from: `tf.int32`, `tf.int64`. Defaults to `tf.int32`. Specifies the type of `count` output. Returns: y: 2-D `tf.Tensor` of same dtype as `bitstrings`, containing the unique 0-axis entries of `bitstrings`. idx: The index of each value of the input in the unique output `y`. count: 1-D `tf.Tensor` of dtype `out_idx` such that `count[i]` is the number of occurrences of `y[i]` in `bitstrings`.
github-repos
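A usage sketch for the entry above, assuming TensorFlow is installed and `unique_bitstrings_with_counts` is in scope as defined; the sample bitstrings are made up.

import tensorflow as tf

bitstrings = tf.constant([[0, 1, 1],
                          [0, 1, 1],
                          [1, 0, 0]], dtype=tf.int32)
y, idx, count = unique_bitstrings_with_counts(bitstrings)
print(y.numpy())      # [[0 1 1]
                      #  [1 0 0]]
print(idx.numpy())    # [0 0 1]
print(count.numpy())  # [2 1]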
def _add_namespace(marc_xml): dom = marc_xml if isinstance(dom, basestring): dom = dhtmlparser.parseString(marc_xml) root = dom.find('root') if root: root[0].params = {} for record in dom.find('record'): record.params = {} collections = dom.find('collection') if (not collections): record = dom.find('record')[0] return XML_TEMPLATE.replace('$CONTENT', str(record)) for col in collections: col.params['xmlns'] = 'http://www.loc.gov/MARC21/slim' col.params['xmlns:xsi'] = 'http://www.w3.org/2001/XMLSchema-instance' col.params['xsi:schemaLocation'] = ('http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd') return str(dom)
Add proper XML namespace to the `marc_xml` record. Args: marc_xml (str): String representation of the XML record. Returns: str: XML with namespace.
codesearchnet
async def do_cmd(self, *args, success=None): if success is None: success = (250,) cmd = " ".join(args) await self.writer.send_command(cmd) code, message = await self.reader.read_reply() if code not in success: raise SMTPCommandFailedError(code, message, cmd) return code, message
Sends the given command to the server. Args: *args: Command and arguments to be sent to the server. Raises: ConnectionResetError: If the connection with the server is unexpectedly lost. SMTPCommandFailedError: If the command fails. Returns: (int, str): A (code, message) 2-tuple containing the server response.
juraj-google-style
def Deserialize(self, reader): usage = reader.ReadByte() self.Usage = usage if usage == TransactionAttributeUsage.ContractHash or usage == TransactionAttributeUsage.Vote or \ (usage >= TransactionAttributeUsage.Hash1 and usage <= TransactionAttributeUsage.Hash15): self.Data = reader.ReadBytes(32) elif usage == TransactionAttributeUsage.ECDH02 or usage == TransactionAttributeUsage.ECDH03: self.Data = bytearray(usage) + bytearray(reader.ReadBytes(32)) elif usage == TransactionAttributeUsage.Script: self.Data = reader.ReadBytes(20) elif usage == TransactionAttributeUsage.DescriptionUrl: self.Data = reader.ReadBytes(reader.ReadByte()) elif usage == TransactionAttributeUsage.Description or usage >= TransactionAttributeUsage.Remark: self.Data = reader.ReadVarBytes(max=self.MAX_ATTR_DATA_SIZE) else: logger.error("format error!!!")
Deserialize full object. Args: reader (neo.IO.BinaryReader):
juraj-google-style
def format_level_1_memory(memory): formatted_memory = _list_to_complex_array(memory) if not 1 <= len(formatted_memory.shape) <= 2: raise QiskitError('Level one memory is not of correct shape.') return formatted_memory
Format an experiment result memory object for measurement level 1. Args: memory (list): Memory from experiment with `meas_level==1`. `avg` or `single` will be inferred from shape of result memory. Returns: np.ndarray: Measurement level 1 complex numpy array Raises: QiskitError: If the returned numpy array does not have 1 (avg) or 2 (single) indices.
juraj-google-style
def snapped_slice(size, frac, n): if (size < n): n = size start = (int(((size * frac) - ceil((n / 2)))) + 1) stop = (int(((size * frac) + floor((n / 2)))) + 1) buf = 0 if (stop >= size): buf = (size - stop) elif (start < 0): buf = (0 - start) stop += buf start += buf assert (stop <= size), ('out of bounds [%r, %r]' % (stop, start)) sl = slice(start, stop) return sl
r""" Creates a slice spanning `n` items in a list of length `size` at position `frac`. Args: size (int): length of the list frac (float): position in the range [0, 1] n (int): number of items in the slice Returns: slice: slice object that best fits the criteria SeeAlso: take_percentile_parts Example: Example: >>> # DISABLE_DOCTEST >>> from utool.util_list import * # NOQA >>> import utool as ut >>> print(snapped_slice(0, 0, 10)) >>> print(snapped_slice(1, 0, 10)) >>> print(snapped_slice(100, 0, 10)) >>> print(snapped_slice(9, 0, 10)) >>> print(snapped_slice(100, 1, 10)) pass
codesearchnet
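A quick worked example of the slice arithmetic above; the values follow directly from the formula, assuming `snapped_slice` is in scope as defined.

# size=100, frac=0.5, n=10:
#   start = int(100*0.5 - ceil(10/2)) + 1 = 46
#   stop  = int(100*0.5 + floor(10/2)) + 1 = 56
items = list(range(100))
sl = snapped_slice(len(items), 0.5, 10)
print(sl)         # slice(46, 56, None)
print(items[sl])  # [46, 47, 48, 49, 50, 51, 52, 53, 54, 55]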
def num_lineages_at(self, distance): if ((not isinstance(distance, float)) and (not isinstance(distance, int))): raise TypeError('distance must be an int or a float') if (distance < 0): raise RuntimeError('distance cannot be negative') d = dict() q = deque() q.append(self.root) count = 0 while (len(q) != 0): node = q.popleft() if node.is_root(): d[node] = 0 else: d[node] = d[node.parent] if (node.edge_length is not None): d[node] += node.edge_length if (d[node] < distance): q.extend(node.children) elif ((node.parent is None) or (d[node.parent] < distance)): count += 1 return count
Returns the number of lineages of this ``Tree`` that exist ``distance`` away from the root Args: ``distance`` (``float``): The distance away from the root Returns: ``int``: The number of lineages that exist ``distance`` away from the root
codesearchnet
def _createBitpattern(functioncode, value): _checkFunctioncode(functioncode, [5, 15]) _checkInt(value, minvalue=0, maxvalue=1, description='inputvalue') if (functioncode == 5): if (value == 0): return '\x00\x00' else: return '\xff\x00' elif (functioncode == 15): if (value == 0): return '\x00' else: return '\x01'
Create the bit pattern that is used for writing single bits. This is basically a storage of numerical constants. Args: * functioncode (int): can be 5 or 15 * value (int): can be 0 or 1 Returns: The bit pattern (string). Raises: TypeError, ValueError
codesearchnet
def _pack_sequence_as(structured_outputs, op_outputs): outputs_with_nones = [] counter = 0 for output in nest.flatten(structured_outputs, expand_composites=True): if output is None: outputs_with_nones.append(None) else: outputs_with_nones.append(op_outputs[counter]) counter += 1 return func_graph_module.pack_sequence_as(structured_outputs, outputs_with_nones)
Packs the outputs of the gradient If/Case op. The branch functions may contain None's in the list of `structured_outputs`. `op_outputs` has those outputs missing. So we need to add those Nones to the list of `op_outputs` and then pack it in the same structure as `structured_outputs`. Args: structured_outputs: structured_outputs from one of the branch functions. op_outputs: List of output tensors of the op. Returns: `op_outputs` packed like `structured_outputs`.
github-repos
def parse_GPL(filepath, entry_name=None, partial=None): gsms = {} gses = {} gpl_soft = [] has_table = False gpl_name = entry_name database = None if isinstance(filepath, str): with utils.smart_open(filepath) as soft: groupper = groupby(soft, (lambda x: x.startswith('^'))) for (is_new_entry, group) in groupper: if is_new_entry: (entry_type, entry_name) = __parse_entry(next(group)) logger.debug(('%s: %s' % (entry_type.upper(), entry_name))) if (entry_type == 'SERIES'): (is_data, data_group) = next(groupper) gse_metadata = parse_metadata(data_group) gses[entry_name] = GSE(name=entry_name, metadata=gse_metadata) elif (entry_type == 'SAMPLE'): if (partial and (entry_name not in partial)): continue (is_data, data_group) = next(groupper) gsms[entry_name] = parse_GSM(data_group, entry_name) elif (entry_type == 'DATABASE'): (is_data, data_group) = next(groupper) database_metadata = parse_metadata(data_group) database = GEODatabase(name=entry_name, metadata=database_metadata) elif ((entry_type == 'PLATFORM') or (entry_type == 'Annotation')): gpl_name = entry_name (is_data, data_group) = next(groupper) has_gpl_name = (gpl_name or (gpl_name is None)) for line in data_group: if (('_table_begin' in line) or (not line.startswith(('^', '!', '#')))): has_table = True if (not has_gpl_name): if match('!Annotation_platform\\s*=\\s*', line): gpl_name = split('\\s*=\\s*', line)[(- 1)].strip() has_gpl_name = True gpl_soft.append(line) else: raise RuntimeError('Cannot parse {etype}. Unknown for GPL.'.format(etype=entry_type)) else: for line in filepath: if (('_table_begin' in line) or (not line.startswith(('^', '!', '#')))): has_table = True gpl_soft.append(line.rstrip()) columns = None try: columns = parse_columns(gpl_soft) except Exception: pass metadata = parse_metadata(gpl_soft) if has_table: table_data = parse_table_data(gpl_soft) else: table_data = DataFrame() gpl = GPL(name=gpl_name, gses=gses, gsms=gsms, table=table_data, metadata=metadata, columns=columns, database=database) for (gse_id, gse) in gpl.gses.items(): for gsm_id in gse.metadata.get('sample_id', []): if (gsm_id in gpl.gsms): gpl.gses[gse_id].gsms[gsm_id] = gpl.gsms[gsm_id] return gpl
Parse GPL entry from SOFT file. Args: filepath (:obj:`str` or :obj:`Iterable`): Path to file with 1 GPL entry or list of lines representing GPL from GSE file. entry_name (:obj:`str`, optional): Name of the entry. By default it is inferred from the data. partial (:obj:'iterable', optional): A list of accession IDs of GSMs to be partially extracted from GPL, works only if a file/accession is a GPL. Returns: :obj:`GEOparse.GPL`: A GPL object.
codesearchnet
def _VerifyExplicitPaddings(self, tensor_in_sizes, filter_in_sizes, strides, padding, data_format, dtype, use_gpu, op_name, dilations=(1, 1), test_grappler_layout_optimizer=False, tol=1e-05): input_tensor = self._CreateNumpyTensor(tensor_in_sizes) filter_tensor = self._CreateNumpyTensor(filter_in_sizes) input_tensor = array_ops.pad(input_tensor, [(0, 0)] + padding + [(0, 0)]) dilations = list(dilations) conv2d_result = nn_ops.conv2d(input_tensor, filter_tensor, [1] + list(strides) + [1], 'VALID', dilations=[1] + dilations + [1]) expected = list(self.evaluate(array_ops.reshape(conv2d_result, [-1]))) self._VerifyValuesParameters(tensor_in_sizes, filter_in_sizes, strides, padding, expected, data_format, dtype, use_gpu, op_name, dilations, test_grappler_layout_optimizer=test_grappler_layout_optimizer, tol=tol)
Verifies Conv2D with explicit padding generates correct values. It does this by comparing with Conv2D without explicit padding. This function assumes Conv2D without explicit padding works correctly. Args: tensor_in_sizes: Input tensor dimensions in [batch, input_rows, input_cols, input_depth]. filter_in_sizes: Filter tensor dimensions in [kernel_rows, kernel_cols, input_depth, output_depth]. strides: [row_stride, col_stride] for the convolution; padding: Explicit padding amounts. data_format: "NCHW" or "NHWC" dtype: data type to perform test use_gpu: True if testing on the GPU op_name: "Conv" or "Conv2D" dilations: Dilation values test_grappler_layout_optimizer: If True, allow the Grappler layout optimizer to run, which turns NHWC Conv2Ds on the GPU to NCHW Conv2Ds. tol: The absolute and relative tolerance.
github-repos
def anonymize_column(self, col): column = col[self.col_name] generator = self.get_generator() original_values = column[(~ pd.isnull(column))].unique() new_values = [generator() for x in range(len(original_values))] if (len(new_values) != len(set(new_values))): raise ValueError('There are not enough different values on the faker provider for category {}'.format(self.category)) value_map = dict(zip(original_values, new_values)) column = column.apply(value_map.get) return column.to_frame()
Map the values of a column to new ones of the same type. It replaces the values with others generated using `faker`, while keeping the original distribution. That means that the generated `probability_map` for both will have the same values, but different keys. Args: col (pandas.DataFrame): Dataframe containing the column to anonymize. Returns: pd.DataFrame: DataFrame with its values mapped to new ones, keeping the original distribution. Raises: ValueError: A `ValueError` is raised if faker is not able to provide enough different values.
codesearchnet
def __init__(self, filenames, index=0, buffer_size=None, _account_id=None, delimiter=None, path_filter=None): super(GCSInputReader, self).__init__() self._filenames = filenames self._index = index self._buffer_size = buffer_size self._account_id = _account_id self._delimiter = delimiter self._bucket = None self._bucket_iter = None self._path_filter = path_filter self._slice_ctx = None
Initialize a GoogleCloudStorageInputReader instance. Args: filenames: A list of Google Cloud Storage filenames of the form '/bucket/objectname'. index: Index of the next filename to read. buffer_size: The size of the read buffer, None to use default. _account_id: Internal use only. See cloudstorage documentation. delimiter: Delimiter used as path separator. See class doc. path_filter: An instance of PathFilter.
juraj-google-style
def stream_stderr(self, processes, print_only_first=False): def _stream_stderr_single_process(process, type_string, index, print_to_stdout): while True: output = process.stderr.readline() if not output and process.poll() is not None: break if output and print_to_stdout: print('{}{} {}'.format(type_string, index, output.strip())) sys.stdout.flush() stream_threads = [] for process_type, process_list in six.iteritems(processes): for i in range(len(process_list)): print_to_stdout = not print_only_first or i == 0 thread = threading.Thread(target=_stream_stderr_single_process, args=(process_list[i], process_type, i, print_to_stdout)) thread.start() stream_threads.append(thread) for thread in stream_threads: thread.join()
Consume stderr of all processes and print to stdout. To reduce the amount of logging, caller can set print_only_first to True. In that case, this function only prints stderr from the first process of each type. Args: processes: A dictionary from process type string -> list of processes. print_only_first: If true, only print output from first process of each type.
github-repos
def hdfs_path(ctx, path): HADOOP_SCHEMES = ['adl://', 'file://', 'hdfs://', 'oss://', 's3://', 's3a://', 's3n://', 'swift://', 'viewfs://', 'wasb://'] if (any(path.startswith(scheme) for scheme in HADOOP_SCHEMES)): return path elif path.startswith("/"): return ctx.defaultFS + path else: if ctx.defaultFS.startswith("hdfs://"): return "{0}/user/{1}/{2}".format(ctx.defaultFS, getpass.getuser(), path) elif ctx.defaultFS.startswith("file://"): return "{0}/{1}/{2}".format(ctx.defaultFS, ctx.working_dir[1:], path) else: logging.warn("Unknown scheme {0} with relative path: {1}".format(ctx.defaultFS, path)) return "{0}/{1}".format(ctx.defaultFS, path)
Convenience function to create a Tensorflow-compatible absolute HDFS path from relative paths Args: :ctx: TFNodeContext containing the metadata specific to this node in the cluster. :path: path to convert Returns: An absolute path prefixed with the correct filesystem scheme.
juraj-google-style
def probe(filename, cmd='ffprobe', **kwargs): args = [cmd, '-show_format', '-show_streams', '-of', 'json'] args += convert_kwargs_to_cmd_line_args(kwargs) args += [filename] p = subprocess.Popen(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE) (out, err) = p.communicate() if (p.returncode != 0): raise Error('ffprobe', out, err) return json.loads(out.decode('utf-8'))
Run ffprobe on the specified file and return a JSON representation of the output. Raises: :class:`ffmpeg.Error`: if ffprobe returns a non-zero exit code, an :class:`Error` is returned with a generic error message. The stderr output can be retrieved by accessing the ``stderr`` property of the exception.
codesearchnet
def get_info_line(self, **kwargs): select_date = ('%02d/%02d/%d' % (kwargs.get('day', '01'), kwargs.get('month', '01'), kwargs.get('year', '1970'))) params = {'fecha': select_date, 'line': util.ints_to_string(kwargs.get('lines', [])), 'cultureInfo': util.language_code(kwargs.get('lang'))} result = self.make_request('geo', 'get_info_line', **params) if (not util.check_result(result, 'Line')): return (False, 'UNKNOWN ERROR') values = util.response_list(result, 'Line') return (True, [emtype.Line(**a) for a in values])
Obtain basic information on a bus line on a given date. Args: day (int): Day of the month in format DD. The number is automatically padded if it only has one digit. month (int): Month number in format MM. The number is automatically padded if it only has one digit. year (int): Year number in format YYYY. lines (list[int] | int): Lines to query, may be empty to get all the lines. lang (str): Language code (*es* or *en*). Returns: Status boolean and parsed response (list[Line]), or message string in case of error.
codesearchnet
def call(self, x): return ops.rms_normalization(x, scale=self.scale, axis=self.axis, epsilon=self.epsilon)
Applies RMS normalization to the input tensor. Args: x: Input tensor of shape (batch_size, input_dim). Returns: The RMS-normalized tensor of the same shape (batch_size, input_dim), scaled by the learned `scale` parameter.
github-repos
def extract(self, file_path, is_drum=False): midi_data = pretty_midi.PrettyMIDI(file_path) note_tuple_list = [] for instrument in midi_data.instruments: if (is_drum is False and instrument.is_drum is False) or (is_drum is True and instrument.is_drum is True): for note in instrument.notes: note_tuple_list.append((instrument.program, note.start, note.end, note.pitch, note.velocity)) note_df = pd.DataFrame(note_tuple_list, columns=["program", "start", "end", "pitch", "velocity"]) note_df = note_df.sort_values(by=["program", "start", "end"]) note_df["duration"] = note_df.end - note_df.start return note_df
Extract MIDI file. Args: file_path: File path of MIDI. is_drum: Extract drum data or not. Returns: pd.DataFrame(columns=["program", "start", "end", "pitch", "velocity", "duration"])
juraj-google-style
def get_num_bytes(self, batch: Sequence[ExampleT]) -> int: return len(pickle.dumps(batch))
Returns: The number of bytes of data for a batch.
github-repos
def getVarianceComps(self, univariance=False): RV = sp.zeros((self.P, self.n_randEffs)) for term_i in range(self.n_randEffs): RV[:, term_i] = self.getTraitCovar(term_i).diagonal() if univariance: RV /= RV.sum(1)[:, sp.newaxis] return RV
Return the estimated variance components Args: univariance: Boolean indicator, if True variance components are normalized to sum up to 1 for each trait Returns: variance components of all random effects on all phenotypes [P, n_randEffs matrix]
codesearchnet
def history(self, hash): txs = self._t.get(hash, max_transactions=10000)['transactions'] tree = defaultdict(list) number_editions = 0 for tx in txs: _tx = self._t.get(tx['txid']) txid = _tx['txid'] verb_str = BlockchainSpider.check_script(_tx['vouts']) verb = Spoolverb.from_verb(verb_str) (from_address, to_address, piece_address) = BlockchainSpider._get_addresses(_tx) timestamp_utc = _tx['time'] action = verb.action edition_number = 0 if (action != 'EDITIONS'): edition_number = verb.edition_number else: number_editions = verb.num_editions tree[edition_number].append({'txid': txid, 'verb': verb_str, 'from_address': from_address, 'to_address': to_address, 'piece_address': piece_address, 'timestamp_utc': timestamp_utc, 'action': action, 'number_editions': number_editions, 'edition_number': edition_number}) for (edition, chain) in tree.items(): [d.update({'number_editions': number_editions}) for d in chain] return dict(tree)
Retrieve the ownership tree of all editions of a piece given the hash. Args: hash (str): Hash of the file to check. Can be created with the :class:`File` class. Returns: dict: Ownership tree of all editions of a piece. .. note:: For now we only support searching the blockchain by the piece hash.
codesearchnet
def assert_current_path(self, path, **kwargs): query = CurrentPathQuery(path, **kwargs) @self.document.synchronize def assert_current_path(): if not query.resolves_for(self): raise ExpectationNotMet(query.failure_message) assert_current_path() return True
Asserts that the page has the given path. By default this will compare against the path+query portion of the full URL. Args: path (str | RegexObject): The string or regex that the current "path" should match. **kwargs: Arbitrary keyword arguments for :class:`CurrentPathQuery`. Returns: True Raises: ExpectationNotMet: If the assertion hasn't succeeded during the wait time.
juraj-google-style
def _create_outbound_stream(self, config=None): if config is None: raise ValueError('No stream config to create stream from.') name = self._get_stream_name(config) stream_handlers = self._get_stream_handlers(config, name) stream_input = config.get('input', None) stream_output = config.get('output', None) if type(stream_output) is int: return PortOutputStream(name, stream_input, stream_output, stream_handlers, zmq_args={'zmq_context': self.broker.context, 'zmq_proxy_xsub_url': self.broker.XSUB_URL, 'zmq_proxy_xpub_url': self.broker.XPUB_URL}) else: if stream_output is not None: log.warn("Output of stream {} is not an integer port. " "Stream outputs can only be ports.".format(name)) return ZMQStream(name, stream_input, stream_handlers, zmq_args={'zmq_context': self.broker.context, 'zmq_proxy_xsub_url': self.broker.XSUB_URL, 'zmq_proxy_xpub_url': self.broker.XPUB_URL})
Creates an outbound stream from its config. Params: config: stream configuration as read by ait.config Returns: stream: a Stream Raises: ValueError: if any of the required config values are missing
juraj-google-style
def _write_class_markdown_to_file(self, f, name, cls): methods = dict(self.get_class_members(name, cls)) num_methods = len(methods) try: self._write_docstring_markdown_to_file(f, "####", inspect.getdoc(cls), methods, {}) except ValueError as e: raise ValueError(str(e) + " in class `%s`" % cls.__name__) any_method_called_out = (len(methods) != num_methods) if any_method_called_out: other_methods = {n: m for n, m in methods.items() if n in cls.__dict__} if other_methods: print("\n#### Other Methods", file=f) else: other_methods = methods for name in sorted(other_methods): self._write_member_markdown_to_file(f, "####", name, other_methods[name])
Write the class doc to `f`. Args: f: File to write to. prefix: Prefix for names. cls: class object. name: name to use.
juraj-google-style
async def send_rpc_command(self, short_name, rpc_id, payload, sender_client, timeout=1.0): rpc_tag = str(uuid.uuid4()) self.rpc_results.declare(rpc_tag) if ((short_name in self.services) and (short_name in self.agents)): agent_tag = self.agents[short_name] rpc_message = {'rpc_id': rpc_id, 'payload': payload, 'response_uuid': rpc_tag} self.in_flight_rpcs[rpc_tag] = InFlightRPC(sender_client, short_name, monotonic(), timeout) (await self._notify_update(short_name, 'rpc_command', rpc_message, directed_client=agent_tag)) else: response = dict(result='service_not_found', response=b'') self.rpc_results.set(rpc_tag, response) return rpc_tag
Send an RPC to a service using its registered agent. Args: short_name (str): The name of the service we would like to send an RPC to rpc_id (int): The rpc id that we would like to call payload (bytes): The raw bytes that we would like to send as an argument sender_client (str): The uuid of the sending client timeout (float): The maximum number of seconds before we signal a timeout of the RPC Returns: str: A unique id that can used to identify the notified response of this RPC.
codesearchnet
def read_var_bytes(self, max_size=sys.maxsize) -> bytes: length = self.read_var_int(max_size) return self.read_bytes(length)
Read a variable length of bytes from the stream. Args: max_size (int): (Optional) maximum number of bytes to read. Returns: bytes:
juraj-google-style
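A standalone sketch of the var-length read pattern behind `read_var_bytes`, over an in-memory stream; the one-byte length prefix here is a simplification of the full NEO var-int encoding and is for illustration only.

import io

class MiniReader:
    def __init__(self, data: bytes):
        self._stream = io.BytesIO(data)

    def read_bytes(self, length: int) -> bytes:
        return self._stream.read(length)

    def read_var_int(self, max_size=0xFF) -> int:
        # Simplified: a single length byte (real NEO var-ints escalate to 2/4/8 bytes).
        return min(self._stream.read(1)[0], max_size)

    def read_var_bytes(self, max_size=0xFF) -> bytes:
        length = self.read_var_int(max_size)
        return self.read_bytes(length)

reader = MiniReader(b"\x05hello rest")
print(reader.read_var_bytes())  # b'hello'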
def GetEventTagByIdentifier(self, storage_file, event_identifier): if not self._index: self._Build(storage_file) lookup_key = event_identifier.CopyToString() event_tag_identifier = self._index.get(lookup_key, None) if not event_tag_identifier: return None return storage_file.GetEventTagByIdentifier(event_tag_identifier)
Retrieves the most recently updated event tag for an event. Args: storage_file (BaseStorageFile): storage file. event_identifier (AttributeContainerIdentifier): event attribute container identifier. Returns: EventTag: event tag or None if the event has no event tag.
juraj-google-style
def _BuildOobLink(self, param, mode): code = self.rpc_helper.GetOobCode(param) if code: parsed = list(parse.urlparse(self.widget_url)) query = dict(parse.parse_qsl(parsed[4])) query.update({'mode': mode, 'oobCode': code}) try: parsed[4] = parse.urlencode(query) except AttributeError: parsed[4] = urllib.urlencode(query) return code, parse.urlunparse(parsed) raise errors.GitkitClientError('invalid request')
Builds out-of-band URL. Gitkit API GetOobCode() is called and the returning code is combined with Gitkit widget URL to building the out-of-band url. Args: param: dict of request. mode: string, Gitkit widget mode to handle the oob action after user clicks the oob url in the email. Raises: GitkitClientError: if oob code is not returned. Returns: A string of oob url.
juraj-google-style
def determinant(self, name='det'): if self.is_square is False: raise NotImplementedError('Determinant not implemented for an operator that is expected to not be square.') with self._name_scope(name): return self._determinant()
Determinant for every batch member. Args: name: A name for this `Op`. Returns: `Tensor` with shape `self.batch_shape` and same `dtype` as `self`. Raises: NotImplementedError: If `self.is_square` is `False`.
github-repos
def get_service_alias_by_class(self, service_class): aliases = [] for alias, service_object in self._service_objects.items(): if isinstance(service_object, service_class): aliases.append(alias) return aliases
Gets the alias names of a registered service. The same service class can be registered multiple times with different aliases. When not well managed, duplication and race conditions can arise. One can use this API to de-duplicate as needed. Args: service_class: class, the class of a service type. Returns: list of strings, the aliases the service is registered with.
github-repos
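A standalone sketch of the alias lookup above, with a plain dict standing in for the registered `_service_objects`; the service classes are hypothetical.

class LoggerService:
    pass

class SnippetService:
    pass

service_objects = {
    "logger_main": LoggerService(),
    "logger_backup": LoggerService(),
    "snippets": SnippetService(),
}

def get_service_alias_by_class(service_class):
    return [alias for alias, obj in service_objects.items()
            if isinstance(obj, service_class)]

print(get_service_alias_by_class(LoggerService))  # ['logger_main', 'logger_backup']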
def __init__(self, statediag=[], thebiggestid=None): self.statediag = [] self.quickresponse = {} self.quickresponse_types = {} self.toadd = [] self.biggestid = 0 if thebiggestid is None: for state in statediag: if statediag[state].id > self.biggestid: self.biggestid = statediag[state].id else: self.biggestid = thebiggestid self.statediag = statediag
Find the biggest state ID. Args: statediag (list): The states of the PDA thebiggestid (int): The biggest state identifier Returns: None
juraj-google-style
def _list_certs(certificate_store='My'): ret = dict() blacklist_keys = ['DnsNameList', 'Thumbprint'] ps_cmd = ['Get-ChildItem', '-Path', "'Cert:\\LocalMachine\\{0}'".format(certificate_store), '|', 'Select-Object DnsNameList, SerialNumber, Subject, Thumbprint, Version'] cmd_ret = _srvmgr(cmd=ps_cmd, return_json=True) try: items = salt.utils.json.loads(cmd_ret['stdout'], strict=False) except ValueError: raise CommandExecutionError('Unable to parse return data as Json.') for item in items: cert_info = dict() for key in item: if (key not in blacklist_keys): cert_info[key.lower()] = item[key] cert_info['dnsnames'] = [] if item['DnsNameList']: cert_info['dnsnames'] = [name['Unicode'] for name in item['DnsNameList']] ret[item['Thumbprint']] = cert_info return ret
List details of available certificates in the LocalMachine certificate store. Args: certificate_store (str): The name of the certificate store on the local machine. Returns: dict: A dictionary of certificates found in the store
codesearchnet
def __init__(self, num_participants):
    self._num_participants = num_participants
    self._counter = 0
    self._flag = False
    self._local_sense = threading.local()
    self._lock = threading.Lock()
    self._condition = threading.Condition()
Initializes the barrier object.

    Args:
      num_participants: an integer which is the expected number of calls to
        `wait` that will pass through this barrier.
github-repos
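Illustrative sketch of how such a barrier is typically used; it assumes the class (called `Barrier` here) also exposes a `wait()` method, which is not shown in the snippet above:

import threading

barrier = Barrier(num_participants=3)  # hypothetical class name

def worker():
    # per-thread work before the synchronization point
    barrier.wait()  # assumed API: blocks until all 3 participants arrive
    # work that requires every participant to have reached the barrier

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()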
def resize(self, image: 'torch.Tensor', size: Dict[str, int], crop_pct: float,
           interpolation: PILImageResampling = PILImageResampling.BICUBIC,
           **kwargs) -> 'torch.Tensor':
    if not size.shortest_edge:
        raise ValueError(f"Size dictionary must contain 'shortest_edge' key. Got {size.keys()}")
    shortest_edge = size['shortest_edge']
    if shortest_edge < 384:
        # Resize the smaller edge to shortest_edge / crop_pct, then center crop.
        resize_shortest_edge = int(shortest_edge / crop_pct)
        resize_size = get_resize_output_image_size(
            image, size=resize_shortest_edge, default_to_square=False,
            input_data_format=ChannelDimension.FIRST)
        image = F.resize(image, resize_size, interpolation=interpolation, **kwargs)
        return F.center_crop(image, (shortest_edge, shortest_edge), **kwargs)
    else:
        return F.resize(image, (shortest_edge, shortest_edge), interpolation=interpolation, **kwargs)
Resize an image.

    Args:
        image (`torch.Tensor`):
            Image to resize.
        size (`Dict[str, int]`):
            Dictionary of the form `{"shortest_edge": int}`, specifying the size of the output image.
            If `size["shortest_edge"]` >= 384, the image is resized to
            `(size["shortest_edge"], size["shortest_edge"])`. Otherwise, the smaller edge of the
            image will be matched to `int(size["shortest_edge"] / crop_pct)`, after which the image
            is cropped to `(size["shortest_edge"], size["shortest_edge"])`.
        crop_pct (`float`):
            Percentage of the image to crop. Only has an effect if size < 384.
        interpolation (`PILImageResampling`, *optional*, defaults to `PILImageResampling.BICUBIC`):
            Resampling filter to use when resizing the image.

    Returns:
        `torch.Tensor`: Resized image.
github-repos
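A hypothetical call sketch; note the snippet reads `size.shortest_edge` as an attribute, so the caller presumably passes a SizeDict-like mapping rather than a plain dict (an assumption here), and `processor` is a placeholder instance:

import torch

image = torch.rand(3, 512, 640)
# With shortest_edge=224 (< 384), the short side is first resized to
# int(224 / crop_pct) and the result is then center-cropped to 224x224.
resized = processor.resize(image, size=size_dict, crop_pct=0.9)  # `processor`, `size_dict` are placeholders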
def downstream_index_dinf(dinfdir_value, i, j):
    down_dirs = DinfUtil.dinf_downslope_direction(dinfdir_value)
    down_coors = []
    for dir_code in down_dirs:
        row, col = D8Util.downstream_index(dir_code, i, j)
        down_coors.append([row, col])
    return down_coors
Find the downslope coordinates for a D-infinity (Dinf) direction value of TauDEM.

    Args:
        dinfdir_value: Dinf direction value
        i: current row
        j: current col
    Returns:
        List of downstream (row, col) pairs
juraj-google-style
def _CreateExpandedDSA(client, ad_group_id): ad_group_ad_service = client.GetService('AdGroupAdService') operations = [{ 'operator': 'ADD', 'operand': { 'xsi_type': 'AdGroupAd', 'adGroupId': ad_group_id, 'ad': { 'xsi_type': 'ExpandedDynamicSearchAd', 'description': 'Buy your tickets now!', 'description2': 'Discount ends soon' }, 'status': 'PAUSED', } }] ad = ad_group_ad_service.mutate(operations)['value'][0]['ad'] print ('Expanded dynamic search ad with ID "%d", description "%s", and ' 'description 2 "%s" was added' % (ad['id'], ad['description'], ad['description2']))
Creates the expanded Dynamic Search Ad. Args: client: an AdwordsClient instance. ad_group_id: an integer ID of the ad group in which the DSA is added.
juraj-google-style
def log_value(self, name, value, step=None):
    if isinstance(value, six.string_types):
        raise TypeError('"value" should be a number, got {}'.format(type(value)))
    value = float(value)
    self._check_step(step)
    tf_name = self._ensure_tf_name(name)
    summary = self._scalar_summary(tf_name, value, step)
    self._log_summary(tf_name, summary, value, step=step)
Log new value for given name on given step. Args: name (str): name of the variable (it will be converted to a valid tensorflow summary name). value (float): this is a real number to be logged as a scalar. step (int): non-negative integer used for visualization: you can log several different variables on one step, but should not log different values of the same variable on the same step (this is not checked).
codesearchnet
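A minimal usage sketch, assuming `logger` is an instance of the class this method belongs to:

for step, loss in enumerate([0.9, 0.7, 0.55]):
    logger.log_value('train/loss', loss, step=step)  # `logger` is a placeholder instance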
def _ParseShVariables(self, lines):
    paths = {}
    for line in lines:
      for entry in line:
        if "=" in entry:
          target, vals = (entry.split("=", 1) + [""])[:2]
          if vals:
            path_vals = vals.split(":")
          else:
            path_vals = []
          self._ExpandPath(target, path_vals, paths)
        elif entry not in self._SH_CONTINUATION:
          break
    return paths
Extract env_var and path values from sh derivative shells. Iterates over each line, word by word searching for statements that set the path. These are either variables, or conditions that would allow a variable to be set later in the line (e.g. export). Args: lines: A list of lines, each of which is a list of space separated words. Returns: a dictionary of path names and values.
juraj-google-style
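An illustrative input/output sketch; the exact result depends on `_ExpandPath`, so the output shown is only indicative:

lines = [
    ['export', 'PATH=/usr/local/bin:/usr/bin'],
    ['LD_LIBRARY_PATH=/opt/lib'],
]
# _ParseShVariables(lines) would yield something like:
# {'PATH': ['/usr/local/bin', '/usr/bin'], 'LD_LIBRARY_PATH': ['/opt/lib']}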
def deroot(self, label='OLDROOT'):
    if self.root.edge_length is not None:
        self.root.add_child(Node(edge_length=self.root.edge_length, label=label))
        self.root.edge_length = None
If the tree has a root edge, convert that edge into a child of the root node and clear the root's edge length

        Args:
            ``label`` (``str``): The desired label of the new child
juraj-google-style
def __init__(self, *args, **kwargs):
    self.args = args
    self.kwargs = kwargs
    self.outputs = None
    self.backoff_seconds = _DEFAULT_BACKOFF_SECONDS
    self.backoff_factor = _DEFAULT_BACKOFF_FACTOR
    self.max_attempts = _DEFAULT_MAX_ATTEMPTS
    self.target = None
    self.task_retry = False
    self._current_attempt = 0
    self._root_pipeline_key = None
    self._pipeline_key = None
    self._context = None
    self._result_status = None
    self._set_class_path()
    self.target = mr_util._get_task_target()

    if _TEST_MODE:
      self._context = _PipelineContext('', 'default', '')
      self._root_pipeline_key = _TEST_ROOT_PIPELINE_KEY
      self._pipeline_key = db.Key.from_path(
          _PipelineRecord.kind(), uuid.uuid4().hex)
      self.outputs = PipelineFuture(self.output_names)
      self._context.evaluate_test(self)
Initializer. Args: *args: The positional arguments for this function-object. **kwargs: The keyword arguments for this function-object.
juraj-google-style
def implemented(cls, for_type):
    for function in cls.required():
        if not function.implemented_for_type(for_type):
            raise TypeError(
                "%r doesn't implement %r so it cannot participate in the "
                "protocol %r." % (for_type, function.func.__name__, cls))
    cls.register(for_type)
Assert that protocol 'cls' is implemented for type 'for_type'. This will cause 'for_type' to be registered with the protocol 'cls'. Subsequently, protocol.isa(for_type, cls) will return True, as will isinstance, issubclass and others. Raises: TypeError if 'for_type' doesn't implement all required functions.
codesearchnet
def from_music_service(cls, music_service, content_dict):
    quoted_id = quote_url(content_dict['id'].encode('utf-8'))
    item_id = '0fffffff{}'.format(quoted_id)
    is_track = cls == get_class('MediaMetadataTrack')
    uri = form_uri(item_id, music_service, is_track)
    resources = [DidlResource(uri=uri, protocol_info="DUMMY")]
    desc = music_service.desc
    return cls(item_id, desc, resources, uri, content_dict,
               music_service=music_service)
Return an element instantiated from the information that a music service has (alternative constructor) Args: music_service (MusicService): The music service that content_dict originated from content_dict (OrderedDict): The data to instantiate the music service item from Returns: MusicServiceItem: A MusicServiceItem instance
juraj-google-style
def delete_endpoint(self, endpoint_name):
    LOGGER.info('Deleting endpoint with name: {}'.format(endpoint_name))
    self.sagemaker_client.delete_endpoint(EndpointName=endpoint_name)
Delete an Amazon SageMaker ``Endpoint``. Args: endpoint_name (str): Name of the Amazon SageMaker ``Endpoint`` to delete.
juraj-google-style
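A minimal usage sketch, assuming this method lives on a SageMaker session-like object that wraps a boto3 SageMaker client:

import sagemaker

session = sagemaker.Session()
session.delete_endpoint('my-endpoint')  # 'my-endpoint' is an illustrative name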
def put_rpc(self, address, rpc_id, arg_payload, response):
    self._rpc_queue.put_nowait((address, rpc_id, arg_payload, response))
Place an RPC onto the RPC queue.

        The rpc will be dispatched asynchronously by the background dispatch task.
        This method must be called from the event loop. This method does not block.

        Args:
            address (int): The address of the tile with the RPC
            rpc_id (int): The id of the rpc you want to call
            arg_payload (bytes): The RPC payload
            response (GenericResponse): The object to use to signal the result.
codesearchnet
def read_value(self): raise NotImplementedError
Returns the value of this variable, read in the current context. Can be different from value() if it's on another device, with control dependencies, etc. Returns: A `Tensor` containing the value of the variable.
github-repos
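The method above appears to be an abstract declaration; concrete variable implementations such as `tf.Variable` provide it, for example:

import tensorflow as tf

v = tf.Variable(3.0)
print(v.read_value())  # tf.Tensor(3.0, shape=(), dtype=float32)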
def _example_short_number(region_code):
    metadata = PhoneMetadata.short_metadata_for_region(region_code)
    if metadata is None:
        return U_EMPTY_STRING
    desc = metadata.short_code
    if desc.example_number is not None:
        return desc.example_number
    return U_EMPTY_STRING
Gets a valid short number for the specified region. Arguments: region_code -- the region for which an example short number is needed. Returns a valid short number for the specified region. Returns an empty string when the metadata does not contain such information.
juraj-google-style
def get_smeared_densities(self, sigma):
    from scipy.ndimage.filters import gaussian_filter1d
    diff = [self.frequencies[i + 1] - self.frequencies[i]
            for i in range(len(self.frequencies) - 1)]
    avgdiff = sum(diff) / len(diff)
    smeared_dens = gaussian_filter1d(self.densities, sigma / avgdiff)
    return smeared_dens
Returns the densities, but with a Gaussian smearing of std dev sigma applied. Args: sigma: Std dev of Gaussian smearing function. Returns: Gaussian-smeared densities.
codesearchnet
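Illustrative usage sketch, assuming `dos` is a density-of-states object exposing this method; `sigma` is in the same units as `frequencies`:

smeared = dos.get_smeared_densities(sigma=0.05)  # `dos` is a placeholder instance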
def process_opened_file(self, in_filename, in_file, out_filename, out_file):
    lines = in_file.readlines()
    processed_file, new_file_content, log, process_errors = self.update_string_pasta(
        ''.join(lines), in_filename)

    if out_file and processed_file:
      out_file.write(new_file_content)

    return (processed_file,
            self._format_log(log, in_filename, out_filename),
            process_errors)
Process the given python file for incompatible changes. This function is split out to facilitate StringIO testing from tf_upgrade_test.py. Args: in_filename: filename to parse in_file: opened file (or StringIO) out_filename: output file to write to out_file: opened file (or StringIO) Returns: A tuple representing number of files processed, log of actions, errors
github-repos
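A sketch of the StringIO-style invocation the docstring alludes to; `upgrader` stands in for whatever object defines this method:

import io

in_file = io.StringIO('some_python_source = 1\n')
out_file = io.StringIO()
processed, report, errors = upgrader.process_opened_file(
    'in.py', in_file, 'out.py', out_file)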
def _executeMassiveMethod(path, method, args=None, classArgs=None):
    response = {}
    if args is None:
        args = {}
    if classArgs is None:
        classArgs = {}

    sys.path.append(path)
    exclude = ["__init__.py", "base.py"]
    for f in AtomShieldsScanner._getFiles(path, "*.py", exclude=exclude):
        try:
            instance = AtomShieldsScanner._getClassInstance(path=f, args=classArgs)
            if instance is not None:
                if callable(method):
                    args["instance"] = instance
                    output = method(**args)
                    response[instance.__class__.NAME] = output
                else:
                    if hasattr(instance, method):
                        output = getattr(instance, method)(**args)
                        response[instance.__class__.NAME] = output
                    else:
                        continue
        except Exception as e:
            AtomShieldsScanner._debug("[!] %s" % e)
    sys.path.remove(path)
    return response
Execute a specific method for each class instance located in path.

    Args:
        path (str): Absolute path which contains the .py files
        method (str): Method to execute into class instance

    Returns:
        dict: Dictionary which contains the response for every class instance.
              The dictionary keys are the value of the 'NAME' class variable.
juraj-google-style
def flatten(repertoire, big_endian=False):
    if repertoire is None:
        return None
    order = 'C' if big_endian else 'F'
    return repertoire.squeeze().ravel(order=order)
Flatten a repertoire, removing empty dimensions. By default, the flattened repertoire is returned in little-endian order. Args: repertoire (np.ndarray or None): A repertoire. Keyword Args: big_endian (boolean): If ``True``, flatten the repertoire in big-endian order. Returns: np.ndarray: The flattened repertoire.
codesearchnet
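A worked example of the ordering difference (standard NumPy ravel semantics):

import numpy as np

r = np.arange(8).reshape((2, 2, 2))
flatten(r)                   # array([0, 4, 2, 6, 1, 5, 3, 7])  -- little-endian, order='F'
flatten(r, big_endian=True)  # array([0, 1, 2, 3, 4, 5, 6, 7])  -- big-endian, order='C'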
def get_compatibility_log(self):
    if not self._verified:
      raise RuntimeError("target compatibility isn't verified yet")
    return self._log_messages
Returns list of compatibility log messages. WARNING: This method should only be used for unit tests. Returns: The list of log messages by the recent compatibility check. Raises: RuntimeError: when the compatibility was NOT checked.
github-repos