Columns: code (string, lengths 20 to 4.93k), docstring (string, lengths 33 to 1.27k), source (string, 3 classes)
def get_device_policy(): device_policy = context.context().device_policy if device_policy == context.DEVICE_PLACEMENT_SILENT: return 'silent' elif device_policy == context.DEVICE_PLACEMENT_SILENT_FOR_INT32: return 'silent_for_int32' elif device_policy == context.DEVICE_PLACEMENT_WARN: return 'warn' elif device_policy == context.DEVICE_PLACEMENT_EXPLICIT: return 'explicit' else: raise errors.InternalError(f'Got an invalid device policy: {device_policy!r}.')
Gets the current device policy. The device policy controls how operations requiring inputs on a specific device (e.g., on GPU:0) handle inputs on a different device (e.g. GPU:1). This function only gets the device policy for the current thread. Any subsequently started thread will again use the default policy. Returns: Current thread device policy
github-repos
def plot_all_stability_map(self, max_r, increments=50, delu_dict=None, delu_default=0, plt=None, labels=None, from_sphere_area=False, e_units='keV', r_units='nanometers', normalize=False, scale_per_atom=False): plt = (plt if plt else pretty_plot(width=8, height=7)) for (i, analyzer) in enumerate(self.se_analyzers): label = (labels[i] if labels else '') plt = self.plot_one_stability_map(analyzer, max_r, delu_dict, label=label, plt=plt, increments=increments, delu_default=delu_default, from_sphere_area=from_sphere_area, e_units=e_units, r_units=r_units, normalize=normalize, scale_per_atom=scale_per_atom) return plt
Returns the plot of the formation energy of particles of different polymorphs against their effective radius Args: max_r (float): The maximum radius of the particle to plot up to. increments (int): Number of plot points delu_dict (Dict): Dictionary of the chemical potentials to be set as constant. Note the key should be a sympy Symbol object of the format: Symbol("delu_el") where el is the name of the element. delu_default (float): Default value for all unset chemical potentials plt (pylab): Plot labels (list): List of labels for each plot, corresponds to the list of se_analyzers from_sphere_area (bool): There are two ways to calculate the bulk formation energy. Either by treating the volume and thus surface area of the particle as a perfect sphere, or as a Wulff shape.
codesearchnet
def activate(self, uid=None): if uid is not None: if not isinstance(uid, six.string_types): raise TypeError("uid must be a string") result = self.proxy.activate(uid) status = result.result_status.value if status == enums.ResultStatus.SUCCESS: return else: reason = result.result_reason.value message = result.result_message.value raise exceptions.KmipOperationFailure(status, reason, message)
Activate a managed object stored by a KMIP appliance. Args: uid (string): The unique ID of the managed object to activate. Optional, defaults to None. Returns: None Raises: ClientConnectionNotOpen: if the client connection is unusable KmipOperationFailure: if the operation result is a failure TypeError: if the input argument is invalid
juraj-google-style
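A minimal usage sketch for the `activate` call above, assuming PyKMIP's `ProxyKmipClient`; the key algorithm, length, and connection setup are illustrative and not taken from the dataset row:
```
from kmip.core import enums
from kmip.pie.client import ProxyKmipClient

# Assumed setup: a reachable KMIP appliance configured for this client.
with ProxyKmipClient() as client:
    # Create a symmetric key, then activate it by its unique ID.
    key_uid = client.create(enums.CryptographicAlgorithm.AES, 256)
    client.activate(key_uid)  # returns None on success, raises KmipOperationFailure otherwise
```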
def register_command_handler(self, prefix, handler, help_info, prefix_aliases=None): self._command_handler_registry.register_command_handler(prefix, handler, help_info, prefix_aliases=prefix_aliases) self._tab_completion_registry.extend_comp_items('', [prefix]) if prefix_aliases: self._tab_completion_registry.extend_comp_items('', prefix_aliases)
A wrapper around CommandHandlerRegistry.register_command_handler(). In addition to calling the wrapped register_command_handler() method, this method also registers the top-level tab-completion context based on the command prefixes and their aliases. See the doc string of the wrapped method for more details on the args. Args: prefix: (str) command prefix. handler: (callable) command handler. help_info: (str) help information. prefix_aliases: (list of str) aliases of the command prefix.
github-repos
def to_csv(pipe: BeamEventSet, file_path_prefix: str, schema: Schema, timestamp_key: str='timestamp', **wargs): header_values = [timestamp_key] + schema.index_names() + schema.feature_names() header_string = io.StringIO() header_writer = csv.writer(header_string) header_writer.writerow(header_values) return add_feature_idx_and_flatten(pipe) | 'Group by features' >> beam.GroupByKey() | 'Convert to csv' >> beam.Map(_convert_to_csv) | 'Write csv' >> beam.io.textio.WriteToText(file_path_prefix=file_path_prefix, header=header_string.getvalue(), append_trailing_newlines=False, **wargs)
Writes a Beam EventSet to a file or set of csv files. Limitation: Timestamps are always stored as numerical values. TODO: Support datetime timestamps. Usage example: ``` input_node: tp.EventSetNode = ... ( p | tpb.from_csv("/input.csv", input_node.schema) | ... # processing | tpb.to_csv("/output.csv", output_node.schema) ) ``` Args: pipe: Beam pipe containing an EventSet. file_path_prefix: Path or path matching expression compatible with WriteToText. schema: Schema of the data. If you have a Temporian node, the schema is available with `node.schema`. timestamp_key: Key containing the timestamps. **wargs: Arguments passed to `beam.io.textio.WriteToText`.
github-repos
def sub_chempots(gamma_dict, chempots): coeffs = [gamma_dict[k] for k in gamma_dict.keys()] chempot_vals = [] for k in gamma_dict.keys(): if k not in chempots.keys(): chempot_vals.append(k) elif k == 1: chempot_vals.append(1) else: chempot_vals.append(chempots[k]) return np.dot(coeffs, chempot_vals)
Uses dot product of numpy array to sub chemical potentials into the surface grand potential. This is much faster than using the subs function in sympy. Args: gamma_dict (dict): Surface grand potential equation as a coefficient dictionary chempots (dict): Dictionary assigning each chemical potential (key) in gamma a value Returns: Surface energy as a float
juraj-google-style
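A small illustrative sketch of how `sub_chempots` evaluates a coefficient dictionary; the symbol name and values below are made up, and the function itself is assumed to be in scope together with numpy:
```
import numpy as np
from sympy import Symbol

delu_O = Symbol("delu_O")
# gamma = 2.0 + 0.5 * delu_O, written as a coefficient dictionary with 1 as the constant key
gamma_dict = {1: 2.0, delu_O: 0.5}
chempots = {delu_O: -1.2}

# sub_chempots builds the value vector [1, -1.2] and dots it with the coefficients [2.0, 0.5]:
# 2.0 * 1 + 0.5 * (-1.2) = 1.4
surface_energy = sub_chempots(gamma_dict, chempots)
```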
def get_cbm_vbm(self, tol=0.001, abs_tol=False, spin=None): if spin is None: tdos = self.y if len(self.ydim) == 1 else np.sum(self.y, axis=1) elif spin == Spin.up: tdos = self.y[:, 0] else: tdos = self.y[:, 1] if not abs_tol: tol = tol * tdos.sum() / tdos.shape[0] i_fermi = 0 while self.x[i_fermi] <= self.efermi: i_fermi += 1 i_gap_start = i_fermi while i_gap_start - 1 >= 0 and tdos[i_gap_start - 1] <= tol: i_gap_start -= 1 i_gap_end = i_gap_start while i_gap_end < tdos.shape[0] and tdos[i_gap_end] <= tol: i_gap_end += 1 i_gap_end -= 1 return self.x[i_gap_end], self.x[i_gap_start]
Expects a DOS object and finds the CBM and VBM. Args: tol: tolerance in occupations for determining the gap abs_tol: whether to use an absolute tolerance (True) or a relative one (False) spin: Possible values are None - finds the gap in the summed densities, Up - finds the gap in the up spin channel, Down - finds the gap in the down spin channel. Returns: (cbm, vbm): floats in eV corresponding to the gap
juraj-google-style
def get_layer_timing_signal_learned_1d(channels, layer, num_layers): shape = [num_layers, 1, 1, channels] layer_embedding = ( tf.get_variable( "layer_embedding", shape, initializer=tf.random_normal_initializer(0, channels**-0.5)) * (channels**0.5)) return layer_embedding[layer, :, :, :]
Get an n-dimensional embedding as the layer (vertical) timing signal. Adds embeddings to represent the position of the layer in the tower. Args: channels: dimension of the timing signal layer: layer number num_layers: total number of layers Returns: a Tensor of timing signals [1, 1, channels].
juraj-google-style
def ready(self, node_id, metadata_priority=True): self.maybe_connect(node_id) return self.is_ready(node_id, metadata_priority=metadata_priority)
Check whether a node is connected and ok to send more requests. Arguments: node_id (int): the id of the node to check metadata_priority (bool): Mark node as not-ready if a metadata refresh is required. Default: True Returns: bool: True if we are ready to send to the given node
codesearchnet
def as_object(obj): LOGGER.debug('as_object(%s)', obj) if isinstance(obj, datetime.date): return as_date(obj) elif hasattr(obj, '__dict__'): out = {k: obj.__dict__[k] for k in obj.__dict__ if not k.startswith('_')} for k, v in ( (p, getattr(obj, p)) for p, _ in inspect.getmembers( obj.__class__, lambda x: isinstance(x, property)) ): out[k] = v return out
Return a JSON serializable type for ``obj``. Args: obj (:py:class:`object`): the object to be serialized. Raises: :py:class:`AttributeError`: when ``obj`` is not a Python object. Returns: (dict): JSON serializable type for the given object.
juraj-google-style
def map_seqprop_resnums_to_structprop_resnums(self, resnums, seqprop=None, structprop=None, chain_id=None, use_representatives=False): resnums = ssbio.utils.force_list(resnums) if use_representatives: seqprop = self.representative_sequence structprop = self.representative_structure chain_id = self.representative_chain if (not structprop): raise ValueError('No representative structure set, please specify sequence, structure, and chain ID') elif ((not seqprop) or (not structprop) or (not chain_id)): raise ValueError('Please specify sequence, structure, and chain ID') mapping_to_repchain_index = self._map_seqprop_resnums_to_structprop_chain_index(resnums=resnums, seqprop=seqprop, structprop=structprop, chain_id=chain_id, use_representatives=use_representatives) chain = structprop.chains.get_by_id(chain_id) chain_structure_resnum_mapping = chain.seq_record.letter_annotations['structure_resnums'] final_mapping = {} for (k, v) in mapping_to_repchain_index.items(): k = int(k) rn = chain_structure_resnum_mapping[v] if (rn == float('Inf')): log.warning('{}-{}, {}: structure file does not contain coordinates for this residue'.format(structprop.id, chain_id, k)) else: rn = int(rn) final_mapping[k] = rn index_of_structure_resnum = chain_structure_resnum_mapping.index(rn) format_data = {'seqprop_id': seqprop.id, 'seqprop_resid': seqprop[(k - 1)], 'seqprop_resnum': k, 'structprop_id': structprop.id, 'structprop_chid': chain_id, 'structprop_resid': chain.seq_record[index_of_structure_resnum], 'structprop_resnum': rn} if (seqprop[(k - 1)] != chain.seq_record[index_of_structure_resnum]): log.warning('Sequence {seqprop_id} residue {seqprop_resid}{seqprop_resnum} does not match to structure {structprop_id}-{structprop_chid} residue {structprop_resid}{structprop_resnum}. NOTE: this may be due to structural differences'.format(**format_data)) else: log.debug('Sequence {seqprop_id} residue {seqprop_resid}{seqprop_resnum} is mapped to structure {structprop_id}-{structprop_chid} residue {structprop_resid}{structprop_resnum}'.format(**format_data)) return final_mapping
Map a residue number in any SeqProp to the structure's residue number for a specified chain. Args: resnums (int, list): Residue numbers in the sequence seqprop (SeqProp): SeqProp object structprop (StructProp): StructProp object chain_id (str): Chain ID to map to use_representatives (bool): If the representative sequence and structure should be used. If True, seqprop, structprop, and chain_id do not need to be defined. Returns: dict: Mapping of sequence residue numbers to structure residue numbers
codesearchnet
def __init__(self, flag_desc, help): self.desc = flag_desc self.help = help self.default = '' self.tips = ''
Create the flag object. Args: flag_desc: The command line forms this could take. (string) help: The help text. (string)
juraj-google-style
def scan_file(path): path = os.path.abspath(path) if settings.USE_CLAMD: return clamd.scan_file(path) else: return clamscan.scan_file(path)
Scan `path` for viruses using ``clamd`` or ``clamscan`` (depends on :attr:`settings.USE_CLAMD`). Args: path (str): Relative or absolute path of file/directory you need to scan. Returns: dict: ``{filename: ("FOUND", "virus type")}`` or blank dict. Raises: ValueError: When the server is not running. AssertionError: When the internal file doesn't exist.
juraj-google-style
def type_check(type_constraint, datum, is_input): datum_type = 'input' if is_input else 'output' try: check_constraint(type_constraint, datum) except CompositeTypeHintError as e: _, _, tb = sys.exc_info() raise TypeCheckError(e.args[0]).with_traceback(tb) except SimpleTypeHintError: error_msg = "According to type-hint expected %s should be of type %s. Instead, received '%s', an instance of type %s." % (datum_type, type_constraint, datum, type(datum)) _, _, tb = sys.exc_info() raise TypeCheckError(error_msg).with_traceback(tb)
Typecheck a PTransform related datum according to a type constraint. This function is used to optionally type-check either an input or an output to a PTransform. Args: type_constraint: An instance of a typehints.TypeConstraint, one of the white-listed builtin Python types, or a custom user class. datum: An instance of a Python object. is_input: True if 'datum' is an input to a PTransform's DoFn. False otherwise. Raises: TypeError: If 'datum' fails to type-check according to 'type_constraint'.
github-repos
def account_id(self, value): if value == self._defaults['ai.user.accountId'] and 'ai.user.accountId' in self._values: del self._values['ai.user.accountId'] else: self._values['ai.user.accountId'] = value
The account_id property. Args: value (string): the property value.
juraj-google-style
def _remove_duplicate_points(points, groups): group_initial_ids = groups[:, GPFIRST] to_be_reduced = np.zeros(len(group_initial_ids)) to_be_removed = [] for (ig, g) in enumerate(groups): (iid, typ, pid) = (g[GPFIRST], g[GTYPE], g[GPID]) if ((pid != (- 1)) and (typ != 1) and (groups[pid][GTYPE] != 1)): to_be_removed.append(iid) to_be_reduced[(ig + 1):] += 1 groups[:, GPFIRST] = (groups[:, GPFIRST] - to_be_reduced) points = np.delete(points, to_be_removed, axis=0) return (points, groups)
Removes the duplicate points from the beginning of a section, if they are present in points-groups representation. Returns: points, groups with unique points.
codesearchnet
def get_chromosomes(self, sv=False): if sv: res = self.db.structural_variant.distinct('chrom') else: res = self.db.variant.distinct('chrom') return res
Return a list of all chromosomes found in the database Args: sv(bool): if sv variants should be chosen Returns: res(iterable(str)): An iterable with all chromosomes in the database
juraj-google-style
def sample(self, size=None): self._recompute() if size is None: n = np.random.randn(len(self._t)) else: n = np.random.randn(len(self._t), size) n = self.solver.dot_L(n) if size is None: return self.mean.get_value(self._t) + n[:, 0] return self.mean.get_value(self._t)[None, :] + n.T
Sample from the prior distribution over datasets Args: size (Optional[int]): The number of samples to draw. Returns: array[n] or array[size, n]: The samples from the prior distribution over datasets.
juraj-google-style
def addon_name(self): with self.selenium.context(self.selenium.CONTEXT_CHROME): el = self.find_description() return el.find_element(By.CSS_SELECTOR, 'b').text
Provide access to the add-on name. Returns: str: Add-on name.
codesearchnet
def plot_structures(self, structures, fontsize=6, **kwargs): import matplotlib.pyplot as plt nrows = len(structures) (fig, axes) = plt.subplots(nrows=nrows, ncols=1, sharex=True, squeeze=False) for (i, (ax, structure)) in enumerate(zip(axes.ravel(), structures)): self.get_plot(structure, fontsize=fontsize, ax=ax, with_labels=(i == (nrows - 1)), **kwargs) (spg_symbol, spg_number) = structure.get_space_group_info() ax.set_title('{} {} ({}) '.format(structure.formula, spg_symbol, spg_number)) return fig
Plot diffraction patterns for multiple structures on the same figure. Args: structures (Structure): List of structures two_theta_range ([float of length 2]): Tuple for range of two_thetas to calculate in degrees. Defaults to (0, 90). Set to None if you want all diffracted beams within the limiting sphere of radius 2 / wavelength. annotate_peaks (bool): Whether to annotate the peaks with plane information. fontsize: (int) fontsize for peak labels.
codesearchnet
def _set_resultdir(name=None): resultdir_name = (name or ('enos_' + datetime.today().isoformat())) resultdir_path = os.path.abspath(resultdir_name) if os.path.isfile(resultdir_path): raise EnosFilePathError(resultdir_path, ('Result directory cannot be created due to existing file %s' % resultdir_path)) if (not os.path.isdir(resultdir_path)): os.mkdir(resultdir_path) logger.info(('Generate results directory %s' % resultdir_path)) link_path = SYMLINK_NAME if os.path.lexists(link_path): os.remove(link_path) try: os.symlink(resultdir_path, link_path) logger.info(('Symlink %s to %s' % (resultdir_path, link_path))) except OSError: logger.warning(('Symlink %s to %s failed' % (resultdir_path, link_path))) return resultdir_path
Set or get the directory to store experiment results. Looks at the `name` and creates the directory if it doesn't exist or returns it in other cases. If the name is `None`, then the function generates a unique name for the results directory. Finally, it links the directory to `SYMLINK_NAME`. Args: name (str): file path to an existing directory. It can be either an absolute path or a path relative to the current working directory. Returns: the file path of the results directory.
codesearchnet
def _create_variables_and_slots(self) -> Dict[Text, Dict[Text, tf_variables.Variable]]: variables = {} for table in self._table_config: variables[table.name] = self._create_variables(table, trainable=True) return variables
Create variables for TPU embeddings. Note that this will always ensure that the variable is created under the TPUStrategy. Returns: A dict of dicts. The outer dict is keyed by the table names and the inner dicts are keyed by 'parameters' and the slot variable names.
github-repos
def is_video(mime: str) -> bool: return mime in INPUT_VIDEO_TYPES or mime.startswith('video/')
Returns whether the content is a video. Args: mime: The mime string. Returns: True if it is a video, False otherwise.
github-repos
def matches_filters(self, node): visible = self.visible if self.options['text']: if isregex(self.options['text']): regex = self.options['text'] elif (self.exact_text is True): regex = re.compile('\\A{}\\Z'.format(re.escape(self.options['text']))) else: regex = toregex(self.options['text']) text = normalize_text((node.all_text if (visible == 'all') else node.visible_text)) if (not regex.search(text)): return False if isinstance(self.exact_text, (bytes_, str_)): regex = re.compile('\\A{}\\Z'.format(re.escape(self.exact_text))) text = normalize_text((node.all_text if (visible == 'all') else node.visible_text)) if (not regex.search(text)): return False if (visible == 'visible'): if (not node.visible): return False elif (visible == 'hidden'): if node.visible: return False for (name, node_filter) in iter(self._node_filters.items()): if (name in self.filter_options): if (not node_filter.matches(node, self.filter_options[name])): return False elif node_filter.has_default: if (not node_filter.matches(node, node_filter.default)): return False if (self.options['filter'] and (not self.options['filter'](node))): return False return True
Returns whether the given node matches all filters. Args: node (Element): The node to evaluate. Returns: bool: Whether the given node matches.
codesearchnet
def no_results(channel): gui = ui_embed.UI(channel, 'No results', ':c', modulename=modulename, colour=16746496) return gui
Creates an embed UI for when there are no results Args: channel (discord.Channel): The Discord channel to bind the embed to Returns: ui (ui_embed.UI): The embed UI object
codesearchnet
def DeletePendingNotification(self, timestamp): shown_notifications = self.Get(self.Schema.SHOWN_NOTIFICATIONS) if (not shown_notifications): shown_notifications = self.Schema.SHOWN_NOTIFICATIONS() pending = self.Get(self.Schema.PENDING_NOTIFICATIONS) if (not pending): return delete_count = 0 for idx in reversed(range(0, len(pending))): if (pending[idx].timestamp == timestamp): shown_notifications.Append(pending[idx]) pending.Pop(idx) delete_count += 1 if (delete_count > 1): raise UniqueKeyError(('Multiple notifications at %s' % timestamp)) self.Set(self.Schema.PENDING_NOTIFICATIONS, pending) self.Set(self.Schema.SHOWN_NOTIFICATIONS, shown_notifications)
Deletes the pending notification with the given timestamp. Args: timestamp: The timestamp of the notification. Assumed to be unique. Raises: UniqueKeyError: Raised if multiple notifications have the timestamp.
codesearchnet
def get(self, id): for obj in self.model.db: if obj["id"] == id: return self._cast_model(obj) return None
Get an object by id Args: id (int): Object id Returns: Object: Object with the specified id None: If the object is not found
juraj-google-style
def save(self, clean=True): ret = {} if clean: self._dirty = False else: ret['_dirty'] = self._dirty return ret
Serialize into raw representation. Clears the dirty bit by default. Args: clean (bool): Whether to clear the dirty bit. Returns: dict: Raw.
juraj-google-style
def Run(self, conf, args): try: options, args = self.parser.parse_args(args) except SystemExit as e: return e.code if options.maps: self.log.info('Setting configured maps to %s', options.maps) conf.maps = options.maps for map_name in conf.maps: if map_name == config.MAP_AUTOMOUNT: value_list = self.GetAutomountMapMetadata(conf, epoch=options.epoch) self.log.debug('Value list: %r', value_list) for value_dict in value_list: self.log.debug('Value dict: %r', value_dict) output = options.automount_template % value_dict print(output) else: for value_dict in self.GetSingleMapMetadata(map_name, conf, epoch=options.epoch): self.log.debug('Value dict: %r', value_dict) output = options.template % value_dict print(output) return os.EX_OK
Run the Status command. See Command.Run() for full documentation on the Run() method. Args: conf: nss_cache.config.Config object args: list of arguments to be parsed by this command Returns: zero on success, nonzero on error
github-repos
def _get_resource_hash(zone_name, record): record_data = defaultdict(int, record) if type(record_data['GeoLocation']) == dict: record_data['GeoLocation'] = ":".join(["{}={}".format(k, v) for k, v in record_data['GeoLocation'].items()]) args = [ zone_name, record_data['Name'], record_data['Type'], record_data['Weight'], record_data['Region'], record_data['GeoLocation'], record_data['Failover'], record_data['HealthCheckId'], record_data['TrafficPolicyInstanceId'] ] return get_resource_id('r53r', args)
Returns the last ten digits of the sha256 hash of the combined arguments. Useful for generating unique resource IDs Args: zone_name (`str`): The name of the DNS Zone the record belongs to record (`dict`): A record dict to generate the hash from Returns: `str`
juraj-google-style
def FindExecutableOnPath(executable, path=None, pathext=None, allow_extensions=False): if not allow_extensions and os.path.splitext(executable)[1]: raise ValueError('FindExecutableOnPath({0},...) failed because first argument must not have an extension.'.format(executable)) if os.path.dirname(executable): raise ValueError('FindExecutableOnPath({0},...) failed because first argument must not have a path.'.format(executable)) if path is None: effective_path = _GetSystemPath() else: effective_path = path effective_pathext = pathext if pathext is not None else _PlatformExecutableExtensions(platforms.OperatingSystem.Current()) return _FindExecutableOnPath(executable, effective_path, effective_pathext)
Searches for `executable` in the directories listed in `path` or $PATH. Executable must not contain a directory or an extension. Args: executable: The name of the executable to find. path: A list of directories to search separated by 'os.pathsep'. If None then the system PATH is used. pathext: An iterable of file name extensions to use. If None then platform specific extensions are used. allow_extensions: A boolean flag indicating whether extensions in the executable are allowed. Returns: The path of 'executable' (possibly with a platform-specific extension) if found and executable, None if not found. Raises: ValueError: if executable has a path or an extension, and extensions are not allowed, or if there's an internal error.
github-repos
def _sign_of(money): units = money.units nanos = money.nanos if units: if (units > 0): return 1 elif (units < 0): return (- 1) if nanos: if (nanos > 0): return 1 elif (nanos < 0): return (- 1) return 0
Determines the amount sign of a money instance Args: money (:class:`endpoints_management.gen.servicecontrol_v1_messages.Money`): the instance to test Return: int: 1, 0 or -1
codesearchnet
def get_html_titles(index_page): dom = dhtmlparser.parseString(index_page) title_tags = dom.find("title") return [ SourceString(tag.getContent().strip(), "HTML") for tag in title_tags if tag.getContent().strip() ]
Return list of titles parsed from HTML. Args: index_page (str): HTML content of the page you wish to analyze. Returns: list: List of :class:`.SourceString` objects.
juraj-google-style
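A quick illustrative call of `get_html_titles`; the HTML snippet is made up, and the function and its `SourceString` return type are assumed to be in scope:
```
page = "<html><head><title> Example page </title></head><body></body></html>"
titles = get_html_titles(page)
print(titles)  # expected: one SourceString with value "Example page" and source "HTML"
```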
def _get_vep_transcript(self, transcript_info): transcript = Transcript( hgnc_symbol = transcript_info.get('SYMBOL'), transcript_id = transcript_info.get('Feature'), ensembl_id = transcript_info.get('Gene'), biotype = transcript_info.get('BIOTYPE'), consequence = transcript_info.get('Consequence'), strand = transcript_info.get('STRAND'), sift = transcript_info.get('SIFT'), polyphen = transcript_info.get('PolyPhen'), exon = transcript_info.get('EXON'), HGVSc = transcript_info.get('HGVSc'), HGVSp = transcript_info.get('HGVSp'), GMAF = transcript_info.get('GMAF'), ExAC_MAF = transcript_info.get('ExAC_MAF') ) return transcript
Create a Transcript based on the VEP annotation Args: transcript_info (dict): A dict with VEP info Returns: transcript (puzzle.models.Transcript): A Transcript
juraj-google-style
def filterbanks(num_filter, coefficients, sampling_freq, low_freq=None, high_freq=None): high_freq = (high_freq or (sampling_freq / 2)) low_freq = (low_freq or 300) s = 'High frequency cannot be greater than half of the sampling frequency!' assert (high_freq <= (sampling_freq / 2)), s assert (low_freq >= 0), 'low frequency cannot be less than zero!' mels = np.linspace(functions.frequency_to_mel(low_freq), functions.frequency_to_mel(high_freq), (num_filter + 2)) hertz = functions.mel_to_frequency(mels) freq_index = np.floor((((coefficients + 1) * hertz) / sampling_freq)).astype(int) filterbank = np.zeros([num_filter, coefficients]) for i in range(0, num_filter): left = int(freq_index[i]) middle = int(freq_index[(i + 1)]) right = int(freq_index[(i + 2)]) z = np.linspace(left, right, num=((right - left) + 1)) filterbank[i, left:(right + 1)] = functions.triangle(z, left=left, middle=middle, right=right) return filterbank
Compute the Mel-filterbanks. Each filter is stored in one row. The columns correspond to fft bins. Args: num_filter (int): the number of filters in the filterbank, default 20. coefficients (int): (fftpoints//2 + 1). Default is 257. sampling_freq (float): the samplerate of the signal we are working with. It affects mel spacing. low_freq (float): lowest band edge of mel filters, default 300 Hz high_freq (float): highest band edge of mel filters, default samplerate/2 Returns: array: A numpy array of size num_filter x (fftpoints//2 + 1) containing the filterbank
codesearchnet
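A hedged usage sketch for `filterbanks` above; the parameter values are illustrative, and the function together with its `functions` helper module is assumed to be importable from the package the row was taken from:
```
# 40 triangular mel filters for 16 kHz audio and a 512-point FFT (257 bins).
fb = filterbanks(num_filter=40, coefficients=257, sampling_freq=16000,
                 low_freq=300, high_freq=8000)
print(fb.shape)  # expected: (40, 257), one filter per row
```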
def linear_interpolate_rank(tensor1, tensor2, coeffs, rank=1): _, _, _, num_channels = common_layers.shape_list(tensor1) diff_sq_sum = tf.reduce_sum((tensor1 - tensor2)**2, axis=(0, 1, 2)) _, feature_ranks = tf.math.top_k(diff_sq_sum, k=rank) feature_rank = feature_ranks[-1] channel_inds = tf.range(num_channels, dtype=tf.int32) channel_mask = tf.equal(channel_inds, feature_rank) ones_t = tf.ones(num_channels, dtype=tf.float32) zeros_t = tf.zeros(num_channels, dtype=tf.float32) interp_tensors = [] for coeff in coeffs: curr_coeff = tf.where(channel_mask, coeff * ones_t, zeros_t) interp_tensor = tensor1 + curr_coeff * (tensor2 - tensor1) interp_tensors.append(interp_tensor) return tf.concat(interp_tensors, axis=0)
Linearly interpolate channel at "rank" between two tensors. The channels are ranked according to their L2 norm between tensor1[channel] and tensor2[channel]. Args: tensor1: 4-D Tensor, NHWC tensor2: 4-D Tensor, NHWC coeffs: list of floats. rank: integer. Returns: interp_latents: list of interpolated 4-D Tensors, shape=(NHWC)
juraj-google-style
def _slice_single_param(param, param_event_ndims, slices, dist_batch_shape): param_shape = tf.shape(input=param) insert_ones = tf.ones( [tf.size(input=dist_batch_shape) + param_event_ndims - tf.rank(param)], dtype=param_shape.dtype) new_param_shape = tf.concat([insert_ones, param_shape], axis=0) full_batch_param = tf.reshape(param, new_param_shape) param_slices = [] param_dim_idx = 0 batch_dim_idx = 0 for slc in slices: if slc is tf.newaxis: param_slices.append(slc) continue if slc is Ellipsis: if batch_dim_idx < 0: raise ValueError('Found multiple `...` in slices {}'.format(slices)) param_slices.append(slc) num_remaining_non_newaxis_slices = sum( [s is not tf.newaxis for s in slices[slices.index(Ellipsis) + 1:]]) batch_dim_idx = -num_remaining_non_newaxis_slices param_dim_idx = batch_dim_idx - param_event_ndims continue param_dim_size = new_param_shape[param_dim_idx] batch_dim_size = dist_batch_shape[batch_dim_idx] is_broadcast = batch_dim_size > param_dim_size if isinstance(slc, slice): start, stop, step = slc.start, slc.stop, slc.step if start is not None: start = tf.where(is_broadcast, 0, start) if stop is not None: stop = tf.where(is_broadcast, 1, stop) if step is not None: step = tf.where(is_broadcast, 1, step) param_slices.append(slice(start, stop, step)) else: param_slices.append(tf.where(is_broadcast, 0, slc)) param_dim_idx += 1 batch_dim_idx += 1 param_slices.extend([ALL_SLICE] * param_event_ndims) return full_batch_param.__getitem__(param_slices)
Slices a single parameter of a distribution. Args: param: A `Tensor`, the original parameter to slice. param_event_ndims: `int` event parameterization rank for this parameter. slices: A `tuple` of normalized slices. dist_batch_shape: The distribution's batch shape `Tensor`. Returns: new_param: A `Tensor`, batch-sliced according to slices.
juraj-google-style
def _check_expiration(self, url: str, data: 'SavedEndpoint') -> 'SavedEndpoint': if data.expires_after < time.time(): del self.data[url] data = None return data
Checks the expiration time for data for a url. If the data has expired, it is deleted from the cache. Args: url: url to check data: page of data for that url Returns: value of either the passed data or None if it expired
juraj-google-style
def install_event_handlers(self, categories=None, handlers=None): if ((categories is not None) and (handlers is not None)): raise ValueError('categories and handlers are mutually exclusive!') from .events import get_event_handler_classes if categories: raise NotImplementedError() handlers = [cls() for cls in get_event_handler_classes(categories=categories)] else: handlers = (handlers or [cls() for cls in get_event_handler_classes()]) self._event_handlers = handlers
Install the `EventHandler`s for this `Node`. If no argument is provided, the default list of handlers is installed. Args: categories: List of categories to install e.g. base + can_change_physics handlers: explicit list of :class:`EventHandler` instances. This is the most flexible way to install handlers. .. note:: categories and handlers are mutually exclusive.
codesearchnet
def _replica_ctx_all_reduce(self, reduce_op, value, options=None): if options is None: options = collective_util.Options() replica_context = get_replica_context() assert replica_context, '`StrategyExtended._replica_ctx_all_reduce` must be called in a replica context' def merge_fn(_, flat_value): return self.batch_reduce_to(reduce_op, [(v, v) for v in flat_value], options) reduced = replica_context.merge_call(merge_fn, args=(nest.flatten(value),)) return nest.pack_sequence_as(value, reduced)
All-reduce `value` across all replicas so that all get the final result. If `value` is a nested structure of tensors, all-reduces of these tensors will be batched when possible. `options` can be set to hint the batching behavior. This API must be called in a replica context. Args: reduce_op: A `tf.distribute.ReduceOp` value specifying how values should be combined. value: Value to be reduced. A tensor or a nested structure of tensors. options: A `tf.distribute.experimental.CommunicationOptions`. Options to perform collective operations. This overrides the default options if the `tf.distribute.Strategy` takes one in the constructor. Returns: A tensor or a nested structure of tensors with the reduced values. The structure is the same as `value`.
github-repos
class HfDeepSpeedConfig(DeepSpeedConfig): def __init__(self, config_file_or_dict): set_hf_deepspeed_config(self) dep_version_check('accelerate') dep_version_check('deepspeed') super().__init__(config_file_or_dict)
This object contains a DeepSpeed configuration dictionary and can be quickly queried for things like zero stage. A `weakref` of this object is stored in the module's globals to be able to access the config from areas where things like the Trainer object is not available (e.g. `from_pretrained` and `_get_resized_embeddings`). Therefore it's important that this object remains alive while the program is still running. [`Trainer`] uses the `HfTrainerDeepSpeedConfig` subclass instead. That subclass has logic to sync the configuration with values of [`TrainingArguments`] by replacing special placeholder values: `"auto"`. Without this special logic the DeepSpeed configuration is not modified in any way. Args: config_file_or_dict (`Union[str, Dict]`): path to DeepSpeed config file or dict.
github-repos
def run_stages(self, stage_context: translations.TransformContext, stages: List[translations.Stage]) -> 'RunnerResult': worker_handler_manager = WorkerHandlerManager(stage_context.components.environments, self._provision_info) pipeline_metrics = MetricsContainer('') pipeline_metrics.get_counter(MetricName(str(type(self)), self.NUM_FUSED_STAGES_COUNTER, urn='internal:' + self.NUM_FUSED_STAGES_COUNTER)).update(len(stages)) monitoring_infos_by_stage: MutableMapping[str, Iterable['metrics_pb2.MonitoringInfo']] = {} runner_execution_context = execution.FnApiRunnerExecutionContext(stages, worker_handler_manager, stage_context.components, stage_context.safe_coders, stage_context.data_channel_coders, self._num_workers, split_managers=self._split_managers) try: with self.maybe_profile(): runner_execution_context.setup() bundle_counter = 0 while len(runner_execution_context.queues.ready_inputs) > 0: _LOGGER.debug('Remaining ready bundles: %s\n\tWatermark pending bundles: %s\n\tTime pending bundles: %s', len(runner_execution_context.queues.ready_inputs), len(runner_execution_context.queues.watermark_pending_inputs), len(runner_execution_context.queues.time_pending_inputs)) consuming_stage_name, bundle_input = runner_execution_context.queues.ready_inputs.deque() stage = runner_execution_context.stages[consuming_stage_name] bundle_context_manager = runner_execution_context.bundle_manager_for(stage) _BUNDLE_LOGGER.debug('Running bundle for stage %s\n\tExpected outputs: %s timers: %s', bundle_context_manager.stage.name, bundle_context_manager.stage_data_outputs, bundle_context_manager.stage_timer_outputs) assert consuming_stage_name == bundle_context_manager.stage.name bundle_counter += 1 bundle_results = self._execute_bundle(runner_execution_context, bundle_context_manager, bundle_input) if consuming_stage_name in monitoring_infos_by_stage: monitoring_infos_by_stage[consuming_stage_name] = consolidate_monitoring_infos(itertools.chain(bundle_results.process_bundle.monitoring_infos, monitoring_infos_by_stage[consuming_stage_name])) else: assert isinstance(bundle_results.process_bundle.monitoring_infos, Iterable) monitoring_infos_by_stage[consuming_stage_name] = bundle_results.process_bundle.monitoring_infos if '' not in monitoring_infos_by_stage: monitoring_infos_by_stage[''] = list(pipeline_metrics.to_runner_api_monitoring_infos('').values()) else: monitoring_infos_by_stage[''] = consolidate_monitoring_infos(itertools.chain(pipeline_metrics.to_runner_api_monitoring_infos('').values(), monitoring_infos_by_stage[''])) if len(runner_execution_context.queues.ready_inputs) == 0: self._schedule_ready_bundles(runner_execution_context) assert len(runner_execution_context.queues.ready_inputs) == 0, 'A total of %d ready bundles did not execute.' % len(runner_execution_context.queues.ready_inputs) assert len(runner_execution_context.queues.watermark_pending_inputs) == 0, 'A total of %d watermark-pending bundles did not execute.' % len(runner_execution_context.queues.watermark_pending_inputs) assert len(runner_execution_context.queues.time_pending_inputs) == 0, 'A total of %d time-pending bundles did not execute.' % len(runner_execution_context.queues.time_pending_inputs) finally: worker_handler_manager.close_all() return RunnerResult(runner.PipelineState.DONE, monitoring_infos_by_stage)
Run a list of topologically-sorted stages in batch mode. Args: stage_context (translations.TransformContext) stages (list[fn_api_runner.translations.Stage])
github-repos
def sg_summary_param(tensor, prefix=None, name=None): prefix = ('' if (prefix is None) else (prefix + '/')) name = ((prefix + _pretty_name(tensor)) if (name is None) else (prefix + name)) _scalar((name + '/abs'), tf.reduce_mean(tf.abs(tensor))) _histogram((name + '/abs-h'), tf.abs(tensor))
r"""Register `tensor` to summary report as `parameters` Args: tensor: A `Tensor` to log as parameters prefix: A `string`. A prefix to display in the tensor board web UI. name: A `string`. A name to display in the tensor board web UI. Returns: None
codesearchnet
def create_metadata(self, resource, keys_vals): self.metadata_service.set_auth(self._token_metadata) self.metadata_service.create(resource, keys_vals)
Associates new key-value pairs with the given resource. Will attempt to add all key-value pairs even if some fail. Args: resource (intern.resource.boss.BossResource) keys_vals (dictionary): Collection of key-value pairs to assign to given resource. Raises: HTTPErrorList on failure.
juraj-google-style
def __init__(self, channel): self.GetModelStatus = channel.unary_unary( '/tensorflow.serving.ModelService/GetModelStatus', request_serializer=tensorflow__serving_dot_apis_dot_get__model__status__pb2.GetModelStatusRequest.SerializeToString, response_deserializer=tensorflow__serving_dot_apis_dot_get__model__status__pb2.GetModelStatusResponse.FromString, )
Constructor. Args: channel: A grpc.Channel.
juraj-google-style
def create(self, python=None, system_site=False, always_copy=False): command = 'virtualenv' if python: command = '{0} --python={1}'.format(command, python) if system_site: command = '{0} --system-site-packages'.format(command) if always_copy: command = '{0} --always-copy'.format(command) command = '{0} {1}'.format(command, self.path) self._execute(command)
Create a new virtual environment. Args: python (str): The name or path of a python interpreter to use while creating the virtual environment. system_site (bool): Whether or not use use the system site packages within the virtual environment. Default is False. always_copy (bool): Whether or not to force copying instead of symlinking in the virtual environment. Default is False.
juraj-google-style
def sagemaker_auth(overrides={}, path='.'): api_key = overrides.get(env.API_KEY, Api().api_key) if (api_key is None): raise ValueError("Can't find W&B ApiKey, set the WANDB_API_KEY env variable or run `wandb login`") overrides[env.API_KEY] = api_key with open(os.path.join(path, 'secrets.env'), 'w') as file: for (k, v) in six.iteritems(overrides): file.write('{}={}\n'.format(k, v))
Write a secrets.env file with the W&B ApiKey and any additional secrets passed. Args: overrides (dict, optional): Additional environment variables to write to secrets.env path (str, optional): The path to write the secrets file.
codesearchnet
def get_settings(category='All'): if (category.lower() in ['all', '*']): category = '*' elif (category.lower() not in [x.lower() for x in categories]): raise KeyError('Invalid category: "{0}"'.format(category)) cmd = '/get /category:"{0}"'.format(category) results = _auditpol_cmd(cmd) ret = {} for line in results[3:]: if (' ' in line.strip()): ret.update(dict(list(zip(*([iter(re.split('\\s{2,}', line.strip()))] * 2))))) return ret
Get the current configuration for all audit settings specified in the category Args: category (str): One of the nine categories to return. Can also be ``All`` to return the settings for all categories. Valid options are: - Account Logon - Account Management - Detailed Tracking - DS Access - Logon/Logoff - Object Access - Policy Change - Privilege Use - System - All Default value is ``All`` Returns: dict: A dictionary containing all subcategories for the specified category along with their current configuration Raises: KeyError: On invalid category CommandExecutionError: If an error is encountered retrieving the settings Usage: .. code-block:: python import salt.utils.win_lgpo_auditpol # Get current state of all audit settings salt.utils.win_lgpo_auditpol.get_settings() # Get the current state of all audit settings in the "Account Logon" # category salt.utils.win_lgpo_auditpol.get_settings(category="Account Logon")
codesearchnet
def register_filter(self, filter_name, filter_ref, force=False): if not force and (filter_name in self.filters_list()): self.log_warning("Filter %s already exists, ignore redefinition." % filter_name) return self.__jinja2_environment.filters[filter_name] = filter_ref
Add/register one filter. Args: filter_name (str): Filter name used inside :program:`Jinja2` tags. filter_ref: Reference to the filter itself, i.e. the corresponding :program:`Python` function. force (bool): If set to ``True``, forces the registration of a filter no matter if it already exists or not. Note: The list of user added/registered filters can be retrieved with :meth:`registered_filters_list`
juraj-google-style
def __init__(self, encoding_method=None, parent=None, **kwargs): if not encoding_method or not parent: raise ValueError('Missing encoding method or parent value.') super(EncodedStreamPathSpec, self).__init__(parent=parent, **kwargs) self.encoding_method = encoding_method
Initializes a path specification. Note that the encoded stream path specification must have a parent. Args: encoding_method (Optional[str]): method used to the encode the data. parent (Optional[PathSpec]): parent path specification. Raises: ValueError: when encoding method or parent are not set.
juraj-google-style
def __tf_tensor__(self, dtype=None, name=None): pass
Converts this object to a Tensor. Args: dtype: data type for the returned Tensor name: a name for the operations which create the Tensor Returns: A Tensor.
github-repos
def add_header(self, key, value, **params): key = self.escape(key) ci_key = key.casefold() def quoted_params(items): for p in items: param_name = self.escape(p[0]) param_val = self.de_quote(self.escape(p[1])) yield param_name, param_val sorted_items = sorted(params.items()) quoted_iter = ('%s="%s"' % p for p in quoted_params(sorted_items)) param_str = ' '.join(quoted_iter) if param_str: value = "%s; %s" % (value, param_str) self._header_data[ci_key] = (key, value)
Add a header to the collection, including potential parameters. Args: key (str): The name of the header value (str): The value to store under that key params: Optional parameters to be appended to the value, automatically formatted in a standard way
juraj-google-style
def split_recursive(self, depth: int, min_width: int, min_height: int, max_horizontal_ratio: float, max_vertical_ratio: float, seed: Optional[tcod.random.Random]=None) -> None: cdata = self._as_cdata() lib.TCOD_bsp_split_recursive(cdata, (seed or ffi.NULL), depth, min_width, min_height, max_horizontal_ratio, max_vertical_ratio) self._unpack_bsp_tree(cdata)
Divide this partition recursively. Args: depth (int): The maximum depth to divide this object recursively. min_width (int): The minimum width of any individual partition. min_height (int): The minimum height of any individual partition. max_horizontal_ratio (float): Prevent creating a horizontal ratio more extreme than this. max_vertical_ratio (float): Prevent creating a vertical ratio more extreme than this. seed (Optional[tcod.random.Random]): The random number generator to use.
codesearchnet
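A short usage sketch for `split_recursive`, assuming the python-tcod `BSP` API; the map size and split limits are arbitrary:
```
import tcod.bsp

bsp = tcod.bsp.BSP(x=0, y=0, width=80, height=50)
bsp.split_recursive(depth=4, min_width=6, min_height=6,
                    max_horizontal_ratio=1.5, max_vertical_ratio=1.5)

# Leaf nodes are the final partitions, e.g. rooms of a dungeon map.
for node in bsp.pre_order():
    if not node.children:
        print(node.x, node.y, node.width, node.height)
```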
def get_latest_package(name, range_=None, paths=None, error=False): it = iter_packages(name, range_=range_, paths=paths) try: return max(it, key=lambda x: x.version) except ValueError: if error: raise PackageFamilyNotFoundError("No such package family %r" % name) return None
Get the latest package for a given package name. Args: name (str): Package name. range_ (`VersionRange`): Version range to search within. paths (list of str, optional): paths to search for package families, defaults to `config.packages_path`. error (bool): If True, raise an error if no package is found. Returns: `Package` object, or None if no package is found.
juraj-google-style
def instrument(self, package, options=None, runner=None, handler=None) -> bytes: if runner is None: runner = DEFAULT_INSTRUMENTATION_RUNNER if options is None: options = {} options_list = [] for option_key, option_value in options.items(): options_list.append('-e %s %s' % (option_key, option_value)) options_string = ' '.join(options_list) instrumentation_command = 'am instrument -r -w %s %s/%s' % (options_string, package, runner) logging.info('AndroidDevice|%s: Executing adb shell %s', self.serial, instrumentation_command) if handler is None: return self._exec_adb_cmd('shell', instrumentation_command, shell=False, timeout=None, stderr=None) else: return self._execute_adb_and_process_stdout('shell', instrumentation_command, shell=False, handler=handler)
Runs an instrumentation command on the device. This is a convenience wrapper to avoid parameter formatting. Example: .. code-block:: python device.instrument( 'com.my.package.test', options = { 'class': 'com.my.package.test.TestSuite', }, ) Args: package: string, the package of the instrumentation tests. options: dict, the instrumentation options including the test class. runner: string, the test runner name, which defaults to DEFAULT_INSTRUMENTATION_RUNNER. handler: optional func, when specified the function is used to parse the instrumentation stdout line by line as the output is generated; otherwise, the stdout is simply returned once the instrumentation is finished. Returns: The stdout of instrumentation command or the stderr if the handler is set.
github-repos
def add_residues_highlight_to_nglview(view, structure_resnums, chain, res_color='red'): chain = ssbio.utils.force_list(chain) if isinstance(structure_resnums, list): structure_resnums = list(set(structure_resnums)) elif isinstance(structure_resnums, int): structure_resnums = ssbio.utils.force_list(structure_resnums) else: raise ValueError('Input must either be a residue number of a list of residue numbers') to_show_chains = '( ' for c in chain: to_show_chains += ':{} or'.format(c) to_show_chains = to_show_chains.strip(' or ') to_show_chains += ' )' to_show_res = '( ' for m in structure_resnums: to_show_res += '{} or '.format(m) to_show_res = to_show_res.strip(' or ') to_show_res += ' )' log.info('Selection: {} and not hydrogen and {}'.format(to_show_chains, to_show_res)) view.add_ball_and_stick(selection='{} and not hydrogen and {}'.format(to_show_chains, to_show_res), color=res_color)
Add a residue number or numbers to an NGLWidget view object. Args: view (NGLWidget): NGLWidget view object structure_resnums (int, list): Residue number(s) to highlight, structure numbering chain (str, list): Chain ID or IDs that the residues are a part of. If not provided, all chains in the mapped_chains attribute will be used. If that is also empty, an exception is raised. res_color (str): Color to highlight residues with
codesearchnet
def stage(self, name, pipeline_counter=None): return Stage( self.server, pipeline_name=self.name, stage_name=name, pipeline_counter=pipeline_counter, )
Helper to instantiate a :class:`gocd.api.stage.Stage` object Args: name: The name of the stage pipeline_counter: Returns:
juraj-google-style
def create_customer(self, *, full_name, email): payload = {'fullName': full_name, 'email': email} return self.client._post((self.url + 'customers'), json=payload, headers=self.get_headers())
Creation of a customer in the system. Args: full_name: Customer's complete name. Alphanumeric. Max: 255. email: Customer's email address. Alphanumeric. Max: 255. Returns:
codesearchnet
def html_for_cgi_argument(argument, form): value = (form[argument].value if (argument in form) else None) return KEY_VALUE_TEMPLATE.format(argument, value)
Returns an HTML snippet for a CGI argument. Args: argument: A string representing an CGI argument name in a form. form: A CGI FieldStorage object. Returns: String HTML representing the CGI value and variable.
codesearchnet
def get_jwt_key_data(): global __jwt_data if __jwt_data: return __jwt_data from cloud_inquisitor import config_path from cloud_inquisitor.config import dbconfig jwt_key_file = dbconfig.get('jwt_key_file_path', default='ssl/private.key') if (not os.path.isabs(jwt_key_file)): jwt_key_file = os.path.join(config_path, jwt_key_file) with open(os.path.join(jwt_key_file), 'r') as f: __jwt_data = f.read() return __jwt_data
Returns the data for the JWT private key used for encrypting the user login token as a string object Returns: `str`
codesearchnet
async def _perform_ping_timeout(self, delay: int): (await sleep(delay)) error = TimeoutError('Ping timeout: no data received from server in {timeout} seconds.'.format(timeout=self.PING_TIMEOUT)) (await self.on_data_error(error))
Handle timeout gracefully. Args: delay (int): delay before raising the timeout (in seconds)
codesearchnet
def event(self, name, **kwargs): group_obj = Event(name, **kwargs) return self._group(group_obj)
Add Event data to Batch object. Args: name (str): The name for this Group. date_added (str, kwargs): The date timestamp the Indicator was created. event_date (str, kwargs): The event datetime expression for this Group. status (str, kwargs): The status for this Group. xid (str, kwargs): The external id for this Group. Returns: obj: An instance of Event.
codesearchnet
def __init__(self, mac_addr): addr_info = mac_addr.lower().split(':') if len(addr_info) < 6: raise ValueError('Invalid mac address') addr_info[2] = 'EtherSync' self._addr = ''.join(addr_info[2:])
Construct an EtherSync object. Args: mac_addr: mac address of the Cambrionix unit for EtherSync.
juraj-google-style
def assignSchedule(self, schedule, period, hour, minute, tariff): if ((schedule not in range(Extents.Schedules)) or (period not in range(Extents.Tariffs)) or (hour < 0) or (hour > 23) or (minute < 0) or (minute > 59) or (tariff < 0)): ekm_log("Out of bounds in Schedule_" + str(schedule + 1)) return False period += 1 idx_min = "Min_" + str(period) idx_hour = "Hour_" + str(period) idx_rate = "Tariff_" + str(period) if idx_min not in self.m_schedule_params: ekm_log("Incorrect index: " + idx_min) return False if idx_hour not in self.m_schedule_params: ekm_log("Incorrect index: " + idx_hour) return False if idx_rate not in self.m_schedule_params: ekm_log("Incorrect index: " + idx_rate) return False self.m_schedule_params[idx_rate] = tariff self.m_schedule_params[idx_hour] = hour self.m_schedule_params[idx_min] = minute self.m_schedule_params['Schedule'] = schedule return True
Assign one schedule tariff period to the meter buffer. Args: schedule (int): A :class:`~ekmmeters.Schedules` value or in range(Extents.Schedules). period (int): A :class:`~ekmmeters.Tariffs` value or in range(Extents.Tariffs). hour (int): Hour from 0-23. minute (int): Minute from 0-59. tariff (int): Rate value. Returns: bool: True on completed assignment.
juraj-google-style
def __init__(self, url): if isinstance(url, Uri): self.uri = url else: self.uri = Uri(url)
Connect to an assembly that points to the assembly specified with the url. Args: - url (str): The url of the onshape item
juraj-google-style
def add(name, beacon_data, **kwargs): ret = {'comment': 'Failed to add beacon {0}.'.format(name), 'result': False} if (name in list_(return_yaml=False, **kwargs)): ret['comment'] = 'Beacon {0} is already configured.'.format(name) return ret if any((('beacon_module' in key) for key in beacon_data)): res = next((value for value in beacon_data if ('beacon_module' in value))) beacon_name = res['beacon_module'] else: beacon_name = name if (beacon_name not in list_available(return_yaml=False, **kwargs)): ret['comment'] = 'Beacon "{0}" is not available.'.format(beacon_name) return ret if (('test' in kwargs) and kwargs['test']): ret['result'] = True ret['comment'] = 'Beacon: {0} would be added.'.format(name) else: try: eventer = salt.utils.event.get_event('minion', opts=__opts__) res = __salt__['event.fire']({'name': name, 'beacon_data': beacon_data, 'func': 'validate_beacon'}, 'manage_beacons') if res: event_ret = eventer.get_event(tag='/salt/minion/minion_beacon_validation_complete', wait=kwargs.get('timeout', 30)) valid = event_ret['valid'] vcomment = event_ret['vcomment'] if (not valid): ret['result'] = False ret['comment'] = 'Beacon {0} configuration invalid, not adding.\n{1}'.format(name, vcomment) return ret except KeyError: ret['result'] = False ret['comment'] = 'Event module not available. Beacon add failed.' return ret try: res = __salt__['event.fire']({'name': name, 'beacon_data': beacon_data, 'func': 'add'}, 'manage_beacons') if res: event_ret = eventer.get_event(tag='/salt/minion/minion_beacon_add_complete', wait=kwargs.get('timeout', 30)) if (event_ret and event_ret['complete']): beacons = event_ret['beacons'] if ((name in beacons) and (beacons[name] == beacon_data)): ret['result'] = True ret['comment'] = 'Added beacon: {0}.'.format(name) elif event_ret: ret['result'] = False ret['comment'] = event_ret['comment'] else: ret['result'] = False ret['comment'] = 'Did not receive the manage event before the timeout of {0}s'.format(kwargs.get('timeout', 30)) return ret except KeyError: ret['result'] = False ret['comment'] = 'Event module not available. Beacon add failed.' return ret
Add a beacon on the minion Args: name (str): Name of the beacon to configure beacon_data (dict): Dictionary or list containing configuration for beacon. Returns: dict: Boolean and status message on success or failure of add. CLI Example: .. code-block:: bash salt '*' beacons.add ps "[{'processes': {'salt-master': 'stopped', 'apache2': 'stopped'}}]"
codesearchnet
def _get_other_names(self, line): m = re.search(self.compound_regex['other_names'][0], line, re.IGNORECASE) if m: self.other_names.append(m.group(1).strip())
Parse and extract any other names that might be recorded for the compound Args: line (str): line of the msp file
codesearchnet
def build_backend(self, backend_node): proxy_name = backend_node.backend_header.proxy_name.text config_block_lines = self.__build_config_block( backend_node.config_block) return config.Backend(name=proxy_name, config_block=config_block_lines)
parse `backend` sections Args: backend_node (TreeNode): Description Returns: config.Backend: an object
juraj-google-style
def _FormatTag(self, event): tag = getattr(event, 'tag', None) if not tag: return '-' return ' '.join(tag.labels)
Formats the event tag. Args: event (EventObject): event. Returns: str: event tag field.
juraj-google-style
def scroll(self, direction='vertical', percent=0.6, duration=2.0): if direction not in ('vertical', 'horizontal'): raise ValueError('Argument `direction` should be one of "vertical" or "horizontal". Got {}' .format(repr(direction))) start = [0.5, 0.5] half_distance = percent / 2 if direction == 'vertical': start[1] += half_distance direction = [0, -percent] else: start[0] += half_distance direction = [-percent, 0] return self.swipe(start, direction=direction, duration=duration)
Scroll from the lower part to the upper part of the entire screen. Args: direction (:py:obj:`str`): scrolling direction. "vertical" or "horizontal" percent (:py:obj:`float`): scrolling distance percentage of the entire screen height or width according to direction duration (:py:obj:`float`): time interval in which the action is performed
juraj-google-style
def __init__(self, path, **kwargs): self.error_context = kwargs.pop('error_context', None) self.error_context = self.error_context or StatikErrorContext() if 'config' in kwargs and isinstance(kwargs['config'], dict): logger.debug("Loading project configuration from constructor arguments") self.config = kwargs['config'] else: self.config = None self.safe_mode = kwargs.pop('safe_mode', False) self.path, self.config_file_path = get_project_config_file(path, StatikProject.CONFIG_FILE) if (self.path is None or self.config_file_path is None) and self.config is None: raise MissingProjectConfig(context=self.error_context) logger.debug("Project path configured as: %s", self.path) self.models = {} self.template_engine = None self.views = {} self.db = None self.project_context = None
Constructor. Args: path: The full filesystem path to the base of the project.
juraj-google-style
def licenses(self): buf_size = self.MAX_BUF_SIZE buf = (ctypes.c_char * buf_size)() res = self._dll.JLINK_GetAvailableLicense(buf, buf_size) if res < 0: raise errors.JLinkException(res) return ctypes.string_at(buf).decode()
Returns a string of the built-in licenses the J-Link has. Args: self (JLink): the ``JLink`` instance Returns: String of the contents of the built-in licenses the J-Link has.
juraj-google-style
def AddVSSProcessingOptions(self, argument_group): argument_group.add_argument('--no_vss', '--no-vss', dest='no_vss', action='store_true', default=False, help='Do not scan for Volume Shadow Snapshots (VSS). This means that Volume Shadow Snapshots (VSS) are not processed.') argument_group.add_argument('--vss_only', '--vss-only', dest='vss_only', action='store_true', default=False, help='Do not process the current volume if Volume Shadow Snapshots (VSS) have been selected.') argument_group.add_argument('--vss_stores', '--vss-stores', dest='vss_stores', action='store', type=str, default=None, help='Define Volume Shadow Snapshots (VSS) (or stores that need to be processed. A range of stores can be defined as: "3..5". Multiple stores can be defined as: "1,3,5" (a list of comma separated values). Ranges and lists can also be combined as: "1,3..5". The first store is 1. All stores can be defined as: "all".')
Adds the VSS processing options to the argument group. Args: argument_group (argparse._ArgumentGroup): argparse argument group.
codesearchnet
def save(self, vleaf, fpath, cleanup=False, format=None): graph = self.create_graphviz_digraph(vleaf, format=format) graph.render(fpath, cleanup=cleanup)
Save the graph to a given file path. Args: vleaf (`nnabla.Variable`): End variable. All variables and functions which can be traversed from this variable are shown in the result. fpath (`str`): The file path used to save. cleanup (`bool`): Clean up the source file after rendering. Default is False. format (str): Force overwrite the ``format`` (``'pdf'``, ``'png'``, ...) configuration.
juraj-google-style
def _CheckIsDirectory(self, file_entry): if (definitions.FILE_ENTRY_TYPE_DIRECTORY not in self._file_entry_types): return False return file_entry.IsDirectory()
Checks the is_directory find specification. Args: file_entry (FileEntry): file entry. Returns: bool: True if the file entry matches the find specification, False if not.
codesearchnet
def convert_to_python_types(args): if isinstance(args, dict): return {k: convert_to_python_type(v) for k, v in args.items()} else: return [convert_to_python_type(v) for v in args]
Convert the given list or dictionary of args to python types. Args: args: Either an iterable of types, or a dictionary where the values are types. Returns: If given an iterable, a list of converted types. If given a dictionary, a dictionary with the same keys, and values which have been converted.
github-repos
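A hedged sketch of how convert_to_python_types above could be exercised; convert_to_python_type is not shown in the entry, so the per-value helper here is an assumption that converts numpy scalars via .item():

import numpy as np

def convert_to_python_type(v):
    # assumed helper: numpy scalars become plain Python types
    return v.item() if isinstance(v, np.generic) else v

args = {'lr': np.float64(0.1), 'steps': np.int32(10)}
print({k: type(convert_to_python_type(v)).__name__ for k, v in args.items()})
# {'lr': 'float', 'steps': 'int'}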
def copy_rec(source, dest): if os.path.isdir(source): for child in os.listdir(source): new_dest = os.path.join(dest, child) os.makedirs(new_dest, exist_ok=True) copy_rec(os.path.join(source, child), new_dest) elif os.path.isfile(source): logging.info(' Copy "{}" to "{}"'.format(source, dest)) shutil.copy(source, dest) else: logging.info(' Ignoring "{}"'.format(source))
Copy files between different directories. Copy one or more files to an existing directory. This function is recursive: if the source is a directory, all its subdirectories are created in the destination. Existing files in the destination are overwritten without any warning. Args: source (str): File or directory name. dest (str): Directory name. Raises: FileNotFoundError: Destination directory doesn't exist.
juraj-google-style
def maximum(x1, x2): if any_symbolic_tensors((x1, x2)): return Maximum().symbolic_call(x1, x2) return backend.numpy.maximum(x1, x2)
Element-wise maximum of `x1` and `x2`. Args: x1: First tensor. x2: Second tensor. Returns: Output tensor, element-wise maximum of `x1` and `x2`.
github-repos
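For the maximum op above, a minimal NumPy sketch of the same element-wise maximum, assuming the usual broadcasting rules:

import numpy as np

x1 = np.array([[1.0, 5.0, 2.0]])
x2 = np.array([[3.0], [0.0]])
print(np.maximum(x1, x2))   # broadcasts to shape (2, 3)
# [[3. 5. 3.]
#  [1. 5. 2.]]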
def count(self, val=True): return sum((elem.count(val) for elem in self._iter_components()))
Get the number of bits in the array with the specified value. Args: val: A boolean value to check against the array's value. Returns: An integer of the number of bits in the array equal to val.
codesearchnet
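A toy sketch of the counting idea in count above, with plain lists standing in for the components yielded by _iter_components():

components = [[True, False, True], [False, False], [True]]
print(sum(elem.count(True) for elem in components))   # 3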
def auth_middleware(policy): assert isinstance(policy, AbstractAuthentication) async def _auth_middleware_factory(app, handler): async def _middleware_handler(request): request[POLICY_KEY] = policy response = await handler(request) await policy.process_response(request, response) return response return _middleware_handler return _auth_middleware_factory
Returns an aiohttp_auth middleware factory for use by the aiohttp application object. Args: policy: An authentication policy with a base class of AbstractAuthentication.
juraj-google-style
def get_package_hashes(filename): log.debug('Getting package hashes') filename = os.path.abspath(filename) with open(filename, 'rb') as f: data = f.read() _hash = hashlib.sha256(data).hexdigest() log.debug('Hash for file %s: %s', filename, _hash) return _hash
Provides the sha256 hash of the given file. Args: filename (str): Name of file to hash Returns: (str): sha256 hash
codesearchnet
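For get_package_hashes above, a minimal sketch of the same hashing step applied to an in-memory payload rather than a file on disk:

import hashlib

data = b'example package contents'
digest = hashlib.sha256(data).hexdigest()   # same digest the function logs and returns
print(digest)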
def _prepare_headers(self, additional_headers=None, **kwargs): user_agent = "pyseaweed/{version}".format(version=__version__) headers = {"User-Agent": user_agent} if additional_headers is not None: headers.update(additional_headers) return headers
Prepare headers for http communication. Returns a dict of headers to be used in requests. Args: .. versionadded:: 0.3.2 **additional_headers**: (optional) Additional headers to be used with the request Returns: Headers dict. Keys and values are strings
juraj-google-style
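A standalone sketch of the header-merging step in _prepare_headers above; the version string is a placeholder:

def prepare_headers(additional_headers=None):
    headers = {'User-Agent': 'pyseaweed/0.0'}   # placeholder version
    if additional_headers is not None:
        headers.update(additional_headers)
    return headers

print(prepare_headers({'Accept': 'application/json'}))
# {'User-Agent': 'pyseaweed/0.0', 'Accept': 'application/json'}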
def _url_format(self, service): base_service_url = '{base}{service}'.format(base=self.urlbase, service=service) return base_service_url
Generate URL from urlbase and service. Args: service (str): The endpoint service to use, e.g. gradebook Returns: str: URL to where the request should be made
codesearchnet
def _get_non_space_email(self, doc) -> List: result_lst = [] for e in doc: if "mail:" in e.text.lower(): idx = e.text.lower().index("mail:") + 5 value = e.text[idx:] tmp_doc = self._nlp(value) tmp_email_matches = self._like_email_matcher(tmp_doc) for match_id, start, end in tmp_email_matches: span = tmp_doc[start:end] if self._check_domain(self._tokenizer.tokenize(span.text)): result_lst.append((span.text, idx+e.idx, idx+e.idx+len(value))) return result_lst
Handle the corner case where the text contains a "mail:" string with no whitespace around the address. Args: doc: List[Token] Returns: List: a list of (text, start, end) tuples for the extracted email addresses
juraj-google-style
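A plain-regex sketch of the "mail:" corner case handled by _get_non_space_email above; it skips the spaCy matcher and the _check_domain validation used in the entry:

import re

text = 'Contact sales, EMAIL:jane.doe@example.com for pricing'
idx = text.lower().index('mail:') + 5
value = text[idx:]
m = re.search(r'[\w.+-]+@[\w-]+\.[\w.-]+', value)
if m:
    print(m.group(0), idx + m.start(), idx + m.end())
# jane.doe@example.com 21 41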
def _page_streamable(page_descriptor): def inner(a_func, settings, request, **kwargs): page_iterator = gax.PageIterator( a_func, page_descriptor, settings.page_token, request, **kwargs) if settings.flatten_pages: return gax.ResourceIterator(page_iterator) else: return page_iterator return inner
Creates a function that returns an iterable that performs page-streaming. Args: page_descriptor (:class:`PageDescriptor`): indicates the structure of page streaming to be performed. Returns: Callable: A function that returns an iterator.
juraj-google-style
def replace_list(items, match, replacement): return [replace(item, match, replacement) for item in items]
Replaces occurrences of a match string in a given list of strings and returns a list of new strings. The match string can be a regex expression. Args: items (list): the list of strings to modify. match (str): the search expression. replacement (str): the string to replace with.
juraj-google-style
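For replace_list above, a self-contained sketch; replace() is not shown in the entry, so the regex-based helper here is an assumption:

import re

def replace(item, match, replacement):
    # assumed helper: match may be a regex expression
    return re.sub(match, replacement, item)

items = ['foo_1.log', 'foo_2.log', 'bar.txt']
print([replace(item, r'foo_\d+', 'foo') for item in items])
# ['foo.log', 'foo.log', 'bar.txt']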
def search_users(self, user): user_url = ('%s/%s/%s' % (self.url, 'user', user)) response = self.jss.get(user_url) return LDAPUsersResults(self.jss, response)
Search for LDAP users. Args: user: User to search for. It is not entirely clear how the JSS determines the results; are regexes allowed, or globbing? Returns: LDAPUsersResults object. Raises: Will raise a JSSGetError if no results are found.
codesearchnet
def initialize_resources(resource_list, name='init'): if resource_list: return control_flow_ops.group(*[r.create for r in resource_list], name=name) return control_flow_ops.no_op(name=name)
Initializes the resources in the given list. Args: resource_list: list of resources to initialize. name: name of the initialization op. Returns: op responsible for initializing all resources.
github-repos
def load_dictionary(self, filename, encoding="utf-8"): with load_file(filename, encoding) as data: self._dictionary.update(json.loads(data.lower(), encoding=encoding)) self._update_dictionary()
Load in a pre-built word frequency list Args: filename (str): The filepath to the json (optionally gzipped) file to be loaded encoding (str): The encoding of the dictionary
juraj-google-style
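A hedged sketch of the load step in load_dictionary above, using an in-memory JSON string instead of a (possibly gzipped) file; note the lower-casing of the raw text before parsing:

import json

raw = '{"The": 3, "quick": 1, "Fox": 2}'
dictionary = {}
dictionary.update(json.loads(raw.lower()))
print(dictionary)   # {'the': 3, 'quick': 1, 'fox': 2}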
def testNoopElimination(self, init_dataset_fn, transformation, expected_name): dataset = init_dataset_fn() if expected_name: dataset = dataset.apply(testing.assert_next([expected_name, 'FiniteTake'])) else: dataset = dataset.apply(testing.assert_next(['FiniteTake'])) dataset = dataset.apply(transformation) dataset = dataset.take(1) options = options_lib.Options() options.experimental_optimization.apply_default_optimizations = False options.experimental_optimization.noop_elimination = True dataset = dataset.with_options(options) get_next = self.getNext(dataset) self.evaluate(get_next())
Runs a noop elimination test case. Args: init_dataset_fn: Function to create the initial dataset transformation: Transformation to apply expected_name: Name of the transformation if it is not eliminated
github-repos
def deref(value: base.Symbolic, recursive: bool=False) -> Any: if isinstance(value, Ref): value = value.value if recursive: def _deref(k, v, p): del k, p if isinstance(v, Ref): return deref(v.value, recursive=True) return v return value.rebind(_deref, raise_on_no_change=False) return value
Dereferences a symbolic value that may contain pg.Ref. Args: value: The input symbolic value. recursive: If True, dereference `pg.Ref` in the entire tree. Otherwise, only dereference the root node. Returns: The dereferenced root, or the dereferenced tree if recursive is True.
github-repos
def sample_rate(self, value): if value == self._defaults['sampleRate'] and 'sampleRate' in self._values: del self._values['sampleRate'] else: self._values['sampleRate'] = value
The sample_rate property. Args: value (float): the property value.
juraj-google-style
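A minimal sketch of the setter pattern used by sample_rate above: a value is stored only while it differs from the default, and setting it back to the default removes the override:

class Config(object):
    def __init__(self):
        self._defaults = {'sampleRate': 1.0}
        self._values = {}

    @property
    def sample_rate(self):
        return self._values.get('sampleRate', self._defaults['sampleRate'])

    @sample_rate.setter
    def sample_rate(self, value):
        if value == self._defaults['sampleRate'] and 'sampleRate' in self._values:
            del self._values['sampleRate']
        else:
            self._values['sampleRate'] = value

c = Config()
c.sample_rate = 0.5
print(c._values)   # {'sampleRate': 0.5}
c.sample_rate = 1.0
print(c._values)   # {}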
def _create_field(self, uri , name, field_type, **kwargs): if not (name and (field_type in ['TEXT_INPUT', 'DATE', 'PERSON'])): return requests.codes.bad_request, {'success' : 'False', 'error': 'name needs to be provided and field_type needs to be \'TEXT_INPUT\', \'DATE\' or \'PERSON\''} kwargs.update({'name':name, 'type':field_type}) new_box = StreakField(**kwargs) code, data = self._req('put', uri, new_box.to_dict(rw = True)) return code, data
Creates a field with the provided attributes. Args: uri (str): base uri for the field (pipeline or box uri) name (str): required name string field_type (str): required type string [TEXT_INPUT, DATE or PERSON] kwargs: optional additional field attributes Returns: (status code, field dict)
juraj-google-style
def get_value_by_xy(self, x, y): if x < self.xMin or x > self.xMax or y < self.yMin or y > self.yMax: return None else: row = self.nRows - int(numpy.ceil((y - self.yMin) / self.dx)) col = int(numpy.floor((x - self.xMin) / self.dx)) value = self.data[row][col] if value == self.noDataValue: return None else: return value
Get raster value by xy coordinates. Args: x: X Coordinate. y: Y Coordinate. Returns: raster value, None if the input are invalid.
juraj-google-style
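For get_value_by_xy above, a small NumPy sketch of the xy-to-cell arithmetic on a toy 4x4 grid; the attribute names (xMin, yMin, dx, nRows) mirror the ones assumed by the method:

import numpy as np

data = np.arange(16, dtype=float).reshape(4, 4)
nRows, dx = 4, 10.0
xMin, yMin = 0.0, 0.0

x, y = 25.0, 5.0
row = nRows - int(np.ceil((y - yMin) / dx))   # y grows upward, rows count downward
col = int(np.floor((x - xMin) / dx))
print(row, col, data[row][col])   # 3 2 14.0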
def setZeroResettableKWH(self, password="00000000"): result = False self.setContext("setZeroResettableKWH") try: if not self.requestA(): self.writeCmdMsg("Bad read CRC on setting") else: if not self.serialCmdPwdAuth(password): self.writeCmdMsg("Password failure") else: req_str = "0157310230304433282903" req_str += self.calc_crc16(req_str[2:].decode("hex")) self.m_serial_port.write(req_str.decode("hex")) if self.m_serial_port.getResponse(self.getContext()).encode("hex") == "06": self.writeCmdMsg("Success: 06 returned.") result = True self.serialPostEnd() except: ekm_log(traceback.format_exc(sys.exc_info())) self.setContext("") return result
Serial call to zero resettable kWh registers. Args: password (str): Optional password. Returns: bool: True on completion and ACK.
juraj-google-style
def __init__(self, endpoint_name, sagemaker_session=None): super(TensorFlowPredictor, self).__init__(endpoint_name, sagemaker_session, tf_json_serializer, tf_json_deserializer)
Initialize an ``TensorFlowPredictor``. Args: endpoint_name (str): The name of the endpoint to perform inference on. sagemaker_session (sagemaker.session.Session): Session object which manages interactions with Amazon SageMaker APIs and any other AWS services needed. If not specified, the estimator creates one using the default AWS configuration chain.
juraj-google-style
def get_vep_info(vep_string, vep_header): vep_annotations = [dict(zip(vep_header, vep_annotation.split('|'))) for vep_annotation in vep_string.split(',')] return vep_annotations
Make the vep annotations into dictionaries A vep dictionary will have the vep column names as keys and the vep annotations as values. The dictionaries are stored in a list Args: vep_string (string): A string with the CSQ annotation vep_header (list): A list with the vep header Return: vep_annotations (list): A list of vep dicts
codesearchnet
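A self-contained sketch of the CSQ parsing in get_vep_info above; the header and annotation string are made-up examples:

vep_header = ['Allele', 'Consequence', 'SYMBOL']
vep_string = 'A|missense_variant|BRCA1,A|intron_variant|BRCA2'
vep_annotations = [dict(zip(vep_header, annotation.split('|')))
                   for annotation in vep_string.split(',')]
print(vep_annotations)
# [{'Allele': 'A', 'Consequence': 'missense_variant', 'SYMBOL': 'BRCA1'},
#  {'Allele': 'A', 'Consequence': 'intron_variant', 'SYMBOL': 'BRCA2'}]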
def _rowwise_unsorted_segment_sum(values, indices, n): (batch, k) = tf.unstack(tf.shape(indices), num=2) indices_flat = (tf.reshape(indices, [(- 1)]) + (tf.div(tf.range((batch * k)), k) * n)) ret_flat = tf.unsorted_segment_sum(tf.reshape(values, [(- 1)]), indices_flat, (batch * n)) return tf.reshape(ret_flat, [batch, n])
UnsortedSegmentSum on each row. Args: values: a `Tensor` with shape `[batch_size, k]`. indices: an integer `Tensor` with shape `[batch_size, k]`. n: an integer. Returns: A `Tensor` with the same type as `values` and shape `[batch_size, n]`.
codesearchnet
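A NumPy sketch of the row-wise segment sum computed by _rowwise_unsorted_segment_sum above (values and indices of shape [batch, k], output of shape [batch, n]); it uses np.add.at in place of tf.unsorted_segment_sum:

import numpy as np

def rowwise_segment_sum(values, indices, n):
    batch, k = values.shape
    out = np.zeros((batch, n), dtype=values.dtype)
    rows = np.repeat(np.arange(batch), k)   # row id for every flattened element
    np.add.at(out, (rows, indices.reshape(-1)), values.reshape(-1))
    return out

values = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
indices = np.array([[0, 0, 2], [1, 1, 1]])
print(rowwise_segment_sum(values, indices, 4))
# [[ 3.  0.  3.  0.]
#  [ 0. 15.  0.  0.]]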
def aggregate_and_return_name_for_input(self, out_graphdef): del out_graphdef raise RuntimeError('Unimplemented abstract method.')
This adds the node(s) to out_graphdef and returns the input node name. Args: out_graphdef: A graphdef that is ready to have this input added. Returns: The output that the stub should use as an input for this operand. Raises: RuntimeError: if the method is not implemented.
github-repos
def read(self, index, name=None): return self._implementation.read(index, name=name)
Read the value at location `index` in the TensorArray. Args: index: 0-D. int32 tensor with the index to read from. name: A name for the operation (optional). Returns: The tensor at index `index`.
github-repos