Dataset schema: code (string, lengths 20–4.93k) · docstring (string, lengths 33–1.27k) · source (string, 3 classes)
def _from_definition(fdef, grad_func=None):
    func = None
    argnames = [arg.name for arg in fdef.signature.input_arg]
    input_types = tuple(
        dtypes.as_dtype(arg.type) for arg in fdef.signature.input_arg)
    func_name = fdef.signature.name
    python_grad_func = None
    out_names = [arg.name for arg in fdef.signature.output_arg]
    result = _DefinedFunction(func, argnames, input_types, func_name,
                              grad_func, python_grad_func, out_names)
    if is_oss:
        serialized = fdef.SerializeToString()
        c_func = c_api.TF_FunctionImportFunctionDef(serialized)
    else:
        c_func = c_api.TF_FunctionImportFunctionDefNoSerialization(fdef)
    result._c_func = c_api_util.ScopedTFFunction(c_func, func_name)
    result._extra_inputs = []
    result._op_def = fdef.signature
    return result

Creates a _DefinedFunction initialized from a FunctionDef proto.

Args:
    fdef: a FunctionDef
    grad_func: a _DefinedFunction or None

Returns:
    A _DefinedFunction representing fdef
github-repos
def traverse(self, fn=None, specs=None, full_breadth=True):
    if fn is None:
        fn = lambda x: x
    if specs is not None and not isinstance(specs, (list, set, tuple)):
        specs = [specs]
    accumulator = []
    matches = specs is None
    if not matches:
        for spec in specs:
            matches = self.matches(spec)
            if matches:
                break
    if matches:
        accumulator.append(fn(self))
    if self._deep_indexable:
        for el in self:
            if el is None:
                continue
            accumulator += el.traverse(fn, specs, full_breadth)
            if not full_breadth:
                break
    return accumulator

Traverses object returning matching items

Traverses the set of children of the object, collecting all objects
matching the defined specs. Each object can be processed with the
supplied function.

Args:
    fn (function, optional): Function applied to matched objects
    specs: List of specs to match
        Specs must be types, functions or type[.group][.label]
        specs to select objects to return, by default applies
        to all objects.
    full_breadth: Whether to traverse all objects
        Whether to traverse the full set of objects on each
        container or only the first.

Returns:
    list: List of objects that matched
codesearchnet
def mtf_slice(x, begin, size, slice_dim_name, name=None):
    return SliceOperation(
        x, begin, size, slice_dim_name, name=name).outputs[0]
Slice operation.

Call externally as mtf.slice()

Args:
    x: a Tensor
    begin: integer, where to begin slicing from along the axis
    size: integer, size to slice from axis.
    slice_dim_name: string, dimension name of slicing axis.
    name: an optional string

Returns:
    a Tensor with the `slice_dim_name` dimension sliced to `size`.
juraj-google-style
def wait_for_plug_update(self, plug_name, remote_state, timeout_s):
    plug = self._plugs_by_name.get(plug_name)
    if plug is None:
        raise InvalidPlugError('Cannot wait on unknown plug "%s".' % plug_name)
    if not isinstance(plug, FrontendAwareBasePlug):
        raise InvalidPlugError('Cannot wait on a plug %s that is not a subclass '
                               'of FrontendAwareBasePlug.' % plug_name)
    state, update_event = plug.asdict_with_event()
    if state != remote_state:
        return state
    if update_event.wait(timeout_s):
        return plug._asdict()

Wait for a change in the state of a frontend-aware plug.

Args:
    plug_name: Plug name, e.g. 'openhtf.plugs.user_input.UserInput'.
    remote_state: The last observed state.
    timeout_s: Number of seconds to wait for an update.

Returns:
    An updated state, or None if the timeout runs out.

Raises:
    InvalidPlugError: The plug can't be waited on either because it's not
        in use or it's not a frontend-aware plug.
juraj-google-style
def _tf_data_packed_nest_with_indices(structure, flat, index):
    packed = []
    for s in _tf_data_yield_value(structure):
        if _tf_data_is_nested(s):
            new_index, child = _tf_data_packed_nest_with_indices(s, flat, index)
            packed.append(sequence_like(s, child))
            index = new_index
        else:
            packed.append(flat[index])
            index += 1
    return (index, packed)

Helper function for pack_nest_as.

Args:
    structure: Substructure (tuple of elements and/or tuples) to mimic.
    flat: Flattened values to output substructure for.
    index: Index at which to start reading from flat.

Returns:
    The tuple (new_index, child), where:
      * new_index - the updated index into `flat` having processed
        `structure`.
      * packed - the subset of `flat` corresponding to `structure`,
        having started at `index`, and packed into the same nested
        format.

Raises:
    ValueError: if `structure` contains more elements than `flat`
        (assuming indexing starts from `index`).
github-repos
def add_constant(self, stream, value):
    if stream in self.constant_database:
        raise ArgumentError('Attempted to set the same constant twice',
                            stream=stream,
                            old_value=self.constant_database[stream],
                            new_value=value)
    self.constant_database[stream] = value

Store a constant value for use in this sensor graph.

Constant assignments occur after all sensor graph nodes have been
allocated since they must be propagated to all appropriate virtual
stream walkers.

Args:
    stream (DataStream): The constant stream to assign the value to
    value (int): The value to assign.
codesearchnet
def GreaterThan(self, value):
    self._awql = self._CreateSingleValueCondition(value, '>')
    return self._query_builder

Sets the type of the WHERE clause as "greater than".

Args:
    value: The value to be used in the WHERE condition.

Returns:
    The query builder that this WHERE builder links to.
codesearchnet
def _validate_sub(claims, subject=None):
    if 'sub' not in claims:
        return
    if not isinstance(claims['sub'], string_types):
        raise JWTClaimsError('Subject must be a string.')
    if subject is not None:
        if claims.get('sub') != subject:
            raise JWTClaimsError('Invalid subject')

Validates that the 'sub' claim is valid.

The "sub" (subject) claim identifies the principal that is the subject
of the JWT. The claims in a JWT are normally statements about the
subject. The subject value MUST either be scoped to be locally unique
in the context of the issuer or be globally unique. The processing of
this claim is generally application specific. The "sub" value is a
case-sensitive string containing a StringOrURI value. Use of this
claim is OPTIONAL.

Args:
    claims (dict): The claims dictionary to validate.
    subject (str): The subject of the token.
codesearchnet
def distribution(self, start=None, end=None, normalized=True, mask=None):
    start, end, mask = self._check_boundaries(start, end, mask=mask)
    counter = histogram.Histogram()
    for start, end, _ in mask.iterperiods(value=True):
        for t0, t1, value in self.iterperiods(start, end):
            duration = utils.duration_to_number(t1 - t0, units='seconds')
            try:
                counter[value] += duration
            except histogram.UnorderableElements as e:
                counter = histogram.Histogram.from_dict(dict(counter), key=hash)
                counter[value] += duration
    if normalized:
        return counter.normalized()
    else:
        return counter

Calculate the distribution of values over the given time range from
`start` to `end`.

Args:
    start (orderable, optional): The lower time bound of when to
        calculate the distribution. By default, the first time point
        will be used.
    end (orderable, optional): The upper time bound of when to
        calculate the distribution. By default, the last time point
        will be used.
    normalized (bool): If True, distribution will sum to one. If False
        and the time values of the TimeSeries are datetimes, the units
        will be seconds.
    mask (:obj:`TimeSeries`, optional): A domain on which to calculate
        the distribution.

Returns:
    :obj:`Histogram` with the results.
codesearchnet
def consume(self, key, amount=1, rate=None, capacity=None, **kwargs):
    bucket = self.get_bucket(key, rate, capacity, **kwargs)
    return bucket.consume(amount)

Consume an amount for a given key.

Non-default rate/capacity can be given to override Throttler defaults.

Returns:
    bool: whether the units could be consumed
codesearchnet
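A hypothetical usage sketch for the consume method above; the Throttler constructor and its rate/capacity semantics are assumptions, not taken from the source:

    throttler = Throttler(rate=5, capacity=10)          # assumed: 5 units/s, burst of 10
    allowed = throttler.consume('client-42', amount=3)  # try to spend 3 units
    print('allowed' if allowed else 'throttled')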
def list_deployment_operations(access_token, subscription_id, rg_name,
                               deployment_name):
    endpoint = ''.join([get_rm_endpoint(),
                        '/subscriptions/', subscription_id,
                        '/resourcegroups/', rg_name,
                        '/providers/Microsoft.Resources/deployments/',
                        deployment_name,
                        '/operations',
                        '?api-version=', BASE_API])
    return do_get(endpoint, access_token)

List all operations involved in a given deployment.

Args:
    access_token (str): A valid Azure authentication token.
    subscription_id (str): Azure subscription id.
    rg_name (str): Azure resource group name.
    deployment_name (str): Name of the deployment.

Returns:
    HTTP response. JSON body.
codesearchnet
def NormalizePath(path):
    path = os.path.normpath(path)
    for sys_path in sys.path:
        if not sys_path:
            continue
        # Append a path separator so prefix matching works on directories.
        sys_path = os.path.join(sys_path, '')
        if path.startswith(sys_path):
            return path[len(sys_path):]
    return path

Removes any Python system path prefix from the given path.

Python keeps almost all paths absolute. This is not what we actually
want to return. This loops through system paths (directories in which
Python will load modules). If "path" is relative to one of them, the
directory prefix is removed.

Args:
    path: absolute path to normalize (relative paths will not be altered)

Returns:
    Relative path if "path" is within one of the sys.path directories or
    the input otherwise.
codesearchnet
def load_panel(panel_path, adapter, date=None, display_name=None, version=None,
               panel_type=None, panel_id=None, institute=None):
    panel_lines = get_file_handle(panel_path)
    try:
        panel_info = get_panel_info(
            panel_lines=panel_lines,
            panel_id=panel_id,
            institute=institute,
            version=version,
            date=date,
            display_name=display_name
        )
    except Exception as err:
        raise err

    version = None
    if panel_info.get('version'):
        version = float(panel_info['version'])
    panel_id = panel_info['panel_id']
    display_name = panel_info['display_name'] or panel_id
    institute = panel_info['institute']
    date = panel_info['date']

    if not institute:
        raise SyntaxError("A Panel has to belong to an institute")
    if not adapter.institute(institute):
        raise SyntaxError("Institute {0} does not exist in database".format(institute))
    if not panel_id:
        raise SyntaxError("A Panel has to have a panel id")

    if version:
        existing_panel = adapter.gene_panel(panel_id, version)
    else:
        existing_panel = adapter.gene_panel(panel_id)
        version = 1.0
        LOG.info("Set version to %s", version)

    if existing_panel:
        LOG.info("found existing panel")
        if version == existing_panel['version']:
            LOG.warning("Panel with same version exists in database")
            LOG.info("Reload with updated version")
            raise SyntaxError()
        display_name = display_name or existing_panel['display_name']
        institute = institute or existing_panel['institute']

    parsed_panel = parse_gene_panel(
        path=panel_path,
        institute=institute,
        panel_type=panel_type,
        date=date,
        version=version,
        panel_id=panel_id,
        display_name=display_name,
    )
    try:
        adapter.load_panel(parsed_panel=parsed_panel)
    except Exception as err:
        raise err

Load a manually curated gene panel into scout

Args:
    panel_path(str): path to gene panel file
    adapter(scout.adapter.MongoAdapter)
    date(str): date of gene panel on format 2017-12-24
    display_name(str)
    version(float)
    panel_type(str)
    panel_id(str)
    institute(str)
juraj-google-style
def from_backbone_and_decoder_configs(cls, backbone_config: PretrainedConfig,
                                      decoder_config: PretrainedConfig, **kwargs):
    return cls(backbone_config=backbone_config, decoder_config=decoder_config,
               **kwargs)

Instantiate a [`MaskFormerConfig`] (or a derived class) from a pre-trained
backbone model configuration and DETR model configuration.

Args:
    backbone_config ([`PretrainedConfig`]):
        The backbone configuration.
    decoder_config ([`PretrainedConfig`]):
        The transformer decoder configuration to use.

Returns:
    [`MaskFormerConfig`]: An instance of a configuration object
github-repos
def _zero_out_grad(op, grad):
    to_zero = op.inputs[0]
    shape = array_ops.shape(to_zero)
    index = array_ops.zeros_like(shape)
    first_grad = array_ops.reshape(grad, [-1])[0]
    to_zero_grad = sparse_ops.sparse_to_dense([index], shape, first_grad, 0)
    return [to_zero_grad]

The gradients for `zero_out`.

Args:
    op: The `zero_out` `Operation` that we are differentiating, which we
        can use to find the inputs and outputs of the original op.
    grad: Gradient with respect to the output of the `zero_out` op.

Returns:
    Gradients with respect to the input of `zero_out`.
github-repos
def load(url_or_handle, cache=None, **kwargs):
    ext = get_extension(url_or_handle)
    try:
        loader = loaders[ext.lower()]
        message = "Using inferred loader '%s' due to passed file extension '%s'."
        log.debug(message, loader.__name__[6:], ext)
        return load_using_loader(url_or_handle, loader, cache, **kwargs)
    except KeyError:
        log.warning("Unknown extension '%s', attempting to load as image.", ext)
        try:
            with read_handle(url_or_handle, cache=cache) as handle:
                result = _load_img(handle)
        except Exception as e:
            message = 'Could not load resource %s as image. Supported extensions: %s'
            log.error(message, url_or_handle, list(loaders))
            raise RuntimeError(message % (url_or_handle, list(loaders)))
        else:
            log.info("Unknown extension '%s' successfully loaded as image.", ext)
            return result
Load a file.

File format is inferred from url. File retrieval strategy is inferred
from URL. Returned object type is inferred from url extension.

Args:
    url_or_handle: a (reachable) URL, or an already open file handle

Raises:
    RuntimeError: If file extension or URL is not supported.
codesearchnet
def get_item(env, name, default=None):
    for key in name.split('.'):
        if isinstance(env, dict) and key in env:
            env = env[key]
        elif isinstance(env, types.ModuleType) and key in env.__dict__:
            env = env.__dict__[key]
        else:
            return default
    return env

Get an item from a dictionary, handling nested lookups with dotted
notation.

Args:
    env: the environment (dictionary) to use to look up the name.
    name: the name to look up, in dotted notation.
    default: the value to return if the name is not found.

Returns:
    The result of looking up the name, if found; else the default.
juraj-google-style
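A minimal usage sketch for get_item above, using a plain nested dict as the environment:

    env = {'project': {'owner': {'name': 'ada'}}}
    print(get_item(env, 'project.owner.name'))       # -> 'ada'
    print(get_item(env, 'project.owner.email', ''))  # -> '' (default)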
def ref_for_message_type(self, message_type):
    name = self.__normalized_name(message_type)
    if name not in self.__schemas:
        raise KeyError('Message has not been parsed: %s' % name)
    return name

Returns the JSON Schema id for the given message.

Args:
    message_type: protorpc.message.Message class to be parsed.

Returns:
    string, The JSON Schema id.

Raises:
    KeyError: if the message hasn't been parsed via add_message().
codesearchnet
def completely_parse_reader(parser: Parser[Input, Output],
                            reader: Reader[Input]) -> Result[Output]:
    result = (parser << eof).consume(reader)
    if isinstance(result, Continue):
        return Success(result.value)
    else:
        used = set()
        unique_expected = []
        for expected_lambda in result.expected:
            expected = expected_lambda()
            if expected not in used:
                used.add(expected)
                unique_expected.append(expected)
        return Failure(result.farthest.expected_error(' or '.join(unique_expected)))

Consume reader and return Success only on complete consumption.

This is a helper function for ``parse`` methods, which return
``Success`` when the input is completely consumed and ``Failure`` with
an appropriate message otherwise.

Args:
    parser: The parser doing the consuming
    reader: The input being consumed

Returns:
    A parsing ``Result``
juraj-google-style
def add_noise_curve(self, name, noise_type='ASD', is_wd_background=False):
    if is_wd_background:
        self.sensitivity_input.wd_noise = name
        self.sensitivity_input.wd_noise_type_in = noise_type
    else:
        if 'sensitivity_curves' not in self.sensitivity_input.__dict__:
            self.sensitivity_input.sensitivity_curves = []
        if 'noise_type_in' not in self.sensitivity_input.__dict__:
            self.sensitivity_input.noise_type_in = []
        self.sensitivity_input.sensitivity_curves.append(name)
        self.sensitivity_input.noise_type_in.append(noise_type)
    return

Add a noise curve for generation.

This will add a noise curve for an SNR calculation by appending to the
sensitivity_curves list within the sensitivity_input dictionary.

The name of the noise curve prior to the file extension will appear as
its label in the final output dataset. Therefore, it is recommended
prior to running the generator that file names are renamed to simple
names for later reference.

Args:
    name (str): Name of noise curve including file extension inside
        input_folder.
    noise_type (str, optional): Type of noise. Choices are `ASD`,
        `PSD`, or `char_strain`. Default is ASD.
    is_wd_background (bool, optional): If True, this sensitivity is
        used as the white dwarf background noise. Default is False.
codesearchnet
def get_default_connection_info(self, provider_name):
    provider = self._provider_client.get_by_name(provider_name)
    if provider:
        return provider['defaultConnectionInfo']
    else:
        return {}

Gets default connection info for a specific provider.

Args:
    provider_name: Name of the provider.

Returns:
    dict: Default connection information.
codesearchnet
def sonority_from_fts(self, seg):
    def match(m):
        return self.fm.match(fts(m), seg)

    minusHi = BoolTree(match('-hi'), 9, 8)
    minusNas = BoolTree(match('-nas'), 6, 5)
    plusVoi1 = BoolTree(match('+voi'), 4, 3)
    plusVoi2 = BoolTree(match('+voi'), 2, 1)
    plusCont = BoolTree(match('+cont'), plusVoi1, plusVoi2)
    plusSon = BoolTree(match('+son'), minusNas, plusCont)
    minusCons = BoolTree(match('-cons'), 7, plusSon)
    plusSyl = BoolTree(match('+syl'), minusHi, minusCons)
    return plusSyl.get_value()

Given a segment as features, returns the sonority on a scale of 1 to 9.

Args:
    seg (list): collection of (value, feature) pairs representing
        a segment (vowel or consonant)

Returns:
    int: sonority of `seg` between 1 and 9
juraj-google-style
def rotate_sites(self, indices=None, theta=0, axis=None, anchor=None,
                 to_unit_cell=True):
    from numpy.linalg import norm
    from numpy import cross, eye
    from scipy.linalg import expm

    if indices is None:
        indices = range(len(self))
    if axis is None:
        axis = [0, 0, 1]
    if anchor is None:
        anchor = [0, 0, 0]

    anchor = np.array(anchor)
    axis = np.array(axis)
    theta %= 2 * np.pi

    rm = expm(cross(eye(3), axis / norm(axis)) * theta)
    for i in indices:
        site = self._sites[i]
        coords = ((np.dot(rm, np.array(site.coords - anchor).T)).T + anchor).ravel()
        new_site = PeriodicSite(
            site.species, coords, self._lattice,
            to_unit_cell=to_unit_cell, coords_are_cartesian=True,
            properties=site.properties)
        self._sites[i] = new_site

Rotate specific sites by some angle around vector at anchor.

Args:
    indices (list): List of site indices on which to perform the
        rotation.
    theta (float): Angle in radians
    axis (3x1 array): Rotation axis vector.
    anchor (3x1 array): Point of rotation.
    to_unit_cell (bool): Whether new sites are transformed to unit cell
juraj-google-style
def resetAndRejoin(self, timeout):
    print('%s call resetAndRejoin' % self.port)
    print(timeout)
    try:
        if self.__sendCommand(WPANCTL_CMD + 'setprop Daemon:AutoAssociateAfterReset false')[0] != 'Fail':
            time.sleep(0.5)
            if self.__sendCommand(WPANCTL_CMD + 'reset')[0] != 'Fail':
                self.isPowerDown = True
            else:
                return False
        else:
            return False
        time.sleep(timeout)
        if self.deviceRole == Thread_Device_Role.SED:
            self.setPollingRate(self.sedPollingRate)
        if self.__sendCommand(WPANCTL_CMD + 'attach')[0] != 'Fail':
            time.sleep(3)
        else:
            return False
        if self.__sendCommand(WPANCTL_CMD + 'setprop Daemon:AutoAssociateAfterReset true')[0] == 'Fail':
            return False
        if self.__stripValue(self.__sendCommand(WPANCTL_CMD + 'getprop -v NCP:State')[0]) != 'associated':
            print('[FAIL] reset and rejoin')
            return False
        return True
    except Exception as e:
        ModuleHelper.WriteIntoDebugLogger('resetAndRejoin() Error: ' + str(e))

Reset and rejoin the Thread network after a given timeout delay.

Args:
    timeout: a timeout interval before rejoining the Thread network

Returns:
    True: successful to reset and rejoin the Thread network
    False: fail to reset and rejoin the Thread network
juraj-google-style
def read_bit(self, registeraddress, functioncode=2):
    _checkFunctioncode(functioncode, [1, 2])
    return self._genericCommand(functioncode, registeraddress)

Read one bit from the slave.

Args:
    * registeraddress (int): The slave register address (use decimal
      numbers, not hex).
    * functioncode (int): Modbus function code. Can be 1 or 2.

Returns:
    The bit value 0 or 1 (int).

Raises:
    ValueError, TypeError, IOError
codesearchnet
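A usage sketch for read_bit via minimalmodbus; the serial port and the slave/register addresses below are placeholders, not taken from the source:

    import minimalmodbus

    instrument = minimalmodbus.Instrument('/dev/ttyUSB0', slaveaddress=1)
    flag = instrument.read_bit(61, functioncode=2)  # read discrete input 61
    print(flag)  # 0 or 1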
def get_metadata(self, handle):
    handle = os.path.expanduser(os.path.expandvars(handle))
    with open(self._prefixed('%s.metadata' % handle)) as f:
        return json.load(f)

Returns the associated metadata info for the given handle, the metadata
file must exist (``handle + '.metadata'``).

Args:
    handle (str): Path to the template to get the metadata from

Returns:
    dict: Metadata for the given handle
juraj-google-style
def transform_to_mods_periodical(marc_xml, uuid, url):
    marc_xml = _read_content_or_path(marc_xml)
    transformed = xslt_transformation(
        marc_xml,
        _absolute_template_path('MARC21toPeriodicalTitle.xsl'))
    return _apply_postprocessing(
        marc_xml=marc_xml,
        xml=transformed,
        func=mods_postprocessor.postprocess_periodical,
        uuid=uuid,
        url=url)

Convert `marc_xml` to periodical MODS data format.

Args:
    marc_xml (str): Filename or XML string. Don't use ``\\n`` in case
        of filename.
    uuid (str): UUID string giving the package ID.
    url (str): URL of the publication (public or not).

Returns:
    list: Collection of transformed xml strings.
codesearchnet
def StatEntryFromPath(path, pathspec, ext_attrs=True):
    try:
        stat = filesystem.Stat.FromPath(path)
    except (IOError, OSError) as error:
        logging.error("Failed to obtain stat for '%s': %s", pathspec, error)
        return rdf_client_fs.StatEntry(pathspec=pathspec)
    return StatEntryFromStat(stat, pathspec, ext_attrs=ext_attrs)

Builds a stat entry object from a given path.

Args:
    path: A path (string value) to stat.
    pathspec: A `PathSpec` corresponding to the `path`.
    ext_attrs: Whether to include extended file attributes in the result.

Returns:
    `StatEntry` object.
juraj-google-style
def victim(self, main_type, sub_type, unique_id, victim_id, params=None):
    params = params or {}
    if not sub_type:
        url = '/v2/{}/{}/victims/{}'.format(main_type, unique_id, victim_id)
    else:
        url = '/v2/{}/{}/{}/victims/{}'.format(main_type, sub_type, unique_id,
                                               victim_id)
    return self.tcex.session.get(url, params=params)
Gets a victim associated with the given group or indicator.

Args:
    main_type: The main API branch the parent resource lives under.
    sub_type: The resource sub type, if any.
    unique_id: The unique id of the parent resource.
    victim_id: The id of the victim to retrieve.
    params: Optional query parameters.

Return:
    The response from the GET request.
juraj-google-style
def replace_urls(status):
    text = status.text
    if not has_url(status):
        return text
    urls = [(e['indices'], e['expanded_url']) for e in status.entities['urls']]
    # Replace from the end so earlier indices stay valid.
    urls.sort(key=lambda x: x[0][0], reverse=True)
    for (start, end), url in urls:
        text = text[:start] + url + text[end:]
    return text

Replace shorturls in a status with expanded urls.

Args:
    status (tweepy.status): A tweepy status object

Returns:
    str
juraj-google-style
def _get_status_code(self, http_status):
    try:
        return int(http_status.split(' ', 1)[0])
    except TypeError:
        _logger.warning('Unable to find status code in HTTP status %r.',
                        http_status)
    return 500

Get the HTTP status code from an HTTP status string.

Args:
    http_status: A string containing an HTTP status code and reason.

Returns:
    An integer with the status code number from http_status.
juraj-google-style
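A standalone illustration of the parsing done above (the method simply splits the leading code off a WSGI-style status string):

    http_status = '404 Not Found'
    print(int(http_status.split(' ', 1)[0]))  # -> 404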
def asin(cls, x: 'TensorFluent') -> 'TensorFluent':
    return cls._unary_op(x, tf.asin, tf.float32)

Returns a TensorFluent for the arcsin function.

Args:
    x: The input fluent.

Returns:
    A TensorFluent wrapping the arcsin function.
juraj-google-style
def ContainsAny(self, *values):
    self._awql = self._CreateMultipleValuesCondition(values, 'CONTAINS_ANY')
    return self._query_builder

Sets the type of the WHERE clause as "contains any".

Args:
    *values: The values to be used in the WHERE condition.

Returns:
    The query builder that this WHERE builder links to.
juraj-google-style
def ParseFileObject(self, parser_mediator, file_object):
    data = file_object.read(self._HEADER_READ_SIZE)
    if not data.startswith(b'<?xml'):
        raise errors.UnableToParseFile(
            'Not an Android usage history file [not XML]')
    _, _, data = data.partition(b'\n')
    if not data.startswith(b'<usage-history'):
        raise errors.UnableToParseFile(
            'Not an Android usage history file [wrong XML root key]')
    file_object.seek(0, os.SEEK_SET)
    xml = ElementTree.parse(file_object)
    root_node = xml.getroot()
    for application_node in root_node:
        package_name = application_node.get('name', None)
        for part_node in application_node.iter():
            if part_node.tag != 'comp':
                continue
            last_resume_time = part_node.get('lrt', None)
            if last_resume_time is None:
                parser_mediator.ProduceExtractionWarning('missing last resume time.')
                continue
            try:
                last_resume_time = int(last_resume_time, 10)
            except ValueError:
                parser_mediator.ProduceExtractionWarning(
                    'unsupported last resume time: {0:s}.'.format(last_resume_time))
                continue
            event_data = AndroidAppUsageEventData()
            event_data.component = part_node.get('name', None)
            event_data.package = package_name
            date_time = dfdatetime_java_time.JavaTime(timestamp=last_resume_time)
            event = time_events.DateTimeValuesEvent(
                date_time, definitions.TIME_DESCRIPTION_LAST_RESUME)
            parser_mediator.ProduceEventWithEventData(event, event_data)

Parses an Android usage-history file-like object.

Args:
    parser_mediator (ParserMediator): mediates interactions between
        parsers and other components, such as storage and dfvfs.
    file_object (dfvfs.FileIO): file-like object.

Raises:
    UnableToParseFile: when the file cannot be parsed.
juraj-google-style
def predict(self, X, break_ties="random", return_probs=False, **kwargs):
    Y_s = self.predict_proba(X, **kwargs)
    self._check(Y_s, typ=list)
    self._check(Y_s[0], typ=np.ndarray)
    Y_p = []
    for Y_ts in Y_s:
        Y_tp = self._break_ties(Y_ts, break_ties)
        Y_p.append(Y_tp.astype(int))
    if return_probs:
        return Y_p, Y_s
    else:
        return Y_p
Predicts int labels for an input X on all tasks

Args:
    X: The input for the predict_proba method
    break_ties: A tie-breaking policy
    return_probs: Return the predicted probabilities as well

Returns:
    Y_p: A t-length list of n-dim np.ndarrays of predictions in [1, K_t]
    [Optionally: Y_s: A t-length list of [n, K_t] np.ndarrays of
        predicted probabilities]
juraj-google-style
def write_buffers(self, conn, locked=True):
    if conn is None:
        raise ValueError("Cannot write_buffers to connection None")
    sent = 0
    for header, payload in self._buffers:
        yield conn.write_message(header, locked=locked)
        yield conn.write_message(payload, binary=True, locked=locked)
        sent += len(header) + len(payload)
    raise gen.Return(sent)

Write any buffer headers and payloads to the given connection.

Args:
    conn (object):
        May be any object with a ``write_message`` method. Typically,
        a Tornado ``WSHandler`` or ``WebSocketClientConnection``
    locked (bool):

Returns:
    int: number of bytes sent
juraj-google-style
def __init__(self, filename, filename_info, filetype_info):
    super(VIIRSActiveFiresTextFileHandler, self).__init__(
        filename, filename_info, filetype_info)
    if not os.path.isfile(filename):
        return
    self.file_content = dd.read_csv(
        filename, skiprows=15, header=None,
        names=["latitude", "longitude", "T13", "Along-scan", "Along-track",
               "detection_confidence", "power"])

Makes sure filepath is valid and then reads data into a Dask DataFrame

Args:
    filename: Filename
    filename_info: Filename information
    filetype_info: Filetype information
juraj-google-style
def weights(self):
    return self.trainable_weights + self.non_trainable_weights

Returns the list of all layer variables/weights.

Returns:
    A list of variables.
github-repos
def add_phase(self, name, done, score, summary, steps,
              report_every=None, log_every=None, checkpoint_every=None,
              feed=None):
    done = tf.convert_to_tensor(done, tf.bool)
    score = tf.convert_to_tensor(score, tf.float32)
    summary = tf.convert_to_tensor(summary, tf.string)
    feed = feed or {}
    if done.shape.ndims is None or score.shape.ndims is None:
        raise ValueError("Rank of 'done' and 'score' tensors must be known.")
    writer = self._logdir and tf.summary.FileWriter(
        os.path.join(self._logdir, name),
        tf.get_default_graph(), flush_secs=60)
    op = self._define_step(done, score, summary)
    batch = 1 if score.shape.ndims == 0 else score.shape[0].value
    self._phases.append(
        _Phase(name, writer, op, batch, int(steps), feed,
               report_every, log_every, checkpoint_every))

Add a phase to the loop protocol.

If the model breaks long computation into multiple steps, the done
tensor indicates whether the current score should be added to the mean
counter. For example, in reinforcement learning we only have a valid
score at the end of the episode.

Score and done tensors can either be scalars or vectors, to support
single and batched computations.

Args:
    name: Name for the phase, used for the summary writer.
    done: Tensor indicating whether current score can be used.
    score: Tensor holding the current, possibly intermediate, score.
    summary: Tensor holding summary string to write if not an empty string.
    steps: Duration of the phase in steps.
    report_every: Yield mean score every this number of steps.
    log_every: Request summaries via `log` tensor every this number of steps.
    checkpoint_every: Write checkpoint every this number of steps.
    feed: Additional feed dictionary for the session run call.

Raises:
    ValueError: Unknown rank for done or score tensors.
codesearchnet
def as_list(self, label=1, **kwargs):
    label_to_use = label if self.mode == "classification" else self.dummy_label
    ans = self.domain_mapper.map_exp_ids(self.local_exp[label_to_use], **kwargs)
    ans = [(x[0], float(x[1])) for x in ans]
    return ans

Returns the explanation as a list.

Args:
    label: desired label. If you ask for a label for which an
        explanation wasn't computed, will throw an exception.
        Will be ignored for regression explanations.
    kwargs: keyword arguments, passed to domain_mapper

Returns:
    list of tuples (representation, weight), where representation is
    given by domain_mapper. Weight is a float.
juraj-google-style
def napoleon_to_sphinx(docstring, **config_params):
    if "napoleon_use_param" not in config_params:
        config_params["napoleon_use_param"] = False
    if "napoleon_use_rtype" not in config_params:
        config_params["napoleon_use_rtype"] = False
    config = Config(**config_params)
    return str(GoogleDocstring(docstring, config))

Convert napoleon docstring to plain sphinx string.

Args:
    docstring (str): Docstring in napoleon format.
    **config_params (dict): Whatever napoleon doc configuration you
        want.

Returns:
    str: Sphinx string.
juraj-google-style
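A minimal usage sketch for napoleon_to_sphinx above, assuming the Config and GoogleDocstring imports come from sphinx.ext.napoleon (or sphinxcontrib-napoleon):

    doc = '''Add two numbers.

    Args:
        a (int): First operand.
        b (int): Second operand.

    Returns:
        int: The sum.
    '''
    print(napoleon_to_sphinx(doc))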
def update(self, instance, validated_data):
    is_primary = validated_data.pop('is_primary', False)
    instance = super(EmailSerializer, self).update(instance, validated_data)
    if is_primary:
        instance.set_primary()
    return instance

Update the instance the serializer is bound to.

Args:
    instance: The instance the serializer is bound to.
    validated_data: The data to update the serializer with.

Returns:
    The updated instance.
codesearchnet
def in_flight_request_count(self, node_id=None):
    if node_id is not None:
        conn = self._conns.get(node_id)
        if conn is None:
            return 0
        return len(conn.in_flight_requests)
    else:
        return sum([len(conn.in_flight_requests)
                    for conn in list(self._conns.values())])

Get the number of in-flight requests for a node or all nodes.

Arguments:
    node_id (int, optional): a specific node to check. If unspecified,
        return the total for all nodes

Returns:
    int: pending in-flight requests for the node, or all nodes if None
codesearchnet
def make_bitransformer(input_vocab_size=gin.REQUIRED,
                       output_vocab_size=gin.REQUIRED,
                       layout=None,
                       mesh_shape=None):
    with gin.config_scope('encoder'):
        encoder = Unitransformer(
            layer_stack=make_layer_stack(),
            input_vocab_size=input_vocab_size,
            output_vocab_size=None,
            autoregressive=False,
            name='encoder',
            layout=layout,
            mesh_shape=mesh_shape)
    with gin.config_scope('decoder'):
        decoder = Unitransformer(
            layer_stack=make_layer_stack(),
            input_vocab_size=output_vocab_size,
            output_vocab_size=output_vocab_size,
            autoregressive=True,
            name='decoder',
            layout=layout,
            mesh_shape=mesh_shape)
    return Bitransformer(encoder, decoder)

Gin-configurable bitransformer constructor.

In your config file you need to set the encoder and decoder layers like
this:

    encoder/make_layer_stack.layers = [
        @transformer_layers.SelfAttention,
        @transformer_layers.DenseReluDense,
    ]
    decoder/make_layer_stack.layers = [
        @transformer_layers.SelfAttention,
        @transformer_layers.EncDecAttention,
        @transformer_layers.DenseReluDense,
    ]

Args:
    input_vocab_size: an integer
    output_vocab_size: an integer
    layout: optional - an input to mtf.convert_to_layout_rules
        Some layers (e.g. MoE layers) cheat by looking at layout and
        mesh_shape
    mesh_shape: optional - an input to mtf.convert_to_shape
        Some layers (e.g. MoE layers) cheat by looking at layout and
        mesh_shape

Returns:
    a Bitransformer
codesearchnet
def initialize(log_file, project_dir=None, debug=False):
    print_splash()
    log.setup_logging(log_file, print_log_location=False, debug=debug)
    logger = log.get_logger('pipeline')
    if project_dir is not None:
        make_dir(os.path.normpath(project_dir))
        logger.info('PROJECT DIRECTORY: {}'.format(project_dir))
        logger.info('')
    logger.info('LOG LOCATION: {}'.format(log_file))
    print('')
    return logger
Initializes an AbTools pipeline.

Initialization includes printing the AbTools splash, setting up logging,
creating the project directory, and logging both the project directory
and the log location.

Args:
    log_file (str): Path to the log file. Required.
    project_dir (str): Path to the project directory. If not provided,
        the project directory won't be created and the location won't
        be logged.
    debug (bool): If ``True``, the logging level will be set to
        ``logging.DEBUG``. Default is ``False``, which logs at
        ``logging.INFO``.

Returns:
    logger
codesearchnet
class EfficientNetBlock(nn.Module):
    def __init__(self, config: EfficientNetConfig, in_dim: int, out_dim: int,
                 stride: int, expand_ratio: int, kernel_size: int,
                 drop_rate: float, id_skip: bool, adjust_padding: bool):
        super().__init__()
        self.expand_ratio = expand_ratio
        self.expand = True if self.expand_ratio != 1 else False
        expand_in_dim = in_dim * expand_ratio
        if self.expand:
            self.expansion = EfficientNetExpansionLayer(
                config=config, in_dim=in_dim, out_dim=expand_in_dim,
                stride=stride)
        self.depthwise_conv = EfficientNetDepthwiseLayer(
            config=config,
            in_dim=expand_in_dim if self.expand else in_dim,
            stride=stride,
            kernel_size=kernel_size,
            adjust_padding=adjust_padding)
        self.squeeze_excite = EfficientNetSqueezeExciteLayer(
            config=config, in_dim=in_dim, expand_dim=expand_in_dim,
            expand=self.expand)
        self.projection = EfficientNetFinalBlockLayer(
            config=config,
            in_dim=expand_in_dim if self.expand else in_dim,
            out_dim=out_dim,
            stride=stride,
            drop_rate=drop_rate,
            id_skip=id_skip)

    def forward(self, hidden_states: torch.FloatTensor) -> torch.Tensor:
        embeddings = hidden_states
        if self.expand_ratio != 1:
            hidden_states = self.expansion(hidden_states)
        hidden_states = self.depthwise_conv(hidden_states)
        hidden_states = self.squeeze_excite(hidden_states)
        hidden_states = self.projection(embeddings, hidden_states)
        return hidden_states

This corresponds to the expansion and depthwise convolution phase of
each block in the original implementation.

Args:
    config ([`EfficientNetConfig`]): Model configuration class.
    in_dim (`int`): Number of input channels.
    out_dim (`int`): Number of output channels.
    stride (`int`): Stride size to be used in convolution layers.
    expand_ratio (`int`): Expand ratio to set the output dimensions for
        the expansion and squeeze-excite layers.
    kernel_size (`int`): Kernel size for the depthwise convolution layer.
    drop_rate (`float`): Dropout rate to be used in the final phase of
        each block.
    id_skip (`bool`): Whether to apply dropout and sum the final hidden
        states with the input embeddings during the final phase of each
        block. Set to `True` for the first block of each stage.
    adjust_padding (`bool`): Whether to apply padding to only right and
        bottom side of the input kernel before the depthwise convolution
        operation, set to `True` for inputs with odd input sizes.
github-repos
def ProcessConfigOverrides(filename):
    abs_filename = os.path.abspath(filename)
    cfg_filters = []
    keep_looking = True
    while keep_looking:
        abs_path, base_name = os.path.split(abs_filename)
        if not base_name:
            break  # Reached the root directory.
        cfg_file = os.path.join(abs_path, "CPPLINT.cfg")
        abs_filename = abs_path
        if not os.path.isfile(cfg_file):
            continue
        try:
            with open(cfg_file) as file_handle:
                for line in file_handle:
                    line, _, _ = line.partition('#')  # Remove comments.
                    if not line.strip():
                        continue
                    name, _, val = line.partition('=')
                    name = name.strip()
                    val = val.strip()
                    if name == 'set noparent':
                        keep_looking = False
                    elif name == 'filter':
                        cfg_filters.append(val)
                    elif name == 'exclude_files':
                        if base_name:
                            pattern = re.compile(val)
                            if pattern.match(base_name):
                                _cpplint_state.PrintInfo(
                                    'Ignoring "%s": file excluded by "%s". '
                                    'File path component "%s" matches pattern "%s"\n' %
                                    (filename, cfg_file, base_name, val))
                                return False
                    elif name == 'linelength':
                        global _line_length
                        try:
                            _line_length = int(val)
                        except ValueError:
                            _cpplint_state.PrintError('Line length must be numeric.')
                    elif name == 'extensions':
                        global _valid_extensions
                        try:
                            extensions = [ext.strip() for ext in val.split(',')]
                            _valid_extensions = set(extensions)
                        except ValueError:
                            sys.stderr.write('Extensions should be a comma-separated list of values;'
                                             'for example: extensions=hpp,cpp\n'
                                             'This could not be parsed: "%s"' % (val,))
                    elif name == 'headers':
                        global _header_extensions
                        try:
                            extensions = [ext.strip() for ext in val.split(',')]
                            _header_extensions = set(extensions)
                        except ValueError:
                            sys.stderr.write('Extensions should be a comma-separated list of values;'
                                             'for example: extensions=hpp,cpp\n'
                                             'This could not be parsed: "%s"' % (val,))
                    elif name == 'root':
                        global _root
                        _root = val
                    else:
                        _cpplint_state.PrintError(
                            'Invalid configuration option (%s) in file %s\n' %
                            (name, cfg_file))
        except IOError:
            _cpplint_state.PrintError(
                "Skipping config file '%s': Can't open for reading\n" % cfg_file)
            keep_looking = False

    for cfg_filter in reversed(cfg_filters):
        _AddFilters(cfg_filter)

    return True

Loads the configuration files and processes the config overrides.

Args:
    filename: The name of the file being processed by the linter.

Returns:
    False if the current |filename| should not be processed further.
juraj-google-style
def forward(self, hidden_states: torch.Tensor, attention_mask: torch.Tensor,
            output_attentions: Optional[bool] = False) -> Tuple[torch.Tensor, ...]:
    residual = hidden_states
    hidden_states = self.layer_norm1(hidden_states)
    hidden_states, attn_weights = self.self_attn(
        hidden_states=hidden_states,
        attention_mask=attention_mask,
        output_attentions=output_attentions)
    hidden_states = residual + hidden_states
    residual = hidden_states
    hidden_states = self.layer_norm2(hidden_states)
    hidden_states = self.mlp(hidden_states)
    hidden_states = residual + hidden_states
    outputs = (hidden_states,)
    if output_attentions:
        outputs += (attn_weights,)
    return outputs

Args:
    hidden_states (`torch.FloatTensor`):
        Input to the layer of shape `(batch, seq_len, embed_dim)`.
    attention_mask (`torch.FloatTensor`):
        Attention mask of shape `(batch, 1, q_len, k_v_seq_len)` where
        padding elements are indicated by very large negative values.
    output_attentions (`bool`, *optional*, defaults to `False`):
        Whether or not to return the attentions tensors of all attention
        layers. See `attentions` under returned tensors for more detail.
github-repos
def prepare_stack_for_update(self, stack, tags):
    if self.is_stack_destroyed(stack):
        return False
    elif self.is_stack_completed(stack):
        return True

    stack_name = self.get_stack_name(stack)
    stack_status = self.get_stack_status(stack)

    if self.is_stack_in_progress(stack):
        raise exceptions.StackUpdateBadStatus(
            stack_name, stack_status, 'Update already in-progress')
    if not self.is_stack_recreatable(stack):
        raise exceptions.StackUpdateBadStatus(
            stack_name, stack_status, 'Unsupported state for re-creation')
    if not self.recreate_failed:
        raise exceptions.StackUpdateBadStatus(
            stack_name, stack_status,
            'Stack re-creation is disabled. Run stacker again with the '
            '--recreate-failed option to force it to be deleted and '
            'created from scratch.')

    stack_tags = self.get_stack_tags(stack)
    if not check_tags_contain(stack_tags, tags):
        raise exceptions.StackUpdateBadStatus(
            stack_name, stack_status,
            'Tags differ from current configuration, possibly not created '
            'with stacker')

    if self.interactive:
        sys.stdout.write(
            'The "%s" stack is in a failed state (%s).\n'
            'It cannot be updated, but it can be deleted and re-created.\n'
            'All its current resources will be IRREVERSIBLY DESTROYED.\n'
            'Proceed carefully!\n\n' % (stack_name, stack_status))
        sys.stdout.flush()
        ask_for_approval(include_verbose=False)

    logger.warn('Destroying stack "%s" for re-creation', stack_name)
    self.destroy_stack(stack)
    return False

Prepare a stack for updating

It may involve deleting the stack if it has failed its initial
creation.

The deletion is only allowed if:
- The stack contains all the tags configured in the current context;
- The stack is in one of the statuses considered safe to re-create
- ``recreate_failed`` is enabled, due to either being explicitly
  enabled by the user, or because interactive mode is on.

Args:
    stack (dict): a stack object returned from get_stack
    tags (list): list of expected tags that must be present in the
        stack if it must be re-created

Returns:
    bool: True if the stack can be updated, False if it must be
        re-created
codesearchnet
def performSearch(emails=[], nThreads=16, secondsBeforeTimeout=5):
    _startTime = time.time()

    def hasRunOutOfTime(oldEpoch):
        now = time.time()
        return now - oldEpoch >= secondsBeforeTimeout

    results = []
    args = []
    for e in emails:
        if weCanCheckTheseDomains(e):
            args.append((e))
    if len(args) == 0:
        return results

    if nThreads <= 0 or nThreads > len(args):
        nThreads = len(args)

    try:
        original_sigint_handler = signal.signal(signal.SIGINT, signal.SIG_IGN)
        pool = Pool(nThreads)
        signal.signal(signal.SIGINT, original_sigint_handler)
    except ValueError:
        pool = Pool(nThreads)

    poolResults = []
    try:
        def log_result(result):
            poolResults.append(result)

        for m in emails:
            parameters = (m, )
            res = pool.apply_async(pool_function, args=parameters,
                                   callback=log_result)
            try:
                res.get(3)
            except TimeoutError as e:
                general.warning("\n[!] Process timeouted for '{}'.\n".format(parameters))
        pool.close()
    except KeyboardInterrupt:
        print(general.warning("\n[!] Process manually stopped by the user. Terminating workers.\n"))
        pool.terminate()
        pending = ""
        print(general.warning("[!] The following email providers were not processed:"))
        for m in emails:
            processed = False
            for result in poolResults:
                if str(m) in json.dumps(result["data"]):
                    processed = True
                    break
            if not processed:
                print("\t- " + str(m))
                pending += " " + str(m)
        print("\n")
        print(general.warning("If you want to relaunch the app with these platforms you can always run the command with: "))
        print("\t mailfy.py ... -p " + general.emphasis(pending))
        print("\n")
        print(general.warning("If you prefer to avoid these platforms you can manually evade them for whatever reason with: "))
        print("\t mailfy.py ... -x " + general.emphasis(pending))
        print("\n")
        pool.join()

    for serArray in poolResults:
        data = serArray["data"]
        if data != None and data != {}:
            results.append(data)

    pool.close()
    return results

Method to perform the mail verification process.

Args:
-----
    emails: list of emails to be verified.
    nThreads: the number of threads to be used. Default: 16 threads.
    secondsBeforeTimeout: number of seconds to wait before raising a
        timeout. Default: 5 seconds.

Returns:
--------
    The results collected.
juraj-google-style
def shift(schedule: ScheduleComponent, time: int, name: str = None) -> Schedule:
    if name is None:
        name = schedule.name
    return union((time, schedule), name=name)
Return schedule shifted by `time`.

Args:
    schedule: The schedule to shift
    time: The time to shift by
    name: Name of shifted schedule. Defaults to name of `schedule`

Returns:
    The shifted schedule.
codesearchnet
def build_eval_session(module_spec, class_count):
    eval_graph, bottleneck_tensor, resized_input_tensor, wants_quantization = (
        create_module_graph(module_spec))
    eval_sess = tf.Session(graph=eval_graph)
    with eval_graph.as_default():
        (_, _, bottleneck_input, ground_truth_input,
         final_tensor) = add_final_retrain_ops(
             class_count, FLAGS.final_tensor_name, bottleneck_tensor,
             wants_quantization, is_training=False)
        tf.train.Saver().restore(eval_sess, CHECKPOINT_NAME)
        evaluation_step, prediction = add_evaluation_step(final_tensor,
                                                          ground_truth_input)
    return (eval_sess, resized_input_tensor, bottleneck_input,
            ground_truth_input, evaluation_step, prediction)

Builds a restored eval session without train operations for exporting.

Args:
    module_spec: The hub.ModuleSpec for the image module being used.
    class_count: Number of classes

Returns:
    Eval session containing the restored eval graph.
    The bottleneck input, ground truth, eval step, and prediction
    tensors.
juraj-google-style
def update_variant(self, variant_obj):
    LOG.debug('Updating variant %s', variant_obj.get('simple_id'))
    new_variant = self.variant_collection.find_one_and_replace(
        {'_id': variant_obj['_id']},
        variant_obj,
        return_document=pymongo.ReturnDocument.AFTER)
    return new_variant

Update one variant document in the database.

This means that the variant in the database will be replaced by
variant_obj.

Args:
    variant_obj(dict)

Returns:
    new_variant(dict)
codesearchnet
def find_elb_dns_zone_id(name='', env='dev', region='us-east-1'):
    LOG.info('Find %s ELB DNS Zone ID in %s [%s].', name, env, region)
    client = boto3.Session(profile_name=env).client('elb', region_name=region)
    elbs = client.describe_load_balancers(LoadBalancerNames=[name])
    return elbs['LoadBalancerDescriptions'][0]['CanonicalHostedZoneNameID']

Get an application's AWS elb dns zone id.

Args:
    name (str): ELB name
    env (str): Environment/account of ELB
    region (str): AWS Region

Returns:
    str: elb DNS zone ID
codesearchnet
def add_capability(capability, source=None, limit_access=False, image=None,
                   restart=False):
    if salt.utils.versions.version_cmp(__grains__['osversion'], '10') == -1:
        raise NotImplementedError(
            '`install_capability` is not available on this version of '
            'Windows: {0}'.format(__grains__['osversion']))
    cmd = ['DISM',
           '/Quiet',
           '/Image:{0}'.format(image) if image else '/Online',
           '/Add-Capability',
           '/CapabilityName:{0}'.format(capability)]
    if source:
        cmd.append('/Source:{0}'.format(source))
    if limit_access:
        cmd.append('/LimitAccess')
    if not restart:
        cmd.append('/NoRestart')
    return __salt__['cmd.run_all'](cmd)

Install a capability

Args:
    capability (str): The capability to install
    source (Optional[str]): The optional source of the capability.
        Default is set by group policy and can be Windows Update.
    limit_access (Optional[bool]): Prevent DISM from contacting Windows
        Update for the source package
    image (Optional[str]): The path to the root directory of an offline
        Windows image. If `None` is passed, the running operating
        system is targeted. Default is None.
    restart (Optional[bool]): Reboot the machine if required by the
        install

Raises:
    NotImplementedError: For all versions of Windows that are not
        Windows 10 and later. Server editions of Windows use
        ServerManager instead.

Returns:
    dict: A dictionary containing the results of the command

CLI Example:

.. code-block:: bash

    salt '*' dism.add_capability Tools.Graphics.DirectX~~~~0.0.1.0
codesearchnet
def processor_coordinates_to_pnum(mesh_shape, coord):
    ret = 0
    multiplier = 1
    for c, d in zip(coord[::-1], mesh_shape.to_integer_list[::-1]):
        ret += multiplier * c
        multiplier *= d
    return ret
Inverse of pnum_to_processor_coordinates.

Args:
    mesh_shape: a Shape
    coord: a list of integers with length len(mesh_shape)

Returns:
    an integer less than mesh_shape.size (the product of the mesh
    dimension sizes)
codesearchnet
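A standalone illustration of the mixed-radix conversion above, using a plain list of dimension sizes in place of a mtf Shape object:

    def coords_to_pnum(dims, coord):
        ret, multiplier = 0, 1
        for c, d in zip(coord[::-1], dims[::-1]):
            ret += multiplier * c
            multiplier *= d
        return ret

    print(coords_to_pnum([2, 3, 4], [1, 2, 3]))  # 1*12 + 2*4 + 3*1 = 23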
def set_parent(self, parent):
    if not isinstance(parent, Node):
        raise TypeError('parent must be a Node')
    self.parent = parent

Set the parent of this ``Node`` object.

Use this carefully, otherwise you may damage the structure of this
``Tree`` object.

Args:
    parent (``Node``): The new parent of this ``Node``
codesearchnet
def typed_returnvalue(self, type_name, formatter=None):
    self.return_info = ReturnInfo(type_name, formatter, True, None)

Add type information to the return value of this function.

Args:
    type_name (str): The name of the type of the return value.
    formatter (str): An optional name of a formatting function
        specified for the type given in type_name.
juraj-google-style
def update_box_field(self, box_key, field):
    self._raise_unimplemented_error()
    uri = '/'.join([self.api_uri,
                    self.boxes_suffix,
                    box_key,
                    self.fields_suffix
                    ])
    return self._update_field(uri, field)
Updates box field as specified

Args:
    box_key    key for the box where the field lives
    field      StreakField object with fresh data

returns    (status code, updated field dict)
juraj-google-style
def are_symmetrically_equivalent(self, sites1, sites2, symm_prec=0.001):
    def in_sites(site):
        for test_site in sites1:
            if test_site.is_periodic_image(site, symm_prec, False):
                return True
        return False

    for op in self:
        newsites2 = [PeriodicSite(site.species,
                                  op.operate(site.frac_coords),
                                  site.lattice)
                     for site in sites2]
        for site in newsites2:
            if not in_sites(site):
                break
        else:
            return True
    return False

Given two sets of PeriodicSites, test if they are actually
symmetrically equivalent under this space group.

Useful, for example, if you want to test if selecting atoms 1 and 2
out of a set of 4 atoms are symmetrically the same as selecting atoms
3 and 4, etc.

One use is in PartialRemoveSpecie transformation to return only
symmetrically distinct arrangements of atoms.

Args:
    sites1 ([Site]): 1st set of sites
    sites2 ([Site]): 2nd set of sites
    symm_prec (float): Tolerance in atomic distance to test if atoms
        are symmetrically similar.

Returns:
    (bool): Whether the two sets of sites are symmetrically equivalent.
codesearchnet
def write_version_and_dims(version, dims, f):
    f.write(("#" + version + "\n"))
    f.write((dims[0] + "\t" + dims[1] + "\t" + dims[2] + "\t" + dims[3] + "\n"))

Write first two lines of gct file.

Args:
    version (string): 1.3 by default
    dims (list of strings): length = 4
    f (file handle): handle of output file

Returns:
    nothing
juraj-google-style
def generate_poisson_data(centers, n_cells, cluster_probs=None):
    genes, clusters = centers.shape
    output = np.zeros((genes, n_cells))
    if cluster_probs is None:
        cluster_probs = np.ones(clusters) / clusters
    labels = []
    for i in range(n_cells):
        c = np.random.choice(range(clusters), p=cluster_probs)
        labels.append(c)
        output[:, i] = np.random.poisson(centers[:, c])
    return output, np.array(labels)

Generates poisson-distributed data, given a set of means for each
cluster.

Args:
    centers (array): genes x clusters matrix
    n_cells (int): number of output cells
    cluster_probs (array): prior probability for each cluster.
        Default: uniform.

Returns:
    output - array with shape genes x n_cells
    labels - array of cluster labels
juraj-google-style
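A quick usage sketch for generate_poisson_data above, with a small random center matrix:

    import numpy as np

    centers = np.random.uniform(1, 10, size=(50, 3))  # 50 genes, 3 clusters
    data, labels = generate_poisson_data(centers, n_cells=100)
    print(data.shape, labels.shape)  # (50, 100) (100,)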
def update_reserved_vlan_range(self, id_or_uri, vlan_pool, force=False):
    uri = self._client.build_uri(id_or_uri) + '/reserved-vlan-range'
    return self._client.update(resource=vlan_pool, uri=uri, force=force,
                               default_values=self.DEFAULT_VALUES)

Updates the reserved vlan ID range for the fabric.

Note:
    This method is only available on HPE Synergy.

Args:
    id_or_uri: ID or URI of fabric.
    vlan_pool (dict): vlan-pool data to update.
    force: If set to true, the operation completes despite any problems
        with network connectivity or errors on the resource itself.
        The default is false.

Returns:
    dict: The fabric
codesearchnet
def new_contract_proxy(self, contract_interface, contract_address: Address):
    return ContractProxy(
        self,
        contract=self.new_contract(contract_interface, contract_address),
    )

Return a proxy for interacting with a smart contract.

Args:
    contract_interface: The contract interface as defined by the json.
    contract_address: The contract's address.
juraj-google-style
def get_savable_components(self):
    components = self.get_components()
    components = [components[name] for name in sorted(components)]
    return set(filter(lambda x: isinstance(x, util.SavableComponent), components))

Returns the list of all of the components this model consists of that
can be individually saved and restored. For instance the network or
distribution.

Returns:
    List of util.SavableComponent
codesearchnet
def run_example(example_coroutine, *extra_args):
    args = _get_parser(extra_args).parse_args()
    logging.basicConfig(level=logging.DEBUG if args.debug else logging.WARNING)
    cookies = hangups.auth.get_auth_stdin(args.token_path)
    client = hangups.Client(cookies)
    loop = asyncio.get_event_loop()
    task = asyncio.ensure_future(_async_main(example_coroutine, client, args),
                                 loop=loop)
    try:
        loop.run_until_complete(task)
    except KeyboardInterrupt:
        task.cancel()
        loop.run_until_complete(task)
    finally:
        loop.close()

Run a hangups example coroutine.

Args:
    example_coroutine (coroutine): Coroutine to run with a connected
        hangups client and arguments namespace as arguments.
    extra_args (str): Any extra command line arguments required by the
        example.
codesearchnet
def _absolute_template_path(fn):
    return os.path.join(os.path.dirname(__file__), "xslt", fn)

Return absolute path for filename from local ``xslt/`` directory.

Args:
    fn (str): Filename. ``MARC21slim2MODS3-4-NDK.xsl`` for example.

Returns:
    str: Absolute path to `fn` in the ``xslt`` directory.
juraj-google-style
def get_cross_replica_context():
    return _get_per_thread_mode().cross_replica_context

Returns the current tf.distribute.Strategy if in a cross-replica
context.

DEPRECATED: Please use `in_cross_replica_context()` and
`get_strategy()` instead.

Returns:
    Returns the current `tf.distribute.Strategy` object in a
    cross-replica context, or `None`.

    Exactly one of `get_replica_context()` and
    `get_cross_replica_context()` will return `None` in a particular
    block.
github-repos
class PoolerStartLogits(nn.Module):
    def __init__(self, config: PretrainedConfig):
        super().__init__()
        self.dense = nn.Linear(config.hidden_size, 1)
        logger.warning_once(
            '[DEPRECATION WARNING] `PoolerStartLogits` is deprecated and will '
            'be removed in v4.53. Please use model-specific class, e.g. '
            '`XLMPoolerStartLogits`.')

    def forward(self, hidden_states: torch.FloatTensor,
                p_mask: Optional[torch.FloatTensor] = None) -> torch.FloatTensor:
        x = self.dense(hidden_states).squeeze(-1)
        if p_mask is not None:
            if get_parameter_dtype(self) == torch.float16:
                x = x * (1 - p_mask) - 65500 * p_mask
            else:
                x = x * (1 - p_mask) - 1e+30 * p_mask
        return x

Compute SQuAD start logits from sequence hidden states.

Args:
    config ([`PretrainedConfig`]):
        The config used by the model, will be used to grab the
        `hidden_size` of the model.
github-repos
def wavelength_match(a, b):
    if type(a) == type(b) or (isinstance(a, numbers.Number) and
                              isinstance(b, numbers.Number)):
        return a == b
    elif a is None or b is None:
        return False
    elif isinstance(a, (list, tuple)) and len(a) == 3:
        return a[0] <= b <= a[2]
    elif isinstance(b, (list, tuple)) and len(b) == 3:
        return b[0] <= a <= b[2]
    else:
        raise ValueError('Can only compare wavelengths of length 1 or 3')

Return if two wavelengths are equal.

Args:
    a (tuple or scalar): (min wl, nominal wl, max wl) or scalar wl
    b (tuple or scalar): (min wl, nominal wl, max wl) or scalar wl
codesearchnet
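A few usage sketches for wavelength_match above: a (min, nominal, max) range matches any scalar inside it, while same-typed values are compared for equality:

    print(wavelength_match((10.3, 10.8, 11.3), 10.8))          # True (in range)
    print(wavelength_match(0.65, 0.65))                        # True (both numbers)
    print(wavelength_match((0.5, 0.6, 0.7), (0.5, 0.6, 0.7)))  # True (same type)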
def lbest_idx(state, idx):
    swarm = state.swarm
    n_s = state.params['n_s']
    cmp = comparator(swarm[0].best_fitness)
    indices = __lbest_indices__(len(swarm), n_s, idx)
    best = None
    for i in indices:
        if best is None or cmp(swarm[i].best_fitness, swarm[best].best_fitness):
            best = i
    return best

lbest Neighbourhood topology function.

Neighbourhood size is determined by state.params['n_s'].

Args:
    state: cipy.algorithms.pso.State: The state of the PSO algorithm.
    idx: int: index of the particle in the swarm.

Returns:
    int: The index of the lbest particle.
juraj-google-style
def as_dict(self):
    tags_dict = dict(self)
    tags_dict['@module'] = self.__class__.__module__
    tags_dict['@class'] = self.__class__.__name__
    return tags_dict

Dict representation.

Returns:
    Dictionary of parameters from fefftags object
codesearchnet
def mark_done(task_id):
    task = Task.get_by_id(task_id)
    if task is None:
        raise ValueError('Task with id %d does not exist' % task_id)
    task.done = True
    task.put()

Marks a task as done.

Args:
    task_id: The integer id of the task to update.

Raises:
    ValueError: if the requested task doesn't exist.
juraj-google-style
def _AvailableString(variables, verbose=False):
    modules = []
    other = []
    for name, value in variables.items():
        if not verbose and name.startswith('_'):
            continue
        if '-' in name or '/' in name:
            continue
        if inspect.ismodule(value):
            modules.append(name)
        else:
            other.append(name)

    lists = [('Modules', modules), ('Objects', other)]
    list_strs = []
    for name, varlist in lists:
        if varlist:
            items_str = ', '.join(sorted(varlist))
            list_strs.append(f'{name}: {items_str}')

    lists_str = '\n'.join(list_strs)
    return f'Fire is starting a Python REPL with the following objects:\n{lists_str}\n'

Returns a string describing what objects are available in the Python
REPL.

Args:
    variables: A dict of the object to be available in the REPL.
    verbose: Whether to include 'hidden' members, those keys starting
        with _.

Returns:
    A string fit for printing at the start of the REPL, indicating what
    objects are available for the user to use.
github-repos
def drop(self, items):
    self._manager.leaser.remove(items)
    self._manager.maybe_resume_consumer()

Remove the given messages from lease management.

Args:
    items(Sequence[DropRequest]): The items to drop.
codesearchnet
def pack_sequence_as(structure, flat_sequence): return nest_util.pack_sequence_as(nest_util.Modality.DATA, structure, flat_sequence, expand_composites=False)
Returns a given flattened sequence packed into a nest. If `structure` is a scalar, `flat_sequence` must be a single-element list; in this case the return value is `flat_sequence[0]`. Args: structure: tuple or list constructed of scalars and/or other tuples/lists, or a scalar. Note: numpy arrays are considered scalars. flat_sequence: flat sequence to pack. Returns: packed: `flat_sequence` converted to have the same recursive structure as `structure`. Raises: ValueError: If nest and structure have different element counts.
github-repos
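A small sketch of the round trip: a flat list is repacked into the shape of a reference structure:

structure = ((1, 2), 3, [4])
flat = ['a', 'b', 'c', 'd']
pack_sequence_as(structure, flat)  # (('a', 'b'), 'c', ['d'])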
def AddEnumDescriptor(self, enum_desc): if (not isinstance(enum_desc, descriptor.EnumDescriptor)): raise TypeError('Expected instance of descriptor.EnumDescriptor.') self._enum_descriptors[enum_desc.full_name] = enum_desc self.AddFileDescriptor(enum_desc.file)
Adds an EnumDescriptor to the pool.

This method also registers the FileDescriptor associated with the enum.

Args:
    enum_desc: An EnumDescriptor.
codesearchnet
def stop_apppool(name): ps_cmd = ['Stop-WebAppPool', r"'{0}'".format(name)] cmd_ret = _srvmgr(ps_cmd) return cmd_ret['retcode'] == 0
Stop an IIS application pool. .. versionadded:: 2017.7.0 Args: name (str): The name of the App Pool to stop. Returns: bool: True if successful, otherwise False CLI Example: .. code-block:: bash salt '*' win_iis.stop_apppool name='MyTestPool'
juraj-google-style
def register_backend(name, backend, allow_overwrite=False): if hasattr(Circuit, ('run_with_' + name)): if allow_overwrite: warnings.warn(f'Circuit has attribute `run_with_{name}`.') else: raise ValueError(f'Circuit has attribute `run_with_{name}`.') if (not allow_overwrite): if (name in BACKENDS): raise ValueError(f"Backend '{name}' is already registered as backend.") BACKENDS[name] = backend
Register new backend.

Args:
    name (str): The name of the backend.
    backend (type): The class of the backend.
    allow_overwrite (bool, optional): If True, allow to overwrite the
        existing backend. Otherwise, raise a ValueError.

Raises:
    ValueError: The name is duplicated with an existing backend.
        When `allow_overwrite=True`, this error is not raised.
codesearchnet
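A hypothetical registration; MyBackend stands in for whatever backend class the caller provides:

class MyBackend:
    def run(self, gates, n_qubits, *args, **kwargs):
        ...  # execute the circuit in some way

register_backend('mybackend', MyBackend)
# Registering the same name again without allow_overwrite=True raises ValueError.
register_backend('mybackend', MyBackend, allow_overwrite=True)  # replaces it instead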
def _resolve_subkeys(key, separator='.'):
    parts = key.split(separator, 1)
    if len(parts) > 1:
        return tuple(parts)
    else:
        return (parts[0], None)
Resolve a potentially nested key. If the key contains the ``separator`` (e.g. ``.``) then the key will be split on the first instance of the subkey:: >>> _resolve_subkeys('a.b.c') ('a', 'b.c') >>> _resolve_subkeys('d|e|f', separator='|') ('d', 'e|f') If not, the subkey will be :data:`None`:: >>> _resolve_subkeys('foo') ('foo', None) Args: key (str): A string that may or may not contain the separator. separator (str): The namespace separator. Defaults to `.`. Returns: Tuple[str, str]: The key and subkey(s).
codesearchnet
def mrc_to_marc(mrc): lines = [ line for line in mrc.splitlines() if line.strip() ] def split_to_parts(lines): for line in lines: first_part, second_part = line.split(" L ", 1) yield line, first_part, second_part.lstrip() control_lines = [] data_lines = [] for line, first_part, second_part in split_to_parts(lines): if second_part.startswith("$"): data_lines.append(line) else: control_lines.append(line) record = MARCXMLRecord() record.oai_marc = True for line, descr, content in split_to_parts(control_lines): record.controlfields[descr.strip()[:3]] = content def get_subfield_dict(line): fields = ( (field[0], field[1:]) for field in line.split("$$")[1:] ) fields_dict = defaultdict(list) for key, val in fields: fields_dict[key].append(val) return fields_dict for line, descr, content_line in split_to_parts(data_lines): name = descr[:3] i1 = descr[3] i2 = descr[4] record.add_data_field( name, i1, i2, get_subfield_dict(content_line) ) return record.to_XML()
Convert MRC data format to MARC XML. Args: mrc (str): MRC as string. Returns: str: XML with MARC.
juraj-google-style
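A tiny hypothetical MRC fragment (field label, ' L ' separator, '$$'-prefixed subfields for data fields); this assumes the module's MARCXMLRecord dependency is available:

mrc = (
    '001 L nkc20150003059\n'
    '10010 L $$aNovak, Jan$$4aut\n'
)
xml = mrc_to_marc(mrc)  # MARC XML string with one control and one data field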
def __init__(self, thresholds=np.arange(0, 1.1, 0.1), obs_threshold=1.0, input_str=None): self.thresholds = thresholds self.obs_threshold = obs_threshold self.contingency_tables = pd.DataFrame(np.zeros((thresholds.size, 4), dtype=int), columns=["TP", "FP", "FN", "TN"]) if input_str is not None: self.from_str(input_str)
Initializes the DistributedROC object. If input_str is not None, then the DistributedROC object is initialized with the contents of input_str. Otherwise an empty contingency table is created. Args: thresholds (numpy.array): Array of thresholds in increasing order. obs_threshold (float): Split threshold (>= is positive event) (< is negative event) input_str (None or str): String containing information for DistributedROC
juraj-google-style
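A short usage sketch with assumed verification thresholds:

import numpy as np

roc = DistributedROC(thresholds=np.arange(0, 1.1, 0.1), obs_threshold=0.5)
roc.contingency_tables.head()  # all-zero TP/FP/FN/TN rows, one per threshold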
def convert_clip(params, w_name, scope_name, inputs, layers, weights, names): print('Converting clip ...') if (params['min'] == 0): print('using ReLU({0})'.format(params['max'])) layer = keras.layers.ReLU(max_value=params['max']) else: def target_layer(x, vmin=params['min'], vmax=params['max']): import tensorflow as tf return tf.clip_by_value(x, vmin, vmax) layer = keras.layers.Lambda(target_layer) layers[scope_name] = layer(layers[inputs[0]])
Convert clip operation. Args: params: dictionary with layer parameters w_name: name prefix in state_dict scope_name: pytorch scope name inputs: pytorch node inputs layers: dictionary with keras tensors weights: pytorch state_dict names: use short names for keras layers
codesearchnet
def delete_existing_policy(self, scaling_policy, server_group): self.log.info('Deleting policy %s on %s', scaling_policy['policyName'], server_group) delete_dict = {'application': self.app, 'description': 'Delete scaling policy', 'job': [{'policyName': scaling_policy['policyName'], 'serverGroupName': server_group, 'credentials': self.env, 'region': self.region, 'provider': 'aws', 'type': 'deleteScalingPolicy', 'user': 'foremast-autoscaling-policy'}]} wait_for_task(json.dumps(delete_dict))
Given a scaling_policy and server_group, deletes the existing scaling_policy. Scaling policies need to be deleted instead of upserted for consistency. Args: scaling_policy (json): the scaling_policy json from Spinnaker that should be deleted server_group (str): the affected server_group
codesearchnet
def zenith_luminance(self, value=9999.0): if value is not None: try: value = float(value) except ValueError: raise ValueError('value {} need to be of type float ' 'for field `zenith_luminance`'.format(value)) if value < 0.0: raise ValueError('value need to be greater or equal 0.0 ' 'for field `zenith_luminance`') self._zenith_luminance = value
Corresponds to IDD Field `zenith_luminance` will be missing if >= 9999 Args: value (float): value for IDD Field `zenith_luminance` Unit: Cd/m2 value >= 0.0 Missing value: 9999.0 if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
juraj-google-style
def noise_get(n: tcod.noise.Noise, f: Sequence[float], typ: int=NOISE_DEFAULT) -> float: return float(lib.TCOD_noise_get_ex(n.noise_c, ffi.new('float[4]', f), typ))
Return the noise value sampled from the ``f`` coordinate.

``f`` should be a tuple or list with a length matching
:any:`Noise.dimensions`.
If ``f`` is shorter than :any:`Noise.dimensions` the missing coordinates
will be filled with zeros.

Args:
    n (Noise): A Noise instance.
    f (Sequence[float]): The point to sample the noise from.
    typ (int): The noise algorithm to use.

Returns:
    float: The sampled noise value.
codesearchnet
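A minimal sketch using python-tcod; the sampled value falls roughly in [-1.0, 1.0]:

import tcod

noise = tcod.noise.Noise(dimensions=2)
value = noise_get(noise, [0.5, 0.5])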
def propagate(self, token, channel):
    if self.get_propagate_status(token, channel) != u'0':
        return
    url = self.url('sd/{}/{}/setPropagate/1/'.format(token, channel))
    req = self.remote_utils.get_url(url)
    if req.status_code != 200:
        raise RemoteDataUploadError('Propagate fail: {}'.format(req.text))
    return True
Kick off the propagate function on the remote server. Arguments: token (str): The token to propagate channel (str): The channel to propagate Returns: boolean: Success
codesearchnet
def Deserialize(self, reader):
    super(AssetState, self).Deserialize(reader)
    self.AssetId = reader.ReadUInt256()
    self.AssetType = reader.ReadByte()
    self.Name = reader.ReadVarString()
    self.Amount = reader.ReadFixed8()
    self.Available = reader.ReadFixed8()
    self.Precision = reader.ReadByte()
    reader.ReadByte()  # fee mode byte, read and discarded
    self.Fee = reader.ReadFixed8()
    self.FeeAddress = reader.ReadUInt160()
    self.Owner = ECDSA.Deserialize_Secp256r1(reader)
    self.Admin = reader.ReadUInt160()
    self.Issuer = reader.ReadUInt160()
    self.Expiration = reader.ReadUInt32()
    self.IsFrozen = reader.ReadBool()
Deserialize full object. Args: reader (neocore.IO.BinaryReader):
juraj-google-style
def List(self, request, global_params=None): config = self.GetMethodConfig('List') return self._RunMethod(config, request, global_params=global_params)
List all GitHubEnterpriseConfigs for a given project. Args: request: (CloudbuildProjectsLocationsGithubEnterpriseConfigsListRequest) input message global_params: (StandardQueryParameters, default: None) global arguments Returns: (ListGithubEnterpriseConfigsResponse) The response message.
github-repos
def __init__(self, name: Union[str, Sequence[str]], _sql_data_type: StandardSqlDataType, _sql_alias: Optional[str]=None) -> None: if isinstance(name, str): self.dotted_path = (name,) else: self.dotted_path = name self._sql_data_type = _sql_data_type self._sql_alias = _sql_alias
Builds an identifier. Args: name: Either a single name or a sequence of names representing a dotted path. A sequence like ('a', 'b') will result in SQL like 'SELECT a.b'. _sql_data_type: The type of the values behind the identifier. _sql_alias: The alias of the identifier. Defaults to the last element in the dotted identifier path.
github-repos
def strip_unused(input_graph_def, input_node_names, output_node_names, placeholder_type_enum): for name in input_node_names: if ':' in name: raise ValueError(f"Name '{name}' appears to refer to a Tensor, not an Operation.") not_found = {name for name in input_node_names} inputs_replaced_graph_def = graph_pb2.GraphDef() for node in input_graph_def.node: if node.name in input_node_names: not_found.remove(node.name) placeholder_node = node_def_pb2.NodeDef() placeholder_node.op = 'Placeholder' placeholder_node.name = node.name if isinstance(placeholder_type_enum, list): input_node_index = input_node_names.index(node.name) placeholder_node.attr['dtype'].CopyFrom(attr_value_pb2.AttrValue(type=placeholder_type_enum[input_node_index])) else: placeholder_node.attr['dtype'].CopyFrom(attr_value_pb2.AttrValue(type=placeholder_type_enum)) if '_output_shapes' in node.attr: placeholder_node.attr['_output_shapes'].CopyFrom(node.attr['_output_shapes']) if 'shape' in node.attr: placeholder_node.attr['shape'].CopyFrom(node.attr['shape']) inputs_replaced_graph_def.node.extend([placeholder_node]) else: inputs_replaced_graph_def.node.extend([copy.deepcopy(node)]) if not_found: raise KeyError(f'The following input nodes were not found: {not_found}.') output_graph_def = graph_util.extract_sub_graph(inputs_replaced_graph_def, output_node_names) return output_graph_def
Removes unused nodes from a GraphDef. Args: input_graph_def: A graph with nodes we want to prune. input_node_names: A list of the nodes we use as inputs. output_node_names: A list of the output nodes. placeholder_type_enum: The AttrValue enum for the placeholder data type, or a list that specifies one value per input node name. Returns: A `GraphDef` with all unnecessary ops removed. Raises: ValueError: If any element in `input_node_names` refers to a tensor instead of an operation. KeyError: If any element in `input_node_names` is not found in the graph.
github-repos
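A sketch of pruning a frozen graph between hypothetical 'input' and 'output' nodes:

from tensorflow.python.framework import dtypes

stripped = strip_unused(
    input_graph_def=graph_def,  # an existing GraphDef, assumed in scope
    input_node_names=['input'],
    output_node_names=['output'],
    placeholder_type_enum=dtypes.float32.as_datatype_enum,
)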
def launchQueryForMode(self, query=None, mode=None): qURL = self.createURL(word=query, mode=mode) i3Browser = browser.Browser() try: if self.needsCredentials[mode]: self._getAuthenticated(i3Browser, qURL) data = i3Browser.recoverURL(qURL) else: data = i3Browser.recoverURL(qURL) return data except KeyError: print(general.error("[*] '{}' is not a valid mode for this wrapper ({}).".format(mode, self.__class__.__name__))) return None
Method that launches an i3Browser to collect data. Args: ----- query: The query to be performed mode: The mode to be used to build the query. Return: ------- A string containing the recovered data or None.
codesearchnet
def branch_lengths(self, terminal=True, internal=True): if (not isinstance(terminal, bool)): raise TypeError('terminal must be a bool') if (not isinstance(internal, bool)): raise TypeError('internal must be a bool') for node in self.traverse_preorder(): if ((internal and (not node.is_leaf())) or (terminal and node.is_leaf())): if (node.edge_length is None): (yield 0) else: (yield node.edge_length)
Generator over the lengths of the selected branches of this ``Tree``. Edges with length ``None`` will be output as 0-length Args: ``terminal`` (``bool``): ``True`` to include terminal branches, otherwise ``False`` ``internal`` (``bool``): ``True`` to include internal branches, otherwise ``False``
codesearchnet
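A short sketch; tree is assumed to be an existing Tree instance:

total_length = sum(tree.branch_lengths())  # all branches, None counted as 0
terminal = list(tree.branch_lengths(terminal=True, internal=False))
mean_terminal = sum(terminal) / len(terminal) if terminal else 0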
def parse_raw_fact(raw_fact):
    def at_split(string):
        """
        Return everything in front of the (leftmost) '@'-symbol, if it was used.

        Args:
            string (str): The string to be parsed.

        Returns:
            tuple: (front, back) representing the substrings before and after the
                most left ``@`` symbol. If no such symbol was present at all,
                ``back=None``. Both substrings have been trimmed of any leading
                and trailing whitespace.

        Note:
            If our string contains multiple ``@`` symbols, all but the most left
            one will be treated as part of the regular ``back`` string.
            This allows for usage of the symbol in descriptions, categories and tags.

            Also note that *no tags are extracted*; any tags included will be considered
            part of the ``category`` string. We are likely to remove this parsing function
            in ``0.14.0`` in favour of a regex based solution so we will not spend
            time on tags for now.
        """
        result = string.split('@', 1)
        if len(result) == 1:
            front, back = result[0].strip(), None
        else:
            front, back = result
            front, back = front.strip(), back.strip()
        return front, back

    def comma_split(string):
        """
        Split string at the most left comma.

        Args:
            string (str): String to be processed. At this stage this should
                look something like ``<Category> and <tags>, <Description>``.

        Returns:
            tuple: (category_and_tags, description). Both substrings have their
                leading/trailing whitespace removed.
                ``category_and_tags`` may include >=0 tags indicated by a leading ``#``.
        """
        result = string.split(',', 1)
        if len(result) == 1:
            category, description = result[0].strip(), None
        else:
            category, description = tuple(result)
            category, description = category.strip(), description.strip()
        return category.strip(), description

    time_info, rest = time_helpers.extract_time_info(raw_fact)
    activity_name, back = at_split(rest)
    if back:
        category_name, description = comma_split(back)
    else:
        category_name, description = None, None
    return {
        'timeinfo': time_info,
        'category': category_name,
        'activity': activity_name,
        'description': description,
    }
Extract semantically meaningful sub-components from a ``raw fact`` text. Args: raw_fact (text_type): ``raw fact`` text to be parsed. Returns: dict: dict with sub-components as values.
codesearchnet
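A hypothetical raw fact; the leading time span is handled by time_helpers.extract_time_info, whose exact output is assumed here:

parsed = parse_raw_fact('13:00-14:00 lunch@canteen, with colleagues')
# parsed['activity'] == 'lunch', parsed['category'] == 'canteen',
# parsed['description'] == 'with colleagues'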
def unused(node): cfg.forward(node, cfg.ReachingDefinitions()) unused_obj = Unused() unused_obj.visit(node) return unused_obj.unused
Find unused definitions that can be removed.

This runs reaching definitions analysis followed by a walk over the AST to
find all variable definitions that are not used later on.

Args:
    node: The AST of e.g. a function body to find unused variable definitions.

Returns:
    unused: After visiting all the nodes, this attribute contains a set of
        definitions in the form of `(variable_name, node)` pairs which are
        unused in this AST.
codesearchnet
def set_max_freq(self, max_freq=None): if max_freq: self['max_freq'] = max_freq else: for frequency in self['frequencies']: if self['max_freq']: if (frequency['value'] > self['max_freq']): self['max_freq'] = frequency['value'] else: self['max_freq'] = frequency['value'] return
Set the max frequency for the variant.

If ``max_freq`` is provided, use it; otherwise go through all frequencies
and set the highest one as self['max_freq'].

Args:
    max_freq (float): The max frequency
codesearchnet
def read(self, istream, kmip_version=enums.KMIPVersion.KMIP_1_0): super(ExtensionInformation, self).read( istream, kmip_version=kmip_version ) tstream = BytearrayStream(istream.read(self.length)) self.extension_name.read(tstream, kmip_version=kmip_version) if self.is_tag_next(Tags.EXTENSION_TAG, tstream): self.extension_tag = ExtensionTag() self.extension_tag.read(tstream, kmip_version=kmip_version) if self.is_tag_next(Tags.EXTENSION_TYPE, tstream): self.extension_type = ExtensionType() self.extension_type.read(tstream, kmip_version=kmip_version) self.is_oversized(tstream) self.validate()
Read the data encoding the ExtensionInformation object and decode it into its constituent parts. Args: istream (Stream): A data stream containing encoded object data, supporting a read method; usually a BytearrayStream object. kmip_version (KMIPVersion): An enumeration defining the KMIP version with which the object will be decoded. Optional, defaults to KMIP 1.0.
juraj-google-style
def get_interpolated_value(self, x): if len(self.ydim) == 1: return get_linear_interpolated_value(self.x, self.y, x) else: return [get_linear_interpolated_value(self.x, self.y[:, k], x) for k in range(self.ydim[1])]
Returns an interpolated y value for a particular x value. Args: x: x value to return the y value for Returns: Value of y at x
juraj-google-style
def FromJson(json): type = ContractParameterType.FromString(json['type']) value = json['value'] param = ContractParameter(type=type, value=None) if ((type == ContractParameterType.Signature) or (type == ContractParameterType.ByteArray)): param.Value = bytearray.fromhex(value) elif (type == ContractParameterType.Boolean): param.Value = bool(value) elif (type == ContractParameterType.Integer): param.Value = int(value) elif (type == ContractParameterType.Hash160): param.Value = UInt160.ParseString(value) elif (type == ContractParameterType.Hash256): param.Value = UInt256.ParseString(value) elif (type == ContractParameterType.PublicKey): param.Value = ECDSA.decode_secp256r1(value).G elif (type == ContractParameterType.String): param.Value = str(value) elif (type == ContractParameterType.Array): val = [ContractParameter.FromJson(item) for item in value] param.Value = val return param
Convert a json object to a ContractParameter object

Args:
    json (dict): The json object to convert to a ContractParameter object

Returns:
    ContractParameter
codesearchnet
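A small sketch with a hand-written json dict (field names follow the method above):

param = ContractParameter.FromJson({'type': 'Integer', 'value': '42'})
# param.Value == 42; the 'type' string is resolved via ContractParameterType.FromString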
def set_lacp_fallback(self, name, mode=None): if (mode not in ['disabled', 'static', 'individual']): return False disable = (True if (mode == 'disabled') else False) commands = [('interface %s' % name)] commands.append(self.command_builder('port-channel lacp fallback', value=mode, disable=disable)) return self.configure(commands)
Configures the Port-Channel lacp_fallback Args: name(str): The Port-Channel interface name mode(str): The Port-Channel LACP fallback setting Valid values are 'disabled', 'static', 'individual': * static - Fallback to static LAG mode * individual - Fallback to individual ports * disabled - Disable LACP fallback Returns: True if the operation succeeds otherwise False is returned
codesearchnet
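A hedged usage sketch, assuming a connected pyeapi node:

intfs = node.api('interfaces')
intfs.set_lacp_fallback('Port-Channel1', mode='static')  # True on success
intfs.set_lacp_fallback('Port-Channel1', mode='bogus')   # False: invalid mode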