code: string, lengths 20 to 4.93k
docstring: string, lengths 33 to 1.27k
source: string, 3 classes
def main(pipeline_name, pipeline_context_input, working_dir, log_level, log_path): pypyr.log.logger.set_root_logger(log_level, log_path) logger.debug('starting pypyr') pypyr.moduleloader.set_working_directory(working_dir) load_and_run_pipeline(pipeline_name=pipeline_name, pipeline_context_input=pipeline_context_input, working_dir=working_dir) logger.debug('pypyr done')
Entry point for pypyr pipeline runner. Call this once per pypyr run. Call me if you want to run a pypyr pipeline from your own code. This function does some one-off 1st time initialization before running the actual pipeline. pipeline_name.yaml should be in the working_dir/pipelines/ directory. Args: pipeline_name: string. Name of pipeline, sans .yaml at end. pipeline_context_input: string. Initialize the pypyr context with this string. working_dir: path. looks for ./pipelines and modules in this directory. log_level: int. Standard python log level enumerated value. log_path: os.path. Append log to this path. Returns: None
codesearchnet
def list(): kbs = [] ret = _pshell_json('Get-HotFix | Select HotFixID') for item in ret: kbs.append(item['HotFixID']) return kbs
Get a list of updates installed on the machine Returns: list: A list of installed updates CLI Example: .. code-block:: bash salt '*' wusa.list
codesearchnet
def sbi_ids(self) -> List[str]: return ast.literal_eval(DB.get_hash_value(self._key, 'sbi_ids'))
Get the list of SBI Ids. Returns: list, list of SBI ids associated with this subarray.
codesearchnet
def create_public_ip(access_token, subscription_id, resource_group, public_ip_name, dns_label, location): endpoint = ''.join([get_rm_endpoint(), '/subscriptions/', subscription_id, '/resourceGroups/', resource_group, '/providers/Microsoft.Network/publicIPAddresses/', public_ip_name, '?api-version=', NETWORK_API]) ip_body = {'location': location} properties = {'publicIPAllocationMethod': 'Dynamic'} properties['dnsSettings'] = {'domainNameLabel': dns_label} ip_body['properties'] = properties body = json.dumps(ip_body) return do_put(endpoint, body, access_token)
Create a public ip address. Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. resource_group (str): Azure resource group name. public_ip_name (str): Name of the new public ip address resource. dns_label (str): DNS label to apply to the IP address. location (str): Azure data center location. E.g. westus. Returns: HTTP response. Public IP address JSON body.
codesearchnet
def copy2(src, dst, metadata=None, retry_params=None): common.validate_file_path(src) common.validate_file_path(dst) if metadata is None: metadata = {} copy_meta = 'COPY' else: copy_meta = 'REPLACE' metadata.update({'x-goog-copy-source': src, 'x-goog-metadata-directive': copy_meta}) api = storage_api._get_storage_api(retry_params=retry_params) status, resp_headers, content = api.put_object( api_utils._quote_filename(dst), headers=metadata) errors.check_status(status, [200], src, metadata, resp_headers, body=content)
Copy the file content from src to dst. Args: src: /bucket/filename dst: /bucket/filename metadata: a dict of metadata for this copy. If None, old metadata is copied. For example, {'x-goog-meta-foo': 'bar'}. retry_params: An api_utils.RetryParams for this call to GCS. If None, the default one is used. Raises: errors.AuthorizationError: if authorization failed. errors.NotFoundError: if an object that's expected to exist doesn't.
juraj-google-style
def validate(self, value): if (value == ''): if self.kwargs.get('nullable', __nullable__): value = None else: value = 0 if (not isinstance(value, Model)): return super(ReferenceProperty, self).validate(value) if (not value.is_saved()): raise BadValueError(('%s instance must be saved before it can be stored as a reference' % self.reference_class.__class__.__name__)) if (not isinstance(value, self.reference_class)): raise KindError(('Property %s must be an instance of %s' % (self.name, self.reference_class.__class__.__name__))) return value
Validate reference. Returns: A valid value. Raises: BadValueError for the following reasons: - Value is not saved. - Object not of correct model type for reference.
codesearchnet
def get_list_subtask_positions_objs(client, list_id): params = {'list_id': int(list_id)} response = client.authenticated_request(client.api.Endpoints.SUBTASK_POSITIONS, params=params) return response.json()
Gets all subtask positions objects for the tasks within a given list. This is a convenience method so you don't have to get all the list's tasks before getting subtasks, though I can't fathom how mass subtask reordering is useful. Returns: List of SubtaskPositionsObj-mapped objects representing the order of subtasks for the tasks within the given list
codesearchnet
def __random_density_hs(N, rank=None, seed=None): G = __ginibre_matrix(N, rank, seed) G = G.dot(G.conj().T) return G / np.trace(G)
Generate a random density matrix from the Hilbert-Schmidt metric. Args: N (int): the length of the density matrix. rank (int or None): the rank of the density matrix. The default value is full-rank. seed (int): Optional. To set a random seed. Returns: ndarray: rho, an (N, N) density matrix.
juraj-google-style
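A self-contained sketch of the Hilbert-Schmidt construction used by the density-matrix helper above. `__ginibre_matrix` is not shown in this row, so a plain complex Gaussian (Ginibre-style) matrix is assumed as a stand-in:

    import numpy as np

    def random_density_hs_sketch(n, rank=None, seed=None):
        # Stand-in for __ginibre_matrix: an n x rank complex Gaussian matrix.
        rng = np.random.default_rng(seed)
        rank = rank or n
        g = rng.normal(size=(n, rank)) + 1j * rng.normal(size=(n, rank))
        rho = g @ g.conj().T            # positive semidefinite by construction
        return rho / np.trace(rho)      # unit trace makes it a valid density matrix

    rho = random_density_hs_sketch(4, seed=0)
    print(np.allclose(rho, rho.conj().T), np.isclose(np.trace(rho), 1.0))  # True True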
def decrypt(key, ciphertext, shift_function=shift_case_english): return [shift_function(key, symbol) for symbol in ciphertext]
Decrypt Shift enciphered ``ciphertext`` using ``key``. Examples: >>> ''.join(decrypt(3, "KHOOR")) 'HELLO' >>> decrypt(15, [0xcf, 0x9e, 0xaf, 0xe0], shift_bytes) [0xde, 0xad, 0xbe, 0xef] Args: key (int): The shift to use ciphertext (iterable): The symbols to decrypt shift_function (function (shift, symbol)): Shift function to apply to symbols in the ciphertext Returns: Decrypted ciphertext, list of plaintext symbols
juraj-google-style
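A minimal usage sketch for the shift-cipher `decrypt` above; `shift_upper` is a hypothetical stand-in for `shift_case_english`, which is defined elsewhere in that library:

    def shift_upper(key, symbol):
        # Shift an uppercase letter back by `key` positions, wrapping around the alphabet.
        return chr((ord(symbol) - ord('A') - key) % 26 + ord('A'))

    def decrypt(key, ciphertext, shift_function=shift_upper):
        return [shift_function(key, symbol) for symbol in ciphertext]

    print(''.join(decrypt(3, "KHOOR")))  # HELLO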
def matches(self, msg_seq: int, msg: MessageInterface) -> bool: return all((crit.matches(msg_seq, msg) for crit in self.all_criteria))
The message matches if all the defined search key criteria match. Args: msg_seq: The message sequence ID. msg: The message object.
codesearchnet
def predict(self, data, alpha=0.01, max_iter=2000, **kwargs): edge_model = GraphLasso(alpha=alpha, max_iter=max_iter) edge_model.fit(data.values) return nx.relabel_nodes(nx.DiGraph(edge_model.get_precision()), {idx: i for idx, i in enumerate(data.columns)})
Predict the graph skeleton. Args: data (pandas.DataFrame): observational data alpha (float): regularization parameter max_iter (int): maximum number of iterations Returns: networkx.Graph: Graph skeleton
juraj-google-style
def _create_sample_validator(expected_input_keys: Collection[str]) -> Callable[[rd.RepresentativeSample], rd.RepresentativeSample]: def validator(sample: rd.RepresentativeSample) -> rd.RepresentativeSample: if not isinstance(sample, Mapping): raise ValueError(f'Invalid representative sample type. Provide a mapping (usually a dict) of {{input_key: input_value}}. Got type: {type(sample)} instead.') if set(sample.keys()) != expected_input_keys: raise KeyError(f'Invalid input keys for representative sample. The function expects input keys of: {set(expected_input_keys)}. Got: {set(sample.keys())}. Please provide correct input keys for representative samples.') return sample return validator
Creates a validator function for a representative sample. Args: expected_input_keys: Input keys (keyword argument names) that the function the sample will be used for is expecting to receive. Returns: A callable that validates a `RepresentativeSample`.
github-repos
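The validator-factory pattern above, stripped of the library-specific types so it can run on its own; the key names and sample values below are made up for illustration:

    from collections.abc import Mapping

    def create_sample_validator(expected_input_keys):
        expected = set(expected_input_keys)

        def validator(sample):
            # Reject anything that is not a mapping or has the wrong key set.
            if not isinstance(sample, Mapping):
                raise ValueError(f'Expected a mapping, got {type(sample)}')
            if set(sample.keys()) != expected:
                raise KeyError(f'Expected keys {expected}, got {set(sample.keys())}')
            return sample

        return validator

    validate = create_sample_validator(['input_1'])
    validate({'input_1': [1.0, 2.0]})   # passes and returns the sample unchanged
    # validate({'wrong': []})           # would raise KeyError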
def _validate_iss(claims, issuer=None): if issuer is not None: if isinstance(issuer, string_types): issuer = (issuer,) if claims.get('iss') not in issuer: raise JWTClaimsError('Invalid issuer')
Validates that the 'iss' claim is valid. The "iss" (issuer) claim identifies the principal that issued the JWT. The processing of this claim is generally application specific. The "iss" value is a case-sensitive string containing a StringOrURI value. Use of this claim is OPTIONAL. Args: claims (dict): The claims dictionary to validate. issuer (str or iterable): Acceptable value(s) for the issuer that signed the token.
juraj-google-style
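A hedged, standalone version of the issuer check above; a plain ValueError stands in for the library's JWTClaimsError, and the issuer values are invented:

    def validate_iss(claims, issuer=None):
        if issuer is None:
            return
        if isinstance(issuer, str):        # same normalisation: str becomes a 1-tuple
            issuer = (issuer,)
        if claims.get('iss') not in issuer:
            raise ValueError('Invalid issuer')

    validate_iss({'iss': 'https://idp.example'}, issuer='https://idp.example')  # ok
    validate_iss({'iss': 'a'}, issuer=('a', 'b'))                               # ok
    # validate_iss({'iss': 'x'}, issuer='https://idp.example')                  # raises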
def _get_required_fn(fn, root_path): if (not fn.startswith(root_path)): raise ValueError('Both paths have to be absolute or local!') replacer = ('/' if root_path.endswith('/') else '') return fn.replace(root_path, replacer, 1)
The MD5 file definition requires that all paths be absolute with respect to the package directory, not the filesystem. This function converts filesystem-absolute paths to package-absolute paths. Args: fn (str): Local/absolute path to the file. root_path (str): Local/absolute path to the package directory. Returns: str: Package-absolute path to the file. Raises: ValueError: When `fn` is absolute and `root_path` relative or conversely.
codesearchnet
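The path-rewriting behaviour above, runnable as-is; the example paths are hypothetical:

    def get_required_fn(fn, root_path):
        if not fn.startswith(root_path):
            raise ValueError('Both paths have to be absolute or local!')
        # Keep exactly one leading slash regardless of how root_path ends.
        replacer = '/' if root_path.endswith('/') else ''
        return fn.replace(root_path, replacer, 1)

    print(get_required_fn('/tmp/pkg/data/file.txt', '/tmp/pkg'))   # /data/file.txt
    print(get_required_fn('/tmp/pkg/data/file.txt', '/tmp/pkg/'))  # /data/file.txt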
def get_hours_description(self): expression = self._expression_parts[2] return self.get_segment_description(expression, _('every hour'), (lambda s: self.format_time(s, '0')), (lambda s: _('every {0} hours').format(s)), (lambda s: _('between {0} and {1}')), (lambda s: _('at {0}')))
Generates a description for only the HOUR portion of the expression Returns: The HOUR description
codesearchnet
def pathcase(string): string = snakecase(string) if not string: return string return re.sub(r"_", "/", string)
Convert string into path case. Join punctuation with slash. Args: string: String to convert. Returns: string: Path cased string.
juraj-google-style
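A sketch of the `pathcase` chain above; since `snakecase` comes from elsewhere in that library, a minimal assumed implementation is inlined:

    import re

    def snakecase_sketch(string):
        # Assumed behaviour: spaces/dashes/dots become underscores, camelCase splits.
        if not string:
            return string
        string = re.sub(r'[\-\.\s]', '_', string)
        return string[0].lower() + re.sub(r'[A-Z]', lambda m: '_' + m.group(0).lower(), string[1:])

    def pathcase(string):
        string = snakecase_sketch(string)
        if not string:
            return string
        return re.sub(r'_', '/', string)

    print(pathcase('FooBar baz'))  # foo/bar/baz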
def compile_source(self, sourcepath): relpath = os.path.relpath(sourcepath, self.settings.SOURCES_PATH) conditions = {'sourcedir': None, 'nopartial': True, 'exclude_patterns': self.settings.EXCLUDES, 'excluded_libdirs': self.settings.LIBRARY_PATHS} if self.finder.match_conditions(sourcepath, **conditions): destination = self.finder.get_destination(relpath, targetdir=self.settings.TARGET_PATH) self.logger.debug(u'Compile: {}'.format(sourcepath)) (success, message) = self.compiler.safe_compile(self.settings, sourcepath, destination) if success: self.logger.info(u'Output: {}'.format(message)) else: self.logger.error(message) return (sourcepath, destination) return None
Compile source to its destination Check if the source is eligible to compile (not partial and allowed from exclude patterns) Args: sourcepath (string): Sass source path to compile to its destination using project settings. Returns: tuple or None: A pair of (sourcepath, destination), if source has been compiled (or at least tried). If the source was not eligible to compile, return will be ``None``.
codesearchnet
def format_statevector(vec, decimals=None): num_basis = len(vec) vec_complex = np.zeros(num_basis, dtype=complex) for i in range(num_basis): vec_complex[i] = (vec[i][0] + (1j * vec[i][1])) if decimals: vec_complex = np.around(vec_complex, decimals=decimals) return vec_complex
Format statevector coming from the backend to present to the Qiskit user. Args: vec (list): a list of [re, im] complex numbers. decimals (int): the number of decimals in the statevector. If None, no rounding is done. Returns: list[complex]: a list of python complex numbers.
codesearchnet
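The same [re, im] to complex conversion written out with a small example input (the amplitudes are chosen arbitrarily):

    import numpy as np

    def format_statevector(vec, decimals=None):
        vec_complex = np.zeros(len(vec), dtype=complex)
        for i, (re_part, im_part) in enumerate(vec):
            vec_complex[i] = re_part + 1j * im_part
        if decimals:
            vec_complex = np.around(vec_complex, decimals=decimals)
        return vec_complex

    # Backend-style output for the state (|00> + |11>) / sqrt(2):
    print(format_statevector([[0.7071, 0.0], [0.0, 0.0], [0.0, 0.0], [0.7071, 0.0]], decimals=3))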
def load_tf_sharded_weights_from_safetensors(model, shard_files, ignore_mismatched_sizes=False, strict=False, _prefix=None): unexpected_keys = set() all_missing_keys = [] mismatched_keys = set() for shard_file in shard_files: missing_layers, unexpected_layers, mismatched_layers = load_tf_weights_from_safetensors(model, shard_file, ignore_mismatched_sizes=ignore_mismatched_sizes, _prefix=_prefix) all_missing_keys.append(set(missing_layers)) unexpected_keys.update(unexpected_layers) mismatched_keys.update(mismatched_layers) gc.collect() missing_keys = set.intersection(*all_missing_keys) if strict and (len(missing_keys) > 0 or len(unexpected_keys) > 0): error_message = f'Error(s) in loading state_dict for {model.__class__.__name__}' if len(missing_keys) > 0: str_missing_keys = ','.join([f'"{k}"' for k in missing_keys]) error_message += f'\nMissing key(s): {str_missing_keys}.' if len(unexpected_keys) > 0: str_unexpected_keys = ','.join([f'"{k}"' for k in unexpected_keys]) error_message += f'\nUnexpected key(s): {str_unexpected_keys}.' raise RuntimeError(error_message) return (missing_keys, unexpected_keys, mismatched_keys)
This is the same as `load_tf_weights_from_safetensors` but for a sharded TF-format safetensors checkpoint. Detect missing and unexpected layers and load the TF weights from the shard file according to their names and shapes. This load is performed efficiently: each checkpoint shard is loaded one by one in RAM and deleted after being loaded in the model. Args: model (`keras.models.Model`): The model in which to load the checkpoint. shard_files (`str` or `os.PathLike`): A list containing the sharded checkpoint names. ignore_mismatched_sizes (`bool`, *optional*, defaults to `False`): Whether or not to ignore the mismatch between the sizes. strict (`bool`, *optional*, defaults to `False`): Whether to strictly enforce that the keys in the model state dict match the keys in the sharded checkpoint. Returns: Three lists, one for the missing layers, another one for the unexpected layers, and a last one for the mismatched layers.
github-repos
def attach(cls, transform_job_name, sagemaker_session=None): sagemaker_session = (sagemaker_session or Session()) job_details = sagemaker_session.sagemaker_client.describe_transform_job(TransformJobName=transform_job_name) init_params = cls._prepare_init_params_from_job_description(job_details) transformer = cls(sagemaker_session=sagemaker_session, **init_params) transformer.latest_transform_job = _TransformJob(sagemaker_session=sagemaker_session, job_name=init_params['base_transform_job_name']) return transformer
Attach an existing transform job to a new Transformer instance Args: transform_job_name (str): Name for the transform job to be attached. sagemaker_session (sagemaker.session.Session): Session object which manages interactions with Amazon SageMaker APIs and any other AWS services needed. If not specified, one will be created using the default AWS configuration chain. Returns: sagemaker.transformer.Transformer: The Transformer instance with the specified transform job attached.
codesearchnet
def _on_receive(self, client, userdata, message): topic = message.topic encoded = message.payload try: packet = json.loads(encoded) except ValueError: self._logger.warn("Could not decode json packet: %s", encoded) return try: seq = packet['sequence'] message_data = packet['message'] except KeyError: self._logger.warn("Message received did not have required sequence and message keys: %s", packet) return if topic not in self.queues: found = False for _, regex, callback, ordered in self.wildcard_queues: if regex.match(topic): self.queues[topic] = PacketQueue(0, callback, ordered) found = True break if not found: self._logger.warn("Received message for unknown topic: %s", topic) return self.queues[topic].receive(seq, [seq, topic, message_data])
Callback called whenever we receive a message on a subscribed topic Args: client (string): The client id of the client receiving the message userdata (string): Any user data set with the underlying MQTT client message (object): The message with a topic and payload.
juraj-google-style
def _apply(self, ctx: ExtensionContext) -> AugmentedDict: def process(pattern: Pattern[str], _str: str) -> Any: _match = pattern.match(_str) if (_match is None): return _str (placeholder, external_path) = (_match.group(1), _match.group(2)) with open(self.locator(external_path, (cast(str, ctx.document) if Validator.is_file(document=ctx.document) else None))) as fhandle: content = fhandle.read() return _str.replace(placeholder, content) (node_key, node_value) = ctx.node _pattern = re.compile(self.__pattern__) return {node_key: process(_pattern, node_value)}
Performs the actual loading of an external resource into the current model. Args: ctx: The processing context. Returns: Returns a dictionary that gets incorporated into the actual model.
codesearchnet
def get_item_concept_mapping(self, lang): concepts = self.filter(active=True, lang=lang) return group_keys_by_value_lists(Concept.objects.get_concept_item_mapping(concepts, lang))
Get mapping of items_ids to concepts containing these items Args: lang (str): language of concepts Returns: dict: item (int) -> set of concepts (int)
codesearchnet
def Open(self, file_object): if (not file_object): raise ValueError('Missing file-like object.') file_object.seek(0, os.SEEK_SET) data = file_object.read(len(self._HEADER_SIGNATURE)) if (data != self._HEADER_SIGNATURE): file_object.close() raise IOError('Unsupported SQLite database signature.') with tempfile.NamedTemporaryFile(delete=False) as temp_file: self._temp_file_path = temp_file.name while data: temp_file.write(data) data = file_object.read(self._COPY_BUFFER_SIZE) self._connection = sqlite3.connect(self._temp_file_path) self._connection.text_factory = bytes self._cursor = self._connection.cursor()
Opens the database file object. Args: file_object (FileIO): file-like object. Raises: IOError: if the SQLite database signature does not match. OSError: if the SQLite database signature does not match. ValueError: if the file-like object is invalid.
codesearchnet
def get_all_leaves(self, item_ids=None, language=None, forbidden_item_ids=None): return sorted(set(flatten(self.get_leaves(item_ids, language=language, forbidden_item_ids=forbidden_item_ids).values())))
Get all leaves reachable from the given set of items. Leaves having inactive relations to other items are omitted. Args: item_ids (list): items which are taken as roots for the reachability language (str): if specified, filter out items which are not available in the given language forbidden_item_ids (list): items which must not be taken into account when looking for reachable leaves Returns: list: sorted leaf items which are reachable from the given set of items
juraj-google-style
def get_angle(v1, v2, units="degrees"): d = np.dot(v1, v2) / np.linalg.norm(v1) / np.linalg.norm(v2) d = min(d, 1) d = max(d, -1) angle = math.acos(d) if units == "degrees": return math.degrees(angle) elif units == "radians": return angle else: raise ValueError("Invalid units {}".format(units))
Calculates the angle between two vectors. Args: v1: Vector 1 v2: Vector 2 units: "degrees" or "radians". Defaults to "degrees". Returns: Angle between them, in the requested units (degrees by default).
juraj-google-style
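The angle computation above is self-contained apart from numpy; a quick check with two obvious vectors:

    import math
    import numpy as np

    def get_angle(v1, v2, units="degrees"):
        d = np.dot(v1, v2) / np.linalg.norm(v1) / np.linalg.norm(v2)
        d = max(min(d, 1), -1)          # clamp to avoid acos domain errors from rounding
        angle = math.acos(d)
        return math.degrees(angle) if units == "degrees" else angle

    print(get_angle([1, 0, 0], [0, 1, 0]))       # 90.0
    print(round(get_angle([1, 0], [1, 1]), 1))   # 45.0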
def ParseInteger(text, is_signed=False, is_long=False): result = _ParseAbstractInteger(text, is_long=is_long) checker = _INTEGER_CHECKERS[((2 * int(is_long)) + int(is_signed))] checker.CheckValue(result) return result
Parses an integer. Args: text: The text to parse. is_signed: True if a signed integer must be parsed. is_long: True if a long integer must be parsed. Returns: The integer value. Raises: ValueError: Thrown if the text is not a valid integer.
codesearchnet
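`_INTEGER_CHECKERS` and `_ParseAbstractInteger` are internal to that module, so the sketch below re-creates the idea (a range check selected by the 2*is_long + is_signed index) with assumed 32/64-bit bounds:

    def parse_integer_sketch(text, is_signed=False, is_long=False):
        result = int(text, 0)   # base 0 accepts decimal, 0x... and 0o... literals
        bits = 64 if is_long else 32
        if is_signed:
            lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
        else:
            lo, hi = 0, 2 ** bits - 1
        if not lo <= result <= hi:
            raise ValueError(f'{text!r} is out of range for this integer field')
        return result

    print(parse_integer_sketch('0x10'))                 # 16
    print(parse_integer_sketch('-5', is_signed=True))   # -5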
def write(self, output_stream, kmip_version=enums.KMIPVersion.KMIP_1_0): local_stream = BytearrayStream() if self._unique_identifier: self._unique_identifier.write(local_stream, kmip_version=kmip_version) else: raise ValueError('Invalid struct missing the unique identifier attribute.') if self._cryptographic_parameters: self._cryptographic_parameters.write(local_stream, kmip_version=kmip_version) self.length = local_stream.length() super(MACSignatureKeyInformation, self).write(output_stream, kmip_version=kmip_version) output_stream.write(local_stream.buffer)
Write the data encoding the MACSignatureKeyInformation struct to a stream. Args: output_stream (stream): A data stream in which to encode object data, supporting a write method; usually a BytearrayStream object. kmip_version (KMIPVersion): An enumeration defining the KMIP version with which the object will be encoded. Optional, defaults to KMIP 1.0.
codesearchnet
def make_coordinated_read_dataset(self, cluster, num_consumers, sharding_policy=data_service_ops.ShardingPolicy.OFF): if sharding_policy not in [data_service_ops.ShardingPolicy.OFF, data_service_ops.ShardingPolicy.DYNAMIC]: raise ValueError(f'Unsupported sharding policy: {sharding_policy}') ds = dataset_ops.Dataset.from_tensors(math_ops.cast(0, dtypes.int64)) ds = ds.concatenate(dataset_ops.Dataset.random()) def make_group(x): x = x % 2 ** 32 return dataset_ops.Dataset.range(x * num_consumers, (x + 1) * num_consumers) ds = ds.flat_map(make_group) consumers = [] for consumer_index in range(num_consumers): consumers.append(self.make_distributed_dataset(ds, cluster, job_name='test', processing_mode=sharding_policy, consumer_index=consumer_index, num_consumers=num_consumers)) ds = dataset_ops.Dataset.from_tensor_slices(consumers) ds = ds.interleave(lambda x: x, cycle_length=num_consumers, num_parallel_calls=num_consumers) return ds
Creates a dataset that performs coordinated reads. The dataset simulates `num_consumers` consumers by using parallel interleave to read with `num_consumers` threads, one for each consumer. The nth element of the dataset is produced by consumer `n % num_consumers`. The dataset executed on each worker will produce groups of `num_consumers` sequentially increasing numbers. For example, if `num_consumers=3` a worker dataset could produce [0, 1, 2, 9, 10, 11, 21, 22, 23]. This enables `checkCoordinatedReadGroups` below to assess whether the values received in each step came from the same group. Args: cluster: A tf.data service `TestCluster`. num_consumers: The number of consumers to simulate. sharding_policy: The sharding policy to use. Currently only OFF and DYNAMIC are supported. Returns: A dataset that simulates reading with `num_consumers` consumers.
github-repos
def make_hash(self, task): t = [serialize_object(task['func_name'])[0], serialize_object(task['fn_hash'])[0], serialize_object(task['args'])[0], serialize_object(task['kwargs'])[0], serialize_object(task['env'])[0]] x = b''.join(t) hashedsum = hashlib.md5(x).hexdigest() return hashedsum
Create a hash of the task inputs. This uses a serialization library borrowed from ipyparallel. If this fails here, then all ipp calls are also likely to fail due to failure at serialization. Args: - task (dict) : Task dictionary from dfk.tasks Returns: - hash (str) : A unique hash string
codesearchnet
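The hashing pattern above, with `pickle` standing in for ipyparallel's `serialize_object` (so the digests will differ from the real implementation, but the idea is the same):

    import hashlib
    import pickle

    def make_hash_sketch(task):
        parts = [pickle.dumps(task[k]) for k in ('func_name', 'fn_hash', 'args', 'kwargs', 'env')]
        return hashlib.md5(b''.join(parts)).hexdigest()

    task = {'func_name': 'square', 'fn_hash': 'abc123', 'args': (2,), 'kwargs': {}, 'env': {}}
    print(make_hash_sketch(task))   # same 32-char hex digest for identical task inputs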
def __init__(self, channel): self.GetGroup = channel.unary_unary( "/google.devtools.clouderrorreporting.v1beta1.ErrorGroupService/GetGroup", request_serializer=google_dot_devtools_dot_clouderrorreporting__v1beta1_dot_proto_dot_error__group__service__pb2.GetGroupRequest.SerializeToString, response_deserializer=google_dot_devtools_dot_clouderrorreporting__v1beta1_dot_proto_dot_common__pb2.ErrorGroup.FromString, ) self.UpdateGroup = channel.unary_unary( "/google.devtools.clouderrorreporting.v1beta1.ErrorGroupService/UpdateGroup", request_serializer=google_dot_devtools_dot_clouderrorreporting__v1beta1_dot_proto_dot_error__group__service__pb2.UpdateGroupRequest.SerializeToString, response_deserializer=google_dot_devtools_dot_clouderrorreporting__v1beta1_dot_proto_dot_common__pb2.ErrorGroup.FromString, )
Constructor. Args: channel: A grpc.Channel.
juraj-google-style
def sketch_fasta(fasta_path, outdir): genome_name = genome_name_from_fasta_path(fasta_path) outpath = os.path.join(outdir, genome_name) args = ['mash', 'sketch', '-o', outpath, fasta_path] logging.info('Running Mash sketch with command: %s', ' '.join(args)) p = Popen(args) p.wait() sketch_path = (outpath + '.msh') assert os.path.exists(sketch_path), 'Mash sketch for genome {} was not created at {}'.format(genome_name, sketch_path) return sketch_path
Create a Mash sketch from an input fasta file Args: fasta_path (str): input fasta file path. Genome name in fasta filename outdir (str): output directory path to write Mash sketch file to Returns: str: output Mash sketch file path
codesearchnet
def distort_color(image, thread_id=0, scope=None): with tf.name_scope(values=[image], name=scope, default_name='distort_color'): color_ordering = thread_id % 2 if color_ordering == 0: image = tf.image.random_brightness(image, max_delta=32. / 255.) image = tf.image.random_saturation(image, lower=0.5, upper=1.5) image = tf.image.random_hue(image, max_delta=0.2) image = tf.image.random_contrast(image, lower=0.5, upper=1.5) elif color_ordering == 1: image = tf.image.random_brightness(image, max_delta=32. / 255.) image = tf.image.random_contrast(image, lower=0.5, upper=1.5) image = tf.image.random_saturation(image, lower=0.5, upper=1.5) image = tf.image.random_hue(image, max_delta=0.2) image = tf.clip_by_value(image, 0.0, 1.0) return image
Distort the color of the image. Each color distortion is non-commutative and thus ordering of the color ops matters. Ideally we would randomly permute the ordering of the color ops. Rather than adding that level of complication, we select a distinct ordering of color ops for each preprocessing thread. Args: image: Tensor containing single image. thread_id: preprocessing thread ID. scope: Optional scope for name_scope. Returns: color-distorted image
juraj-google-style
def ParseRecord(self, parser_mediator, key, structure): if (key not in ('log_entry', 'log_entry_at_end', 'log_entry_offset', 'log_entry_offset_at_end')): raise errors.ParseError('Unable to parse record, unknown structure: {0:s}'.format(key)) try: date_time_string = self._GetISO8601String(structure) except ValueError as exception: parser_mediator.ProduceExtractionWarning('unable to determine date time string with error: {0!s}'.format(exception)) fraction_of_second_length = len(structure.fraction_of_second) if (fraction_of_second_length == 3): date_time = dfdatetime_time_elements.TimeElementsInMilliseconds() elif (fraction_of_second_length in (6, 7)): date_time = dfdatetime_time_elements.TimeElementsInMicroseconds() try: date_time.CopyFromStringISO8601(date_time_string) except ValueError as exception: parser_mediator.ProduceExtractionWarning('unable to parse date time value: {0:s} with error: {1!s}'.format(date_time_string, exception)) return event_data = SCCMLogEventData() event_data.component = structure.component event_data.offset = 0 event_data.text = structure.text event = time_events.DateTimeValuesEvent(date_time, definitions.TIME_DESCRIPTION_WRITTEN) parser_mediator.ProduceEventWithEventData(event, event_data)
Parse the record and return an SCCM log event object. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. key (str): name of the parsed structure. structure (pyparsing.ParseResults): structure of tokens derived from a line of a text file. Raises: ParseError: when the structure type is unknown.
codesearchnet
def find_local_maxima(self, input_grid): (pixels, q_data) = self.quantize(input_grid) centers = OrderedDict() for p in pixels.keys(): centers[p] = [] marked = (np.ones(q_data.shape, dtype=int) * self.UNMARKED) MIN_INFL = int(np.round((1 + (0.5 * np.sqrt(self.max_size))))) MAX_INFL = (2 * MIN_INFL) marked_so_far = [] for b in sorted(pixels.keys(), reverse=True): infl_dist = (MIN_INFL + int(np.round(((float(b) / self.max_bin) * (MAX_INFL - MIN_INFL))))) for p in pixels[b]: if (marked[p] == self.UNMARKED): ok = False del marked_so_far[:] for ((i, j), v) in np.ndenumerate(marked[((p[0] - infl_dist):((p[0] + infl_dist) + 1), (p[1] - infl_dist):((p[1] + infl_dist) + 1))]): if (v == self.UNMARKED): ok = True marked[(((i - infl_dist) + p[0]), ((j - infl_dist) + p[1]))] = b marked_so_far.append((((i - infl_dist) + p[0]), ((j - infl_dist) + p[1]))) else: ok = False break if ok: centers[b].append(p) else: for m in marked_so_far: marked[m] = self.UNMARKED marked[(:, :)] = self.UNMARKED deferred_from_last = [] deferred_to_next = [] for delta in range(0, (self.delta + 1)): for b in sorted(centers.keys(), reverse=True): bin_lower = (b - delta) deferred_from_last[:] = deferred_to_next[:] del deferred_to_next[:] foothills = [] n_centers = len(centers[b]) tot_centers = (n_centers + len(deferred_from_last)) for i in range(tot_centers): if (i < n_centers): center = centers[b][i] else: center = deferred_from_last[(i - n_centers)] if (bin_lower < 0): bin_lower = 0 if (marked[center] == self.UNMARKED): captured = self.set_maximum(q_data, marked, center, bin_lower, foothills) if (not captured): deferred_to_next.append(center) else: pass self.remove_foothills(q_data, marked, b, bin_lower, centers, foothills) del deferred_from_last[:] del deferred_to_next[:] return marked
Finds the local maxima in the input grid and performs region growing to identify objects. Args: input_grid: Raw input data. Returns: Array with labeled objects.
codesearchnet
def compute_weighted_loss(losses, sample_weight=None, reduction=ReductionV2.SUM_OVER_BATCH_SIZE, name=None): ReductionV2.validate(reduction) if reduction == ReductionV2.AUTO: reduction = ReductionV2.SUM_OVER_BATCH_SIZE if sample_weight is None: sample_weight = 1.0 with backend.name_scope(name or 'weighted_loss'): ops.get_default_graph()._last_loss_reduction = reduction if not isinstance(losses, (keras_tensor.KerasTensor, ragged_tensor.RaggedTensor)): losses = tensor_conversion.convert_to_tensor_v2_with_dispatch(losses) input_dtype = losses.dtype if not isinstance(sample_weight, keras_tensor.KerasTensor): sample_weight = tensor_conversion.convert_to_tensor_v2_with_dispatch(sample_weight) losses = math_ops.cast(losses, 'float32') sample_weight = math_ops.cast(sample_weight, 'float32') losses, _, sample_weight = squeeze_or_expand_dimensions(losses, None, sample_weight) weighted_losses = math_ops.multiply(losses, sample_weight) loss = reduce_weighted_loss(weighted_losses, reduction) loss = math_ops.cast(loss, input_dtype) return loss
Computes the weighted loss. Args: losses: `Tensor` of shape `[batch_size, d1, ... dN]`. sample_weight: Optional `Tensor` whose rank is either 0, or the same rank as `losses`, or be broadcastable to `losses`. reduction: (Optional) Type of `tf.keras.losses.Reduction` to apply to loss. Default value is `SUM_OVER_BATCH_SIZE`. name: Optional name for the op. Raises: ValueError: If the shape of `sample_weight` is not compatible with `losses`. Returns: Weighted loss `Tensor` of the same type as `losses`. If `reduction` is `NONE`, this has the same shape as `losses`; otherwise, it is scalar.
github-repos
def sparse_top_k_categorical_accuracy(y_true, y_pred, k=5, from_sorted_ids=False): reshape_matches = False y_pred = ops.convert_to_tensor(y_pred) y_true_dtype = y_pred.dtype if from_sorted_ids else 'int32' y_true = ops.convert_to_tensor(y_true, dtype=y_true_dtype) y_true_rank = len(y_true.shape) y_pred_rank = len(y_pred.shape) y_true_org_shape = ops.shape(y_true) if y_true_rank is not None and y_pred_rank is not None: if y_pred_rank > 2: y_pred = ops.reshape(y_pred, [-1, y_pred.shape[-1]]) if y_true_rank > 1: reshape_matches = True y_true = ops.reshape(y_true, [-1]) if from_sorted_ids: matches = ops.any(ops.equal(ops.expand_dims(y_true, axis=1), y_pred[:, :k]), axis=1) else: matches = ops.in_top_k(y_true, y_pred, k=k) matches = ops.cast(matches, dtype=backend.floatx()) if reshape_matches: matches = ops.reshape(matches, y_true_org_shape) return matches
Computes how often integer targets are in the top `K` predictions. Args: y_true: A tensor of shape `(batch_size)` representing indices or IDs of true categories. y_pred: If `from_sorted_ids=False`, a tensor of shape `(batch_size, num_categories)` containing the scores for each sample for all possible categories. If `from_sorted_ids=True`, a tensor of shape `(batch_size, N)` containing indices or IDs of the top `N` categories in order from highest score to lowest score. k: (Optional) Number of top elements to look at for computing accuracy. Defaults to `5`. from_sorted_ids: (Optional) Whether `y_pred` is sorted category IDs or scores for all categories (the default). Returns: A tensor with the same shape as `y_true` containing ones where `y_true` is in the top `k` and zeros elsewhere.
github-repos
def load_install_json(self, filename=None): if (filename is None): filename = 'install.json' file_fqpn = os.path.join(self.app_path, filename) install_json = None if os.path.isfile(file_fqpn): try: with open(file_fqpn, 'r') as fh: install_json = json.load(fh) except ValueError as e: self.handle_error('Failed to load "{}" file ({}).'.format(file_fqpn, e)) else: self.handle_error('File "{}" could not be found.'.format(file_fqpn)) return install_json
Return install.json data. Args: filename (str, optional): Defaults to None. The install.json filename (for bundled Apps). Returns: dict: The contents of the install.json file.
codesearchnet
def is_supported(cls, file=None, request=None, response=None, url_info=None): tests = ((response, cls.is_response), (file, cls.is_file), (request, cls.is_request), (url_info, cls.is_url)) for (instance, method) in tests: if instance: try: result = method(instance) except NotImplementedError: pass else: if result: return True elif (result is VeryFalse): return VeryFalse
Given the hints, return whether the document is supported. Args: file: A file object containing the document. request (:class:`.http.request.Request`): An HTTP request. response (:class:`.http.request.Response`): An HTTP response. url_info (:class:`.url.URLInfo`): A URLInfo. Returns: bool: If True, the reader should be able to read it.
codesearchnet
def get_named_parent(decl): if (not decl): return None parent = decl.parent while (parent and ((not parent.name) or (parent.name == '::'))): parent = parent.parent return parent
Returns a reference to a named parent declaration. Args: decl (declaration_t): the child declaration Returns: declaration_t: the declaration or None if not found.
codesearchnet
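The parent-walking logic above only needs `.name` and `.parent`, so a tiny stand-in class is enough to exercise it (`Decl` below is hypothetical, not part of the original library):

    class Decl:
        def __init__(self, name, parent=None):
            self.name, self.parent = name, parent

    def get_named_parent(decl):
        if not decl:
            return None
        parent = decl.parent
        while parent and (not parent.name or parent.name == '::'):
            parent = parent.parent   # skip anonymous and global-scope parents
        return parent

    ns = Decl('my_namespace', parent=Decl('::'))
    cls = Decl('MyClass', parent=Decl('', parent=ns))
    print(get_named_parent(Decl('method', parent=cls)).name)  # MyClass
    print(get_named_parent(cls).name)                         # my_namespace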
def unlock(self): if (not unlockers.unlock(self, self._device.manufacturer)): raise errors.JLinkException('Failed to unlock device.') return True
Unlocks the device connected to the J-Link. Unlocking a device allows for access to read/writing memory, as well as flash programming. Note: Unlock is not supported on all devices. Supported Devices: Kinetis Returns: ``True``. Raises: JLinkException: if the device fails to unlock.
codesearchnet
def is_file(self, follow_symlinks=True): return self._system.isfile( path=self._path, client_kwargs=self._client_kwargs)
Return True if this entry is a file or a symbolic link pointing to a file; return False if the entry is or points to a directory or other non-file entry, or if it doesn’t exist anymore. The result is cached on the os.DirEntry object. Args: follow_symlinks (bool): Follow symlinks. Not supported on cloud storage objects. Returns: bool: True if the entry is (or points to) a file.
juraj-google-style
def _ensure_list(tensor_or_list): if isinstance(tensor_or_list, (list, tuple)): return list(tensor_or_list), True return [tensor_or_list], False
Converts the input arg to a list if it is not a list already. Args: tensor_or_list: A `Tensor` or a Python list of `Tensor`s. The argument to convert to a list of `Tensor`s. Returns: A tuple of two elements. The first is a Python list of `Tensor`s containing the original arguments. The second is a boolean indicating whether the original argument was a list or tuple already.
juraj-google-style
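`_ensure_list` works on any object, not just tensors, so its behaviour can be tried directly:

    def ensure_list(tensor_or_list):
        if isinstance(tensor_or_list, (list, tuple)):
            return list(tensor_or_list), True
        return [tensor_or_list], False

    print(ensure_list('x'))          # (['x'], False)
    print(ensure_list(('a', 'b')))   # (['a', 'b'], True)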
def assert_input_compatibility(input_spec, inputs, layer_name): if not input_spec: return input_spec = nest.flatten(input_spec) if isinstance(inputs, dict): names = [spec.name for spec in input_spec] if all(names): list_inputs = [] for name in names: if name not in inputs: raise ValueError('Missing data for input "%s". You passed a data dictionary with keys %s. Expected the following keys: %s' % (name, list(inputs.keys()), names)) list_inputs.append(inputs[name]) inputs = list_inputs inputs = nest.flatten(inputs) for x in inputs: if not hasattr(x, 'shape'): raise TypeError('Inputs to a layer should be tensors. Got: %s' % (x,)) if len(inputs) != len(input_spec): raise ValueError('Layer ' + layer_name + ' expects ' + str(len(input_spec)) + ' input(s), but it received ' + str(len(inputs)) + ' input tensors. Inputs received: ' + str(inputs)) for input_index, (x, spec) in enumerate(zip(inputs, input_spec)): if spec is None: continue shape = tensor_shape.TensorShape(x.shape) if shape.rank is None: return if spec.ndim is not None and (not spec.allow_last_axis_squeeze): ndim = shape.rank if ndim != spec.ndim: raise ValueError('Input ' + str(input_index) + ' of layer ' + layer_name + ' is incompatible with the layer: expected ndim=' + str(spec.ndim) + ', found ndim=' + str(ndim) + '. Full shape received: ' + str(tuple(shape))) if spec.max_ndim is not None: ndim = x.shape.rank if ndim is not None and ndim > spec.max_ndim: raise ValueError('Input ' + str(input_index) + ' of layer ' + layer_name + ' is incompatible with the layer: expected max_ndim=' + str(spec.max_ndim) + ', found ndim=' + str(ndim)) if spec.min_ndim is not None: ndim = x.shape.rank if ndim is not None and ndim < spec.min_ndim: raise ValueError('Input ' + str(input_index) + ' of layer ' + layer_name + ' is incompatible with the layer: : expected min_ndim=' + str(spec.min_ndim) + ', found ndim=' + str(ndim) + '. Full shape received: ' + str(tuple(shape))) if spec.dtype is not None: if x.dtype.name != spec.dtype: raise ValueError('Input ' + str(input_index) + ' of layer ' + layer_name + ' is incompatible with the layer: expected dtype=' + str(spec.dtype) + ', found dtype=' + str(x.dtype)) shape_as_list = shape.as_list() if spec.axes: for axis, value in spec.axes.items(): if hasattr(value, 'value'): value = value.value if value is not None and shape_as_list[int(axis)] not in {value, None}: raise ValueError('Input ' + str(input_index) + ' of layer ' + layer_name + ' is incompatible with the layer: expected axis ' + str(axis) + ' of input shape to have value ' + str(value) + ' but received input with shape ' + display_shape(x.shape)) if spec.shape is not None and shape.rank is not None: spec_shape = spec.shape if spec.allow_last_axis_squeeze: if shape_as_list and shape_as_list[-1] == 1: shape_as_list = shape_as_list[:-1] if spec_shape and spec_shape[-1] == 1: spec_shape = spec_shape[:-1] for spec_dim, dim in zip(spec_shape, shape_as_list): if spec_dim is not None and dim is not None: if spec_dim != dim: raise ValueError('Input ' + str(input_index) + ' is incompatible with layer ' + layer_name + ': expected shape=' + str(spec.shape) + ', found shape=' + display_shape(x.shape))
Checks compatibility between the layer and provided inputs. This checks that the tensor(s) `inputs` verify the input assumptions of a layer (if any). If not, a clear and actional exception gets raised. Args: input_spec: An InputSpec instance, list of InputSpec instances, a nested structure of InputSpec instances, or None. inputs: Input tensor, list of input tensors, or a nested structure of input tensors. layer_name: String, name of the layer (for error message formatting). Raises: ValueError: in case of mismatch between the provided inputs and the expectations of the layer.
github-repos
def macro_state(self, micro_state): assert (len(micro_state) == len(self.micro_indices)) reindexed = self.reindex() micro_state = np.array(micro_state) return tuple(((0 if (sum(micro_state[list(reindexed.partition[i])]) in self.grouping[i][0]) else 1) for i in self.macro_indices))
Translate a micro state to a macro state Args: micro_state (tuple[int]): The state of the micro nodes in this coarse-graining. Returns: tuple[int]: The state of the macro system, translated as specified by this coarse-graining. Example: >>> coarse_grain = CoarseGrain(((1, 2),), (((0,), (1, 2)),)) >>> coarse_grain.macro_state((0, 0)) (0,) >>> coarse_grain.macro_state((1, 0)) (1,) >>> coarse_grain.macro_state((1, 1)) (1,)
codesearchnet
def write(self, output_buffer, kmip_version=enums.KMIPVersion.KMIP_1_3): if (kmip_version < enums.KMIPVersion.KMIP_1_3): raise exceptions.VersionNotSupported('KMIP {} does not support the CapabilityInformation object.'.format(kmip_version.value)) local_buffer = BytearrayStream() if self._streaming_capability: self._streaming_capability.write(local_buffer, kmip_version=kmip_version) if self._asynchronous_capability: self._asynchronous_capability.write(local_buffer, kmip_version=kmip_version) if self._attestation_capability: self._attestation_capability.write(local_buffer, kmip_version=kmip_version) if (kmip_version >= enums.KMIPVersion.KMIP_1_4): if self._batch_undo_capability: self._batch_undo_capability.write(local_buffer, kmip_version=kmip_version) if self._batch_continue_capability: self._batch_continue_capability.write(local_buffer, kmip_version=kmip_version) if self._unwrap_mode: self._unwrap_mode.write(local_buffer, kmip_version=kmip_version) if self._destroy_action: self._destroy_action.write(local_buffer, kmip_version=kmip_version) if self._shredding_algorithm: self._shredding_algorithm.write(local_buffer, kmip_version=kmip_version) if self._rng_mode: self._rng_mode.write(local_buffer, kmip_version=kmip_version) self.length = local_buffer.length() super(CapabilityInformation, self).write(output_buffer, kmip_version=kmip_version) output_buffer.write(local_buffer.buffer)
Write the CapabilityInformation structure encoding to the data stream. Args: output_buffer (stream): A data stream in which to encode CapabilityInformation structure data, supporting a write method. kmip_version (enum): A KMIPVersion enumeration defining the KMIP version with which the object will be encoded. Optional, defaults to KMIP 1.3. Raises: VersionNotSupported: Raised when a KMIP version is provided that does not support the CapabilityInformation structure.
codesearchnet
def disambiguate_text(self, text, language=None, entities=None): body = { "text": text, "entities": [], "onlyNER": "false", "customisation": "generic" } if language: body['language'] = {"lang": language} if entities: body['entities'] = entities result, status_code = self._process_query(body) if status_code != 200: logger.debug('Disambiguation failed.') return result, status_code
Call the disambiguation service in order to get meanings. Args: text (str): Text to be disambiguated. language (str): language of text (if known) entities (list): list of entities or mentions to be supplied by the user. Returns: dict, int: API response and API status.
juraj-google-style
def generate_nearest_neighbour_lookup_table(self): self.jump_probability = {} for site_label_1 in self.connected_site_pairs: self.jump_probability[site_label_1] = {} for site_label_2 in self.connected_site_pairs[site_label_1]: self.jump_probability[site_label_1][site_label_2] = {} for coordination_1 in range(self.max_coordination_per_site[site_label_1]): self.jump_probability[site_label_1][site_label_2][coordination_1] = {} for coordination_2 in range(1, (self.max_coordination_per_site[site_label_2] + 1)): self.jump_probability[site_label_1][site_label_2][coordination_1][coordination_2] = self.relative_probability(site_label_1, site_label_2, coordination_1, coordination_2)
Construct a look-up table of relative jump probabilities for a nearest-neighbour interaction Hamiltonian. Args: None. Returns: None.
codesearchnet
def wait_until_page_ready(page_object, timeout=WTF_TIMEOUT_MANAGER.NORMAL): try: do_until(lambda: page_object.webdriver.execute_script("return document.readyState").lower() == 'complete', timeout) except wait_utils.OperationTimeoutError: raise PageUtilOperationTimeoutError( "Timeout occurred while waiting for page to be ready.")
Waits until document.readyState == Complete (e.g. ready to execute javascript commands) Args: page_object (PageObject) : PageObject class Kwargs: timeout (number) : timeout period
juraj-google-style
def infer_from_frame_stack(self, ob_stack): (logits, vf) = self.sess.run([self.logits_t, self.value_function_t], feed_dict={self.obs_t: ob_stack}) return (logits, vf)
Infer policy from stack of observations. Args: ob_stack: array of shape (1, frame_stack_size, height, width, channels) Returns: logits and vf.
codesearchnet
def update(self, measurement, measurement_matrix): measurement_matrix = np.atleast_2d(measurement_matrix) expected_meas_mat_shape = (measurement.mean.shape[0], self.state_length) if measurement_matrix.shape != expected_meas_mat_shape: raise ValueError("Measurement matrix is wrong shape ({}). " \ "Expected: {}".format( measurement_matrix.shape, expected_meas_mat_shape)) self.measurements[-1].append(measurement) self.measurement_matrices[-1].append(measurement_matrix) prior = self.posterior_state_estimates[-1] innovation = measurement.mean - measurement_matrix.dot(prior.mean) innovation_cov = measurement_matrix.dot(prior.cov).dot( measurement_matrix.T) innovation_cov += measurement.cov kalman_gain = prior.cov.dot(measurement_matrix.T).dot( np.linalg.inv(innovation_cov)) post = self.posterior_state_estimates[-1] self.posterior_state_estimates[-1] = MultivariateNormal( mean=post.mean + kalman_gain.dot(innovation), cov=post.cov - kalman_gain.dot(measurement_matrix).dot(prior.cov) )
After each :py:meth:`predict`, this method may be called repeatedly to provide additional measurements for each time step. Args: measurement (MultivariateNormal): Measurement for this time step with specified mean and covariance. measurement_matrix (array): Measurement matrix for this measurement.
juraj-google-style
def find(self, predicate, first_n=0, device_name=None, exclude_node_names=None): if exclude_node_names: exclude_node_names = re.compile(exclude_node_names) matched_data = [] for device in self._dump_tensor_data if device_name is None else (self._dump_tensor_data[device_name],): for datum in self._dump_tensor_data[device]: if exclude_node_names and exclude_node_names.match(datum.node_name): continue if predicate(datum, datum.get_tensor()): matched_data.append(datum) if first_n > 0 and len(matched_data) >= first_n: return matched_data return matched_data
Find dumped tensor data by a certain predicate. Args: predicate: A callable that takes two input arguments: ```python def predicate(debug_tensor_datum, tensor): # returns a bool ``` where `debug_tensor_datum` is an instance of `DebugTensorDatum`, which carries the metadata, such as the `Tensor`'s node name, output slot, timestamp, debug op name, etc.; and `tensor` is the dumped tensor value as a `numpy.ndarray`. first_n: (`int`) return only the first n `DebugTensorDatum` instances (in time order) for which the predicate returns True. To return all the `DebugTensorDatum` instances, let first_n be <= 0. device_name: optional device name. exclude_node_names: Optional regular expression to exclude nodes with names matching the regular expression. Returns: A list of all `DebugTensorDatum` objects in this `DebugDumpDir` object for which predicate returns True, sorted in ascending order of the timestamp.
github-repos
def select_copula(cls, X): frank = Bivariate(CopulaTypes.FRANK) frank.fit(X) if frank.tau <= 0: selected_theta = frank.theta selected_copula = CopulaTypes.FRANK return selected_copula, selected_theta copula_candidates = [frank] theta_candidates = [frank.theta] try: clayton = Bivariate(CopulaTypes.CLAYTON) clayton.fit(X) copula_candidates.append(clayton) theta_candidates.append(clayton.theta) except ValueError: pass try: gumbel = Bivariate(CopulaTypes.GUMBEL) gumbel.fit(X) copula_candidates.append(gumbel) theta_candidates.append(gumbel.theta) except ValueError: pass z_left, L, z_right, R = cls.compute_empirical(X) left_dependence, right_dependence = cls.get_dependencies( copula_candidates, z_left, z_right) cost_L = [np.sum((L - l) ** 2) for l in left_dependence] cost_R = [np.sum((R - r) ** 2) for r in right_dependence] cost_LR = np.add(cost_L, cost_R) selected_copula = np.argmax(cost_LR) selected_theta = theta_candidates[selected_copula] return CopulaTypes(selected_copula), selected_theta
Select best copula function based on likelihood. Args: X: 2-dimensional `np.ndarray` Returns: tuple: `tuple(CopulaType, float)` best fit and model param.
juraj-google-style
def are_equal_elements(a_el, b_el): if (a_el.tagName != b_el.tagName): return False if (sorted(a_el.attributes.items()) != sorted(b_el.attributes.items())): return False if (len(a_el.childNodes) != len(b_el.childNodes)): return False for (a_child_el, b_child_el) in zip(a_el.childNodes, b_el.childNodes): if (a_child_el.nodeType != b_child_el.nodeType): return False if ((a_child_el.nodeType == a_child_el.TEXT_NODE) and (a_child_el.data != b_child_el.data)): return False if ((a_child_el.nodeType == a_child_el.ELEMENT_NODE) and (not are_equal_elements(a_child_el, b_child_el))): return False return True
Normalize and compare ElementTrees for equality. Args: a_el: ElementTree b_el: ElementTree ElementTrees to compare for equality. Returns: bool: ``True`` if the ElementTrees are semantically equivalent.
codesearchnet
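`are_equal_elements` expects DOM elements (tagName, attributes, childNodes), so documents parsed with `xml.dom.minidom` can drive it directly; the XML snippets below are invented:

    from xml.dom import minidom

    def parse_root(xml_text):
        return minidom.parseString(xml_text).documentElement

    a = parse_root('<item id="1"><name>foo</name></item>')
    b = parse_root('<item id="1"><name>foo</name></item>')
    c = parse_root('<item id="2"><name>foo</name></item>')
    print(are_equal_elements(a, b))   # True: same tag, attributes, children and text
    print(are_equal_elements(a, c))   # False: the id attribute differs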
def get_unspent_outputs(self): cursor = backend.query.get_unspent_outputs(self.connection) return (record for record in cursor)
Get the utxoset. Returns: generator of unspent_outputs.
codesearchnet
def pull(self, project, run=None, entity=None): project, run = self.parse_slug(project, run=run) urls = self.download_urls(project, run, entity) responses = [] for fileName in urls: _, response = self.download_write_file(urls[fileName]) if response: responses.append(response) return responses
Download files from W&B Args: project (str): The project to download run (str, optional): The run to download entity (str, optional): The entity to scope this project to. Defaults to wandb models Returns: A list of requests library response objects
juraj-google-style
def obtain_all_variant_tensor_ops(dataset): return _traverse(dataset, lambda op: op.outputs[0].dtype == dtypes.variant)
Given an input dataset, finds all dataset ops used for construction. A series of transformations would have created this dataset with each transformation including zero or more Dataset ops, each producing a dataset variant tensor. This method outputs all of them. Args: dataset: Dataset to find variant tensors for. Returns: A list of variant_tensor producing dataset ops used to construct this dataset.
github-repos
def download_supplementary_files(self, directory='./', download_sra=True, email=None, sra_kwargs=None): directory_path = os.path.abspath(os.path.join(directory, ('%s_%s_%s' % ('Supp', self.get_accession(), re.sub('[\\s\\*\\?\\(\\),\\.;]', '_', self.metadata['title'][0]))))) utils.mkdir_p(os.path.abspath(directory_path)) downloaded_paths = dict() if (sra_kwargs is None): sra_kwargs = {} blacklist = ('NONE',) for (metakey, metavalue) in iteritems(self.metadata): if ('supplementary_file' in metakey): assert ((len(metavalue) == 1) and (metavalue != '')) if (metavalue[0] in blacklist): logger.warn(("%s value is blacklisted as '%s' - skipping" % (metakey, metavalue[0]))) continue if ('sra' not in metavalue[0]): download_path = os.path.abspath(os.path.join(directory, os.path.join(directory_path, metavalue[0].split('/')[(- 1)]))) try: utils.download_from_url(metavalue[0], download_path) downloaded_paths[metavalue[0]] = download_path except Exception as err: logger.error(('Cannot download %s supplementary file (%s)' % (self.get_accession(), err))) if download_sra: try: downloaded_files = self.download_SRA(email, directory=directory, **sra_kwargs) downloaded_paths.update(downloaded_files) except Exception as err: logger.error(('Cannot download %s SRA file (%s)' % (self.get_accession(), err))) return downloaded_paths
Download all supplementary data available for the sample. Args: directory (:obj:`str`): Directory to download the data (in this directory function will create new directory with the files). Defaults to "./". download_sra (:obj:`bool`): Indicates whether to download SRA raw data too. Defaults to True. email (:obj:`str`): E-mail that will be provided to the Entrez. It is mandatory if download_sra=True. Defaults to None. sra_kwargs (:obj:`dict`, optional): Kwargs passed to the download_SRA method. Defaults to None. Returns: :obj:`dict`: A key-value pair of name taken from the metadata and paths downloaded, in the case of SRA files the key is ``SRA``.
codesearchnet
def make_iaf_stack(total_event_size, num_hidden_layers=2, seed=None, dtype=tf.float32): seed = tfd.SeedStream(seed, 'make_iaf_stack') def make_iaf(): initializer = tf.compat.v2.keras.initializers.VarianceScaling( 2 * 0.01, seed=seed() % (2**31 - 1)) made = tfb.AutoregressiveLayer( params=2, event_shape=[total_event_size], hidden_units=[total_event_size] * num_hidden_layers, activation=tf.nn.elu, kernel_initializer=initializer, dtype=dtype) def shift_and_scale(x): x.set_shape( x.shape.merge_with([None] * (x.shape.ndims - 1) + [total_event_size])) return tf.unstack(made(x), num=2, axis=-1) return tfb.Invert(tfb.MaskedAutoregressiveFlow(shift_and_scale)) def make_swap(): permutation = list(reversed(range(total_event_size))) return tfb.Permute(permutation) bijector = make_iaf() bijector = make_swap()(bijector) bijector = make_iaf()(bijector) bijector = make_swap()(bijector) bijector = make_iaf()(bijector) bijector = make_swap()(bijector) return bijector
Creates an stacked IAF bijector. This bijector operates on vector-valued events. Args: total_event_size: Number of dimensions to operate over. num_hidden_layers: How many hidden layers to use in each IAF. seed: Random seed for the initializers. dtype: DType for the variables. Returns: bijector: The created bijector.
juraj-google-style
def addColumn(self, header, values=[]): if (len(values) == 0): self._impl.addColumn(header) else: assert (len(values) == self.getNumRows()) if any((isinstance(value, basestring) for value in values)): values = list(map(str, values)) self._impl.addColumnStr(header, values) elif all((isinstance(value, Real) for value in values)): values = list(map(float, values)) self._impl.addColumnDbl(header, values) else: raise NotImplementedError
Add a new column with the corresponding header and values to the dataframe. Args: header: The name of the new column. values: A list of size :func:`~amplpy.DataFrame.getNumRows` with all the values of the new column.
codesearchnet
def unpack(self, buff, offset=0): try: self._value = struct.unpack_from(self._fmt, buff, offset)[0] if self.enum_ref: self._value = self.enum_ref(self._value) except (struct.error, TypeError, ValueError) as exception: msg = '{}; fmt = {}, buff = {}, offset = {}.'.format(exception, self._fmt, buff, offset) raise UnpackException(msg)
Unpack *buff* into this object. This method will convert a binary data into a readable value according to the attribute format. Args: buff (bytes): Binary buffer. offset (int): Where to begin unpacking. Raises: :exc:`~.exceptions.UnpackException`: If unpack fails.
juraj-google-style
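The `struct.unpack_from` pattern the method wraps, shown on a hand-built buffer (the format strings and offsets are arbitrary):

    import struct

    buff = struct.pack('!HB', 0x0102, 7) + b'tail'
    print(struct.unpack_from('!H', buff, 0)[0])   # 258: big-endian unsigned short at offset 0
    print(struct.unpack_from('!B', buff, 2)[0])   # 7: single byte at offset 2

    try:
        struct.unpack_from('!I', buff, 4)         # only 3 bytes left -> struct.error
    except struct.error as exc:
        print('unpack failed:', exc)              # the kind of error UnpackException wraps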
def ldap_sync(self, **kwargs): path = '/groups/%s/ldap_sync' % self.get_id() self.manager.gitlab.http_post(path, **kwargs)
Sync LDAP groups. Args: **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabCreateError: If the server cannot perform the request
juraj-google-style
def ensure_value_spec(value_spec: class_schema.ValueSpec, src_spec: class_schema.ValueSpec, root_path: typing.Optional[utils.KeyPath]=None) -> typing.Optional[class_schema.ValueSpec]: if isinstance(value_spec, Union): value_spec = value_spec.get_candidate(src_spec) if isinstance(value_spec, Any): return None if not src_spec.is_compatible(value_spec): raise TypeError(utils.message_on_path(f'Source spec {src_spec} is not compatible with destination spec {value_spec}.', root_path)) return value_spec
Extract counter part from value spec that matches dest spec type. Args: value_spec: Value spec. src_spec: Destination value spec. root_path: An optional path for the value to include in error message. Returns: value_spec of src_spec_type Raises: TypeError: When value_spec cannot match src_spec_type.
github-repos
def _SetUnknownFlag(self, name, value): setter = self.__dict__['__set_unknown'] if setter: try: setter(name, value) return value except (TypeError, ValueError): raise exceptions.IllegalFlagValueError('"{1}" is not valid for --{0}' .format(name, value)) except NameError: pass raise exceptions.UnrecognizedFlagError(name, value)
Returns value if setting flag |name| to |value| returned True. Args: name: Name of the flag to set. value: Value to set. Returns: Flag value on successful call. Raises: UnrecognizedFlagError IllegalFlagValueError
juraj-google-style
def from_composition_and_entries(comp, entries_in_chemsys, working_ion_symbol='Li'): pd = PhaseDiagram(entries_in_chemsys) return ConversionElectrode.from_composition_and_pd(comp, pd, working_ion_symbol)
Convenience constructor to make a ConversionElectrode from a composition and all entries in a chemical system. Args: comp: Starting composition for ConversionElectrode, e.g., Composition("FeF3") entries_in_chemsys: Sequence containing all entries in a chemical system. E.g., all Li-Fe-F containing entries. working_ion_symbol: Element symbol of working ion. Defaults to Li.
codesearchnet
def adb_cmd(self, command, **kwargs): kwargs['timeout'] = kwargs.get('timeout', self._adb_shell_timeout) if isinstance(command, list) or isinstance(command, tuple): return self.adb_device.run_cmd(*list(command), **kwargs) return self.adb_device.run_cmd(command, **kwargs)
Run adb command, for example: adb(['pull', '/data/local/tmp/a.png']) Args: command: string or list of string Returns: command output
juraj-google-style
def from_xmrs(cls, xmrs, **kwargs): x = cls() x.__dict__.update(xmrs.__dict__) return x
Facilitate conversion among subclasses. Args: xmrs (:class:`Xmrs`): instance to convert from; possibly an instance of a subclass, such as :class:`Mrs` or :class:`Dmrs` **kwargs: additional keyword arguments that may be used by a subclass's redefinition of :meth:`from_xmrs`.
codesearchnet
def get_nodes_lines(self, **kwargs): params = {'Nodes': util.ints_to_string(kwargs.get('nodes', []))} result = self.make_request('bus', 'get_nodes_lines', **params) if (not util.check_result(result)): return (False, result.get('resultDescription', 'UNKNOWN ERROR')) values = util.response_list(result, 'resultValues') return (True, [emtype.NodeLinesItem(**a) for a in values])
Obtain stop IDs, coordinates and line information. Args: nodes (list[int] | int): nodes to query, may be empty to get all nodes. Returns: Status boolean and parsed response (list[NodeLinesItem]), or message string in case of error.
codesearchnet
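A hedged example of the status/payload pattern this method returns; `emt` is assumed to be an already-authenticated pyemtmad Wrapper instance and the node ids are placeholders:

# `emt` is an assumed, authenticated pyemtmad Wrapper; node ids are examples.
ok, result = emt.bus.get_nodes_lines(nodes=[111, 2212])
if ok:
    for node in result:       # NodeLinesItem objects parsed from the response
        print(node)
else:
    print('Error:', result)   # result is the error description string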
def google_api_initilaize(config, api_call, alias=None): if api_call['function'].endswith('list') or alias == 'list': api_call['iterate'] = True if api_call['api'] == 'dfareporting': if not api_call['function'].startswith('userProfiles'): is_superuser, profile_id = get_profile_for_api(config, api_call['auth'], api_call['kwargs']['id'] if api_call['function'] == 'accounts.get' else api_call['kwargs']['accountId']) api_call['kwargs']['profileId'] = profile_id if is_superuser: api_call['version'] = 'prerelease' elif 'accountId' in api_call['kwargs']: del api_call['kwargs']['accountId']
Some Google API calls require a lookup or pre-call, add it here.

Modifies the API call before actual execution with any data
specifically required by an endpoint.  Currently:

  > dfa-reporting - look up user profile and add to call.

Args:
  config: project configuration used when looking up the user profile.
  api_call (dict): the JSON for the API call as defined in recipe.
  alias (string): mostly used to signal a list behavior (change to iterate in future?)

Returns (dict): A modified JSON with additional API values added.
Currently mostly used by dfareporting API to add profile and account.

Raises:
  ValueError: If a required key in the recipe is missing.
github-repos
def __truediv__(self, other): return self.__class__(self.x, self.y.__truediv__(other), *self._args, **self._kwargs)
True division of y Args: other: The divisor Returns: Spectrum object with y values divided
juraj-google-style
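A small sketch of the intent, assuming Spectrum is the class that defines this __truediv__ and that y supports scalar division (e.g. a numpy array):

import numpy as np

# Sketch: normalise a spectrum by dividing all y values by a scalar; x is untouched.
spec = Spectrum(np.array([400, 500, 600]), np.array([10.0, 20.0, 40.0]))
normalised = spec / 40.0       # dispatches to __truediv__ above
print(normalised.y)            # [0.25 0.5  1. ]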
def bridge_create(br, may_exist=True, parent=None, vlan=None): param_may_exist = _param_may_exist(may_exist) if ((parent is not None) and (vlan is None)): raise ArgumentValueError('If parent is specified, vlan must also be specified.') if ((vlan is not None) and (parent is None)): raise ArgumentValueError('If vlan is specified, parent must also be specified.') param_parent = ('' if (parent is None) else ' {0}'.format(parent)) param_vlan = ('' if (vlan is None) else ' {0}'.format(vlan)) cmd = 'ovs-vsctl {1}add-br {0}{2}{3}'.format(br, param_may_exist, param_parent, param_vlan) result = __salt__['cmd.run_all'](cmd) return _retcode_to_bool(result['retcode'])
Creates a new bridge. Args: br: A string - bridge name may_exist: Bool, if False - attempting to create a bridge that exists returns False. parent: String, the name of the parent bridge (if the bridge shall be created as a fake bridge). If specified, vlan must also be specified. vlan: Int, the VLAN ID of the bridge (if the bridge shall be created as a fake bridge). If specified, parent must also be specified. Returns: True on success, else False. .. versionadded:: 2016.3.0 CLI Example: .. code-block:: bash salt '*' openvswitch.bridge_create br0
codesearchnet
def enableEditing(self, enabled): for button in self.buttons[1:]: button.setEnabled(enabled) if button.isChecked(): button.setChecked(False) model = self.tableView.model() if model is not None: model.enableEditing(enabled)
Enable the editing buttons to add/remove rows/columns and to edit the data. This method is also a slot. In addition, the data of model will be made editable, if the `enabled` parameter is true. Args: enabled (bool): This flag indicates, if the buttons shall be activated.
juraj-google-style
def destringize(self, string): m = read_tuple_destr_pattern.match(string) if (not m): smbl.messages.error("'{}' is not a valid read name with respect to the RNF specification".format(string), program='RNFtools', subprogram='RNF format', exception=ValueError) groups = m.groups() self.prefix = groups[0] read_tuple_id = groups[1] self.read_tuple_id = int(read_tuple_id, 16) self.segments = [] segments_str = groups[2:(- 1)] for b_str in segments_str: if (b_str is not None): if (b_str[0] == ','): b_str = b_str[1:] b = rnftools.rnfformat.Segment() b.destringize(b_str) self.segments.append(b) self.suffix = groups[(- 1)]
Get RNF values for this read from its textual representation and save them into this object. Args: string(str): Textual representation of a read. Raises: ValueError
codesearchnet
def doMove(self, orgresource, dstresource, dummy = 56184, overwrite = 'F', bShareFireCopy = 'false'):
    url = nurls['doMove']
    data = {'userid': self.user_id,
            'useridx': self.useridx,
            'dummy': dummy,
            'orgresource': orgresource,
            'dstresource': dstresource,
            'overwrite': overwrite,
            'bShareFireCopy': bShareFireCopy,
            }
    r = self.session.post(url = url, data = data)

    try:
        j = json.loads(r.text)   # validate that the response body is JSON
    except:
        print('[*] Error doMove: response is not valid JSON')
        return False

    return self.resultManager(r.text)
DoMove

Args:
    dummy: ???
    orgresource: Path for a file which you want to move
    dstresource: Destination path
    overwrite: Overwrite flag sent to the API (default 'F')
    bShareFireCopy: ???

Returns:
    True: Move success
    False: Move failed
juraj-google-style
def list_classes(mod_name): mod = sys.modules[mod_name] return [cls.__name__ for cls in mod.__dict__.values() if is_mod_class(mod, cls)]
Lists all classes declared in a module.

Args:
    mod_name: the module name

Returns:
    A list of the names of the classes declared in that module.
juraj-google-style
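For example, assuming a hypothetical module named shapes that defines two classes and is importable, usage would look like this sketch:

import shapes                    # hypothetical example module

# The module must already be imported, since sys.modules is consulted.
print(list_classes('shapes'))    # e.g. ['Circle', 'Square']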
def write_input(self, output_dir, make_dir_if_not_present=True, include_cif=False): vinput = self.get_vasp_input() vinput.write_input(output_dir, make_dir_if_not_present=make_dir_if_not_present) if include_cif: s = vinput['POSCAR'].structure fname = (Path(output_dir) / ('%s.cif' % re.sub('\\s', '', s.formula))) s.to(filename=fname)
Writes a set of VASP input to a directory. Args: output_dir (str): Directory to output the VASP input files make_dir_if_not_present (bool): Set to True if you want the directory (and the whole path) to be created if it is not present. include_cif (bool): Whether to write a CIF file in the output directory for easier opening by VESTA.
codesearchnet
def validate_key(self, key): if (not models.PasswordResetToken.valid_tokens.filter(key=key).exists()): raise serializers.ValidationError(_('The provided reset token does not exist, or is expired.')) return key
Validate the provided reset key. Returns: The validated key. Raises: serializers.ValidationError: If the provided key does not exist.
codesearchnet
def _is_part_processor_protocol(obj: Any) -> bool: def _full_name(obj: Any) -> str: return obj.__module__ + '.' + getattr(obj, '__qualname__', '') if not callable(obj): return False if isinstance(obj, types.FunctionType): type_hint = typing.get_type_hints(obj) else: type_hint = typing.get_type_hints(obj.__call__) if 'return' not in type_hint: return False return_type = type_hint.pop('return') if len(type_hint) != 1: return False if len(typing.get_args(return_type)) != 1: return False if return_type.__qualname__ != 'AsyncIterable' or _full_name(typing.get_args(return_type)[0]) != _full_name(content_api.ProcessorPart): return False if _full_name(next(iter(type_hint.values()))) != _full_name(content_api.ProcessorPart): return False return True
Returns True if `obj` implements PartProcessorFn. This function is needed as Processors and PartProcessors are Protocols and do not have proper runtime type checking. Args: obj: any object or function
github-repos
def _Open(self, path_spec, mode='rb'): if not path_spec.HasParent(): raise errors.PathSpecError( 'Unsupported path specification without parent.') resolver.Resolver.key_chain.ExtractCredentialsFromPathSpec(path_spec) fvde_volume = pyfvde.volume() file_object = resolver.Resolver.OpenFileObject( path_spec.parent, resolver_context=self._resolver_context) try: fvde.FVDEVolumeOpen( fvde_volume, path_spec, file_object, resolver.Resolver.key_chain) except: file_object.close() raise self._fvde_volume = fvde_volume self._file_object = file_object
Opens the file system defined by path specification. Args: path_spec (PathSpec): path specification. mode (Optional[str]): file access mode. The default is 'rb' read-only binary. Raises: AccessError: if the access to open the file was denied. IOError: if the file system could not be opened. PathSpecError: if the path specification is incorrect. ValueError: if the path specification is invalid.
juraj-google-style
def update_user_groups(self, user, claims): if (settings.GROUPS_CLAIM is not None): django_groups = [group.name for group in user.groups.all()] if (settings.GROUPS_CLAIM in claims): claim_groups = claims[settings.GROUPS_CLAIM] if (not isinstance(claim_groups, list)): claim_groups = [claim_groups] else: logger.debug("The configured groups claim '{}' was not found in the access token".format(settings.GROUPS_CLAIM)) claim_groups = [] groups_to_remove = (set(django_groups) - set(claim_groups)) groups_to_add = (set(claim_groups) - set(django_groups)) for group_name in groups_to_remove: group = Group.objects.get(name=group_name) user.groups.remove(group) logger.debug("User removed from group '{}'".format(group_name)) for group_name in groups_to_add: try: if settings.MIRROR_GROUPS: (group, _) = Group.objects.get_or_create(name=group_name) logger.debug("Created group '{}'".format(group_name)) else: group = Group.objects.get(name=group_name) user.groups.add(group) logger.debug("User added to group '{}'".format(group_name)) except ObjectDoesNotExist: pass
Updates user group memberships based on the GROUPS_CLAIM setting. Args: user (django.contrib.auth.models.User): User model instance claims (dict): Claims from the access token
codesearchnet
def is_valid_op(self, symmop): coords = self.centered_mol.cart_coords for site in self.centered_mol: coord = symmop.operate(site.coords) ind = find_in_coord_list(coords, coord, self.tol) if (not ((len(ind) == 1) and (self.centered_mol[ind[0]].species == site.species))): return False return True
Check if a particular symmetry operation is a valid symmetry operation for a molecule, i.e., the operation maps all atoms to another equivalent atom. Args: symmop (SymmOp): Symmetry operation to test. Returns: (bool): Whether SymmOp is valid for Molecule.
codesearchnet
def multiply(self, other): if not isinstance(other, Number): raise QiskitError("other is not a number") return Chi(other * self._data, self._input_dims, self._output_dims)
Return the QuantumChannel other * self.

Args:
    other (complex): a complex number.

Returns:
    Chi: the scalar multiplication other * self as a Chi object.

Raises:
    QiskitError: if other is not a valid scalar.
juraj-google-style
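A hedged Qiskit sketch; the identity-channel construction is an assumption for illustration, and newer Qiskit releases may prefer writing 0.5 * chan instead of multiply:

import numpy as np
from qiskit.quantum_info import Chi, Operator

chan = Chi(Operator(np.eye(2)))   # 1-qubit identity channel converted to Chi form
scaled = chan.multiply(0.5)       # scalar multiplication, as in the snippet
print(scaled.data)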
def get_nested_plot_frame(obj, key_map, cached=False): clone = obj.map(lambda x: x) for it1, it2 in zip(obj.traverse(lambda x: x), clone.traverse(lambda x: x)): if isinstance(it1, DynamicMap): with disable_constant(it2.callback): it2.callback.inputs = it1.callback.inputs with item_check(False): return clone.map(lambda x: get_plot_frame(x, key_map, cached=cached), [DynamicMap, HoloMap], clone=False)
Extracts a single frame from a nested object. Replaces any HoloMap or DynamicMap in the nested data structure, with the item corresponding to the supplied key. Args: obj: Nested Dimensioned object key_map: Dictionary mapping between dimensions and key value cached: Whether to allow looking up key in cache Returns: Nested datastructure where maps are replaced with single frames
juraj-google-style
def get_board(self, id, name=None): return self.create_board(dict(id=id, name=name))
Get a board.

Args:
    id: the id of the board.
    name: optional name of the board.

Returns:
    Board: The board with the given `id`
codesearchnet
def list(cls, session, mailbox): endpoint = ('/mailboxes/%d/conversations.json' % mailbox.id) return super(Conversations, cls).list(session, endpoint)
Return conversations in a mailbox. Args: session (requests.sessions.Session): Authenticated session. mailbox (helpscout.models.Mailbox): Mailbox to list. Returns: RequestPaginator(output_type=helpscout.models.Conversation): Conversations iterator.
codesearchnet
def _GetMaxSizeFromNestedMaximumIterations(value, while_ctxt): value_name = value.name curr_ctxt = ops.get_default_graph()._get_control_flow_context() curr_ctxt_name = curr_ctxt.name if curr_ctxt is not None else '' max_size = constant_op.constant(1) while while_ctxt not in (None, curr_ctxt): max_iter = while_ctxt.maximum_iterations if max_iter is None: raise ValueError("Cannot create a gradient accumulator for tensor '%s' inside XLA while_loop because maximum_iterations was not passed to the tf.while_loop call ('%s')." % (value_name, while_ctxt.name)) max_iter_ctxt = max_iter.op._get_control_flow_context() if util.IsContainingContext(curr_ctxt, max_iter_ctxt): max_size *= max_iter else: const_max_iter = tensor_util.constant_value(max_iter) if const_max_iter is None: raise ValueError("Cannot create a gradient accumulator for tensor '%s' inside XLA while_loop. maximum_iterations tensor '%s' for while_loop context '%s' must be statically known (e.g. a constant value or known shape dimension), or be defined at or outside the while loop context '%s' (currently defined in '%s')." % (value_name, max_iter.name, while_ctxt.name, curr_ctxt_name, max_iter_ctxt.name)) max_size *= const_max_iter while_ctxt = util.GetContainingWhileContext(while_ctxt.outer_context, stop_ctxt=curr_ctxt) return max_size
Calculate a max_size for use by stack ops inside an XLA while_loop. Args: value: The value inside the while_loop forward context. Used for printing error messages. while_ctxt: The forward context inside which value resides. This does not always match the value's immediate context, as `value` may be inside e.g. a cond context inside the while_loop. Returns: A tensor containing the `max_size` to feed to a Stack initializer. Raises: ValueError: If `value` is nested inside a `while_loop` that either lacks a `maximum_iterations` parameter, or the `maximum_iterations` parameter: - is inside a `while_loop` that is a parent of the calling context, and - cannot be evaluated at graph build time to a constant.
github-repos
def isempty(self, tables=None): tables = (tables or self.tables) for table in tables: if (self.num_rows(table) > 0): return False return True
Return whether a table or the entire database is empty.

A database is empty if it has no tables. A table is empty if it has
no rows.

Arguments:
    tables (sequence of str, optional): If provided, check that the
        named tables are empty. If not provided, check that all tables
        are empty.

Returns:
    bool: True if tables are empty, else false.

Raises:
    sql.OperationalError: If one or more of the tables do not exist.
codesearchnet
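A minimal sketch, assuming this method belongs to a database wrapper class (called Database here) that exposes tables, num_rows and execute; all of those names are assumptions:

db = Database(':memory:')                      # hypothetical wrapper class
print(db.isempty())                            # True - no tables yet
db.execute('CREATE TABLE t (x INTEGER)')
print(db.isempty())                            # True - table exists but has no rows
db.execute('INSERT INTO t VALUES (1)')
print(db.isempty())                            # False
print(db.isempty(tables=['t']))                # check a single named table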
def remote_upload(self, remote_url, folder_id=None, headers=None): kwargs = {'folder': folder_id, 'headers': headers} params = {'url': remote_url} params.update({key: value for (key, value) in kwargs.items() if value}) return self._get('remotedl/add', params=params)
Used to make a remote file upload to openload.co Note: If folder_id is not provided, the file will be uploaded to ``Home`` folder. Args: remote_url (str): direct link of file to be remotely downloaded. folder_id (:obj:`str`, optional): folder-ID to upload to. headers (:obj:`dict`, optional): additional HTTP headers (e.g. Cookies or HTTP Basic-Auth) Returns: dict: dictionary containing ("id": uploaded file id, "folderid"). :: { "id": "12", "folderid": "4248" }
codesearchnet
def disassemble_instruction(self, instruction): if not util.is_integer(instruction): raise TypeError('Expected instruction to be an integer.') buf_size = self.MAX_BUF_SIZE buf = (ctypes.c_char * buf_size)() res = self._dll.JLINKARM_DisassembleInst(ctypes.byref(buf), buf_size, instruction) if res < 0: raise errors.JLinkException('Failed to disassemble instruction.') return ctypes.string_at(buf).decode()
Disassembles and returns the assembly instruction string. Args: self (JLink): the ``JLink`` instance. instruction (int): the instruction address. Returns: A string corresponding to the assembly instruction string at the given instruction address. Raises: JLinkException: on error. TypeError: if ``instruction`` is not a number.
juraj-google-style
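A hedged pylink sketch; the serial number, device name and instruction address are placeholders only:

import pylink

jlink = pylink.JLink()
jlink.open(serial_no=123456789)            # placeholder serial number
jlink.connect('STM32F407VG')               # placeholder target device
# Address of the instruction to disassemble, per the docstring above.
print(jlink.disassemble_instruction(0x08000000))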
def remove_send_message(self, connection): if (connection in self._send_message): del self._send_message[connection] LOGGER.debug('Removed send_message function for connection %s', connection) else: LOGGER.warning('Attempted to remove send_message function for connection %s, but no send_message function was registered', connection)
Removes a send_message function previously registered with the Dispatcher. Args: connection (str): A locally unique identifier provided by the receiver of messages.
codesearchnet
def adjust_internal_tacking_values(self, min_non_zero_index, max_index, total_added): if max_index >= 0: max_value = self.get_highest_equivalent_value(self.get_value_from_index(max_index)) self.max_value = max(self.max_value, max_value) if min_non_zero_index >= 0: min_value = self.get_value_from_index(min_non_zero_index) self.min_value = min(self.min_value, min_value) self.total_count += total_added
Called during decoding and adding to adjust the new min/max values and total count

Args:
    min_non_zero_index: min non-zero index of all added counts (-1 if none)
    max_index: max index of all added counts (-1 if none)
    total_added: total count added to this histogram
juraj-google-style
def run_trials(runs): inside_runs = 0 for _ in range(runs): x = random.uniform(0, 1) y = random.uniform(0, 1) inside_runs += 1 if x * x + y * y <= 1.0 else 0 return (runs, inside_runs, 0)
Run trials and return a 3-tuple representing the results. Args: runs: Number of trial runs to be executed. Returns: A 3-tuple (total trials, inside trials, 0). The final zero is needed solely to make sure that the combine_results function has same type for inputs and outputs (a requirement for combiner functions).
github-repos
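Since the hits fall inside a quarter of the unit circle, the inside/total ratio approximates pi/4; a standalone sketch of turning one batch of trials into an estimate:

# Sketch: estimate pi from a single call to run_trials.
total, inside, _ = run_trials(100000)
estimate = 4.0 * inside / total     # quarter-circle area ratio scaled by 4
print('pi is approximately', estimate)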
def significant_control(self, num, entity_id, entity_type='individual', **kwargs): entities = {'individual': 'individual', 'corporate': 'corporate-entity', 'legal': 'legal-person', 'statements': 'persons-with-significant-control-statements', 'secure': 'super-secure'} try: entity = entities[entity_type] except KeyError as e: msg = ('Wrong entity_type supplied. Please choose from ' + 'individual, corporate, legal, statements or secure') raise Exception(msg) from e baseuri = ((self._BASE_URI + 'company/{}/persons-with-significant-control/'.format(num)) + '{}/{}'.format(entity, entity_id)) res = self.session.get(baseuri, params=kwargs) self.handle_http_error(res) return res
Get details of a specific entity with significant control.

Args:
    num (str, int): Company number to search on.
    entity_id (str, int): Entity id to request details for
    entity_type (str, int): What type of entity to search for.
        Defaults to 'individual'. Other possible options are
        'corporate' (for corporate entities), 'legal' (for legal
        persons), 'statements' (for a person with significant control
        statement) and 'secure' (for a super secure person).
    kwargs (dict): additional keywords passed into
        requests.session.get *params* keyword.
codesearchnet
def createCategoryFilter(self, positiveExamples): categoryFilter = self._fullClient.createCategoryFilter("CategoryFilter", positiveExamples) return categoryFilter.positions
Creates a filter fingerprint.

Args:
    positiveExamples (list(str)): The list of positive example texts.

Returns:
    list of int: the positions of the filter fingerprint created from the example texts

Raises:
    CorticalioException: if the request was not successful
juraj-google-style
def __init__(self, port_no=None, queue_id=None): super().__init__() self.port_no = port_no self.queue_id = queue_id
Create a QueueStatsRequest with the optional parameters below. Args: port_no (:class:`int`, :class:`~pyof.v0x01.common.phy_port.Port`): All ports if :attr:`.Port.OFPP_ALL`. queue_id (int): All queues if OFPQ_ALL (``0xfffffff``).
juraj-google-style
def get_angle(v1, v2, units='degrees'): d = ((np.dot(v1, v2) / np.linalg.norm(v1)) / np.linalg.norm(v2)) d = min(d, 1) d = max(d, (- 1)) angle = math.acos(d) if (units == 'degrees'): return math.degrees(angle) elif (units == 'radians'): return angle else: raise ValueError('Invalid units {}'.format(units))
Calculates the angle between two vectors.

Args:
    v1: Vector 1
    v2: Vector 2
    units: "degrees" or "radians". Defaults to "degrees".

Returns:
    Angle between them, in degrees or radians depending on `units`.
codesearchnet
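A quick numeric check, assuming numpy vectors; orthogonal unit vectors give 90 degrees and a 45-degree pair gives roughly 0.7854 radians:

import numpy as np

print(get_angle(np.array([1, 0, 0]), np.array([0, 1, 0])))    # 90.0
print(get_angle([1, 0, 0], [1, 1, 0], units='radians'))       # ~0.7854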
def __init__(self, name, context=None): if context is None: context = google.datalab.Context.default() self._context = context self._api = _api.Api(context) self._name_parts = _utils.parse_dataset_name(name, self._api.project_id) self._full_name = '%s.%s' % self._name_parts self._info = None try: self._info = self._get_info() except google.datalab.utils.RequestException: pass
Initializes an instance of a Dataset. Args: name: the name of the dataset, as a string or (project_id, dataset_id) tuple. context: an optional Context object providing project_id and credentials. If a specific project id or credentials are unspecified, the default ones configured at the global level are used. Raises: Exception if the name is invalid.
juraj-google-style
def to_script(self, wf_name='wf'): self._closed() script = [] params = [] returns = [] for name, typ in self.wf_inputs.items(): params.append('{}=\'{}\''.format(name, typ)) returns.append(name) script.append('{} = {}.add_inputs({})'.format( ', '.join(returns), wf_name, ', '.join(params))) returns = [] for name, step in self.wf_steps.items(): pyname = step.python_name returns = ['{}_{}'.format(pyname, o) for o in step['out']] params = ['{}={}'.format(name, python_name(param)) for name, param in step['in'].items()] script.append('{} = {}.{}({})'.format( ', '.join(returns), wf_name, pyname, ', '.join(params))) params = [] for name, details in self.wf_outputs.items(): params.append('{}={}'.format( name, python_name(details['outputSource']))) script.append('{}.add_outputs({})'.format(wf_name, ', '.join(params))) return '\n'.join(script)
Generate and print the scriptcwl script for the current workflow.

Args:
    wf_name (str): string used for the WorkflowGenerator object in the
        generated script (default: ``wf``).
juraj-google-style
def url_to_text(self, url): path, headers = urllib.request.urlretrieve(url) return self.path_to_text(path)
Download a PDF file and convert its contents to a string.

Args:
    url: PDF url.

Returns:
    string.
juraj-google-style
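A hedged usage sketch; PDFToText is a placeholder name for the class that defines url_to_text/path_to_text, and the URL is illustrative only:

reader = PDFToText()                                         # assumed class name
text = reader.url_to_text('https://example.com/sample.pdf')  # illustrative URL
print(text[:200])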
def _validate_compose_list(destination_file, file_list, files_metadata=None, number_of_files=32): common.validate_file_path(destination_file) bucket = destination_file[0:(destination_file.index('/', 1) + 1)] try: if isinstance(file_list, types.StringTypes): raise TypeError list_len = len(file_list) except TypeError: raise TypeError('file_list must be a list') if (list_len > number_of_files): raise ValueError(('Compose attempted to create composite with too many(%i) components; limit is (%i).' % (list_len, number_of_files))) if (list_len <= 0): raise ValueError('Compose operation requires at least one component; 0 provided.') if (files_metadata is None): files_metadata = [] elif (len(files_metadata) > list_len): raise ValueError(('files_metadata contains more entries(%i) than file_list(%i)' % (len(files_metadata), list_len))) list_of_files = [] for (source_file, meta_data) in itertools.izip_longest(file_list, files_metadata): if (not isinstance(source_file, str)): raise TypeError('Each item of file_list must be a string') if source_file.startswith('/'): logging.warn('Detected a "/" at the start of the file, Unless the file name contains a "/" it may cause files to be misread') if source_file.startswith(bucket): logging.warn('Detected bucket name at the start of the file, must not specify the bucket when listing file_names. May cause files to be misread') common.validate_file_path((bucket + source_file)) list_entry = {} if (meta_data is not None): list_entry.update(meta_data) list_entry['Name'] = source_file list_of_files.append(list_entry) return (list_of_files, bucket)
Validates the file_list and merges the file_list, files_metadata.

Args:
    destination_file: Path to the file (ie. /destination_bucket/destination_file).
    file_list: List of files to compose, see compose for details.
    files_metadata: Meta details for each file in the file_list.
    number_of_files: Maximum number of files allowed in the list.

Returns:
    A tuple (list_of_files, bucket):
        list_of_files: Ready to use dict version of the list.
        bucket: bucket name extracted from the file paths.
codesearchnet