code: string (length 20 – 4.93k)
docstring: string (length 33 – 1.27k)
source: string (3 classes)
def ValidateDependencies(rdf_artifact):
    for dependency in GetArtifactDependencies(rdf_artifact):
        try:
            dependency_obj = REGISTRY.GetArtifact(dependency)
        except rdf_artifacts.ArtifactNotRegisteredError as e:
            raise rdf_artifacts.ArtifactDependencyError(
                rdf_artifact, 'missing dependency', cause=e)

        message = dependency_obj.error_message
        if message:
            raise rdf_artifacts.ArtifactDependencyError(
                rdf_artifact, 'dependency error', cause=message)
Validates artifact dependencies. This method checks whether all dependencies of the artifact are present and contain no errors. This method can be called only after all other artifacts have been loaded. Args: rdf_artifact: RDF object artifact. Raises: ArtifactDependencyError: If a dependency is missing or contains errors.
codesearchnet
def plot_val_with_title(self, idxs, y):
    if len(idxs) > 0:
        imgs = np.stack([self.ds[x][0] for x in idxs])
        title_probs = [self.probs[x, y] for x in idxs]
        return plots(self.ds.denorm(imgs), rows=1, titles=title_probs)
    else:
        return False
Displays the images and their probabilities of belonging to a certain class Arguments: idxs (numpy.ndarray): indexes of the image samples from the dataset y (int): the selected class Returns: Plots the images in n rows [rows = n]
juraj-google-style
def _cast_to_frameset(cls, other):
    if isinstance(other, FrameSet):
        return other
    try:
        return FrameSet(other)
    except Exception:
        return NotImplemented
Private method to simplify comparison operations. Args: other (:class:`FrameSet` or set or frozenset or iterable): item to be compared Returns: :class:`FrameSet` Raises: :class:`NotImplemented`: if a comparison is impossible
juraj-google-style
def tracer_diffusion_coefficient(self):
    if self.has_run:
        return self.atoms.sum_dr_squared() / (6.0 * float(self.number_of_atoms) * self.lattice.time)
    else:
        return None
Tracer diffusion coefficient, D*. Args: None Returns: (Float): The tracer diffusion coefficient, D*.
juraj-google-style
def import_family(self, rfa_file):
    self._add_entry(templates.IMPORT_FAMILY.format(family_file=rfa_file))
Append a import family entry to the journal. This instructs Revit to import a family into the opened model. Args: rfa_file (str): full path of the family file
codesearchnet
def from_sub_models_config(cls, text_encoder_config: PretrainedConfig, audio_encoder_config: PretrainedConfig, decoder_config: MusicgenDecoderConfig, **kwargs):
    return cls(
        text_encoder=text_encoder_config.to_dict(),
        audio_encoder=audio_encoder_config.to_dict(),
        decoder=decoder_config.to_dict(),
        **kwargs,
    )
Instantiate a [`MusicgenConfig`] (or a derived class) from text encoder, audio encoder and decoder configurations. Returns: [`MusicgenConfig`]: An instance of a configuration object
github-repos
def find_link(self, target_node):
    try:
        return next(l for l in self.link_list if l.target == target_node)
    except StopIteration:
        return None
Find the link that points to ``target_node`` if it exists. If no link in ``self`` points to ``target_node``, return None Args: target_node (Node): The node to look for in ``self.link_list`` Returns: Link: An existing link pointing to ``target_node`` if found None: If no such link exists Example: >>> node_1 = Node('One') >>> node_2 = Node('Two') >>> node_1.add_link(node_2, 1) >>> link_1 = node_1.link_list[0] >>> found_link = node_1.find_link(node_2) >>> found_link == link_1 True
codesearchnet
def valid_vlan_id(vlan_id, extended=True):
    minimum_vlan_id = 1
    maximum_vlan_id = 4095
    if extended:
        maximum_vlan_id = 8191
    return minimum_vlan_id <= int(vlan_id) <= maximum_vlan_id
Validates a VLAN ID. Args: vlan_id (integer): VLAN ID to validate. If passed as ``str``, it will be cast to ``int``. extended (bool): If the VLAN ID range should be considered extended for Virtual Fabrics. Returns: bool: ``True`` if it is a valid VLAN ID. ``False`` if not. Raises: None Examples: >>> import pynos.utilities >>> vlan = '565' >>> pynos.utilities.valid_vlan_id(vlan) True >>> extended = False >>> vlan = '6789' >>> pynos.utilities.valid_vlan_id(vlan, extended=extended) False >>> pynos.utilities.valid_vlan_id(vlan) True
codesearchnet
def memory_write32(self, addr, data, zone=None):
    return self.memory_write(addr, data, zone, 32)
Writes words to memory of a target system. Args: self (JLink): the ``JLink`` instance addr (int): start address to write to data (list): list of words to write zone (str): optional memory zone to access Returns: Number of words written to target. Raises: JLinkException: on memory access error.
juraj-google-style
def get_raw_data(self, url, *args, **kwargs):
    res = self._conn.get(url, headers=self._prepare_headers(**kwargs))
    if res.status_code == 200:
        return res.content
    else:
        return None
Gets data from url as bytes Returns content under the provided url as bytes, i.e. for binary data Args: **url**: address of the wanted data .. versionadded:: 0.3.2 **additional_headers**: (optional) Additional headers to be used with request Returns: bytes
juraj-google-style
def __new__(cls: Type[_T], *args: PathLike) -> _T:
    if cls == Path:
        if not args:
            return register.make_path('.')
        root, *parts = args
        return register.make_path(root).joinpath(*parts)
    else:
        return super().__new__(cls, *args)
Create a new path. ```python path = abcpath.Path() ``` We use __new__ instead of __init__ to allow subclassing, even though the usage of __init__ is possible from python>=3.12. Args: *args: Paths to create Returns: path: The registered path
github-repos
def count_tornadoes(input_data):
    return (
        input_data
        | 'months with tornadoes' >> beam.FlatMap(
            lambda row: [(int(row['month']), 1)] if row['tornado'] else [])
        | 'monthly count' >> beam.CombinePerKey(sum)
        | 'format' >> beam.Map(
            lambda k_v: {'month': k_v[0], 'tornado_count': k_v[1]}))
Workflow computing the number of tornadoes for each month that had one. Args: input_data: a PCollection of dictionaries representing table rows. Each dictionary will have a 'month' and a 'tornado' key as described in the module comment. Returns: A PCollection of dictionaries containing 'month' and 'tornado_count' keys. Months without tornadoes are skipped.
github-repos
def _get_new_group_key(self, devices):
    new_key = self._group_key
    self._group_key += 1
    self._instance_key_table[new_key] = {}
    for device in devices:
        self._instance_key_table[new_key][device] = INSTANCE_KEY_START_NUMBER
    return new_key
Returns a new group key. The caller should store and reuse the same group key for the same set of devices. Calling this method always returns a new group key. This method is not thread-safe. Args: devices: a list of canonical device strings in a collective group. Returns: a new group key.
github-repos
def reset_internal_states(self, record=None):
    self._record = None
    self._count = 0
    self._record = record
Resets the internal state of the recorder. Args: record: records.TestResultRecord, the test record for a test.
codesearchnet
def error(msg):
    return debugger_cli_common.rich_text_lines_from_rich_line_list(
        [RL('ERROR: ' + msg, COLOR_RED)])
Generate a RichTextLines output for error. Args: msg: (str) The error message. Returns: (debugger_cli_common.RichTextLines) A representation of the error message for screen output.
github-repos
def start_time_distance(item_a, item_b, max_value):
    start_time_diff = np.abs(item_a.times[0] - item_b.times[0])
    return np.minimum(start_time_diff, max_value) / float(max_value)
Absolute difference between the starting times of each item. Args: item_a: STObject from the first set in TrackMatcher item_b: STObject from the second set in TrackMatcher max_value: Maximum distance value used as scaling value and upper constraint. Returns: Distance value between 0 and 1.
juraj-google-style
def generate_tests(self, test_logic, name_func, arg_sets, uid_func=None):
    self._assert_function_names_in_stack([STAGE_NAME_PRE_RUN])
    root_msg = 'During test generation of "%s":' % test_logic.__name__
    for args in arg_sets:
        test_name = name_func(*args)
        if test_name in self.get_existing_test_names():
            raise Error('%s Test name "%s" already exists, cannot be duplicated!' % (root_msg, test_name))
        test_func = functools.partial(test_logic, *args)
        for attr_name in (ATTR_MAX_RETRY_CNT, ATTR_MAX_CONSEC_ERROR, ATTR_REPEAT_CNT):
            attr = getattr(test_logic, attr_name, None)
            if attr is not None:
                setattr(test_func, attr_name, attr)
        if uid_func is not None:
            uid = uid_func(*args)
            if uid is None:
                logging.warning('%s UID for arg set %s is None.', root_msg, args)
            else:
                setattr(test_func, 'uid', uid)
        self._generated_test_table[test_name] = test_func
Generates tests in the test class. This function has to be called inside a test class's `self.pre_run`. Generated tests are not written down as methods, but as a list of parameter sets. This way we reduce code repetition and improve test scalability. Users can provide an optional function to specify the UID of each test. Not all generated tests are required to have UID. Args: test_logic: function, the common logic shared by all the generated tests. name_func: function, generate a test name according to a set of test arguments. This function should take the same arguments as the test logic function. arg_sets: a list of tuples, each tuple is a set of arguments to be passed to the test logic function and name function. uid_func: function, an optional function that takes the same arguments as the test logic function and returns a string that is the corresponding UID.
github-repos
def logsumexp(x, axis=None, keepdims=False):
    if any_symbolic_tensors((x,)):
        return Logsumexp(axis, keepdims).symbolic_call(x)
    return backend.math.logsumexp(x, axis=axis, keepdims=keepdims)
Computes the logarithm of sum of exponentials of elements in a tensor. Args: x: Input tensor. axis: An integer or a tuple of integers specifying the axis/axes along which to compute the sum. If `None`, the sum is computed over all elements. Defaults to `None`. keepdims: A boolean indicating whether to keep the dimensions of the input tensor when computing the sum. Defaults to `False`. Returns: A tensor containing the logarithm of the sum of exponentials of elements in `x`. Example: >>> x = keras.ops.convert_to_tensor([1., 2., 3.]) >>> logsumexp(x) 3.407606
github-repos
def add_query(self, query, join_with=AND):
    if not isinstance(query, DomainCondition):
        query = DomainCondition.from_tuple(query)
    if len(self.query):
        self.query.append(join_with)
    self.query.append(query)
Join a new query to existing queries on the stack. Args: query (tuple or list or DomainCondition): The condition for the query. If a ``DomainCondition`` object is not provided, the input should conform to the interface defined in :func:`~.domain.DomainCondition.from_tuple`. join_with (str): The join string to apply, if other queries are already on the stack.
juraj-google-style
def get_inventory(self, keys=None):
    inventory = defaultdict(list)
    keys = keys or ['vm-type', 'groups', 'vm-provider']
    vms = self.prefix.get_vms().values()
    for vm in vms:
        entry = self._generate_entry(vm)
        vm_spec = vm.spec
        for key in keys:
            value = self.get_key(key, vm_spec)
            if value is None:
                continue
            if isinstance(value, list):
                for sub_value in value:
                    inventory['{}={}'.format(key, sub_value)].append(entry)
            else:
                inventory['{}={}'.format(key, value)].append(entry)
        for group in vm_spec.get('groups', []):
            inventory[group].append(entry)
    return inventory
Create an Ansible inventory based on python dicts and lists. The returned value is a dict in which every key represents a group and every value is a list of entries for that group. Args: keys (list of str): Path to the keys that will be used to create groups. Returns: dict: dict based Ansible inventory
codesearchnet
def download_url(url, root, filename=None, md5=None):
    from six.moves import urllib
    root = os.path.expanduser(root)
    if not filename:
        filename = os.path.basename(url)
    fpath = os.path.join(root, filename)

    makedir_exist_ok(root)

    if os.path.isfile(fpath) and check_integrity(fpath, md5):
        print('Using downloaded and verified file: ' + fpath)
    else:
        try:
            print('Downloading ' + url + ' to ' + fpath)
            urllib.request.urlretrieve(
                url, fpath,
                reporthook=gen_bar_updater()
            )
        except OSError:
            if url[:5] == 'https':
                url = url.replace('https:', 'http:')
                print('Failed download. Trying https -> http instead.'
                      ' Downloading ' + url + ' to ' + fpath)
                urllib.request.urlretrieve(
                    url, fpath,
                    reporthook=gen_bar_updater()
                )
Download a file from a url and place it in root. Args: url (str): URL to download file from root (str): Directory to place downloaded file in filename (str, optional): Name to save the file under. If None, use the basename of the URL md5 (str, optional): MD5 checksum of the download. If None, do not check
juraj-google-style
def ParseFileObject(self, parser_mediator, file_object):
    if not self._encoding:
        self._encoding = parser_mediator.codepage

    try:
        if not self._HasExpectedLineLength(file_object):
            display_name = parser_mediator.GetDisplayName()
            raise errors.UnableToParseFile(
                '[{0:s}] Unable to parse DSV file: {1:s} with error: unexpected line length.'.format(
                    self.NAME, display_name))
    except UnicodeDecodeError as exception:
        display_name = parser_mediator.GetDisplayName()
        raise errors.UnableToParseFile(
            '[{0:s}] Unable to parse DSV file: {1:s} with error: {2!s}.'.format(
                self.NAME, display_name, exception))

    try:
        line_reader = self._CreateLineReader(file_object)
        reader = self._CreateDictReader(line_reader)
        row_offset = line_reader.tell()
        row = next(reader)
    except (StopIteration, csv.Error, UnicodeDecodeError) as exception:
        display_name = parser_mediator.GetDisplayName()
        raise errors.UnableToParseFile(
            '[{0:s}] Unable to parse DSV file: {1:s} with error: {2!s}.'.format(
                self.NAME, display_name, exception))

    number_of_columns = len(self.COLUMNS)
    number_of_records = len(row)

    if number_of_records != number_of_columns:
        display_name = parser_mediator.GetDisplayName()
        raise errors.UnableToParseFile(
            '[{0:s}] Unable to parse DSV file: {1:s}. Wrong number of records (expected: {2:d}, got: {3:d})'.format(
                self.NAME, display_name, number_of_columns, number_of_records))

    for key, value in row.items():
        if self._MAGIC_TEST_STRING in (key, value):
            display_name = parser_mediator.GetDisplayName()
            raise errors.UnableToParseFile(
                '[{0:s}] Unable to parse DSV file: {1:s}. Signature mismatch.'.format(
                    self.NAME, display_name))

    row = self._ConvertRowToUnicode(parser_mediator, row)

    if not self.VerifyRow(parser_mediator, row):
        display_name = parser_mediator.GetDisplayName()
        raise errors.UnableToParseFile(
            '[{0:s}] Unable to parse DSV file: {1:s}. Verification failed.'.format(
                self.NAME, display_name))

    self.ParseRow(parser_mediator, row_offset, row)
    row_offset = line_reader.tell()

    for row in reader:
        if parser_mediator.abort:
            break
        row = self._ConvertRowToUnicode(parser_mediator, row)
        self.ParseRow(parser_mediator, row_offset, row)
        row_offset = line_reader.tell()
Parses a DSV text file-like object. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. file_object (dfvfs.FileIO): file-like object. Raises: UnableToParseFile: when the file cannot be parsed.
codesearchnet
def test_pass(self, e=None):
    self._test_end(TestResultEnums.TEST_RESULT_PASS, e)
To mark the test as passed in this record. Args: e: An instance of mobly.signals.TestPass.
github-repos
def angle(self, deg=False):
    if self.dtype.str[1] != 'c':
        warnings.warn('angle() is intended for complex-valued timeseries',
                      RuntimeWarning, 1)
    return Timeseries(np.angle(self, deg=deg), self.tspan, self.labels)
Return the angle of the complex argument. Args: deg (bool, optional): Return angle in degrees if True, radians if False (default). Returns: angle (Timeseries): The counterclockwise angle from the positive real axis on the complex plane, with dtype as numpy.float64.
juraj-google-style
def is_valid(self, value):
    try:
        if validation_on():
            self.validate(value, False)
    except ValueError:
        return False
    else:
        return True
Whether the value passes validation Args: value (obj) : the value to validate against this property type Returns: True if valid, False otherwise
codesearchnet
def _html_checker(job_var, interval, status, header, _interval_set=False):
    job_status = job_var.status()
    job_status_name = job_status.name
    job_status_msg = job_status.value
    status.value = header % job_status_msg
    while job_status_name not in ['DONE', 'CANCELLED']:
        time.sleep(interval)
        job_status = job_var.status()
        job_status_name = job_status.name
        job_status_msg = job_status.value
        if job_status_name == 'ERROR':
            break
        else:
            if job_status_name == 'QUEUED':
                job_status_msg += ' (%s)' % job_var.queue_position()
                if not _interval_set:
                    interval = max(job_var.queue_position(), 2)
            elif not _interval_set:
                interval = 2
            status.value = header % job_status_msg
    status.value = header % job_status_msg
Internal function that updates the status of an HTML job monitor. Args: job_var (BaseJob): The job to keep track of. interval (int): The status check interval status (widget): HTML ipywidget for output to screen header (str): String representing HTML code for status. _interval_set (bool): Was interval set by user?
codesearchnet
def __init__(self, property_type=TableFeaturePropType.OFPTFPT_NEXT_TABLES,
             next_table_ids=None):
    super().__init__(property_type)
    self.next_table_ids = (ListOfInstruction()
                           if next_table_ids is None else next_table_ids)
    self.update_length()
Create a NextTablesProperty with the optional parameters below. Args: type(|TableFeaturePropType_v0x04|): Property Type value of this instance. next_table_ids (|ListOfInstruction_v0x04|): List of InstructionGotoTable instances.
juraj-google-style
def lyap_e_len(**kwargs):
    m = (kwargs['emb_dim'] - 1) // (kwargs['matrix_dim'] - 1)
    min_len = kwargs['emb_dim']
    min_len += m
    min_len += kwargs['min_tsep'] * 2
    min_len += kwargs['min_nb']
    return min_len
Helper function that calculates the minimum number of data points required to use lyap_e. Note that none of the required parameters may be set to None. Kwargs: kwargs(dict): arguments used for lyap_e (required: emb_dim, matrix_dim, min_nb and min_tsep) Returns: minimum number of data points required to call lyap_e with the given parameters
codesearchnet
def _subtoken_to_tokens(self, subtokens):
    concatenated = ''.join(subtokens)
    split = concatenated.split('_')
    return [_unescape_token(t + '_') for t in split if t]
Converts a list of subtoken to a list of tokens. Args: subtokens: a list of integers in the range [0, vocab_size) Returns: a list of strings.
codesearchnet
def get_heroku_connect_models():
    from django.apps import apps
    apps.check_models_ready()

    from heroku_connect.db.models import HerokuConnectModel

    return (
        model
        for models in apps.all_models.values()
        for model in models.values()
        if issubclass(model, HerokuConnectModel) and not model._meta.managed
    )
Return all registered Heroku Connect Models. Returns: (Iterator): All registered models that are subclasses of `.HerokuConnectModel`. Abstract models are excluded, since they are not registered.
codesearchnet
def hamming_distance(str1, str2):
    if len(str1) != len(str2):
        raise VisualizationError('Strings not same length.')
    return sum(s1 != s2 for s1, s2 in zip(str1, str2))
Calculate the Hamming distance between two bit strings Args: str1 (str): First string. str2 (str): Second string. Returns: int: Distance between strings. Raises: VisualizationError: Strings not same length
juraj-google-style
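For illustration, a minimal usage sketch of hamming_distance above; the bit strings are arbitrary examples:

hamming_distance('1010', '0011')  # -> 2, since positions 0 and 3 differ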
def __init__(self, num_classes: int, matcher: OneFormerHungarianMatcher, weight_dict: Dict[str, float], eos_coef: float, num_points: int, oversample_ratio: float, importance_sample_ratio: float, contrastive_temperature: Optional[float]=None):
    requires_backends(self, ['scipy'])
    super().__init__()
    self.num_classes = num_classes
    self.matcher = matcher
    self.weight_dict = weight_dict
    self.eos_coef = eos_coef
    empty_weight = torch.ones(self.num_classes + 1)
    empty_weight[-1] = self.eos_coef
    self.register_buffer('empty_weight', empty_weight)
    self.num_points = num_points
    self.oversample_ratio = oversample_ratio
    self.importance_sample_ratio = importance_sample_ratio
    self.contrastive_temperature = contrastive_temperature
    if self.contrastive_temperature is not None:
        self.logit_scale = nn.Parameter(torch.tensor(np.log(1 / contrastive_temperature)))
This class computes the losses using the class predictions, mask predictions and the contrastive queries. Oneformer calculates the classification CE loss on the class predictions. Mask predictions are used for calculating the binary CE loss and dice loss. The contrastive queries are used for calculating the contrastive loss. Args: num_labels (`int`): The number of classes. matcher (`OneFormerHungarianMatcher`): A torch module that computes the assignments between the predictions and labels. weight_dict (`Dict[str, float]`): A dictionary of weights to be applied to the different losses. eos_coef (`float`): Weight to apply to the null class. num_points (`int`): Number of points to be sampled for dice and mask loss calculations. oversample_ratio (`float`): Required for pointwise loss calculation. importance_sample_ratio (`float`): Required for pointwise loss calculation. contrastive_temperature (`float`): Temperature for scaling the contrastive logits.
github-repos
def list_and_add(a, b):
    if not isinstance(b, list):
        b = [b]
    if not isinstance(a, list):
        a = [a]
    return a + b
Concatenate anything into a list. Args: a: the first thing b: the second thing Returns: list. All the things in a list.
juraj-google-style
def ParseNumericOption(self, options, name, base=10, default_value=None):
    numeric_value = getattr(options, name, None)
    if not numeric_value:
        return default_value

    try:
        return int(numeric_value, base)
    except (TypeError, ValueError):
        name = name.replace('_', ' ')
        raise errors.BadConfigOption(
            'Unsupported numeric value {0:s}: {1!s}.'.format(
                name, numeric_value))
Parses a numeric option. If the option is not set the default value is returned. Args: options (argparse.Namespace): command line arguments. name (str): name of the numeric option. base (Optional[int]): base of the numeric value. default_value (Optional[object]): default value. Returns: int: numeric value. Raises: BadConfigOption: if the options are invalid.
juraj-google-style
def swap(self, old_chunks, new_chunk):
    indexes = [self.index(chunk) for chunk in old_chunks]
    del self[indexes[0]:indexes[-1] + 1]
    self.insert(indexes[0], new_chunk)
Swaps old consecutive chunks with new chunk. Args: old_chunks (:obj:`budou.chunk.ChunkList`): List of consecutive Chunks to be removed. new_chunk (:obj:`budou.chunk.Chunk`): A Chunk to be inserted.
juraj-google-style
def __init__(self, model_name: str, *, max_seq_length: Optional[int]=None, **kwargs):
    if not SentenceTransformer:
        raise ImportError('sentence-transformers is required to use HuggingfaceTextEmbeddings.Please install it with using `pip install sentence-transformers`.')
    super().__init__(type_adapter=create_rag_adapter(), **kwargs)
    self.model_name = model_name
    self.max_seq_length = max_seq_length
    self.model_class = SentenceTransformer
Utilizes huggingface SentenceTransformer embeddings for RAG pipeline. Args: model_name: Name of the sentence-transformers model to use max_seq_length: Maximum sequence length for the model **kwargs: Additional arguments passed to :class:`~apache_beam.ml.transforms.base.EmbeddingsManager` constructor including ModelHandler arguments
github-repos
def days_since_last_snowfall(self, value=99):
    if value is not None:
        try:
            value = int(value)
        except ValueError:
            raise ValueError('value {} need to be of type int for field `days_since_last_snowfall`'.format(value))
    self._days_since_last_snowfall = value
Corresponds to IDD Field `days_since_last_snowfall` Args: value (int): value for IDD Field `days_since_last_snowfall` Missing value: 99 if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
codesearchnet
def retry(func):
    def retried_func(*args, **kwargs):
        max_tries = 3
        tries = 0
        while True:
            try:
                resp = func(*args, **kwargs)
            except requests.exceptions.ConnectionError as exc:
                exc.msg = "Connection error for session; exiting"
                raise exc
            except requests.exceptions.HTTPError as exc:
                exc.msg = "HTTP error for session; exiting"
                raise exc
            if resp.status_code != 200 and tries < max_tries:
                logger.warning("retrying request; current status code: {}"
                               .format(resp.status_code))
                tries += 1
                time.sleep(tries ** 2)
                continue
            break
        if resp.status_code != 200:
            error_message = resp.json()["error"]["message"]
            logger.error("HTTP Error code: {}: {}".format(resp.status_code,
                                                          error_message))
            logger.error("Rule payload: {}".format(kwargs["rule_payload"]))
            raise requests.exceptions.HTTPError
        return resp

    return retried_func
Decorator to handle API retries and exceptions. Defaults to three retries. Args: func (function): function for decoration Returns: decorated function
juraj-google-style
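A minimal usage sketch for the retry decorator above; make_request, the endpoint URL, and the rule_payload value are hypothetical stand-ins, not part of the original library:

import requests

@retry
def make_request(url, rule_payload=None):
    # The wrapper retries non-200 responses up to three times with quadratic
    # back-off and re-raises connection/HTTP errors with a short message.
    return requests.get(url, params=rule_payload)

resp = make_request('https://api.example.com/search', rule_payload={'query': 'test'})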
def read_log(self, logfile):
    logfile.seek(0)
    field_names, _ = self._parse_bro_header(logfile)
    while 1:
        _line = next(logfile).strip()
        if not _line.startswith('#close'):
            yield self._cast_dict(dict(zip(field_names, _line.split(self.delimiter))))
        else:
            time.sleep(0.1)
            break
The read_log method returns a memory efficient generator for rows in a Bro log. Usage: rows = my_bro_reader.read_log(logfile) for row in rows: do something with row Args: logfile: The Bro Log file.
codesearchnet
def tf_baseline_loss(self, states, internals, reward, update, reference=None):
    if self.baseline_mode == 'states':
        loss = self.baseline.loss(
            states=states,
            internals=internals,
            reward=reward,
            update=update,
            reference=reference
        )
    elif self.baseline_mode == 'network':
        loss = self.baseline.loss(
            states=self.network.apply(x=states, internals=internals, update=update),
            internals=internals,
            reward=reward,
            update=update,
            reference=reference
        )
    regularization_loss = self.baseline.regularization_loss()
    if regularization_loss is not None:
        loss += regularization_loss
    return loss
Creates the TensorFlow operations for calculating the baseline loss of a batch. Args: states: Dict of state tensors. internals: List of prior internal state tensors. reward: Reward tensor. update: Boolean tensor indicating whether this call happens during an update. reference: Optional reference tensor(s), in case of a comparative loss. Returns: Loss tensor.
codesearchnet
def get_nic(access_token, subscription_id, resource_group, nic_name):
    endpoint = ''.join([get_rm_endpoint(),
                        '/subscriptions/', subscription_id,
                        '/resourceGroups/', resource_group,
                        '/providers/Microsoft.Network/networkInterfaces/', nic_name,
                        '?api-version=', NETWORK_API])
    return do_get(endpoint, access_token)
Get details about a network interface. Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. resource_group (str): Azure resource group name. nic_name (str): Name of the NIC. Returns: HTTP response. NIC JSON body.
juraj-google-style
def ListChildren(self, urn, limit=None, age=NEWEST_TIME):
    _, children_urns = list(
        self.MultiListChildren([urn], limit=limit, age=age))[0]
    return children_urns
Lists bunch of directories efficiently. Args: urn: Urn to list children. limit: Max number of children to list. age: The age of the items to retrieve. Should be one of ALL_TIMES, NEWEST_TIME or a range. Returns: RDFURNs instances of each child.
juraj-google-style
def kms_key_arn(kms_client, alias):
    try:
        response = kms_client.describe_key(KeyId=alias)
        key_arn = response["KeyMetadata"]["Arn"]
    except ClientError as error:
        raise RuntimeError("Failed to obtain key arn for alias {}, error: {}".format(
            alias, error.response["Error"]["Message"]))
    return key_arn
Obtain the full key arn based on the key alias provided Args: kms_client (boto3 kms client object): Instantiated kms client object. Usually created through create_aws_clients. alias (string): alias of key, example alias/proto0-evs-drm. Returns: string of the full key arn
juraj-google-style
def find_duplicates_in_array(array):
    duplicates = []
    non_duplicates = []
    if len(array) != len(set(array)):
        for item in array:
            if item not in non_duplicates:
                non_duplicates.append(item)
            elif item in non_duplicates and item not in duplicates:
                duplicates.append(item)
    return duplicates
Runs through the array and returns the elements that appear more than once Args: array: The array to check for duplicates. Returns: Array of the elements that are duplicates. Returns empty list if there are no duplicates.
juraj-google-style
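A short usage sketch for find_duplicates_in_array above, using toy input lists:

find_duplicates_in_array([1, 2, 2, 3, 3, 3])  # -> [2, 3]
find_duplicates_in_array([1, 2, 3])           # -> []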
def _parse_meta_info(self, line):
    if self.mslevel:
        self.meta_info['ms_level'] = self.mslevel
    if self.polarity:
        self.meta_info['polarity'] = self.polarity
    for k, regexes in six.iteritems(self.meta_regex):
        for reg in regexes:
            m = re.search(reg, line, re.IGNORECASE)
            if m:
                self.meta_info[k] = m.group(1).strip()
Parse and extract all meta data by looping through the dictionary of meta_info regexs updates self.meta_info Args: line (str): line of the msp file
codesearchnet
def add_values_to_bundle_safe(connection, bundle, values):
    for value in values:
        try:
            connection.addValueToBundle(bundle, value)
        except YouTrackException as e:
            if e.response.status == 409:
                print('Value with name [ %s ] already exists in bundle [ %s ]' %
                      (utf8encode(value.name), utf8encode(bundle.name)))
            else:
                raise e
Adds values to specified bundle. Checks, whether each value already contains in bundle. If yes, it is not added. Args: connection: An opened Connection instance. bundle: Bundle instance to add values in. values: Values, that should be added in bundle. Raises: YouTrackException: if something is wrong with queries.
codesearchnet
def check(self, dsm, independence_factor=5, **kwargs):
    least_common_mechanism = False
    message = ''
    data = dsm.data
    categories = dsm.categories
    dsm_size = dsm.size[0]

    if not categories:
        categories = ['appmodule'] * dsm_size

    dependent_module_number = []
    for j in range(0, dsm_size):
        dependent_module_number.append(0)
        for i in range(0, dsm_size):
            if (categories[i] != 'framework' and
                    categories[j] != 'framework' and
                    data[i][j] > 0):
                dependent_module_number[j] += 1
    for index, item in enumerate(dsm.categories):
        if item == 'broker' or item == 'applib':
            dependent_module_number[index] = 0
    if max(dependent_module_number) <= dsm_size / independence_factor:
        least_common_mechanism = True
    else:
        maximum = max(dependent_module_number)
        message = (
            'Dependencies to %s (%s) > matrix size (%s) / '
            'independence factor (%s) = %s' % (
                dsm.entities[dependent_module_number.index(maximum)],
                maximum, dsm_size, independence_factor,
                dsm_size / independence_factor))
    return least_common_mechanism, message
Check least common mechanism. Args: dsm (:class:`DesignStructureMatrix`): the DSM to check. independence_factor (int): if the maximum dependencies for one module is inferior or equal to the DSM size divided by the independence factor, then this criterion is verified. Returns: bool: True if least common mechanism, else False
juraj-google-style
def __init__(self, channel):
    self.CreateCompany = channel.unary_unary(
        "/google.cloud.talent.v4beta1.CompanyService/CreateCompany",
        request_serializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_company__service__pb2.CreateCompanyRequest.SerializeToString,
        response_deserializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_company__pb2.Company.FromString,
    )
    self.GetCompany = channel.unary_unary(
        "/google.cloud.talent.v4beta1.CompanyService/GetCompany",
        request_serializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_company__service__pb2.GetCompanyRequest.SerializeToString,
        response_deserializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_company__pb2.Company.FromString,
    )
    self.UpdateCompany = channel.unary_unary(
        "/google.cloud.talent.v4beta1.CompanyService/UpdateCompany",
        request_serializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_company__service__pb2.UpdateCompanyRequest.SerializeToString,
        response_deserializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_company__pb2.Company.FromString,
    )
    self.DeleteCompany = channel.unary_unary(
        "/google.cloud.talent.v4beta1.CompanyService/DeleteCompany",
        request_serializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_company__service__pb2.DeleteCompanyRequest.SerializeToString,
        response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString,
    )
    self.ListCompanies = channel.unary_unary(
        "/google.cloud.talent.v4beta1.CompanyService/ListCompanies",
        request_serializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_company__service__pb2.ListCompaniesRequest.SerializeToString,
        response_deserializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_company__service__pb2.ListCompaniesResponse.FromString,
    )
Constructor. Args: channel: A grpc.Channel.
juraj-google-style
def set_configuration_from_input_tensors(self, input_tensors):
    if len(input_tensors) != self.number_of_tuple_elements:
        raise ValueError(f'input_tensors is {str(input_tensors)}, but should be a list of {self.number_of_tuple_elements} Tensors')
    self.set_tuple_shapes([t.shape for t in input_tensors])
    self.set_tuple_types([t.dtype for t in input_tensors])
Sets the shapes and types of the queue tuple elements. input_tensors is a list of Tensors whose types and shapes are used to set the queue configuration. Args: input_tensors: list of Tensors of the same types and shapes as the desired queue Tuple. Raises: ValueError: if input_tensors is not a list of length self.number_of_tuple_elements
github-repos
def _get_create_query(partition, tablename, include=None):
    TYPE_MAP = {
        'int': 'INTEGER',
        'float': 'REAL',
        six.binary_type.__name__: 'TEXT',
        six.text_type.__name__: 'TEXT',
        'date': 'DATE',
        'datetime': 'TIMESTAMP WITHOUT TIME ZONE'
    }
    columns_types = []
    if not include:
        include = []

    for column in sorted(partition.datafile.reader.columns, key=lambda x: x['pos']):
        if include and column['name'] not in include:
            continue
        sqlite_type = TYPE_MAP.get(column['type'])
        if not sqlite_type:
            raise Exception('Do not know how to convert {} to sql column.'.format(column['type']))
        columns_types.append(' "{}" {}'.format(column['name'], sqlite_type))

    columns_types_str = ',\n'.join(columns_types)
    query = 'CREATE TABLE IF NOT EXISTS {}(\n{})'.format(tablename, columns_types_str)
    return query
Creates and returns `CREATE TABLE ...` sql statement for given mprows. Args: partition (orm.Partition): tablename (str): name of the table in the return create query. include (list of str, optional): list of columns to include to query. Returns: str: create table query.
juraj-google-style
def site_coordination_numbers(self):
    coordination_numbers = {}
    for l in self.site_labels:
        coordination_numbers[l] = set(
            [len(site.neighbours) for site in self.sites if site.label is l])
    return coordination_numbers
Returns a dictionary of the coordination numbers for each site label. e.g.:: { 'A' : { 4 }, 'B' : { 2, 4 } } Args: none Returns: coordination_numbers (Dict(Str:Set(Int))): dictionary of coordination numbers for each site label.
juraj-google-style
def make_subdivision_matrices(degree):
    left = np.zeros((degree + 1, degree + 1), order='F')
    right = np.zeros((degree + 1, degree + 1), order='F')
    left[0, 0] = 1.0
    right[-1, -1] = 1.0
    for col in six.moves.xrange(1, degree + 1):
        half_prev = 0.5 * left[:col, col - 1]
        left[:col, col] = half_prev
        left[1:col + 1, col] += half_prev
        complement = degree - col
        right[-(col + 1):, complement] = left[:col + 1, col]
    return left, right
Make the matrix used to subdivide a curve. .. note:: This is a helper for :func:`_subdivide_nodes`. It does not have a Fortran speedup because it is **only** used by a function which has a Fortran speedup. Args: degree (int): The degree of the curve. Returns: Tuple[numpy.ndarray, numpy.ndarray]: The matrices used to convert the nodes into left and right nodes, respectively.
codesearchnet
def register_rml_def(self, location_type, location, filename=None, **kwargs):
    if location_type == 'directory':
        self.register_directory(location, **kwargs)
    elif location_type == 'filepath':
        if not os.path.exists(location):
            raise OSError('File not found', location)
        if os.path.isfile(location):
            self.register_rml(location)
        elif filename:
            new_loc = os.path.join(location, filename)
            if not os.path.exists(new_loc):
                raise OSError('File not found', new_loc)
            elif os.path.isfile(new_loc):
                self.register_rml(new_loc)
        else:
            raise OSError('File not found', location)
    elif location_type.startswith('package'):
        pkg_path = importlib.util.find_spec(location).submodule_search_locations[0]
        if location_type.endswith('_all'):
            self.register_directory(pkg_path, **kwargs)
        elif location_type.endswith('_file'):
            filepath = os.path.join(pkg_path, filename)
            self.register_rml(filepath, **kwargs)
        else:
            raise NotImplementedError
Registers the rml file locations for easy access Args: ----- location_type: ['package_all', 'package_file', 'directory', 'filepath'] location: The correlated location string based on the location_type filename: Optional, associated with 'package_file' location_type kwargs: ------- include_subfolders: Boolean
codesearchnet
def BuildTypeDescriptor(self, value_cls):
    result = ApiRDFValueDescriptor(
        name=value_cls.__name__,
        parents=[klass.__name__ for klass in value_cls.__mro__],
        doc=value_cls.__doc__ or '',
        kind='PRIMITIVE')
    result.default = self.BuildDefaultValue(value_cls)
    return result
Renders metadata of a given value class. Args: value_cls: Metadata of this class will be rendered. This class has to be (or to be a subclass of) a self.value_class (i.e. a class that this renderer is capable of rendering). Returns: Dictionary with class metadata.
codesearchnet
def get_container_service(access_token, subscription_id, resource_group, service_name):
    endpoint = ''.join([get_rm_endpoint(),
                        '/subscriptions/', subscription_id,
                        '/resourcegroups/', resource_group,
                        '/providers/Microsoft.ContainerService/ContainerServices/', service_name,
                        '?api-version=', ACS_API])
    return do_get(endpoint, access_token)
Get details about an Azure Container Server Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. resource_group (str): Azure resource group name. service_name (str): Name of container service. Returns: HTTP response. JSON model.
codesearchnet
def _resource_capture_helper(self, tensor):
    assert tensor.dtype == dtypes.resource

    forward_graph_input_names = [t.name for t in self._forward_graph.inputs]
    forward_graph_name_to_opdef = {
        op.name: op.node_def for op in self._forward_graph.get_operations()}
    index = util.resource_input_index(
        tensor.name, forward_graph_input_names,
        forward_graph_name_to_opdef, self._forward_graph._functions)

    input_placeholder = self._forward_graph.inputs[index]
    tensor_in_outer_graph = self._forward_graph._while.inputs[index]

    assert input_placeholder.dtype == dtypes.resource
    assert tensor_in_outer_graph.dtype == dtypes.resource

    if index != util.resource_input_index(
            self._forward_graph.outputs[index].name, forward_graph_input_names,
            forward_graph_name_to_opdef, self._forward_graph._functions):
        raise AssertionError(
            f'Resource tensors must be loop invariants {tensor_in_outer_graph}')

    self._indirect_captures[ops.tensor_id(tensor)] = self.capture(tensor_in_outer_graph)
    return self._indirect_captures[ops.tensor_id(tensor)]
Returns the captured resource tensor. Resource-type tensors are not accumulated. If a resource tensor exists in the loop body it must either be a loop input or an output of a nested While op inside the loop body which had captured the external resource. Args: tensor: the external resource Tensor to be captured. Returns: Tensor in this graph.
github-repos
def cancelOrder(self, order: Order) -> Trade:
    self.client.cancelOrder(order.orderId)
    now = datetime.datetime.now(datetime.timezone.utc)
    key = self.wrapper.orderKey(
        order.clientId, order.orderId, order.permId)
    trade = self.wrapper.trades.get(key)
    if trade:
        if not trade.isDone():
            status = trade.orderStatus.status
            if (status == OrderStatus.PendingSubmit and not order.transmit
                    or status == OrderStatus.Inactive):
                newStatus = OrderStatus.Cancelled
            else:
                newStatus = OrderStatus.PendingCancel
            logEntry = TradeLogEntry(now, newStatus, '')
            trade.log.append(logEntry)
            trade.orderStatus.status = newStatus
            self._logger.info(f'cancelOrder: {trade}')
            trade.cancelEvent.emit(trade)
            trade.statusEvent.emit(trade)
            self.cancelOrderEvent.emit(trade)
            self.orderStatusEvent.emit(trade)
            if newStatus == OrderStatus.Cancelled:
                trade.cancelledEvent.emit(trade)
    else:
        self._logger.error(f'cancelOrder: Unknown orderId {order.orderId}')
    return trade
Cancel the order and return the Trade it belongs to. Args: order: The order to be canceled.
juraj-google-style
def document(self, document_id=None):
    if document_id is None:
        document_id = _auto_id()

    child_path = self._path + (document_id,)
    return self._client.document(*child_path)
Create a sub-document underneath the current collection. Args: document_id (Optional[str]): The document identifier within the current collection. If not provided, will default to a random 20 character string composed of digits, uppercase and lowercase and letters. Returns: ~.firestore_v1beta1.document.DocumentReference: The child document.
codesearchnet
def get_ytvideos(query, ilogger):
    queue = []
    search_result = ytdiscoveryapi.search().list(
        q=query,
        part='id,snippet',
        maxResults=1,
        type='video,playlist'
    ).execute()
    if not search_result['items']:
        return []

    title = search_result['items'][0]['snippet']['title']
    ilogger.info('Queueing {}'.format(title))

    if search_result['items'][0]['id']['kind'] == 'youtube#video':
        videoid = search_result['items'][0]['id']['videoId']
        queue.append(['https://www.youtube.com/watch?v={}'.format(videoid), title])
    elif search_result['items'][0]['id']['kind'] == 'youtube#playlist':
        queue = get_queue_from_playlist(search_result['items'][0]['id']['playlistId'])
    return queue
Gets either a list of videos from a playlist or a single video, using the first result of a YouTube search Args: query (str): The YouTube search query ilogger (logging.logger): The logger to log API calls to Returns: queue (list): The items obtained from the YouTube search
codesearchnet
def from_deformation(cls, deformation):
    dfm = Deformation(deformation)
    return cls(0.5 * (np.dot(dfm.trans, dfm) - np.eye(3)))
Factory method that returns a Strain object from a deformation gradient Args: deformation (3x3 array-like):
codesearchnet
def fit(self, X, y):
    self._word_vocab.add_documents(X)
    self._label_vocab.add_documents(y)
    if self._use_char:
        for doc in X:
            self._char_vocab.add_documents(doc)

    self._word_vocab.build()
    self._char_vocab.build()
    self._label_vocab.build()

    return self
Learn vocabulary from training set. Args: X : iterable. An iterable which yields either str, unicode or file objects. Returns: self : IndexTransformer.
juraj-google-style
def to_obj(self, ns_info=None):
    if ns_info:
        ns_info.collect(self)

    if not hasattr(self, '_binding_class'):
        return None

    entity_obj = self._binding_class()

    for field, val in six.iteritems(self._fields):
        if isinstance(val, EntityList) and len(val) == 0:
            val = None
        elif field.multiple:
            if val:
                val = [_objectify(field, x, ns_info) for x in val]
            else:
                val = []
        else:
            val = _objectify(field, val, ns_info)

        setattr(entity_obj, field.name, val)

    self._finalize_obj(entity_obj)

    return entity_obj
Convert to a GenerateDS binding object. Subclasses can override this function. Returns: An instance of this Entity's ``_binding_class`` with properties set from this Entity.
codesearchnet
def __use_cache__(self, cache):
    try:
        cache_mod = os.path.getmtime(self.cache_filepath)
    except FileNotFoundError:
        return False
    last_file_mod = sorted(
        self.conn.mgr.loaded_times.values())[-1].timestamp()
    if last_file_mod > cache_mod:
        return False
    curr_load = set(self.conn.mgr.loaded)
    try:
        with open(self.loaded_filepath, "r") as fo:
            loaded_files = set(json.loads(fo.read()))
        if curr_load != loaded_files:
            return False
    except FileNotFoundError:
        return False
    return cache
checks for changes in the vocabulary and mod times of the files to see if the cache should be used. Args: cache: the kwarg passed in to use the cache during __init__ Returns: Bool: True = use the cache files False = requery the triplestore
juraj-google-style
def _example_from_definition(self, prop_spec):
    definition_name = self.get_definition_name_from_ref(prop_spec['$ref'])
    if self.build_one_definition_example(definition_name):
        example_dict = self.definitions_example[definition_name]
        if not isinstance(example_dict, dict):
            return example_dict
        example = dict((example_name, example_value)
                       for example_name, example_value in example_dict.items())
        return example
Get an example from a property specification linked to a definition. Args: prop_spec: specification of the property you want an example of. Returns: An example.
codesearchnet
def _make_cluster_def(self):
    self._cluster_def = cluster_pb2.ClusterDef()
    for job_name, tasks in sorted(self._cluster_spec.items()):
        try:
            job_name = compat.as_bytes(job_name)
        except TypeError:
            raise TypeError('Job name %r must be bytes or unicode' % job_name)

        job_def = self._cluster_def.job.add()
        job_def.name = job_name

        for i, task_address in sorted(tasks.items()):
            try:
                task_address = compat.as_bytes(task_address)
            except TypeError:
                raise TypeError('Task address %r must be bytes or unicode' % task_address)
            job_def.tasks[i] = task_address
Creates a `tf.train.ClusterDef` based on the given `cluster_spec`. Raises: TypeError: If `cluster_spec` is not a dictionary mapping strings to lists of strings.
github-repos
def from_json_file(cls, json_file: Union[str, os.PathLike]):
    with open(json_file, encoding='utf-8') as reader:
        text = reader.read()
    image_processor_dict = json.loads(text)
    return cls(**image_processor_dict)
Instantiates a image processor of type [`~image_processing_utils.ImageProcessingMixin`] from the path to a JSON file of parameters. Args: json_file (`str` or `os.PathLike`): Path to the JSON file containing the parameters. Returns: A image processor of type [`~image_processing_utils.ImageProcessingMixin`]: The image_processor object instantiated from that JSON file.
github-repos
def _get_addresses(tx):
    from_address = set([vin['address'] for vin in tx['vins']])

    if len(from_address) != 1:
        raise InvalidTransactionError("Transaction should have inputs "
                                      "from only one address {}".format(from_address))

    vouts = sorted(tx['vouts'], key=lambda d: d['n'])[:-1]
    piece_address = vouts[0]['address']
    to_address = vouts[-1]['address']
    from_address = from_address.pop()

    return from_address, to_address, piece_address
Checks for the from, to, and piece address of a SPOOL transaction. Args: tx (dict): Transaction payload, as returned by :meth:`transactions.Transactions.get()`. .. note:: Formats as returned by JSON-RPC API ``decoderawtransaction`` have yet to be supported. Returns: Tuple([str]): Sender, receiver, and piece addresses.
juraj-google-style
def _cleanup_workflow(config, task_id, args, **kwargs):
    from lightflow.models import Workflow
    if isinstance(args[0], Workflow):
        if config.celery['result_expires'] == 0:
            AsyncResult(task_id).forget()
Cleanup the results of a workflow when it finished. Connects to the postrun signal of Celery. If the signal was sent by a workflow, remove the result from the result backend. Args: task_id (str): The id of the task. args (tuple): The arguments the task was started with. **kwargs: Keyword arguments from the hook.
codesearchnet
def placeholder_symbol_table(name, version, max_id):
    if version <= 0:
        raise ValueError('Version must be grater than or equal to 1: %s' % version)
    if max_id < 0:
        raise ValueError('Max ID must be zero or positive: %s' % max_id)

    return SymbolTable(
        table_type=SHARED_TABLE_TYPE,
        symbols=repeat(None, max_id),
        name=name,
        version=version,
        is_substitute=True
    )
Constructs a shared symbol table that consists symbols that all have no known text. This is generally used for cases where a shared symbol table is not available by the application. Args: name (unicode): The name of the shared symbol table. version (int): The version of the shared symbol table. max_id (int): The maximum ID allocated by this symbol table, must be ``>= 0`` Returns: SymbolTable: The synthesized table.
codesearchnet
def edgelist_to_adjacency(edgelist):
    adjacency = dict()
    for u, v in edgelist:
        if u in adjacency:
            adjacency[u].add(v)
        else:
            adjacency[u] = {v}
        if v in adjacency:
            adjacency[v].add(u)
        else:
            adjacency[v] = {u}
    return adjacency
Converts an iterator of edges to an adjacency dict. Args: edgelist (iterable): An iterator over 2-tuples where each 2-tuple is an edge. Returns: dict: The adjacency dict. A dict of the form {v: Nv, ...} where v is a node in a graph and Nv is the neighbors of v as an set.
codesearchnet
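A short usage sketch for edgelist_to_adjacency above, using a three-node path graph as a toy example:

edges = [(0, 1), (1, 2)]
adjacency = edgelist_to_adjacency(edges)
# adjacency == {0: {1}, 1: {0, 2}, 2: {1}}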
def __init__(self, service):
    if not isinstance(service, sm_messages.Service):
        raise ValueError(u'service should be an instance of Service')
    if not service.name:
        raise ValueError(u'Bad service: the name is missing')
    self._service = service
    self._extracted_methods = {}

    self._auth_infos = self._extract_auth_config()
    self._quota_infos = self._extract_quota_config()

    self._templates_method_infos = collections.defaultdict(list)
    self._extract_methods()
Constructor. Args: service (:class:`endpoints_management.gen.servicemanagement_v1_messages.Service`): a service instance
juraj-google-style
def inference(cluster_info, feed_timeout=600, qname='input'):
    def _inference(iter):
        mgr = _get_manager(cluster_info, util.get_ip_address(), util.read_executor_id())
        try:
            queue_in = mgr.get_queue(qname)
            equeue = mgr.get_queue('error')
        except (AttributeError, KeyError):
            msg = "Queue '{}' not found on this node, check for exceptions on other nodes.".format(qname)
            raise Exception(msg)

        logging.info('Feeding partition {0} into {1} queue {2}'.format(iter, qname, queue_in))
        count = 0
        for item in iter:
            count += 1
            queue_in.put(item, block=True)
        queue_in.put(marker.EndPartition())

        if count == 0:
            return []

        joinThr = Thread(target=queue_in.join)
        joinThr.start()
        timeout = feed_timeout
        while joinThr.isAlive():
            if not equeue.empty():
                e_str = equeue.get()
                equeue.task_done()
                raise Exception('exception in worker:\n' + e_str)
            time.sleep(1)
            timeout -= 1
            if timeout <= 0:
                raise Exception('Timeout while feeding partition')

        logging.info('Processed {0} items in partition'.format(count))

        results = []
        queue_out = mgr.get_queue('output')
        while count > 0:
            result = queue_out.get(block=True)
            results.append(result)
            count -= 1
            queue_out.task_done()

        logging.info('Finished processing partition')
        return results

    return _inference
Feeds Spark partitions into the shared multiprocessing.Queue and returns inference results. Args: :cluster_info: node reservation information for the cluster (e.g. host, executor_id, pid, ports, etc) :feed_timeout: number of seconds after which data feeding times out (600 sec default) :qname: *INTERNAL_USE* Returns: A dataRDD.mapPartitions() function
codesearchnet
def load_exons(adapter, exon_lines, build='37', ensembl_genes=None):
    ensembl_genes = ensembl_genes or adapter.ensembl_genes(build)
    hgnc_id_transcripts = adapter.id_transcripts_by_gene(build=build)

    if isinstance(exon_lines, DataFrame):
        exons = parse_ensembl_exon_request(exon_lines)
        nr_exons = exon_lines.shape[0]
    else:
        exons = parse_ensembl_exons(exon_lines)
        nr_exons = 1000000

    start_insertion = datetime.now()
    loaded_exons = 0
    LOG.info("Loading exons...")
    with progressbar(exons, label="Loading exons", length=nr_exons) as bar:
        for exon in bar:
            ensg_id = exon['gene']
            enst_id = exon['transcript']

            gene_obj = ensembl_genes.get(ensg_id)
            if not gene_obj:
                continue

            hgnc_id = gene_obj['hgnc_id']
            if not enst_id in hgnc_id_transcripts[hgnc_id]:
                continue

            exon['hgnc_id'] = hgnc_id
            exon_obj = build_exon(exon, build)
            adapter.load_exon(exon_obj)
            loaded_exons += 1

    LOG.info('Number of exons in build {0}: {1}'.format(build, nr_exons))
    LOG.info('Number loaded: {0}'.format(loaded_exons))
    LOG.info('Time to load exons: {0}'.format(datetime.now() - start_insertion))
Load all the exons Transcript information is from ensembl. Check that the transcript that the exon belongs to exists in the database Args: adapter(MongoAdapter) exon_lines(iterable): iterable with ensembl exon lines build(str) ensembl_transcripts(dict): Existing ensembl transcripts
juraj-google-style
def parse_args(args):
    parser = argparse.ArgumentParser(
        description="Imports GramVaani data for Deep Speech"
    )
    parser.add_argument(
        "--version",
        action="version",
        version="GramVaaniImporter {ver}".format(ver=__version__),
    )
    parser.add_argument(
        "-v",
        "--verbose",
        action="store_const",
        required=False,
        help="set loglevel to INFO",
        dest="loglevel",
        const=logging.INFO,
    )
    parser.add_argument(
        "-vv",
        "--very-verbose",
        action="store_const",
        required=False,
        help="set loglevel to DEBUG",
        dest="loglevel",
        const=logging.DEBUG,
    )
    parser.add_argument(
        "-c",
        "--csv_filename",
        required=True,
        help="Path to the GramVaani csv",
        dest="csv_filename",
    )
    parser.add_argument(
        "-t",
        "--target_dir",
        required=True,
        help="Directory in which to save the importer GramVaani data",
        dest="target_dir",
    )
    return parser.parse_args(args)
Parse command line parameters Args: args ([str]): Command line parameters as list of strings Returns: :obj:`argparse.Namespace`: command line parameters namespace
juraj-google-style
def _sync_to_uri(self, uri):
    cmd_cp = 'aws s3 cp {} {} --recursive --profile {}'.format(self.s3_version_uri, uri, self.env)
    cmd_sync = 'aws s3 sync {} {} --delete --exact-timestamps --profile {}'.format(
        self.s3_version_uri, uri, self.env)

    cp_result = subprocess.run(cmd_cp, check=True, shell=True, stdout=subprocess.PIPE)
    LOG.debug("Copy to %s before sync output: %s", uri, cp_result.stdout)
    LOG.info("Copied version %s to %s", self.version, uri)

    sync_result = subprocess.run(cmd_sync, check=True, shell=True, stdout=subprocess.PIPE)
    LOG.debug("Sync to %s command output: %s", uri, sync_result.stdout)
    LOG.info("Synced version %s to %s", self.version, uri)
Copy and sync versioned directory to uri in S3. Args: uri (str): S3 URI to sync version to.
juraj-google-style
def add_timeout_arg(a_func, timeout, **kwargs):
    def inner(*args):
        updated_args = args + (timeout,)
        return a_func(*updated_args, **kwargs)

    return inner
Updates a_func so that it gets called with the timeout as its final arg. This converts a callable, a_func, into another callable with an additional positional arg. Args: a_func (callable): a callable to be updated timeout (int): to be added to the original callable as its final positional arg. kwargs: Additional arguments passed through to the callable. Returns: callable: the original callable updated to the timeout arg
juraj-google-style
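A minimal usage sketch for add_timeout_arg above; call_api and its arguments are hypothetical stand-ins for an API callable that takes the timeout as its last positional argument:

def call_api(request, timeout):
    # Pretend RPC that just reports the timeout it was given.
    return '{} (timeout={}s)'.format(request, timeout)

call_with_timeout = add_timeout_arg(call_api, 30)
call_with_timeout('ListItems')  # -> 'ListItems (timeout=30s)'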
def __init__(self, object_type: str, subscriber: str,
             callback_handler: Callable = None):
    self._queue = DB.pub_sub()
    if callback_handler is None:
        self._queue.subscribe(object_type)
    else:
        self._queue.subscribe(**{object_type: callback_handler})
    self._pub_key = _keys.published(object_type, subscriber)
    self._data_key = _keys.data(object_type, subscriber)
    self._processed_key = _keys.processed_events(object_type, subscriber)
    self._object_type = object_type
    self._subscriber = subscriber
Initialise the event queue. Subscribes to Redis pub/sub events of the given object type. Args: object_type (str): Object type subscriber (str): Subscriber name
juraj-google-style
def __init__(self, env, directory, collect_freq=1, flush_freq=100):
    super().__init__(env)

    self.directory = directory
    self.states = []
    self.action_infos = []
    self.collect_freq = collect_freq
    self.flush_freq = flush_freq

    if not os.path.exists(directory):
        print("DataCollectionWrapper: making new directory at {}".format(directory))
        os.makedirs(directory)

    self.ep_directory = None
    self.has_interaction = False
Initializes the data collection wrapper. Args: env: The environment to monitor. directory: Where to store collected data. collect_freq: How often to save simulation state, in terms of environment steps. flush_freq: How frequently to dump data to disk, in terms of environment steps.
juraj-google-style
def objects_copy(self, source_bucket, source_key, target_bucket, target_key):
    url = Api._ENDPOINT + (Api._OBJECT_COPY_PATH % (source_bucket, Api._escape_key(source_key),
                                                    target_bucket, Api._escape_key(target_key)))
    return datalab.utils.Http.request(url, method='POST', credentials=self._credentials)
Updates the metadata associated with an object. Args: source_bucket: the name of the bucket containing the source object. source_key: the key of the source object being copied. target_bucket: the name of the bucket that will contain the copied object. target_key: the key of the copied object. Returns: A parsed object information dictionary. Raises: Exception if there is an error performing the operation.
juraj-google-style
def RemoveObject(self, identifier):
    if identifier not in self._values:
        raise KeyError('Missing cached object for identifier: {0:s}'.format(
            identifier))

    del self._values[identifier]
Removes a cached object based on the identifier. This method ignores the cache value reference count. Args: identifier (str): VFS object identifier. Raises: KeyError: if the VFS object is not found in the cache.
juraj-google-style
def SetActiveBreakpoints(self, breakpoints_data): with self._lock: ids = set([x['id'] for x in breakpoints_data]) for breakpoint_id in six.viewkeys(self._active) - ids: self._active.pop(breakpoint_id).Clear() self._active.update([ (x['id'], python_breakpoint.PythonBreakpoint( x, self._hub_client, self, self.data_visibility_policy)) for x in breakpoints_data if x['id'] in ids - six.viewkeys(self._active) - self._completed]) self._completed &= ids if self._active: self._next_expiration = datetime.min else: self._next_expiration = datetime.max
Adds new breakpoints and removes missing ones. Args: breakpoints_data: updated list of active breakpoints.
juraj-google-style
def update_remote_archive(self, save_uri, timeout=(- 1)): return self._client.update_with_zero_body(uri=save_uri, timeout=timeout)
Saves a backup of the appliance to a previously-configured remote location. Args: save_uri (dict): The URI for saving the backup to a previously configured location. timeout: Timeout in seconds. Waits for task completion by default. The timeout does not abort the operation in OneView; it just stops waiting for its completion. Returns: dict: Backup details.
codesearchnet
def to_hg_scheme_url(cls, url): regexes = cls._get_url_scheme_regexes() for scheme_key, pattern, regex in regexes: match = regex.match(url) if match is not None: groups = match.groups() if len(groups) == 2: return u''.join((scheme_key, '://', pattern.replace('{1}', groups[0]), groups[1])) elif len(groups) == 1: return u''.join((scheme_key, '://', pattern, groups[0]))
Convert a URL to local mercurial URL schemes Args: url (str): URL to map to local mercurial URL schemes example:: # schemes.gh = git://github.com/ >> remote_url = 'git://github.com/westurner/dotfiles' >> to_hg_scheme_url(remote_url) << gh://westurner/dotfiles
juraj-google-style
def send_location(self, room_id, geo_uri, name, thumb_url=None, thumb_info=None, timestamp=None): content_pack = { "geo_uri": geo_uri, "msgtype": "m.location", "body": name, } if thumb_url: content_pack["thumbnail_url"] = thumb_url if thumb_info: content_pack["thumbnail_info"] = thumb_info return self.send_message_event(room_id, "m.room.message", content_pack, timestamp=timestamp)
Send m.location message event Args: room_id (str): The room ID to send the event in. geo_uri (str): The geo uri representing the location. name (str): Description for the location. thumb_url (str): URL to the thumbnail of the location. thumb_info (dict): Metadata about the thumbnail, type ImageInfo. timestamp (int): Set origin_server_ts (For application services only)
juraj-google-style
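A hedged usage sketch; the room ID, coordinates, and thumbnail URI below are illustrative.

api.send_location(room_id='!abc123:example.org',
                  geo_uri='geo:48.8584,2.2945',
                  name='Eiffel Tower',
                  thumb_url='mxc://example.org/someThumbnail')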
def parse_rule(cls, txt): types = {"glob": GlobRule, "regex": RegexRule, "range": RangeRule, "before": TimestampRule, "after": TimestampRule} label, txt = Rule._parse_label(txt) if label is None: if '*' in txt: label = "glob" else: label = "range" elif label not in types: raise ConfigurationError( "'%s' is not a valid package filter type" % label) rule_cls = types[label] txt_ = "%s(%s)" % (label, txt) try: rule = rule_cls._parse(txt_) except Exception as e: raise ConfigurationError("Error parsing package filter '%s': %s: %s" % (txt_, e.__class__.__name__, str(e))) return rule
Parse a rule from a string. See rezconfig.package_filter for an overview of valid strings. Args: txt (str): String to parse. Returns: `Rule` instance.
juraj-google-style
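A minimal sketch of the label inference in the code above: with no explicit label, a '*' in the string selects a glob rule, otherwise a range rule. The package patterns here are illustrative.

globby = Rule.parse_rule('python-2.*')   # '*' present -> parsed as a glob rule
ranged = Rule.parse_rule('python-2.7+')  # no '*' -> parsed as a range rule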
def on_train_begin(self, logs=None): logs = self._process_logs(logs) for callback in self.callbacks: callback.on_train_begin(logs)
Calls the `on_train_begin` methods of its callbacks. Args: logs: Dict. Currently no data is passed to this argument for this method but that may change in the future.
github-repos
def parsed_forensic_reports_to_csv(reports): fields = ['feedback_type', 'user_agent', 'version', 'original_envelope_id', 'original_mail_from', 'original_rcpt_to', 'arrival_date', 'arrival_date_utc', 'subject', 'message_id', 'authentication_results', 'dkim_domain', 'source_ip_address', 'source_country', 'source_reverse_dns', 'source_base_domain', 'delivery_result', 'auth_failure', 'reported_domain', 'authentication_mechanisms', 'sample_headers_only'] if (type(reports) == OrderedDict): reports = [reports] csv_file = StringIO() csv_writer = DictWriter(csv_file, fieldnames=fields) csv_writer.writeheader() for report in reports: row = report.copy() row['source_ip_address'] = report['source']['ip_address'] row['source_reverse_dns'] = report['source']['reverse_dns'] row['source_base_domain'] = report['source']['base_domain'] row['source_country'] = report['source']['country'] del row['source'] row['subject'] = report['parsed_sample']['subject'] row['auth_failure'] = ','.join(report['auth_failure']) authentication_mechanisms = report['authentication_mechanisms'] row['authentication_mechanisms'] = ','.join(authentication_mechanisms) del row['sample'] del row['parsed_sample'] csv_writer.writerow(row) return csv_file.getvalue()
Converts one or more parsed forensic reports to flat CSV format, including headers Args: reports: A parsed forensic report or list of parsed forensic reports Returns: str: Parsed forensic report data in flat CSV format, including headers
codesearchnet
def rescale(image: np.ndarray, scale: float, data_format: Optional[ChannelDimension]=None, dtype: np.dtype=np.float32, input_data_format: Optional[Union[str, ChannelDimension]]=None) -> np.ndarray: if not isinstance(image, np.ndarray): raise TypeError(f'Input image must be of type np.ndarray, got {type(image)}') rescaled_image = image.astype(np.float64) * scale if data_format is not None: rescaled_image = to_channel_dimension_format(rescaled_image, data_format, input_data_format) rescaled_image = rescaled_image.astype(dtype) return rescaled_image
Rescales `image` by `scale`. Args: image (`np.ndarray`): The image to rescale. scale (`float`): The scale to use for rescaling the image. data_format (`ChannelDimension`, *optional*): The channel dimension format of the image. If not provided, it will be the same as the input image. dtype (`np.dtype`, *optional*, defaults to `np.float32`): The dtype of the output image. Defaults to `np.float32`. Used for backwards compatibility with feature extractors. input_data_format (`ChannelDimension`, *optional*): The channel dimension format of the input image. If not provided, it will be inferred from the input image. Returns: `np.ndarray`: The rescaled image.
github-repos
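A minimal usage sketch: rescaling an 8-bit image into the [0, 1] float range.

import numpy as np

image = np.random.randint(0, 256, size=(3, 32, 32), dtype=np.uint8)
scaled = rescale(image, scale=1 / 255.0)
print(scaled.dtype, float(scaled.max()) <= 1.0)  # float32 True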
def automatic_density_by_vol(structure, kppvol, force_gamma=False): vol = structure.lattice.reciprocal_lattice.volume kppa = ((kppvol * vol) * structure.num_sites) return Kpoints.automatic_density(structure, kppa, force_gamma=force_gamma)
Returns an automatic Kpoint object based on a structure and a kpoint density per inverse Angstrom^3 of reciprocal cell. Algorithm: Same as automatic_density() Args: structure (Structure): Input structure kppvol (int): Grid density per Angstrom^(-3) of reciprocal cell force_gamma (bool): Force a gamma centered mesh Returns: Kpoints
codesearchnet
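A hedged usage sketch, assuming structure is a pymatgen Structure loaded elsewhere (e.g. from a POSCAR file); the density value is illustrative.

kpoints = Kpoints.automatic_density_by_vol(structure, kppvol=100)
# equivalent to automatic_density() with kppa = 100 * reciprocal-cell volume * num_sites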
def gini(y, p): assert y.shape == p.shape n_samples = y.shape[0] arr = np.array([y, p]).transpose() true_order = arr[arr[:,0].argsort()][::-1,0] pred_order = arr[arr[:,1].argsort()][::-1,0] l_true = np.cumsum(true_order) / np.sum(true_order) l_pred = np.cumsum(pred_order) / np.sum(pred_order) l_ones = np.linspace(1/n_samples, 1, n_samples) g_true = np.sum(l_ones - l_true) g_pred = np.sum(l_ones - l_pred) return g_pred / g_true
Normalized Gini Coefficient. Args: y (numpy.array): target p (numpy.array): prediction Returns: e (numpy.float64): normalized Gini coefficient
juraj-google-style
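A minimal usage sketch with dummy arrays; a perfect ranking scores 1.0, while a partially mis-ranked prediction lands between 0 and 1.

import numpy as np

y = np.array([0, 0, 1, 0, 1], dtype=float)
p = np.array([0.1, 0.6, 0.9, 0.3, 0.4])
print(gini(y, y))  # 1.0 (predictions rank exactly like the target)
print(gini(y, p))  # roughly 0.67 for this partially mis-ranked example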
def _add_train_op(self, train_op): if train_op is not None: if not isinstance(train_op, tensor.Tensor) and (not isinstance(train_op, ops.Operation)): raise TypeError(f'`train_op` {train_op} needs to be a Tensor or Op.') ops.add_to_collection(constants.TRAIN_OP_KEY, train_op)
Add train op to the SavedModel. Note that this functionality is in development, and liable to be moved elsewhere. Args: train_op: Op or group of ops that are used for training. These are stored as a collection with key TRAIN_OP_KEY, but not executed. Raises: TypeError if train_op is not of type `Tensor` or `Operation`.
github-repos
def pipeline(gcp_project_id: str, region: str, component_artifact_root: str, dataflow_staging_root: str, beam_runner: str): ingest_data_task = DataIngestOp(base_artifact_path=component_artifact_root) data_preprocessing_task = DataPreprocessingOp(ingested_dataset_path=ingest_data_task.outputs['ingested_dataset_path'], base_artifact_path=component_artifact_root, gcp_project_id=gcp_project_id, region=region, dataflow_staging_root=dataflow_staging_root, beam_runner=beam_runner) train_model_task = TrainModelOp(preprocessed_dataset_path=data_preprocessing_task.outputs['preprocessed_dataset_path'], base_artifact_path=component_artifact_root)
KFP pipeline definition. Args: gcp_project_id (str): ID for the google cloud project to deploy the pipeline to. region (str): Region in which to deploy the pipeline. component_artifact_root (str): Path to artifact repository where Kubeflow Pipelines components can store artifacts. dataflow_staging_root (str): Path to staging directory for the dataflow runner. beam_runner (str): Beam runner: DataflowRunner or DirectRunner.
github-repos
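A hedged sketch of compiling the pipeline above with the KFP SDK compiler, assuming the function is decorated as a KFP pipeline in the surrounding code; the output path is illustrative.

from kfp import compiler

compiler.Compiler().compile(pipeline_func=pipeline, package_path='pipeline.json')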
def _resource_apply_sparse_duplicate_indices(self, grad, handle, indices, **kwargs): summed_grad, unique_indices = _deduplicate_indexed_slices(values=grad, indices=indices) return self._resource_apply_sparse(summed_grad, handle, unique_indices, **kwargs)
Add ops to apply sparse gradients to `handle`, with repeated indices. Optimizers which override this method must deal with repeated indices. See the docstring of `_apply_sparse_duplicate_indices` for details. By default the correct behavior, to sum non-unique indices and their associated gradients, is enforced by first pre-processing `grad` and `indices` and passing them on to `_resource_apply_sparse`. Optimizers which deal correctly with duplicate indices may instead override this method to avoid the overhead of summing. Args: grad: a `Tensor` representing the gradient for the affected indices. handle: a `Tensor` of dtype `resource` which points to the variable to be updated. indices: a `Tensor` of integral type representing the indices for which the gradient is nonzero. Indices may be repeated. **kwargs: May optionally contain `apply_state` Returns: An `Operation` which updates the value of the variable.
github-repos
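A small numpy illustration of the deduplication step referenced above (not the TF implementation): repeated indices are merged and their gradient rows summed before the sparse update.

import numpy as np

indices = np.array([0, 2, 0])
grad = np.array([[1.0], [5.0], [3.0]])
unique, inverse = np.unique(indices, return_inverse=True)
summed = np.zeros((unique.size, grad.shape[1]))
np.add.at(summed, inverse, grad)   # rows for the repeated index 0 are accumulated
print(unique, summed)              # [0 2] [[4.] [5.]]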
def __init__(self, origin='center', coords='relative', **kwargs): self._origin = origin self._coords = coords super(OrthoProjection, self).__init__(**kwargs)
Orthogonal Projection object that creates a projection which can be used in a Camera. Args: origin (str): 'center' or 'corner' coords (str): 'relative' or 'absolute' Returns: OrthoProjection instance
juraj-google-style
def reconnect(self): if self._auth_method == 'userpass': self._mgr = manager.connect(host=self._conn[0], port=self._conn[1], username=self._auth[0], password=self._auth[1], hostkey_verify=self._hostkey_verify) elif self._auth_method == 'key': self._mgr = manager.connect(host=self._conn[0], port=self._conn[1], username=self._auth[0], key_filename=self._auth_key, hostkey_verify=self._hostkey_verify) else: raise ValueError('auth_method incorrect value.') self._mgr.timeout = 600 return True
Reconnect session with device. Args: None Returns: bool: True if reconnect succeeds, False if not. Raises: None
codesearchnet
def get_effective_ecs(self, strain, order=2): ec_sum = 0 for n, ecs in enumerate(self[order-2:]): ec_sum += ecs.einsum_sequence([strain] * n) / factorial(n) return ec_sum
Returns the effective elastic constants from the elastic tensor expansion. Args: strain (Strain or 3x3 array-like): strain condition under which to calculate the effective constants order (int): order of the ecs to be returned
juraj-google-style
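A hedged usage sketch, assuming expansion is an elastic tensor expansion object and strain a pymatgen Strain; per the code above, the result is the series sum over n of C^(order+n) contracted n times with the strain and divided by n!.

c_eff = expansion.get_effective_ecs(strain, order=2)
# e.g. for order=2 this evaluates C2 + C3:strain + C4:strain^2 / 2 + ...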
def sheets_create(config, auth, sheet_name, sheet_tab, template_sheet=None, template_tab=None): created = False sheet_id, tab_id = sheets_tab_id(config, auth, sheet_name, sheet_tab) if sheet_id is None: if config.verbose: print('SHEET CREATE', sheet_name, sheet_tab) body = {'properties': {'title': sheet_name}, 'sheets': [{'properties': {'title': sheet_tab}}]} spreadsheet = API_Sheets(config, auth).spreadsheets().create(body=body).execute() sheet_id = spreadsheet['spreadsheetId'] tab_id = spreadsheet['sheets'][0]['properties']['title'] created = True if (created or tab_id is None) and template_sheet and template_tab: if config.verbose: print('SHEET TAB COPY', sheet_tab) sheets_tab_copy(config, auth, template_sheet, template_tab, sheet_id, sheet_tab, True) elif tab_id is None: if config.verbose: print('SHEET TAB CREATE', sheet_name, sheet_tab) sheets_tab_create(config, auth, sheet_name, sheet_tab) elif config.verbose: print('SHEET EXISTS', sheet_name, sheet_tab) return (sheet_id, tab_id, created)
Checks if a sheet with the given name already exists (outside of trash) and, if not, creates it. Both sheet and tab must be provided, or both must be omitted to create a blank sheet and tab. Args: * auth: (string) Either user or service. * sheet_name: (string) name of sheet to create, used as key to check if it exists in the future. * sheet_tab: (string) name of the tab to create. * template_sheet: (string) optional sheet to copy template from. * template_tab: (string) optional tab to copy template from. Returns: * A tuple of (sheet_id, tab_id, created) for the existing or newly created sheet.
github-repos
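A hedged usage sketch; config comes from the surrounding framework and the sheet and tab names are illustrative.

sheet_id, tab_id, created = sheets_create(config, 'user', 'Monthly Report', 'Summary')
if created:
    print('Created new sheet', sheet_id, 'with tab', tab_id)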
def compute_output(self, o, output_shape=None): if self.combine_dims: o = mtf.transpose(o, o.shape - self.o_dims + self.o_dims) o = mtf.replace_dimensions(o, self.o_dims, self.wo.shape.dims[0]) reduced_dims = [self.wo.shape.dims[0]] else: reduced_dims = self.o_dims return mtf.einsum( [o, self.wo], output_shape=output_shape, reduced_dims=reduced_dims)
Compute output of multihead attention. Args: o: a Tensor with dimensions query_heads_dims + {value_dim} + other_dims output_shape: an optional Shape Returns: a Tensor with shape: {output_dim} + other_dims
juraj-google-style
def _map_or_apply(input_layer, op, *args, **kwargs): kwargs.pop('name') right = kwargs.pop('right_', False) if input_layer.is_sequence(): if right: args += (input_layer,) else: args = ((input_layer,) + args) result = [op(*x, **kwargs) for x in _zip_with_scalars(args)] if (len(result) != len(input_layer)): raise ValueError('Not all arguments were the same length.') return result else: if right: my_op = (lambda x: op(*(args + (x,)), **kwargs)) else: my_op = (lambda x: op(x, *args, **kwargs)) return my_op(input_layer.tensor)
Map op across the input if it is a sequence; otherwise apply it. Note: This takes a keyword argument `right_` to right apply the op to this input. The name is chosen to limit conflicts with other keyword arguments. Args: input_layer: The input_layer (self when chaining). op: The op to apply. *args: Positional arguments for op; if input is a list then any iterable is treated as an argument to co-map (i.e. it zips across non-scalars). **kwargs: Keyword arguments for op; note that `right_` is used by this function. Returns: A new Pretty Tensor that is the result of applying the op to every internal Tensor. Raises: ValueError: If a sequence argument is not the same length as the input_layer.
codesearchnet
def search(self, query_string): query = self.create_query() parser = QueryParser(query_string, query) parser.parse() return self.query(query)
Performs a search against the index using lunr query syntax. Results will be returned sorted by their score, the most relevant results will be returned first. For more programmatic querying use `lunr.Index.query`. Args: query_string (str): A string to parse into a Query. Returns: dict: Results of executing the query.
codesearchnet
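A hedged usage sketch, assuming idx is a built lunr index over the documents of interest: results come back ordered by score.

results = idx.search('plant OR tree')
for result in results:
    print(result['ref'], result['score'])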