def snap_mismatched_borders(script, edge_dist_ratio=0.01, unify_vert=True):
    filter_xml = ''.join([
        ' <filter name="Snap Mismatched Borders">\n',
        ' <Param name="EdgeDistRatio" ',
        'value="{}" '.format(edge_dist_ratio),
        'description="Edge Distance Ratio" ',
        'type="RichFloat" ',
        '/>\n',
        ' <Param name="UnifyVertices" ',
        'value="{}" '.format(str(unify_vert).lower()),
        'description="UnifyVertices" ',
        'type="RichBool" ',
        '/>\n',
        ' </filter>\n'])
    util.write_filter(script, filter_xml)
    return None
Try to snap together adjacent borders that are slightly mismatched. This situation can happen on badly triangulated adjacent patches defined by high order surfaces. For each border vertex the filter snaps it onto the closest boundary edge only if it is closer than edge_length*threshold. When a vertex is snapped the corresponding face is split and a new vertex is created. Args: script: the FilterScript object or script filename to write the filter to. edge_dist_ratio (float): Collapse edge when the edge / distance ratio is greater than this value. E.g. for default value 1000 two straight border edges are collapsed if the central vertex's distance from the straight line composed by the two edges is less than 1/1000 of the sum of the edges' length. Larger values enforce that only vertexes very close to the line are removed. unify_vert (bool): If true the snapped vertices are welded together. Layer stack: No impacts MeshLab versions: 2016.12 1.3.4BETA
codesearchnet
def _get_spec(self) -> dict:
    if self.spec:
        return self.spec
    self.spec = requests.get(self.SPEC_URL.format(self.version)).json()
    return self.spec
Fetches the OpenAPI spec from the server. If the spec has already been fetched, the cached version is returned instead. Args: None Returns: OpenAPI spec data
juraj-google-style
def protoc_command(lang, output_dir, proto_path, refactored_dir): proto_files = glob.glob(os.path.join(refactored_dir, '*.proto')) cmd = ['protoc', '-I', proto_path, '--{}_out'.format(lang), output_dir] cmd.extend(proto_files) print(' '.join(cmd)) p = subprocess.Popen( cmd, stdout=sys.stdout, stderr=sys.stderr, stdin=sys.stdin, cwd=proto_path) p.communicate()
Runs the "protoc" command on the refactored Protobuf files to generate the source python/python3 files. Args: lang (str): the language to compile with "protoc" (i.e. python, python3) output_dir (str): the output directory for the generated source files proto_path (str): the root protobuf build path in which to run "protoc" refactored_dir (str): the input directory of the Protobuf files
juraj-google-style
def local_reduction_attention(x, block_length, multihead_params):

    @expert_utils.add_name_scope()
    def dot_product_self_local_attention_flattened(q, k, v):
        _, num_head, _, depth = q.get_shape().as_list()

        def pad_and_reshape(x):
            length_x = common_layers.shape_list(x)[2]
            # Pad to a multiple of block_length, then split into blocks.
            x = tf.pad(x, [[0, 0], [0, 0], [0, -length_x % block_length], [0, 0]])
            x = tf.reshape(
                x,
                [
                    common_layers.shape_list(x)[0],
                    num_head,
                    common_layers.shape_list(x)[2] // block_length,
                    block_length,
                    depth,
                ])
            return x

        q, k, v = [pad_and_reshape(t) for t in (q, k, v)]
        logits = tf.matmul(q, k, transpose_b=True)
        logits = tf.reshape(
            logits,
            [
                common_layers.shape_list(logits)[0],
                num_head,
                common_layers.shape_list(logits)[2],
                block_length**2,
            ])
        weights = tf.nn.softmax(logits)
        weights = tf.reshape(
            weights,
            [
                common_layers.shape_list(weights)[0],
                num_head,
                common_layers.shape_list(weights)[2],
                block_length,
                block_length,
            ])
        weights = tf.reduce_sum(weights, axis=3, keep_dims=True)
        v_out = tf.matmul(weights, v)
        v_out = tf.squeeze(v_out, axis=3)
        return v_out

    return multihead_attention(
        x,
        None,
        bias=None,
        output_depth=x.get_shape().as_list()[-1],
        attention_type=dot_product_self_local_attention_flattened,
        **multihead_params)
Reduce the length dimension using self attention. Args: x (tf.Tensor): float32 of shape [batch, length, depth] block_length (int): Block length for local attention (Compression factor) multihead_params (dict): parameters for multihead attention Returns: tf.Tensor: Compressed tensor of shape [batch, length // factor, depth]
juraj-google-style
def register(self, obj, value): if obj in self._registry: raise KeyError(f'{type(obj)} has already been registered.') self._registry[obj] = value
Registers a Python object within the registry. Args: obj: The object to add to the registry. value: The stored value for the 'obj' type. Raises: KeyError: If the same obj is used twice.
github-repos
def task_done(self, message): topic_partition = (message.topic, message.partition) if (topic_partition not in self._topics): logger.warning('Unrecognized topic/partition in task_done message: {0}:{1}'.format(*topic_partition)) return False offset = message.offset prev_done = self._offsets.task_done[topic_partition] if ((prev_done is not None) and (offset != (prev_done + 1))): logger.warning('Marking task_done on a non-continuous offset: %d != %d + 1', offset, prev_done) prev_commit = self._offsets.commit[topic_partition] if ((prev_commit is not None) and ((offset + 1) <= prev_commit)): logger.warning('Marking task_done on a previously committed offset?: %d (+1) <= %d', offset, prev_commit) self._offsets.task_done[topic_partition] = offset if self._does_auto_commit_messages(): self._incr_auto_commit_message_count() if self._should_auto_commit(): self.commit() return True
Mark a fetched message as consumed. Offsets for messages marked as "task_done" will be stored back to the kafka cluster for this consumer group on commit() Arguments: message (KafkaMessage): the message to mark as complete Returns: True, unless the topic-partition for this message has not been configured for the consumer. In normal operation, this should not happen. But see github issue 364.
codesearchnet
def _str_to_ord(content, weights): ordinal = 0 for (i, c) in enumerate(content): ordinal += ((weights[i] * _ALPHABET.index(c)) + 1) return ordinal
Converts a string to its lexicographical order. Args: content: the string to convert. Of type str. weights: weights from _get_weights. Returns: an int or long that represents the order of this string. "" has order 0.
codesearchnet
def is_subdir(base_path, test_path, trailing_slash=False, wildcards=False):
    if trailing_slash:
        base_path = base_path.rsplit('/', 1)[0] + '/'
        test_path = test_path.rsplit('/', 1)[0] + '/'
    else:
        if not base_path.endswith('/'):
            base_path += '/'
        if not test_path.endswith('/'):
            test_path += '/'
    if wildcards:
        return fnmatch.fnmatchcase(test_path, base_path)
    else:
        return test_path.startswith(base_path)
Return whether a path is a subpath of another. Args: base_path: The base path test_path: The path which we are testing trailing_slash: If True, the trailing slash is treated with importance. For example, ``/images/`` is a directory while ``/images`` is a file. wildcards: If True, globbing wildcards are matched against paths
codesearchnet
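A minimal usage sketch for the function above; the paths are made up, and the expected results follow directly from the prefix/glob logic in the body.

# plain prefix matching (trailing_slash=False):
is_subdir('/images', '/images/logo.png')    # True:  '/images/logo.png/' starts with '/images/'
is_subdir('/images', '/imagesets/a.png')    # False: '/imagesets/a.png/' does not start with '/images/'
# with wildcards=True the base path is treated as an fnmatch pattern:
is_subdir('/static/*/css', '/static/site/css', wildcards=True)    # True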
def copy(self, effects=None, target=None): warning = 'File.copy method is deprecated and will be\n removed in 4.0.0.\n Please use `create_local_copy`\n and `create_remote_copy` instead.\n ' logger.warn('API Warning: {0}'.format(warning)) if (target is not None): return self.create_remote_copy(target, effects) else: return self.create_local_copy(effects)
Creates a File Copy on Uploadcare or Custom Storage. File.copy method is deprecated and will be removed in 4.0.0. Please use `create_local_copy` and `create_remote_copy` instead. Args: - effects: Adds CDN image effects. If ``self.default_effects`` property is set effects will be combined with default effects. - target: Name of a custom storage connected to your project. Uploadcare storage is used if target is absent.
codesearchnet
def limitReal(x, max_denominator=1000000):
    f = Fraction(x).limit_denominator(max_denominator)
    return Real((f.numerator, f.denominator))
Creates a pysmt Real constant from x. Args: x (number): A number to be cast to a pysmt constant. max_denominator (int, optional): The maximum size of the denominator. Default 1000000. Returns: A Real constant with the given value and the denominator limited.
juraj-google-style
def _compute_new_attention_mask(hidden_states: torch.Tensor, seq_lens: torch.Tensor):
    batch_size, mask_seq_len = hidden_states.shape[:2]
    indices = torch.arange(mask_seq_len, device=seq_lens.device).expand(batch_size, -1)
    bool_mask = indices >= seq_lens.unsqueeze(1).expand(-1, mask_seq_len)
    mask = hidden_states.new_ones((batch_size, mask_seq_len))
    mask = mask.masked_fill(bool_mask, 0)
    return mask
Computes an attention mask of the form `(batch, seq_len)` with an attention for each element in the batch that stops at the corresponding element in `seq_lens`. Args: hidden_states (`torch.FloatTensor` of shape `(batch, seq_len, *)`): The sequences to mask, where `*` is any number of sequence-specific dimensions including none. seq_lens (`torch.Tensor` of shape `(batch)`): Each element represents the length of the sequence at the same index in `hidden_states` Returns: `torch.FloatTensor`: The float attention mask of shape `(batch, seq_len)`
github-repos
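An illustrative check of the helper above (shapes and lengths are invented for the example):

import torch

hidden_states = torch.zeros(2, 5, 4)       # batch of 2, sequence length 5, 4 features
seq_lens = torch.tensor([3, 5])
mask = _compute_new_attention_mask(hidden_states, seq_lens)
# mask is:
# tensor([[1., 1., 1., 0., 0.],
#         [1., 1., 1., 1., 1.]])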
def set_settings(self, settings):
    for k, v in settings.items():
        setattr(self, k, v)
Set every given setting as an object attribute. Args: settings (dict): Dictionary of settings.
codesearchnet
def convert_dropout(params, w_name, scope_name, inputs, layers, weights, names): print('Converting dropout ...') if (names == 'short'): tf_name = ('DO' + random_string(6)) elif (names == 'keep'): tf_name = w_name else: tf_name = (w_name + str(random.random())) dropout = keras.layers.Dropout(rate=params['ratio'], name=tf_name) layers[scope_name] = dropout(layers[inputs[0]])
Convert dropout. Args: params: dictionary with layer parameters w_name: name prefix in state_dict scope_name: pytorch scope name inputs: pytorch node inputs layers: dictionary with keras tensors weights: pytorch state_dict names: use short names for keras layers
codesearchnet
def __call__(self, kl_fn): if not callable(kl_fn): raise TypeError("kl_fn must be callable, received: %s" % kl_fn) if self._key in _DIVERGENCES: raise ValueError("KL(%s || %s) has already been registered to: %s" % (self._key[0].__name__, self._key[1].__name__, _DIVERGENCES[self._key])) _DIVERGENCES[self._key] = kl_fn return kl_fn
Perform the KL registration. Args: kl_fn: The function to use for the KL divergence. Returns: kl_fn Raises: TypeError: if kl_fn is not a callable. ValueError: if a KL divergence function has already been registered for the given argument classes.
juraj-google-style
def run_with_time_limit(self, cmd, time_limit=SUBMISSION_TIME_LIMIT): if (time_limit < 0): return self.run_without_time_limit(cmd) container_name = str(uuid.uuid4()) cmd = ([DOCKER_BINARY, 'run', DOCKER_NVIDIA_RUNTIME, '--detach', '--name', container_name] + cmd) logging.info('Docker command: %s', ' '.join(cmd)) logging.info('Time limit %d seconds', time_limit) retval = subprocess.call(cmd) start_time = time.time() elapsed_time_sec = 0 while is_docker_still_running(container_name): elapsed_time_sec = int((time.time() - start_time)) if (elapsed_time_sec < time_limit): time.sleep(1) else: kill_docker_container(container_name) logging.warning('Submission was killed because run out of time') logging.info('Elapsed time of submission: %d', elapsed_time_sec) logging.info('Docker retval: %d', retval) if (retval != 0): logging.warning('Docker returned non-zero retval: %d', retval) raise WorkerError(('Docker returned non-zero retval ' + str(retval))) return elapsed_time_sec
Runs docker command and enforces time limit. Args: cmd: list with the command line arguments which are passed to docker binary after run time_limit: time limit, in seconds. Negative value means no limit. Returns: how long it took to run submission in seconds Raises: WorkerError: if error occurred during execution of the submission
codesearchnet
def ListClients(self, request, timeout=None): return self._RetryLoop( lambda t: self._stub.ListClients(request, timeout=t))
Provides basic information about Fleetspeak clients. Args: request: fleetspeak.admin.ListClientsRequest timeout: How many seconds to try for. Returns: fleetspeak.admin.ListClientsResponse
juraj-google-style
def _ScaleAndTranslateGrad(op, grad): grad0 = gen_image_ops.scale_and_translate_grad(grad, op.inputs[0], op.inputs[2], op.inputs[3], kernel_type=op.get_attr('kernel_type'), antialias=op.get_attr('antialias')) return [grad0, None, None, None]
The derivatives for ScaleAndTranslate transformation op. Args: op: The ScaleAndTranslate op. grad: The tensor representing the gradient w.r.t. the output. Returns: The gradients w.r.t. the input.
github-repos
def delete(self): if self.exists(): try: self._api.buckets_delete(self._name) except Exception as e: raise e
Deletes the bucket. Raises: Exception if there was an error deleting the bucket.
codesearchnet
def _add_string_to_commastring(self, field, string): if string in self._get_stringlist_from_commastring(field): return False strings = '%s,%s' % (self.data.get(field, ''), string) if strings[0] == ',': strings = strings[1:] self.data[field] = strings return True
Add a string to a comma separated list of strings Args: field (str): Field containing comma separated list string (str): String to add Returns: bool: True if string added or False if string already present
juraj-google-style
class MeanSquaredError(reduction_metrics.MeanMetricWrapper): def __init__(self, name='mean_squared_error', dtype=None): super().__init__(fn=mean_squared_error, name=name, dtype=dtype) self._direction = 'down' def get_config(self): return {'name': self.name, 'dtype': self.dtype}
Computes the mean squared error between `y_true` and `y_pred`. Formula: ```python loss = mean(square(y_true - y_pred)) ``` Args: name: (Optional) string name of the metric instance. dtype: (Optional) data type of the metric result. Example: >>> m = keras.metrics.MeanSquaredError() >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]]) >>> m.result() 0.25
github-repos
def create_token_type_ids_from_sequences(self, token_ids_0: List[int], token_ids_1: Optional[List[int]]=None) -> List[int]:
    sep = [self.sep_token_id]
    cls = [self.cls_token_id]
    if token_ids_1 is None:
        return len(cls + token_ids_0 + sep) * [0]
    return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0]
Create a mask from the two sequences passed to be used in a sequence-pair classification task. Blenderbot does not make use of token type ids, therefore a list of zeros is returned. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of zeros.
github-repos
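A quick sketch of what the method above returns; `tokenizer` stands for a hypothetical Blenderbot tokenizer instance and the token IDs are arbitrary.

# single sequence: cls + 3 ids + sep -> 5 zeros
tokenizer.create_token_type_ids_from_sequences([10, 11, 12])
# -> [0, 0, 0, 0, 0]

# sequence pair: cls + ids_0 + sep + sep + ids_1 + sep -> 8 zeros
tokenizer.create_token_type_ids_from_sequences([10, 11], [20, 21])
# -> [0, 0, 0, 0, 0, 0, 0, 0]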
def read_up_to(self, queue, num_records, name=None): if isinstance(queue, tensor_lib.Tensor): queue_ref = queue else: queue_ref = queue.queue_ref if self._reader_ref.dtype == dtypes.resource: return gen_io_ops.reader_read_up_to_v2(self._reader_ref, queue_ref, num_records, name=name) else: old_queue_op = gen_data_flow_ops.fake_queue(queue_ref) return gen_io_ops.reader_read_up_to(self._reader_ref, old_queue_op, num_records, name=name)
Returns up to num_records (key, value) pairs produced by a reader. Will dequeue a work unit from queue if necessary (e.g., when the Reader needs to start reading from a new file since it has finished with the previous file). It may return less than num_records even before the last batch. Args: queue: A Queue or a mutable string Tensor representing a handle to a Queue, with string work items. num_records: Number of records to read. name: A name for the operation (optional). Returns: A tuple of Tensors (keys, values). keys: A 1-D string Tensor. values: A 1-D string Tensor.
github-repos
def fetch(self, order_id, data={}, **kwargs): return super(Order, self).fetch(order_id, data, **kwargs)
Fetch Order for given Id Args: order_id : Id for which order object has to be retrieved Returns: Order dict for given order Id
juraj-google-style
def __init__(self, num_groups=2): if num_groups < 1: raise ValueError(f'Argument `num_groups` must be a positive integer. Received: num_groups={num_groups}') self._ready = threading.Condition(threading.Lock()) self._num_groups = num_groups self._group_member_counts = [0] * self._num_groups
Initialize a group lock. Args: num_groups: The number of groups that will be accessing the resource under consideration. Should be a positive number. Returns: A group lock that can then be used to synchronize code. Raises: ValueError: If num_groups is less than 1.
github-repos
def _on_connection_open(self, connection):
    _log.info('Successfully opened connection to %s', connection.params.host)
    self._channel = connection.channel(on_open_callback=self._on_channel_open)
Callback invoked when the connection is successfully established. Args: connection (pika.connection.SelectConnection): The newly-established connection.
codesearchnet
def find_pip(pip_version=None, python_version=None): pip_exe = 'pip' try: context = create_context(pip_version, python_version) except BuildError as e: from rez.backport.shutilwhich import which pip_exe = which('pip') if pip_exe: print_warning(("pip rez package could not be found; system 'pip' command (%s) will be used instead." % pip_exe)) context = None else: raise e return (pip_exe, context)
Find a pip exe using the given python version. Returns: 2-tuple: str: pip executable; `ResolvedContext`: Context containing pip, or None if we fell back to system pip.
codesearchnet
def GetEntries(self, parser_mediator, cache=None, database=None, **kwargs): if database is None: raise ValueError('Invalid database.') for table_name, callback_method in iter(self._tables.items()): if parser_mediator.abort: break if not callback_method: continue callback = getattr(self, callback_method, None) if callback is None: logger.warning( '[{0:s}] missing callback method: {1:s} for table: {2:s}'.format( self.NAME, callback_method, table_name)) continue esedb_table = database.get_table_by_name(table_name) if not esedb_table: logger.warning('[{0:s}] missing table: {1:s}'.format( self.NAME, table_name)) continue callback( parser_mediator, cache=cache, database=database, table=esedb_table, **kwargs)
Extracts event objects from the database. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. cache (Optional[ESEDBCache]): cache. database (Optional[pyesedb.file]): ESE database. Raises: ValueError: If the database attribute is not valid.
juraj-google-style
def read(cls, five9, external_id):
    results = cls.search(five9, {cls.__uid_field__: external_id})
    if not results:
        return None
    return results[0]
Return a record singleton for the ID. Args: five9 (five9.Five9): The authenticated Five9 remote. external_id (mixed): The identifier on Five9. This should be the value that is in the ``__uid_field__`` field on the record. Returns: BaseModel: The record, if found. Otherwise ``None``
codesearchnet
class PatchTSMixerNormLayer(nn.Module): def __init__(self, config: PatchTSMixerConfig): super().__init__() self.norm_mlp = config.norm_mlp if 'batch' in config.norm_mlp.lower(): self.norm = PatchTSMixerBatchNorm(config) else: self.norm = nn.LayerNorm(config.d_model, eps=config.norm_eps) def forward(self, inputs: torch.Tensor): if 'batch' in self.norm_mlp.lower(): inputs_reshaped = torch.reshape(inputs, (inputs.shape[0] * inputs.shape[1], inputs.shape[2], inputs.shape[3])) inputs_reshaped = self.norm(inputs_reshaped) inputs = torch.reshape(inputs_reshaped, inputs.shape) else: inputs = self.norm(inputs) return inputs
Normalization block Args: config (`PatchTSMixerConfig`): Configuration.
github-repos
def fill(self, text): def _fill(elem): elem.clear() elem.send_keys(text) self.map(_fill, u'fill({!r})'.format(text)).execute()
Set the text value of each matched element to `text`. Example usage: .. code:: python # Set the text of the first element matched by the query to "Foo" q.first.fill('Foo') Args: text (str): The text used to fill the element (usually a text field or text area). Returns: None
juraj-google-style
def get_operation_mtf_dimension_names(self, operation_name): mtf_dimension_names = set() for tensor_name in self.get_operation_input_names(operation_name): mtf_dimension_names.update(self.get_tensor_mtf_dimension_names(tensor_name)) for tensor_name in self.get_operation_output_names(operation_name): mtf_dimension_names.update(self.get_tensor_mtf_dimension_names(tensor_name)) return mtf_dimension_names
The Mesh TensorFlow dimensions associated with an operation. Args: operation_name: a string, name of an operation in the graph. Returns: a set(string), the names of Mesh TensorFlow dimensions.
codesearchnet
def list(self, **kwargs): resp = self.client.api.volumes(**kwargs) if (not resp.get('Volumes')): return [] return [self.prepare_model(obj) for obj in resp['Volumes']]
List volumes. Similar to the ``docker volume ls`` command. Args: filters (dict): Server-side list filtering options. Returns: (list of :py:class:`Volume`): The volumes. Raises: :py:class:`docker.errors.APIError` If the server returns an error.
codesearchnet
def isClose(x, y, relative_tolerance):
    if math.isnan(x) or math.isnan(y):
        return math.isnan(x) == math.isnan(y)
    if math.isinf(x) or math.isinf(y):
        return x == y
    return abs(x - y) <= relative_tolerance * max(abs(x), abs(y))
Returns True if x is close to y given the relative tolerance or if x and y are both inf, both -inf, or both NaNs. This function does not distinguish between signalling and non-signalling NaN. Args: x: float value to be compared y: float value to be compared relative_tolerance: float. The allowable difference between the two values being compared is determined by multiplying the relative tolerance by the maximum of the two values. If this is not provided, then all floats are compared using string comparison.
github-repos
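A few worked values for the comparison helper above (tolerances chosen only for illustration):

isClose(100.0, 100.4, relative_tolerance=0.01)    # True:  0.4 <= 0.01 * 100.4
isClose(100.0, 102.0, relative_tolerance=0.01)    # False: 2.0 >  0.01 * 102.0
isClose(float('nan'), float('nan'), 0.01)         # True:  both NaN
isClose(float('inf'), float('-inf'), 0.01)        # False: infinities must match exactly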
def add_positional_embedding_nd(x, max_length, name=None): with tf.name_scope('add_positional_embedding_nd'): x_shape = common_layers.shape_list(x) num_dims = (len(x_shape) - 2) depth = x_shape[(- 1)] base_shape = (([1] * (num_dims + 1)) + [depth]) base_start = ([0] * (num_dims + 2)) base_size = (([(- 1)] + ([1] * num_dims)) + [depth]) for i in range(num_dims): shape = base_shape[:] start = base_start[:] size = base_size[:] shape[(i + 1)] = max_length size[(i + 1)] = x_shape[(i + 1)] var = tf.get_variable((name + ('_%d' % i)), shape, initializer=tf.random_normal_initializer(0, (depth ** (- 0.5)))) var = (var * (depth ** 0.5)) x += tf.slice(var, start, size) return x
Adds n-dimensional positional embedding. The embeddings add to all positional dimensions of the tensor. Args: x: Tensor with shape [batch, p1 ... pn, depth]. It has n positional dimensions, i.e., 1 for text, 2 for images, 3 for video, etc. max_length: int representing static maximum size of any dimension. name: str representing name of the embedding tf.Variable. Returns: Tensor of same shape as x.
codesearchnet
def char_spacing(self, dots):
    if dots in range(0, 127):
        self.send(chr(27) + chr(32) + chr(dots))
    else:
        raise RuntimeError('Invalid dot amount in function charSpacing')
Specifies character spacing in dots. Args: dots: the character spacing you desire, in dots Returns: None Raises: RuntimeError: Invalid dot amount.
juraj-google-style
def __init__(self, key_or_key_list: Optional[Union[Any, List[Any]]]=None, parent: Optional['KeyPath']=None): if key_or_key_list is None: key_or_key_list = [] elif not isinstance(key_or_key_list, (tuple, list)): key_or_key_list = [key_or_key_list] keys = [] if parent: keys.extend(parent.keys) keys.extend(key_or_key_list) self._keys = keys self._path_str = None
Constructor. Args: key_or_key_list: A single object as key, or a list/tuple of objects as keys in the path. When string types or StrKey objects are used as key, dot ('.') is used as the delimiter, otherwise square brackets ('[]') is used as the delimiter when formatting a KeyPath. For object type key, str(object) will be used to represent the key in string form. parent: Parent KeyPath.
github-repos
def create_all(cls, list_of_kwargs):
    try:
        return cls.add_all([
            cls.new(**kwargs) if kwargs is not None else None
            for kwargs in list_of_kwargs])
    except:
        cls.session.rollback()
        raise
Batch method for creating a list of instances Args: list_of_kwargs(list of dicts): A list of dicts where each dict denotes the keyword args that you would pass to the create method separately Examples: >>> Customer.create_all([ ... {'name': 'Vicky', 'age': 34, 'user_id': 1}, ... {'name': 'Ron', 'age': 40, 'user_id': 1, 'gender': 'Male'}])
codesearchnet
def _compute_fans(shape):
    if len(shape) < 1:
        fan_in = fan_out = 1
    elif len(shape) == 1:
        fan_in = fan_out = shape[0]
    elif len(shape) == 2:
        fan_in = shape[0]
        fan_out = shape[1]
    else:
        receptive_field_size = 1
        for dim in shape[:-2]:
            receptive_field_size *= dim
        fan_in = shape[-2] * receptive_field_size
        fan_out = shape[-1] * receptive_field_size
    return (int(fan_in), int(fan_out))
Computes the number of input and output units for a weight shape. Args: shape: Integer shape tuple or TF tensor shape. Returns: A tuple of integer scalars (fan_in, fan_out).
github-repos
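A worked example of the fan computation above; the shapes are typical but arbitrary.

# 3x3 conv kernel, 16 input channels, 32 output channels:
# receptive_field_size = 3 * 3 = 9, fan_in = 16 * 9 = 144, fan_out = 32 * 9 = 288
_compute_fans((3, 3, 16, 32))    # -> (144, 288)

# a dense kernel falls into the len(shape) == 2 branch:
_compute_fans((128, 10))         # -> (128, 10)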
def _CanSkipDataStream(self, file_entry, data_stream): if file_entry.IsFile(): return False if data_stream.IsDefault(): return True return False
Determines if analysis and extraction of a data stream can be skipped. This is used to prevent Plaso trying to run analyzers or extract content from a pipe or socket it encounters while processing a mounted filesystem. Args: file_entry (dfvfs.FileEntry): file entry to consider for skipping. data_stream (dfvfs.DataStream): data stream to consider for skipping. Returns: bool: True if the data stream can be skipped.
codesearchnet
def get(self, name): interface = name if (not interface): raise ValueError('Vrrp.get(): interface must contain a value.') config = self.get_block(('interface %s' % interface)) if (config is None): return config match = set(re.findall('^\\s+(?:no |)vrrp (\\d+)', config, re.M)) if (not match): return None result = dict() for vrid in match: subd = dict() subd.update(self._parse_delay_reload(config, vrid)) subd.update(self._parse_description(config, vrid)) subd.update(self._parse_enable(config, vrid)) subd.update(self._parse_ip_version(config, vrid)) subd.update(self._parse_mac_addr_adv_interval(config, vrid)) subd.update(self._parse_preempt(config, vrid)) subd.update(self._parse_preempt_delay_min(config, vrid)) subd.update(self._parse_preempt_delay_reload(config, vrid)) subd.update(self._parse_primary_ip(config, vrid)) subd.update(self._parse_priority(config, vrid)) subd.update(self._parse_secondary_ip(config, vrid)) subd.update(self._parse_timers_advertise(config, vrid)) subd.update(self._parse_track(config, vrid)) subd.update(self._parse_bfd_ip(config, vrid)) result.update({int(vrid): subd}) return (result if result else None)
Get the vrrp configurations for a single node interface Args: name (string): The name of the interface for which vrrp configurations will be retrieved. Returns: A dictionary containing the vrrp configurations on the interface. Returns None if no vrrp configurations are defined or if the interface is not configured.
codesearchnet
def __init__(self, name, *value):
    self.name = name
    self.key = name
    self.value = name if len(value) != 1 else value[0]
    self.description = "Matches {!r} and maps it to {!r}".format(name, self.value)
Initialize Keywords Args: name -- keyword name value -- Optional value, otherwise name is used. value is set up as *value to detect whether the parameter is supplied, while still supporting None. If no value is supplied then name is used. If any value is supplied (even None), then that value is used instead.
juraj-google-style
def map_exp_ids(self, exp, positions=False):
    if positions:
        exp = [('%s_%s' % (self.indexed_string.word(x[0]),
                           '-'.join(map(str, self.indexed_string.string_position(x[0])))),
                x[1]) for x in exp]
    else:
        exp = [(self.indexed_string.word(x[0]), x[1]) for x in exp]
    return exp
Maps ids to words or word-position strings. Args: exp: list of tuples [(id, weight), (id, weight)] positions: if True, also return word positions Returns: list of tuples (word, weight), or (word_positions, weight) if positions=True. Examples: ('bad', 1) or ('bad_3-6-12', 1)
juraj-google-style
def bbox(lat, lon, dist): latr = math.radians(lat) lonr = math.radians(lon) rad = r_mm prad = rad * math.cos(latr) latd = dist / rad lond = dist / prad latmin = math.degrees(latr - latd) latmax = math.degrees(latr + latd) lonmin = math.degrees(lonr - lond) lonmax = math.degrees(lonr + lond) return (latmin, latmax, lonmin, lonmax)
Calculate a min/max bounding box for the circle defined by lalo/dist. Args: lat (float): The latitude in degrees lon (float): The longitude in degrees dist (int): A distance in geo:dist base units (mm) Returns: (float,float,float,float): (latmin, latmax, lonmin, lonmax)
juraj-google-style
def trading_dates(start, end, calendar='US'): kw = dict(start=pd.Timestamp(start, tz='UTC').date(), end=pd.Timestamp(end, tz='UTC').date()) us_cal = getattr(sys.modules[__name__], f'{calendar}TradingCalendar')() return pd.bdate_range(**kw).drop(us_cal.holidays(**kw))
Trading dates for given exchange Args: start: start date end: end date calendar: exchange as string Returns: pd.DatetimeIndex: datetime index Examples: >>> bus_dates = ['2018-12-24', '2018-12-26', '2018-12-27'] >>> trd_dates = trading_dates(start='2018-12-23', end='2018-12-27') >>> assert len(trd_dates) == len(bus_dates) >>> assert pd.Series(trd_dates == pd.DatetimeIndex(bus_dates)).all()
codesearchnet
def __init__(self, service_endpoint_uri=None):
    self._send_interval = 1.0
    self._send_remaining_time = 0
    self._send_time = 3.0
    self._lock_send_remaining_time = Lock()
    SenderBase.__init__(self, service_endpoint_uri or DEFAULT_ENDPOINT_URL)
Initializes a new instance of the class. Args: service_endpoint_uri (str): the address of the service to send telemetry data to.
juraj-google-style
def get_user(self, user): self.project_service.set_auth(self._token_project) return self.project_service.get_user(user)
Get user's data (first and last name, email, etc). Args: user (string): User name. Returns: (dictionary): User's data encoded in a dictionary. Raises: requests.HTTPError on failure.
juraj-google-style
def ReadByte(self, do_ord=True): try: if do_ord: return ord(self.stream.read(1)) return self.stream.read(1) except Exception as e: logger.error("ord expected character but got none") return 0
Read a single byte. Args: do_ord (bool): (default True) convert the byte to an ordinal first. Returns: bytes: a single byte if successful. 0 (int) if an exception occurred.
juraj-google-style
def work_request(self, worker_name, md5, subkeys=None): work_results = self._recursive_work_resolver(worker_name, md5) if subkeys: if isinstance(subkeys, str): subkeys = [subkeys] try: sub_results = {} for subkey in subkeys: tmp = work_results[worker_name] for key in subkey.split('.')[:-1]: tmp = tmp[key] key = subkey.split('.')[-1] if key == '*': for key in tmp.keys(): sub_results[key] = tmp[key] else: sub_results[key] = tmp[key] work_results = sub_results except (KeyError, TypeError): raise RuntimeError('Could not get one or more subkeys for: %s' % (work_results)) return self.data_store.clean_for_serialization(work_results)
Make a work request for an existing stored sample. Args: worker_name: 'strings', 'pe_features', whatever md5: the md5 of the sample (or sample_set!) subkeys: just get a subkey of the output: 'foo' or 'foo.bar' (None for all) Returns: The output of the worker.
juraj-google-style
def view(filepath): try: view_func = getattr(view, PLATFORM) except AttributeError: raise RuntimeError('platform %r not supported' % PLATFORM) view_func(filepath)
Open filepath with its default viewing application (platform-specific). Args: filepath: Path to the file to open in viewer. Raises: RuntimeError: If the current platform is not supported.
juraj-google-style
def GetAnalyzerInstance(cls, analyzer_name): analyzer_name = analyzer_name.lower() if analyzer_name not in cls._analyzer_classes: raise KeyError( 'analyzer class not set for name: {0:s}.'.format(analyzer_name)) analyzer_class = cls._analyzer_classes[analyzer_name] return analyzer_class()
Retrieves an instance of a specific analyzer. Args: analyzer_name (str): name of the analyzer to retrieve. Returns: BaseAnalyzer: analyzer instance. Raises: KeyError: if analyzer class is not set for the corresponding name.
juraj-google-style
def call(self, hidden_states: tf.Tensor, attention_mask: Optional[tf.Tensor]=None, position_ids: Optional[tf.Tensor]=None, past_key_value: Optional[Tuple[tf.Tensor]]=None, output_attentions: Optional[bool]=False, use_cache: Optional[bool]=False, **kwargs) -> Tuple[tf.Tensor, Optional[Tuple[tf.Tensor, tf.Tensor]]]: if 'padding_mask' in kwargs: warnings.warn('Passing `padding_mask` is deprecated and will be removed in v4.37. Please make sure use `attention_mask` instead.`') residual = hidden_states hidden_states = self.input_layernorm(hidden_states) hidden_states, self_attn_weights, present_key_value = self.self_attn(hidden_states=hidden_states, attention_mask=attention_mask, position_ids=position_ids, past_key_value=past_key_value, output_attentions=output_attentions, use_cache=use_cache) hidden_states = residual + hidden_states residual = hidden_states hidden_states = self.post_attention_layernorm(hidden_states) hidden_states = self.mlp(hidden_states) hidden_states = residual + hidden_states outputs = (hidden_states,) if output_attentions: outputs += (self_attn_weights,) if use_cache: outputs += (present_key_value,) return outputs
Args: hidden_states (`tf.Tensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`tf.Tensor`, *optional*): attention mask of size `(batch, sequence_length)` where padding elements are indicated by 0. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail. use_cache (`bool`, *optional*): If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see `past_key_values`). past_key_value (`Tuple(tf.Tensor)`, *optional*): cached past key and value projection states
github-repos
def load_snippet(self, name, package, config=None): if hasattr(self, name): raise SnippetError(self, 'Attribute "%s" already exists, please use a different name.' % name) self.services.snippets.add_snippet_client(name, package, config=config)
Starts the snippet apk with the given package name and connects. Examples: .. code-block:: python ad.load_snippet( name='maps', package='com.google.maps.snippets') ad.maps.activateZoom('3') Args: name: string, the attribute name to which to attach the snippet client. E.g. `name='maps'` attaches the snippet client to `ad.maps`. package: string, the package name of the snippet apk to connect to. config: snippet_client_v2.Config, the configuration object for controlling the snippet behaviors. See the docstring of the `Config` class for supported configurations. Raises: SnippetError: Illegal load operations are attempted.
github-repos
def period_end_day(self, value=None): if value is not None: try: value = str(value) except ValueError: raise ValueError('value {} need to be of type str ' 'for field `period_end_day`'.format(value)) if ',' in value: raise ValueError('value should not contain a comma ' 'for field `period_end_day`') self._period_end_day = value
Corresponds to IDD Field `period_end_day` Args: value (str): value for IDD Field `period_end_day` if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
juraj-google-style
def relu_density_logit(x, reduce_dims):
    frac = tf.reduce_mean(to_float(x > 0.0), reduce_dims)
    scaled = tf.log(frac + math.exp(-10)) - tf.log((1.0 - frac) + math.exp(-10))
    return scaled
logit(density(x)). Useful for histograms. Args: x: a Tensor, typically the output of tf.relu reduce_dims: a list of dimensions Returns: a Tensor
juraj-google-style
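A back-of-envelope check of the logit above, evaluated with plain Python for a single scalar fraction (assumed, not from the source):

import math

frac = 0.3    # suppose 30% of the relu outputs are positive along reduce_dims
math.log(frac + math.exp(-10)) - math.log((1.0 - frac) + math.exp(-10))
# ~ -0.847, i.e. logit(0.3)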
def decode_image_tokens(self, image_tokens: torch.LongTensor, height: int, width: int): sequences = image_tokens[:, :-3].view(-1, height, width + 1) image_tokens = self.vocabulary_mapping.convert_bpe2img(sequences) image = self.vqmodel.decode(image_tokens) return image
Decodes generated image tokens from language model to continuous pixel values with VQGAN module via upsampling. Args: image_tokens (`torch.LongTensor` of shape `(batch_size, num_of_tokens)`): The tensors corresponding to the input images. height (`int`): Height of the generated image before upsampling. width (`int`): Width of the generated image before upsampling.
github-repos
def __init__(self, contents): precondition.AssertOptionalType(contents, Text) self.contents = contents
Initialise the parser, presenting file contents to parse. Args: contents: file contents that are to be parsed.
juraj-google-style
def raster_binarization(given_value, rasterfilename):
    origin_raster = RasterUtilClass.read_raster(rasterfilename)
    binary_raster = numpy.where(origin_raster.data == given_value, 1, 0)
    return binary_raster
Make the raster into binarization. The opening and closing are based on binary images, therefore we need to binarize the raster first. Args: given_value: Pixels with the given value will be set to 1, all other pixels to 0. rasterfilename: The input raster filename. Returns: binary_raster: Raster after binarization.
codesearchnet
def _parse_error_message(self, message):
    msg = message['error']['message']
    code = message['error']['code']
    err = None
    out = None
    if 'data' in message['error']:
        err = ' '.join(message['error']['data'][-1]['errors'])
        out = message['error']['data']
    return (code, msg, err, out)
Parses the eAPI failure response message This method accepts an eAPI failure message and parses the necessary parts in order to generate a CommandError. Args: message (str): The error message to parse Returns: tuple: A tuple that consists of the following: * code: The error code specified in the failure message * message: The error text specified in the failure message * error: The error text from the command that generated the error (the last command that ran) * output: A list of all output from all commands
codesearchnet
def load_profiles_from_file(self, fqfn): if self.args.verbose: print('Loading profiles from File: {}{}{}'.format(c.Style.BRIGHT, c.Fore.MAGENTA, fqfn)) with open(fqfn, 'r+') as fh: data = json.load(fh) for profile in data: self.profile_update(profile) if (self.args.action == 'validate'): self.validate(profile) fh.seek(0) fh.write(json.dumps(data, indent=2, sort_keys=True)) fh.truncate() for d in data: if (d.get('profile_name') in self.profiles): self.handle_error('Found a duplicate profile name ({}).'.format(d.get('profile_name'))) self.profiles.setdefault(d.get('profile_name'), {'data': d, 'ij_filename': d.get('install_json'), 'fqfn': fqfn})
Load profiles from file. Args: fqfn (str): Fully qualified file name.
codesearchnet
def get_case_groups(adapter, total_cases, institute_id=None, slice_query=None): cases = [{'status': 'all', 'count': total_cases, 'percent': 1}] pipeline = [] group = {'$group' : {'_id': '$status', 'count': {'$sum': 1}}} subquery = {} if institute_id and slice_query: subquery = adapter.cases(owner=institute_id, name_query=slice_query, yield_query=True) elif institute_id: subquery = adapter.cases(owner=institute_id, yield_query=True) elif slice_query: subquery = adapter.cases(name_query=slice_query, yield_query=True) query = {'$match': subquery} if subquery else {} if query: pipeline.append(query) pipeline.append(group) res = adapter.case_collection.aggregate(pipeline) for status_group in res: cases.append({'status': status_group['_id'], 'count': status_group['count'], 'percent': status_group['count'] / total_cases}) return cases
Return the information about case groups Args: adapter(adapter.MongoAdapter) total_cases(int): Total number of cases institute_id(str): Institute to restrict the statistics to slice_query(str): Query to filter cases to obtain statistics for. Returns: cases(dict):
juraj-google-style
def create_storage_account(access_token, subscription_id, rgname, account_name, location, storage_type='Standard_LRS'): endpoint = ''.join([get_rm_endpoint(), '/subscriptions/', subscription_id, '/resourcegroups/', rgname, '/providers/Microsoft.Storage/storageAccounts/', account_name, '?api-version=', STORAGE_API]) storage_body = {'location': location} storage_body['sku'] = {'name': storage_type} storage_body['kind'] = 'Storage' body = json.dumps(storage_body) return do_put(endpoint, body, access_token)
Create a new storage account in the named resource group, with the named location. Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. rgname (str): Azure resource group name. account_name (str): Name of the new storage account. location (str): Azure data center location. E.g. westus. storage_type (str): Premium or Standard, local or globally redundant. Defaults to Standard_LRS. Returns: HTTP response. JSON body of storage account properties.
codesearchnet
def restore(self, sess, save_path): if self._saver is None: raise TensorForceError("register_saver_ops should be called before restore") self._saver.restore(sess=sess, save_path=save_path)
Restores the values of the managed variables from disk location. Args: sess: The session for which to save the managed variables. save_path: The path used to save the data to.
juraj-google-style
def CheckAccess(filename, clean_lines, linenum, nesting_state, error): line = clean_lines.elided[linenum] matched = Match((r'\s*(DISALLOW_COPY_AND_ASSIGN|' r'DISALLOW_IMPLICIT_CONSTRUCTORS)'), line) if not matched: return if nesting_state.stack and isinstance(nesting_state.stack[-1], _ClassInfo): if nesting_state.stack[-1].access != 'private': error(filename, linenum, 'readability/constructors', 3, '%s must be in the private: section' % matched.group(1)) else: pass
Checks for improper use of DISALLOW* macros. Args: filename: The name of the current file. clean_lines: A CleansedLines instance containing the file. linenum: The number of the line to check. nesting_state: A NestingState instance which maintains information about the current stack of nested blocks being parsed. error: The function to call with any errors found.
juraj-google-style
def saveplot(fig, *name_args, close=True, **name_kwargs): oname = out_name(*name_args, **name_kwargs) fig.savefig('{}.{}'.format(oname, conf.plot.format), format=conf.plot.format, bbox_inches='tight') if close: plt.close(fig)
Save matplotlib figure. You need to provide :data:`stem` as a positional or keyword argument (see :func:`out_name`). Args: fig (:class:`matplotlib.figure.Figure`): matplotlib figure. close (bool): whether to close the figure. name_args: positional arguments passed on to :func:`out_name`. name_kwargs: keyword arguments passed on to :func:`out_name`.
codesearchnet
def _parameterize_obj(obj):
    if isinstance(obj, Mapping):
        return dict((key, _parameterize_obj(value)) for (key, value) in obj.items())
    elif isinstance(obj, bytes):
        return _parameterize_string(obj.decode('utf8'))
    elif isinstance(obj, str):
        return _parameterize_string(obj)
    elif isinstance(obj, Sequence):
        return list(_parameterize_obj(item) for item in obj)
    else:
        return obj
Recursively parameterize all strings contained in an object. Parameterizes all values of a Mapping, all items of a Sequence, a unicode string, or passes other objects through unmodified. Byte strings will be interpreted as UTF-8. Args: obj: data to parameterize Return: A parameterized object to be included in a CloudFormation template. Mappings are converted to `dict`, Sequences are converted to `list`, and strings possibly replaced by compositions of function calls.
codesearchnet
def anm_score(self, x, y):
    gp = GaussianProcessRegressor().fit(x, y)
    y_predict = gp.predict(x)
    indepscore = normalized_hsic(y_predict - y, x)
    return indepscore
Compute the fitness score of the ANM model in the x->y direction. Args: x (numpy.ndarray): Variable seen as cause y (numpy.ndarray): Variable seen as effect Returns: float: ANM fit score
codesearchnet
def getStreamNetworkAsWkt(self, session, withNodes=True): wkt_list = [] for link in self.streamLinks: wkt_link = link.getAsWkt(session) if wkt_link: wkt_list.append(wkt_link) if withNodes: for node in link.nodes: wkt_node = node.getAsWkt(session) if wkt_node: wkt_list.append(wkt_node) return 'GEOMCOLLECTION ({0})'.format(', '.join(wkt_list))
Retrieve the stream network geometry in Well Known Text format. Args: session (:mod:`sqlalchemy.orm.session.Session`): SQLAlchemy session object bound to PostGIS enabled database withNodes (bool, optional): Include nodes. Defaults to True. Returns: str: Well Known Text string.
codesearchnet
def gaussian_noise(x, severity=1): c = [0.08, 0.12, 0.18, 0.26, 0.38][(severity - 1)] x = (np.array(x) / 255.0) x_clip = (np.clip((x + np.random.normal(size=x.shape, scale=c)), 0, 1) * 255) return around_and_astype(x_clip)
Gaussian noise corruption to images. Args: x: numpy array, uncorrupted image, assumed to have uint8 pixel in [0,255]. severity: integer, severity of corruption. Returns: numpy array, image with uint8 pixels in [0,255]. Added Gaussian noise.
codesearchnet
def _NoBlankLinesBeforeCurrentToken(text, cur_token, prev_token): cur_token_lineno = cur_token.lineno if cur_token.is_comment: cur_token_lineno -= cur_token.value.count('\n') num_newlines = text.count('\n') if not prev_token.is_comment else 0 return prev_token.lineno + num_newlines == cur_token_lineno - 1
Determine if there are no blank lines before the current token. The previous token is a docstring or comment. The prev_token_lineno is the start of the text of that token. Counting the number of newlines in its text gives us the extent and thus where the line number of the end of the docstring or comment. After that, we just compare it to the current token's line number to see if there are blank lines between them. Arguments: text: (unicode) The text of the docstring or comment before the current token. cur_token: (format_token.FormatToken) The current token in the logical line. prev_token: (format_token.FormatToken) The previous token in the logical line. Returns: True if there is no blank line before the current token.
github-repos
def search_artists_by_name(self, artist_name: str, limit: int=5) -> List[NameExternalIDPair]: response: requests.Response = requests.get(self._API_URL_TEMPLATE.format('search'), params={'q': artist_name, 'type': 'artist', 'limit': limit}, headers={'Authorization': 'Bearer {}'.format(self._token.access_token)}) response.raise_for_status() if (not response.text): return [] result: List[NameExternalIDPair] = [] data: List[Dict] = response.json()['artists']['items'] for artist in data: artist = NameExternalIDPair(artist['name'].strip(), artist['id'].strip()) if ((not artist.name) or (not artist.external_id)): raise SpotifyClientError('Name or ID is missing') result.append(artist) return result
Returns zero or more artist name - external ID pairs that match the specified artist name. Arguments: artist_name (str): The artist name to search in the Spotify API. limit (int): The maximum number of results to return. Returns: Zero or more artist name - external ID pairs. Raises: requests.HTTPError: If an HTTP error occurred during the request. SpotifyClientError: If an invalid item is found.
codesearchnet
def initialize_repository(path, spor_dir='.spor'): path = pathlib.Path(path) spor_path = path / spor_dir if spor_path.exists(): raise ValueError('spor directory already exists: {}'.format(spor_path)) spor_path.mkdir() return Repository(path, spor_dir)
Initialize a spor repository in `path` if one doesn't already exist. Args: path: Path to any file or directory within the repository. spor_dir: The name of the directory containing spor data. Returns: A `Repository` instance. Raises: ValueError: A repository already exists at `path`.
juraj-google-style
def __init__(self, buffer_size=8, max_workers=5, client=None, credential=None): self.buffer_size = buffer_size self.max_workers = max_workers self.client = client or DicomApiHttpClient() self.credential = credential
Initializes DicomSearch. Args: buffer_size: # type: Int. Size of the request buffer. max_workers: # type: Int. Maximum number of threads a worker can create. If it is set to one, all the request will be processed sequentially in a worker. client: # type: object. If it is specified, all the Api calls will made by this client instead of the default one (DicomApiHttpClient). credential: # type: Google credential object, if it is specified, the Http client will use it to create sessions instead of the default.
github-repos
def pause(self, device): resp = self.post('pause', params={'device': device}, return_response=True) error = resp.text if (not error): error = None return {'success': (resp.status_code == requests.codes.ok), 'error': error}
Pause the given device. Args: device (str): Device ID. Returns: dict: with keys ``success`` and ``error``.
codesearchnet
def shifted_centroid_distance(item_a, time_a, item_b, time_b, max_value): (ax, ay) = item_a.center_of_mass(time_a) (bx, by) = item_b.center_of_mass(time_b) if (time_a < time_b): bx = (bx - item_b.u) by = (by - item_b.v) else: ax = (ax - item_a.u) ay = (ay - item_a.v) return (np.minimum(np.sqrt((((ax - bx) ** 2) + ((ay - by) ** 2))), max_value) / float(max_value))
Centroid distance with motion corrections. Args: item_a: STObject from the first set in ObjectMatcher time_a: Time integer being evaluated item_b: STObject from the second set in ObjectMatcher time_b: Time integer being evaluated max_value: Maximum distance value used as scaling value and upper constraint. Returns: Distance value between 0 and 1.
codesearchnet
def _format_output(kernel_restart, packages, verbose, restartable, nonrestartable, restartservicecommands, restartinitcommands): if (not verbose): packages = (restartable + nonrestartable) if kernel_restart: packages.append('System restart required.') return packages else: ret = '' if kernel_restart: ret = 'System restart required.\n\n' if packages: ret += 'Found {0} processes using old versions of upgraded files.\n'.format(len(packages)) ret += 'These are the packages:\n' if restartable: ret += 'Of these, {0} seem to contain systemd service definitions or init scripts which can be used to restart them:\n'.format(len(restartable)) for package in restartable: ret += (package + ':\n') for program in packages[package]['processes']: ret += (program + '\n') if restartservicecommands: ret += '\n\nThese are the systemd services:\n' ret += '\n'.join(restartservicecommands) if restartinitcommands: ret += '\n\nThese are the initd scripts:\n' ret += '\n'.join(restartinitcommands) if nonrestartable: ret += '\n\nThese processes {0} do not seem to have an associated init script to restart them:\n'.format(len(nonrestartable)) for package in nonrestartable: ret += (package + ':\n') for program in packages[package]['processes']: ret += (program + '\n') return ret
Formats the output of the restartcheck module. Returns: String - formatted output. Args: kernel_restart: indicates that a newer kernel is installed packages: list of packages that should be restarted verbose: enables extensive output restartable: list of restartable packages nonrestartable: list of non-restartable packages restartservicecommands: list of commands to restart services restartinitcommands: list of commands to restart init.d scripts
codesearchnet
def get_cartesian_coords(self, fractional_coords: Vector3Like) -> np.ndarray: return dot(fractional_coords, self._matrix)
Returns the cartesian coordinates given fractional coordinates. Args: fractional_coords (3x1 array): Fractional coords. Returns: Cartesian coordinates
codesearchnet
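The conversion above is a row-vector times lattice-matrix product; a NumPy sketch of the same arithmetic with a made-up orthorhombic cell:

import numpy as np

lattice_matrix = np.array([[4.0, 0.0, 0.0],
                           [0.0, 5.0, 0.0],
                           [0.0, 0.0, 6.0]])    # rows are the lattice vectors a, b, c
frac = np.array([0.5, 0.5, 0.25])
cart = np.dot(frac, lattice_matrix)             # -> array([2. , 2.5, 1.5])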
def _set_advertising_data(self, packet_type, data): payload = struct.pack(('<BB%ss' % len(data)), packet_type, len(data), bytes(data)) response = self._send_command(6, 9, payload) (result,) = unpack('<H', response.payload) if (result != 0): return (False, {'reason': 'Error code from BLED112 setting advertising data', 'code': result}) return (True, None)
Set the advertising data for advertisements sent out by this bled112 Args: packet_type (int): 0 for advertisement, 1 for scan response data (bytearray): the data to set
codesearchnet
def _sign_operation(op): md5 = hashlib.md5() md5.update(op.consumerId.encode('utf-8')) md5.update(b'\x00') md5.update(op.operationName.encode('utf-8')) if op.labels: signing.add_dict_to_hash(md5, encoding.MessageToPyValue(op.labels)) return md5.digest()
Obtains a signature for an operation in a ReportRequest. Args: op (:class:`endpoints_management.gen.servicecontrol_v1_messages.Operation`): an operation used in a `ReportRequest` Returns: string: a unique signature for that operation
codesearchnet
def check_signature_supported(func, warn=False): function_name = func.__name__ sig_params = get_signature_params(func) has_kwargs_param = False has_kwonly_param = False for (keyword_name, parameter) in sig_params: if (parameter.kind == Parameter.VAR_KEYWORD): has_kwargs_param = True if (parameter.kind == Parameter.KEYWORD_ONLY): has_kwonly_param = True if has_kwargs_param: message = 'The function {} has a **kwargs argument, which is currently not supported.'.format(function_name) if warn: logger.warning(message) else: raise Exception(message) if has_kwonly_param: message = 'The function {} has a keyword only argument (defined after * or *args), which is currently not supported.'.format(function_name) if warn: logger.warning(message) else: raise Exception(message)
Check if we support the signature of this function. We currently do not allow remote functions to have **kwargs. We also do not support keyword arguments in conjunction with a *args argument. Args: func: The function whose signature should be checked. warn: If this is true, a warning will be printed if the signature is not supported. If it is false, an exception will be raised if the signature is not supported. Raises: Exception: An exception is raised if the signature is not supported.
codesearchnet
def __init__(self, yaml_definition=None): definitions_registry = registry.DataTypeDefinitionsRegistry() if yaml_definition: definitions_reader = reader.YAMLDataTypeDefinitionsFileReader() file_object = io.BytesIO(yaml_definition) definitions_reader.ReadFileObject(definitions_registry, file_object) super(DataTypeFabric, self).__init__(definitions_registry)
Initializes a data type fabric. Args: yaml_definition (str): YAML formatted data type definitions.
juraj-google-style
def get_mnemonics(self, mnemonics, uwis=None, alias=None): uwis = uwis or self.uwis wells = [w for w in self.__list if w.uwi in uwis] all_wells = [] for w in wells: this_well = [w.get_mnemonic(m, alias=alias) for m in mnemonics] all_wells.append(this_well) return all_wells
Looks at all the wells in turn and returns the highest thing in the alias table. Args: mnemonics (list) alias (dict) Returns: list. A list of lists.
juraj-google-style
def expect_exitstatus(self, exit_status): self.expect_end() logger.debug("Checking exit status of '{0}', output so far: {1}".format(self.name, self.get_output())) if (self._spawn.exitstatus is None): raise WrongExitStatusException(instance=self, expected=exit_status, output=self.get_output()) if (self._spawn.exitstatus is not exit_status): raise WrongExitStatusException(instance=self, expected=exit_status, got=self._spawn.exitstatus, output=self.get_output())
Wait for the running program to finish and expect some exit status. Args: exit_status (int): The expected exit status. Raises: WrongExitStatusException: The produced exit status is not the expected one.
codesearchnet
def score(text, *score_functions): if (not score_functions): raise ValueError('score_functions must not be empty') return statistics.mean((func(text) for func in score_functions))
Score ``text`` using ``score_functions``. Examples: >>> score("abc", function_a) >>> score("abc", function_a, function_b) Args: text (str): The text to score *score_functions (variable length argument list): functions to score with Returns: Arithmetic mean of scores Raises: ValueError: If score_functions is empty
codesearchnet
def is_disconnected(self, node_id): conn = self._conns.get(node_id) if (conn is None): return False return conn.disconnected()
Check whether the node connection has been disconnected or failed. A disconnected node has either been closed or has failed. Connection failures are usually transient and can be resumed in the next ready() call, but there are cases where transient failures need to be caught and re-acted upon. Arguments: node_id (int): the id of the node to check Returns: bool: True iff the node exists and is disconnected
codesearchnet
def _ParseStringOption(cls, options, argument_name, default_value=None): argument_value = getattr(options, argument_name, None) if argument_value is None: return default_value if isinstance(argument_value, py2to3.BYTES_TYPE): encoding = sys.stdin.encoding if not encoding: encoding = locale.getpreferredencoding() if not encoding: encoding = cls._PREFERRED_ENCODING try: argument_value = argument_value.decode(encoding) except UnicodeDecodeError as exception: raise errors.BadConfigOption(( 'Unable to convert option: {0:s} to Unicode with error: ' '{1!s}.').format(argument_name, exception)) elif not isinstance(argument_value, py2to3.UNICODE_TYPE): raise errors.BadConfigOption( 'Unsupported option: {0:s} string type required.'.format( argument_name)) return argument_value
Parses a string command line argument. Args: options (argparse.Namespace): parser options. argument_name (str): name of the command line argument. default_value (Optional[str]): default value of the command line argument. Returns: str: command line argument value or the default value if the command line argument is not set Raises: BadConfigOption: if the command line argument value cannot be converted to a Unicode string.
juraj-google-style
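A usage sketch, assuming the helper is a classmethod on a CLI tool class; the tool class name and option name are illustrative:

import argparse

options = argparse.Namespace(output_file='results.plaso')

# Classmethod call on the hypothetical tool class that defines the helper.
value = CLITool._ParseStringOption(options, 'output_file', default_value=None)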
def f(a=1, b=2): return a + b
Compute the sum. Args: a: an integer. b: another integer. Returns: Sum of two integers.
github-repos
def set(self, key, value):
    data = self._load_file()
    data[key] = value
    self._save_file(data)
Set the value of a key Args: key (string): The key used to store this value value (string): The value to store
codesearchnet
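A minimal sketch of the kind of class this method presumably belongs to; the JSON-file persistence is an assumption made only to show set() in context:

import json
import os

class KeyValueStore(object):
    # Hypothetical container for the set() method shown above.
    def __init__(self, path):
        self.path = path

    def _load_file(self):
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as handle:
            return json.load(handle)

    def _save_file(self, data):
        with open(self.path, 'w') as handle:
            json.dump(data, handle)

    def set(self, key, value):
        data = self._load_file()
        data[key] = value
        self._save_file(data)

store = KeyValueStore('settings.json')
store.set('api_token', 'abc123')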
def clean_program(self):
    program_id = self.cleaned_data[self.Fields.PROGRAM].strip()
    if not program_id:
        return None

    try:
        client = CourseCatalogApiClient(self._user, self._enterprise_customer.site)
        program = client.get_program_by_uuid(program_id) or client.get_program_by_title(program_id)
    except MultipleProgramMatchError as exc:
        raise ValidationError(ValidationMessages.MULTIPLE_PROGRAM_MATCH.format(
            program_count=exc.programs_matched))
    except (HttpClientError, HttpServerError):
        raise ValidationError(ValidationMessages.INVALID_PROGRAM_ID.format(program_id=program_id))

    if not program:
        raise ValidationError(ValidationMessages.INVALID_PROGRAM_ID.format(program_id=program_id))

    if program['status'] != ProgramStatuses.ACTIVE:
        raise ValidationError(ValidationMessages.PROGRAM_IS_INACTIVE.format(
            program_id=program_id, status=program['status']))

    return program
Clean program.

Try obtaining the program, treating the form value as a program UUID or title.

Returns:
    dict: Program information if the program is found
codesearchnet
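A hedged sketch of how this cleaner is exercised, assuming a standard Django form; the form class name, field name and the UUID value are illustrative:

# Hypothetical: clean_program() runs as part of normal form validation.
form = ProgramEnrollmentAdminForm(data={'program': '11111111-2222-3333-4444-555555555555'})
if form.is_valid():
    program_info = form.cleaned_data['program']   # dict returned by clean_program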
def get(name, *default):
    global g_config

    curr = g_config
    for part in name.split('.'):
        if part in curr:
            curr = curr[part]
        elif default:
            return default[0]
        else:
            raise AttributeError("Config value '{}' does not exist".format(name))

    return curr
Get config value with the given name and optional default.

Args:
    name (str): The name of the config value.
    *default (Any): If given and the key does not exist, this will be
        returned instead. If it's not given and the config value does not
        exist, AttributeError will be raised.

Returns:
    The requested config value. This is one of the global values defined in
    this file. If the value does not exist it will return `default` if given,
    or raise `AttributeError`.

Raises:
    AttributeError: If the value does not exist and `default` was not given.
codesearchnet
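A short example of the dotted-path lookup, assuming it runs in the same module so that g_config is the global dict the function reads:

g_config = {
    'db': {
        'host': 'localhost',
        'port': 5432,
    },
}

print(get('db.host'))          # -> 'localhost'
print(get('db.missing', 42))   # -> 42 (default, key does not exist)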
def get_all(cls, include_disabled=True): if cls == BaseAccount: raise InquisitorError('get_all on BaseAccount is not supported') account_type_id = db.AccountType.find_one(account_type=cls.account_type).account_type_id qry = db.Account.order_by(desc(Account.enabled), Account.account_type_id, Account.account_name) if not include_disabled: qry = qry.filter(Account.enabled == 1) accounts = qry.find(Account.account_type_id == account_type_id) return {res.account_id: cls(res) for res in accounts}
Returns all accounts of a given type.

Args:
    include_disabled (`bool`): Include disabled accounts. Default: `True`

Returns:
    `dict` of account objects, keyed by account ID
juraj-google-style
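A hedged usage sketch; the concrete account subclass name is an assumption:

# Hypothetical subclass of BaseAccount with account_type set, e.g. 'AWS'.
enabled_accounts = AWSAccount.get_all(include_disabled=False)
for account_id, account in enabled_accounts.items():
    print(account_id, account)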
def _op_to_matrix(self,
                  op: Optional[ops.Operation],
                  qubits: Tuple[ops.Qid, ...]) -> Optional[np.ndarray]:
    q1, q2 = qubits

    matrix = protocols.unitary(op, None)
    if matrix is None:
        return None

    assert op is not None
    if op.qubits == qubits:
        return matrix
    if op.qubits == (q2, q1):
        return MergeInteractions._flip_kron_order(matrix)
    if op.qubits == (q1,):
        return np.kron(matrix, np.eye(2))
    if op.qubits == (q2,):
        return np.kron(np.eye(2), matrix)

    return None
Determines the effect of an operation on the given qubits.

If the operation is a 1-qubit operation on one of the given qubits,
or a 2-qubit operation on both of the given qubits, and also the
operation has a known matrix, then a matrix is returned. Otherwise None
is returned.

Args:
    op: The operation to understand.
    qubits: The qubits we care about. Order determines matrix tensor order.

Returns:
    None, or else a matrix equivalent to the effect of the operation.
codesearchnet
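The kron ordering is the subtle part: with qubits ordered (q1, q2), a single-qubit matrix acting on q1 sits in the left tensor factor. A small standalone NumPy check of that convention (not Cirq code):

import numpy as np

X = np.array([[0, 1], [1, 0]])
I = np.eye(2)

on_q1 = np.kron(X, I)   # X acting on the first qubit of the pair
on_q2 = np.kron(I, X)   # X acting on the second qubit

# The two embeddings differ, which is why an operation reported on (q2, q1)
# needs its kron order flipped before matrices can be combined.
assert not np.allclose(on_q1, on_q2)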
def tf_step(self, time, variables, source_variables, **kwargs): assert all(util.shape(source) == util.shape(target) for source, target in zip(source_variables, variables)) last_sync = tf.get_variable( name='last-sync', shape=(), dtype=tf.int64, initializer=tf.constant_initializer(value=(-self.sync_frequency), dtype=tf.int64), trainable=False ) def sync(): deltas = list() for source_variable, target_variable in zip(source_variables, variables): delta = self.update_weight * (source_variable - target_variable) deltas.append(delta) applied = self.apply_step(variables=variables, deltas=deltas) last_sync_updated = last_sync.assign(value=time) with tf.control_dependencies(control_inputs=(applied, last_sync_updated)): return [delta + 0.0 for delta in deltas] def no_sync(): deltas = list() for variable in variables: delta = tf.zeros(shape=util.shape(variable)) deltas.append(delta) return deltas do_sync = (time - last_sync >= self.sync_frequency) return tf.cond(pred=do_sync, true_fn=sync, false_fn=no_sync)
Creates the TensorFlow operations for performing an optimization step. Args: time: Time tensor. variables: List of variables to optimize. source_variables: List of source variables to synchronize with. **kwargs: Additional arguments, not used. Returns: List of delta tensors corresponding to the updates for each optimized variable.
juraj-google-style
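The control flow is easier to see without the TensorFlow graph machinery; a plain-Python sketch of the same periodic soft-sync rule (it mirrors only the logic, not the in-graph variable updates, and the names are illustrative):

def soft_sync(step, targets, sources, last_sync, sync_frequency, update_weight):
    # Every `sync_frequency` steps, move each target a fraction `update_weight`
    # of the way towards its source; otherwise the deltas are zero.
    if step - last_sync >= sync_frequency:
        deltas = [update_weight * (s - t) for s, t in zip(sources, targets)]
        new_targets = [t + d for t, d in zip(targets, deltas)]
        return new_targets, deltas, step
    return list(targets), [0.0 for _ in targets], last_sync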
def encode_field(self, field, value): if (isinstance(field, messages.IntegerField) and field.variant in (messages.Variant.INT64, messages.Variant.UINT64, messages.Variant.SINT64)): if value not in (None, [], ()): if isinstance(value, list): value = [str(subvalue) for subvalue in value] else: value = str(value) return value return super(EndpointsProtoJson, self).encode_field(field, value)
Encode a python field value to a JSON value. Args: field: A ProtoRPC field instance. value: A python value supported by field. Returns: A JSON serializable value appropriate for field.
juraj-google-style
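A hedged sketch of the effect, assuming the standard ProtoRPC message API; the message class is hypothetical:

from protorpc import messages

class Snapshot(messages.Message):
    # Hypothetical message with a 64-bit integer field.
    byte_count = messages.IntegerField(1, variant=messages.Variant.INT64)

codec = EndpointsProtoJson()
print(codec.encode_message(Snapshot(byte_count=2**40)))
# 64-bit values are emitted as JSON strings, e.g. {"byte_count": "1099511627776"}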
def update(self, **kwargs): for arg in kwargs: if hasattr(self, arg): setattr(self, arg, kwargs[arg]) else: raise ValueError("Invalid RayParams parameter in" " update: %s" % arg) self._check_usage()
Update the settings according to the keyword arguments. Args: kwargs: The keyword arguments to set corresponding fields.
juraj-google-style
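A hedged usage sketch; the field names are assumptions about what a RayParams-style object exposes:

# Hypothetical parameter object with e.g. num_cpus and object_store_memory fields.
params = RayParams(num_cpus=4)
params.update(num_cpus=8, object_store_memory=10**9)

# An unknown keyword raises ValueError:
# params.update(not_a_field=1)  # -> ValueError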
def forward(self, hidden_states: torch.Tensor, cu_seqlens: torch.Tensor) -> torch.Tensor: residual = hidden_states hidden_states = self.self_attn_layer_norm(hidden_states) hidden_states = self.self_attn(hidden_states=hidden_states, cu_seqlens=cu_seqlens) hidden_states = residual + hidden_states residual = hidden_states hidden_states = self.final_layer_norm(hidden_states) hidden_states = self.fc1(hidden_states) hidden_states = self.activation_fn(hidden_states) hidden_states = self.fc2(hidden_states) hidden_states = residual + hidden_states if hidden_states.dtype == torch.float16: clamp_value = torch.finfo(hidden_states.dtype).max - 1000 hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value) outputs = (hidden_states,) return outputs
Args:
    hidden_states (`torch.FloatTensor`):
        Input to the layer of shape `(batch, seq_len, embed_dim)`.
    cu_seqlens (`torch.Tensor`):
        Cumulative sequence lengths delimiting the sequences packed into the
        batch, as consumed by the variable-length self-attention.
github-repos
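cu_seqlens is the usual cumulative-sequence-lengths tensor used by variable-length attention kernels; a sketch of how it could be built for a packed batch (the exact layout and dtype expected by this layer are assumptions):

import torch

# Three sequences of lengths 5, 3 and 7 packed one after another.
lengths = torch.tensor([5, 3, 7])
cu_seqlens = torch.cat([torch.zeros(1, dtype=torch.int32),
                        lengths.cumsum(0).to(torch.int32)])
# cu_seqlens == tensor([0, 5, 8, 15], dtype=torch.int32)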
def _RegisterFlagByModuleId(self, module_id, flag): flags_by_module_id = self.FlagsByModuleIdDict() flags_by_module_id.setdefault(module_id, []).append(flag)
Records the module that defines a specific flag. Args: module_id: An int, the ID of the Python module. flag: A Flag object, a flag that is key to the module.
juraj-google-style
def response_list(data, key): if key not in data: return None if isinstance(data[key], list): return data[key] else: return [data[key],]
Obtain the relevant response data in a list.

If the response does not already contain the result in a list, a new one
will be created to ease iteration in the parser methods.

Args:
    data (dict): API response.
    key (str): Attribute of the response that contains the result values.

Returns:
    List of response items (usually dict) or None if the key is not present.
juraj-google-style
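A self-contained example showing the single-item, list and missing-key cases:

single = {'station': {'id': 1}}
several = {'station': [{'id': 1}, {'id': 2}]}

print(response_list(single, 'station'))    # -> [{'id': 1}]
print(response_list(several, 'station'))   # -> [{'id': 1}, {'id': 2}]
print(response_list(single, 'missing'))    # -> None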
def set_servo_speed(self, goalspeed, led):
    if goalspeed > 0:
        goalspeed_msb = (int(goalspeed) & 0xFF00) >> 8
        goalspeed_lsb = int(goalspeed) & 0xFF
    elif goalspeed < 0:
        goalspeed_msb = 64 + (255 - ((int(goalspeed) & 0xFF00) >> 8))
        goalspeed_lsb = abs(goalspeed) & 0xFF
    else:
        # goalspeed == 0 previously left both bytes undefined (NameError); stop instead.
        goalspeed_msb = 0
        goalspeed_lsb = 0

    data = []
    data.append(0x0C)
    data.append(self.servoid)
    data.append(I_JOG_REQ)
    data.append(goalspeed_lsb)
    data.append(goalspeed_msb)
    data.append(0x02 | led)
    data.append(self.servoid)
    data.append(0)
    send_data(data)
Set the Herkulex in continuous rotation mode.

Args:
    goalspeed (int): the speed, range -1023 to 1023
    led (int): the LED color
        0x00 LED off
        0x04 GREEN
        0x08 BLUE
        0x10 RED
codesearchnet
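A hedged usage sketch; the servo class name, its constructor and the connection setup are assumptions:

# Hypothetical servo object with ID 0x01 on an already-opened serial link.
servo = Servo(0x01)
servo.set_servo_speed(300, 0x04)    # rotate one way, green LED
servo.set_servo_speed(-300, 0x08)   # rotate the other way, blue LED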
def add_filter(ds, patterns): if not plugins.is_datasource(ds): raise Exception("Filters are applicable only to datasources.") delegate = dr.get_delegate(ds) if delegate.raw: raise Exception("Filters aren't applicable to raw datasources.") if not delegate.filterable: raise Exception("Filters aren't applicable to %s." % dr.get_name(ds)) if ds in _CACHE: del _CACHE[ds] if isinstance(patterns, six.string_types): FILTERS[ds].add(patterns) elif isinstance(patterns, list): FILTERS[ds] |= set(patterns) elif isinstance(patterns, set): FILTERS[ds] |= patterns else: raise TypeError("patterns must be string, list, or set.")
Add a filter or list of filters to a datasource. A filter is a simple
string, and it matches if it is contained anywhere within a line.

Args:
    ds (@datasource component): The datasource to filter
    patterns (str, [str]): A string, list of strings, or set of strings to
        add to the datasource's filters.
juraj-google-style
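A hedged usage sketch in the insights-core style; Specs.messages stands in for whichever filterable datasource is being narrowed:

# Hypothetical: only keep lines containing these substrings for the
# 'messages' datasource.
add_filter(Specs.messages, ["ERROR", "Traceback"])
add_filter(Specs.messages, "kernel:")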
def _ReadStringDataTypeDefinition(
        self, definitions_registry, definition_values, definition_name,
        is_member=False):
    if is_member:
        supported_definition_values = (
            self._SUPPORTED_DEFINITION_VALUES_STRING_MEMBER)
    else:
        supported_definition_values = self._SUPPORTED_DEFINITION_VALUES_STRING

    definition_object = self._ReadElementSequenceDataTypeDefinition(
        definitions_registry, definition_values, data_types.StringDefinition,
        definition_name, supported_definition_values)

    encoding = definition_values.get('encoding', None)
    if not encoding:
        error_message = 'missing encoding'
        raise errors.DefinitionReaderError(definition_name, error_message)

    definition_object.encoding = encoding

    return definition_object
Reads a string data type definition.

Args:
    definitions_registry (DataTypeDefinitionsRegistry): data type definitions
        registry.
    definition_values (dict[str, object]): definition values.
    definition_name (str): name of the definition.
    is_member (Optional[bool]): True if the data type definition is a member
        data type definition.

Returns:
    StringDefinition: string data type definition.

Raises:
    DefinitionReaderError: if the definitions values are missing or if the
        format is incorrect.
codesearchnet
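For context, a sketch of the kind of YAML string definition such a reader would accept, given here as the bytes a definitions file reader would consume; apart from 'encoding', which the code above requires, the attribute names are assumptions about the definition format:

# Illustrative only: a string data type definition with an explicit encoding.
yaml_definition = b"""\
name: utf8_string
type: string
encoding: utf-8
element_data_type: char
elements_terminator: "\\x00"
"""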