Columns: code (string, lengths 20 to 4.93k), docstring (string, lengths 33 to 1.27k), source (string, 3 classes).
def create_box_field(self, box_key, name, field_type, **kwargs): self._raise_unimplemented_error() uri = '/'.join([self.api_uri, self.boxes_suffix, box_key, self.fields_suffix ]) code, data = self._create_field(uri, name, field_type, **kwargs) return code, data
Creates a box field with the provided attributes. Args: box_key: key specifying the box to add the field to. name: required name string. field_type: required type string [TEXT_INPUT, DATE or PERSON]. kwargs: optional keyword arguments. Returns: (status code, field dict)
juraj-google-style
def InsertNodesAfter(new_nodes, target): for node in reversed(new_nodes): _InsertNodeAt(node, target, after=True)
Insert new_nodes after the given target location in the tree. Arguments: new_nodes: a sequence of new nodes to insert (the nodes should not be in the tree). target: the target node after which the new nodes will be inserted. Raises: RuntimeError: if the tree is corrupted, or the insertion would corrupt it.
github-repos
def get(self, key, index=None): records = self.get_multi([key], index=index) try: return records[0][1] except IndexError: return None
Retrieves a value associated with a key from the database. Args: key (str): The key to retrieve. index: (optional) index to query against. Returns: The stored value, or None if the key was not found.
juraj-google-style
def FillDeviceCapabilities(device, descriptor): preparsed_data = PHIDP_PREPARSED_DATA(0) ret = hid.HidD_GetPreparsedData(device, ctypes.byref(preparsed_data)) if (not ret): raise ctypes.WinError() try: caps = HidCapabilities() ret = hid.HidP_GetCaps(preparsed_data, ctypes.byref(caps)) if (ret != HIDP_STATUS_SUCCESS): raise ctypes.WinError() descriptor.usage = caps.Usage descriptor.usage_page = caps.UsagePage descriptor.internal_max_in_report_len = caps.InputReportByteLength descriptor.internal_max_out_report_len = caps.OutputReportByteLength finally: hid.HidD_FreePreparsedData(preparsed_data)
Fill out device capabilities. Fills the HidCapabilities of the device into descriptor. Args: device: A handle to the open device descriptor: DeviceDescriptor to populate with the capabilities Returns: None Raises: WindowsError when unable to obtain capabilities.
codesearchnet
def wont_implement_method(base_type, name, reason=None, explanation=None): if reason is not None: if reason not in _WONT_IMPLEMENT_REASONS: raise AssertionError(f'reason must be one of {list(_WONT_IMPLEMENT_REASONS.keys())}, got {reason!r}') reason_data = _WONT_IMPLEMENT_REASONS[reason] elif explanation is not None: reason_data = {'explanation': explanation} else: raise ValueError('One of (reason, explanation) must be specified') def wrapper(*args, **kwargs): raise WontImplementError(f"'{name}' is not yet supported {reason_data['explanation']}", reason=reason) wrapper.__name__ = name wrapper.__doc__ = f":meth:`{_prettify_pandas_type(base_type)}.{name}` is not yet supported in the Beam DataFrame API {reason_data['explanation']}" if 'url' in reason_data: wrapper.__doc__ += f"\n\n For more information see {reason_data['url']}." return wrapper
Generate a stub method that raises WontImplementError. Note either reason or explanation must be specified. If both are specified, explanation is ignored. Args: base_type: The pandas type of the method that this is trying to replicate. name: The name of the method that this is aiming to replicate. reason: If specified, use data from the corresponding entry in ``_WONT_IMPLEMENT_REASONS`` to generate a helpful exception message and docstring for the method. explanation: If specified, use this string as an explanation for why this operation is not supported when generating an exception message and docstring.
github-repos
def path(self, path): url = furl(self._request.rawurl) url.path = path self._request.url = url.url self.add_matcher(matcher('PathMatcher', path))
Defines a URL path to match. Only call this method if the URL has no path already defined. Arguments: path (str): URL path value to match. E.g: ``/api/users``. Returns: self: current Mock instance.
juraj-google-style
def _ParseAccountsData(self, account_data): if (not account_data): return {} lines = [line for line in account_data.splitlines() if line] user_map = {} for line in lines: if (not all(((ord(c) < 128) for c in line))): self.logger.info('SSH key contains non-ascii character: %s.', line) continue split_line = line.split(':', 1) if (len(split_line) != 2): self.logger.info('SSH key is not a complete entry: %s.', split_line) continue (user, key) = split_line if self._HasExpired(key): self.logger.debug('Expired SSH key for user %s: %s.', user, key) continue if (user not in user_map): user_map[user] = [] user_map[user].append(key) logging.debug('User accounts: %s.', user_map) return user_map
Parse the SSH key data into a user map. Args: account_data: string, the metadata server SSH key attributes data. Returns: dict, a mapping of the form: {'username': ['sshkey1', 'sshkey2', ...]}.
codesearchnet
def action_scope(self, action_fluents: Sequence[tf.Tensor]) -> Dict[(str, TensorFluent)]: return dict(zip(self.rddl.domain.action_fluent_ordering, action_fluents))
Returns a partial scope with current action-fluents. Args: action_fluents (Sequence[tf.Tensor]): The action fluents. Returns: A mapping from action fluent names to :obj:`rddl2tf.fluent.TensorFluent`.
codesearchnet
def make_sample_her_transitions(replay_strategy, replay_k, reward_fun): if replay_strategy == 'future': future_p = 1 - (1. / (1 + replay_k)) else: future_p = 0 def _sample_her_transitions(episode_batch, batch_size_in_transitions): T = episode_batch['u'].shape[1] rollout_batch_size = episode_batch['u'].shape[0] batch_size = batch_size_in_transitions episode_idxs = np.random.randint(0, rollout_batch_size, batch_size) t_samples = np.random.randint(T, size=batch_size) transitions = {key: episode_batch[key][episode_idxs, t_samples].copy() for key in episode_batch.keys()} her_indexes = np.where(np.random.uniform(size=batch_size) < future_p) future_offset = np.random.uniform(size=batch_size) * (T - t_samples) future_offset = future_offset.astype(int) future_t = (t_samples + 1 + future_offset)[her_indexes] future_ag = episode_batch['ag'][episode_idxs[her_indexes], future_t] transitions['g'][her_indexes] = future_ag info = {} for key, value in transitions.items(): if key.startswith('info_'): info[key.replace('info_', '')] = value reward_params = {k: transitions[k] for k in ['ag_2', 'g']} reward_params['info'] = info transitions['r'] = reward_fun(**reward_params) transitions = {k: transitions[k].reshape(batch_size, *transitions[k].shape[1:]) for k in transitions.keys()} assert(transitions['u'].shape[0] == batch_size_in_transitions) return transitions return _sample_her_transitions
Creates a sample function that can be used for HER experience replay. Args: replay_strategy (in ['future', 'none']): the HER replay strategy; if set to 'none', regular DDPG experience replay is used replay_k (int): the ratio between HER replays and regular replays (e.g. k = 4 -> 4 times as many HER replays as regular replays are used) reward_fun (function): function to re-compute the reward with substituted goals
juraj-google-style
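A minimal, self-contained sketch of the future-goal index math used by the sampler above, with only NumPy and synthetic shapes (the rollout sizes and probability here are made up for illustration):

import numpy as np

rollout_batch_size, T, batch_size, future_p = 5, 10, 8, 0.8
rng = np.random.default_rng(0)

episode_idxs = rng.integers(0, rollout_batch_size, batch_size)
t_samples = rng.integers(0, T, batch_size)

# Decide which sampled transitions receive a HER goal substitution.
her_indexes = np.where(rng.uniform(size=batch_size) < future_p)

# Sample a "future" timestep strictly after t_samples for those transitions.
future_offset = (rng.uniform(size=batch_size) * (T - t_samples)).astype(int)
future_t = (t_samples + 1 + future_offset)[her_indexes]

print(t_samples[her_indexes])
print(future_t)  # every entry is strictly greater than the matching t_samples entry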
def _UpdateAuthorizedKeys(self, user, ssh_keys): pw_entry = self._GetUser(user) if (not pw_entry): return uid = pw_entry.pw_uid gid = pw_entry.pw_gid home_dir = pw_entry.pw_dir ssh_dir = os.path.join(home_dir, '.ssh') authorized_keys_file = os.path.join(ssh_dir, 'authorized_keys') if (os.path.islink(ssh_dir) or os.path.islink(authorized_keys_file)): self.logger.warning('Not updating authorized keys for user %s. File is a symlink.', user) return if (not os.path.exists(home_dir)): file_utils.SetPermissions(home_dir, mode=493, uid=uid, gid=gid, mkdir=True) file_utils.SetPermissions(ssh_dir, mode=448, uid=uid, gid=gid, mkdir=True) prefix = (self.logger.name + '-') with tempfile.NamedTemporaryFile(mode='w', prefix=prefix, delete=True) as updated_keys: updated_keys_file = updated_keys.name if os.path.exists(authorized_keys_file): lines = open(authorized_keys_file).readlines() else: lines = [] google_lines = set() for (i, line) in enumerate(lines): if line.startswith(self.google_comment): google_lines.update([i, (i + 1)]) for (i, line) in enumerate(lines): if ((i not in google_lines) and line): line += ('\n' if (not line.endswith('\n')) else '') updated_keys.write(line) for ssh_key in ssh_keys: ssh_key += ('\n' if (not ssh_key.endswith('\n')) else '') updated_keys.write(('%s\n' % self.google_comment)) updated_keys.write(ssh_key) updated_keys.flush() shutil.copy(updated_keys_file, authorized_keys_file) file_utils.SetPermissions(authorized_keys_file, mode=384, uid=uid, gid=gid)
Update the authorized keys file for a Linux user with a list of SSH keys. Args: user: string, the name of the Linux user account. ssh_keys: list, the SSH key strings associated with the user. Raises: IOError, raised when there is an exception updating a file. OSError, raised when setting permissions or writing to a read-only file system.
codesearchnet
def baseline_optimizer_arguments(self, states, internals, reward): arguments = dict(time=self.global_timestep, variables=self.baseline.get_variables(), arguments=dict(states=states, internals=internals, reward=reward, update=tf.constant(value=True)), fn_reference=self.baseline.reference, fn_loss=self.fn_baseline_loss) if (self.global_model is not None): arguments['global_variables'] = self.global_model.baseline.get_variables() return arguments
Returns the baseline optimizer arguments including the time, the list of variables to optimize, and various functions which the optimizer might require to perform an update step. Args: states: Dict of state tensors. internals: List of prior internal state tensors. reward: Reward tensor. Returns: Baseline optimizer arguments as dict.
codesearchnet
def error_buckets(gold, pred, X=None): buckets = defaultdict(list) gold = arraylike_to_numpy(gold) pred = arraylike_to_numpy(pred) for (i, (y, l)) in enumerate(zip(pred, gold)): buckets[(y, l)].append((X[i] if (X is not None) else i)) return buckets
Group items by error buckets Args: gold: an array-like of gold labels (ints) pred: an array-like of predictions (ints) X: an iterable of items Returns: buckets: A dict of items where buckets[i,j] is a list of items with predicted label i and true label j. If X is None, return indices instead. For a binary problem with (1=positive, 2=negative): buckets[1,1] = true positives buckets[1,2] = false positives buckets[2,1] = false negatives buckets[2,2] = true negatives
codesearchnet
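A self-contained sketch of the same bucketing idea on a tiny binary problem; with no X supplied the indices are stored, matching the docstring's buckets[i, j] convention:

from collections import defaultdict

gold = [1, 1, 2, 2, 1]
pred = [1, 2, 2, 1, 1]

buckets = defaultdict(list)
for i, (y, l) in enumerate(zip(pred, gold)):
    # Key is (predicted label, true label); value is the item index.
    buckets[(y, l)].append(i)

print(dict(buckets))
# {(1, 1): [0, 4], (2, 1): [1], (2, 2): [2], (1, 2): [3]}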
def __call__(self, *args, **kwargs): def replica_local_fn(*args, **kwargs): if any((isinstance(arg, keras_tensor.KerasTensor) for arg in nest.flatten((args, kwargs)))): update_op = None else: update_op = self.update_state(*args, **kwargs) update_ops = [] if update_op is not None: update_ops.append(update_op) with ops.control_dependencies(update_ops): result_t = self.result() result_t._metric_obj = self return result_t from tensorflow.python.keras.distribute import distributed_training_utils return distributed_training_utils.call_replica_local_fn(replica_local_fn, *args, **kwargs)
Accumulates statistics and then computes metric result value. Args: *args, **kwargs: A mini-batch of inputs to the Metric, passed on to `update_state()`. Returns: The metric value tensor.
github-repos
def ReleaseFileSystem(self, file_system): (identifier, cache_value) = self._file_system_cache.GetCacheValueByObject(file_system) if (not identifier): raise RuntimeError('Object not cached.') if (not cache_value): raise RuntimeError('Invalid cache value.') self._file_system_cache.ReleaseObject(identifier) result = cache_value.IsDereferenced() if result: self._file_system_cache.RemoveObject(identifier) return result
Releases a cached file system object. Args: file_system (FileSystem): file system object. Returns: bool: True if the file system object can be closed. Raises: PathSpecError: if the path specification is incorrect. RuntimeError: if the file system object is not cached or an inconsistency is detected in the cache.
codesearchnet
def _get_required_container_version(): if 'dev' in beam_version.__version__: return names.BEAM_DEV_SDK_CONTAINER_TAG else: return _get_container_image_tag()
For internal use only; no backwards-compatibility guarantees. Returns: str: The tag of worker container images in GCR that corresponds to current version of the SDK.
github-repos
def _rename_if_any_arg_found_transformer(parent, node, full_name, name, logs, arg_names=None, arg_ok_predicate=None, remove_if_ok=False, message=None): for arg_name in arg_names: rename_node = _rename_if_arg_found_transformer(parent, node, full_name, name, logs, arg_name, arg_ok_predicate, remove_if_ok, message) node = rename_node if rename_node else node return node
Replaces the given call with tf.compat.v1 if any of the arg_names is found. Args: parent: Parent of node. node: ast.Call node to modify. full_name: full name of function to modify. name: name of function to modify. logs: list of logs to append to. arg_names: list of names of the argument to look for. arg_ok_predicate: predicate callable with the ast of the argument value, returns whether the argument value is allowed. remove_if_ok: remove the argument if present and ok as determined by arg_ok_predicate. message: message to print if a non-ok arg is found (and hence, the function is renamed to its compat.v1 version). Returns: node, if it was modified, else None.
github-repos
def __call__(cls, *args, **kwargs): if cls.instance is None: with threading.Lock(): if cls.instance is None: cls.instance = super(Singleton, cls).__call__(*args, **kwargs) return cls.instance
Return singleton instance. Args: cls (type): the class. args (tuple/list): initializer function arguments. kwargs (dict): initializer function keyword arguments.
juraj-google-style
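The __call__ above is typically defined on a metaclass; a self-contained sketch of how such a metaclass is used (the Config class and its argument are illustrative, not from the source). Note that creating a fresh threading.Lock per call, as the original does, does not actually serialize concurrent callers:

import threading

class Singleton(type):
    instance = None

    def __call__(cls, *args, **kwargs):
        if cls.instance is None:
            with threading.Lock():  # a new lock each call; kept to mirror the original
                if cls.instance is None:
                    cls.instance = super(Singleton, cls).__call__(*args, **kwargs)
        return cls.instance

class Config(metaclass=Singleton):
    def __init__(self, name):
        self.name = name

a = Config("first")
b = Config("second")
print(a is b, a.name)  # True first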
def multi_post(self, urls, query_params=None, data=None, to_json=True, send_as_file=False): return self._multi_request(MultiRequest._VERB_POST, urls, query_params, data, to_json=to_json, send_as_file=send_as_file)
Issue multiple POST requests. Args: urls - A string URL or list of string URLs query_params - None, a dict, or a list of dicts representing the query params data - None, a dict or string, or a list of dicts and strings representing the data body. to_json - A boolean, should the responses be returned as JSON blobs send_as_file - A boolean, should the data be sent as a file. Returns: a list of dicts if to_json is set of requests.response otherwise. Raises: InvalidRequestError - Can not decide how many requests to issue.
codesearchnet
def external_ids(self, **kwargs): path = self._get_series_id_season_number_episode_number_path('external_ids') response = self._GET(path, kwargs) self._set_attrs_to_values(response) return response
Get the external ids for a TV episode by combination of a season and episode number. Args: language: (optional) ISO 639 code. Returns: A dict representation of the JSON returned from the API.
codesearchnet
def draw(self, current_time, frame_time): self.set_default_viewport() self.timeline.draw(current_time, frame_time, self.fbo)
Draws a frame. Internally it calls the configured timeline's draw method. Args: current_time (float): The current time (preferably always from the configured timer class) frame_time (float): The duration of the previous frame in seconds
codesearchnet
def ensure_dir(path): dirpath = os.path.dirname(path) if (dirpath and (not os.path.exists(dirpath))): os.makedirs(dirpath)
Ensure the directory for a path exists. Args: path (str): file path whose parent directory is created if missing.
codesearchnet
def disconnect_async(self, connection_id, callback): try: context = self.connections.get_context(connection_id) except ArgumentError: callback(connection_id, self.id, False, "Could not find connection information") return self.connections.begin_disconnection(connection_id, callback, self.get_config('default_timeout')) self.bable.disconnect( connection_handle=context['connection_handle'], on_disconnected=[self._on_disconnection_finished, context] )
Asynchronously disconnect from a device that has previously been connected Args: connection_id (int): A unique identifier for this connection on the DeviceManager that owns this adapter. callback (callable): A function called as callback(connection_id, adapter_id, success, failure_reason) when the disconnection finishes. Disconnection can only either succeed or timeout.
juraj-google-style
def _build_param_string(params): pairs = [] for key, value in params.iteritems(): if value is None: value = '' pairs.append('{0}={1}'.format(key, value)) if len(pairs) > 0: return '?{0}'.format('&'.join(pairs)) return ''
Build query params string from a dictionary. Args: params (dict): A dictionary of params Returns: string: A valid url query params string.
juraj-google-style
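The helper above relies on Python 2's dict.iteritems(); a hedged Python 3 sketch of the same behaviour (None values become empty strings), renamed to make clear it is not the original implementation:

def build_param_string(params):
    # Python 3 variant of the helper above, for illustration only.
    pairs = ['{0}={1}'.format(k, '' if v is None else v) for k, v in params.items()]
    return '?' + '&'.join(pairs) if pairs else ''

print(build_param_string({'page': 2, 'q': None}))  # ?page=2&q=
print(build_param_string({}))                      # empty string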
def delete_credit_card(self, *, customer_id, credit_card_id): fmt = 'customers/{}/creditCards/{}'.format(customer_id, credit_card_id) return self.client._delete(self.url + fmt, headers=self.get_headers())
Delete a credit card (Token) associated with a user. Args: customer_id: Identifier of the client of whom you are going to delete the token. credit_card_id: Identifier of the token to be deleted. Returns:
juraj-google-style
def find_slot(self, wanted, slots=None): for slot in self.find_slots(wanted, slots): return slot return None
Searches the given slots or, if not given, the active hotbar slot, hotbar, inventory, and open window, in this order. Args: wanted: function(Slot) or Slot or itemID or (itemID, metadata) Returns: Optional[Slot]: The first slot containing the item or None if not found.
juraj-google-style
def __x_google_quota_descriptor(self, metric_costs): return { 'metricCosts': { metric: cost for (metric, cost) in metric_costs.items() } } if metric_costs else None
Describes the metric costs for a call. Args: metric_costs: Dict of metric definitions to the integer cost value against that metric. Returns: A dict descriptor describing the Quota limits for the endpoint.
juraj-google-style
def create_token_type_ids_from_sequences(self, token_ids_0: List[int], token_ids_1: Optional[List[int]]=None) -> List[int]: sep = [self.sep_token_id] cls = [self.cls_token_id] if token_ids_1 is None: return len(cls + token_ids_0 + sep) * [0] return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0]
Create a mask from the two sequences passed to be used in a sequence-pair classification task. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of zeros.
github-repos
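A self-contained sketch of the length arithmetic above, using made-up special-token ids (0 for CLS, 2 for SEP) since the real values come from the tokenizer:

def token_type_ids(token_ids_0, token_ids_1=None, cls_id=0, sep_id=2):
    cls, sep = [cls_id], [sep_id]
    if token_ids_1 is None:
        return len(cls + token_ids_0 + sep) * [0]
    return len(cls + token_ids_0 + sep + sep + token_ids_1 + sep) * [0]

print(token_type_ids([11, 12, 13]))        # [0, 0, 0, 0, 0]
print(token_type_ids([11, 12], [21, 22]))  # [0, 0, 0, 0, 0, 0, 0, 0]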
def run_as_function_for_tape_gradients(make_op, inputs): if gradients_util.PossibleTapeGradientTypes(inputs) == gradients_util.POSSIBLE_GRADIENT_TYPES_HIGHER_ORDER and (not (ops.get_default_graph().building_function and 'cflow_gradient_wrapper' in ops.get_default_graph().name)): results = tracing_compilation.call_function((inputs,), tracing_options=tracing_compilation.TracingOptions(make_op, 'cflow_gradient_wrapper', autograph=False)) return results else: return make_op(inputs)
Fix higher-order tape gradients by wrapping `make_op` in a function. Args: make_op: A function that takes a list of inputs and returns a list of output tensors. This function should set any handle data relevant to its outputs before returning. inputs: A list of tensors to check for tape gradients and pass to `make_op`. These should include all tensors used in `make_op`. Returns: Tensors corresponding to `make_op`'s output.
github-repos
def array(self, size_chunk, start, bytesize): with open(self.img, 'rb') as f1: f1.seek((self.start_byte + (start * self.bytesize))) data = f1.read((size_chunk * self.bytesize)) Z = np.fromstring(data, dtype=self.dtype, count=size_chunk) if (self.grid == 'LOLA'): return (Z * float(self.SCALING_FACTOR)) else: return Z
Read part of the binary file Args: size_chunk (int) : Size of the chunk to read start (int): Starting byte bytesize (int): Ending byte Returns: (np.array): array of the corresponding values
codesearchnet
def _to_enos_roles(roles): def to_host(h): extra = {} for (nic, roles) in h['nics']: for role in roles: extra[role] = nic return Host(h['host'], user='root', extra=extra) enos_roles = {} for (role, hosts) in roles.items(): enos_roles[role] = [to_host(h) for h in hosts] logger.debug(enos_roles) return enos_roles
Transform the roles to use enoslib.host.Host hosts. Args: roles (dict): roles returned by :py:func:`enoslib.infra.provider.Provider.init`
codesearchnet
def VerifyGitkitToken(self, jwt): certs = self.rpc_helper.GetPublicCert() crypt.MAX_TOKEN_LIFETIME_SECS = (30 * 86400) parsed = None for aud in filter((lambda x: (x is not None)), [self.project_id, self.client_id]): try: parsed = crypt.verify_signed_jwt_with_certs(jwt, certs, aud) except crypt.AppIdentityError as e: if ('Wrong recipient' not in e.message): return None if parsed: return GitkitUser.FromToken(parsed) return None
Verifies a Gitkit token string. Args: jwt: string, the token to be checked Returns: GitkitUser, if the token is valid. None otherwise.
codesearchnet
def _dict_func(self, func, axis, *args, **kwargs): if ('axis' not in kwargs): kwargs['axis'] = axis if (axis == 0): index = self.columns else: index = self.index func = {idx: func[key] for key in func for idx in index.get_indexer_for([key])} def dict_apply_builder(df, func_dict={}): return pandas.DataFrame(df.apply(func_dict, *args, **kwargs)) result_data = self.data.apply_func_to_select_indices_along_full_axis(axis, dict_apply_builder, func, keep_remaining=False) full_result = self._post_process_apply(result_data, axis) return full_result
Apply function to certain indices across given axis. Args: func: The function to apply. axis: Target axis to apply the function along. Returns: A new PandasQueryCompiler.
codesearchnet
def op_or(self, *elements): expression = self.add_operator(Operator(',')) for element in elements: expression.add_element(element) return expression
Update the ``Expression`` by joining the specified additional ``elements`` using an "OR" ``Operator`` Args: *elements (BaseExpression): The ``Expression`` and/or ``Constraint`` elements which the "OR" ``Operator`` applies to. Returns: Expression: ``self`` or related ``Expression``.
codesearchnet
def fbresnet152(num_classes=1000, pretrained='imagenet'): model = FBResNet(Bottleneck, [3, 8, 36, 3], num_classes=num_classes) if pretrained is not None: settings = pretrained_settings['fbresnet152'][pretrained] assert num_classes == settings['num_classes'], \ "num_classes should be {}, but is {}".format(settings['num_classes'], num_classes) model.load_state_dict(model_zoo.load_url(settings['url'])) model.input_space = settings['input_space'] model.input_size = settings['input_size'] model.input_range = settings['input_range'] model.mean = settings['mean'] model.std = settings['std'] return model
Constructs a ResNet-152 model. Args: num_classes (int): number of output classes, 1000 by default. pretrained (str or None): name of the pretrained setting to load (e.g. 'imagenet'); if None, no pretrained weights are loaded.
juraj-google-style
def _ImportPythonModule(module_name): try: module_object = list(map(__import__, [module_name]))[0] except ImportError: return None if '.' in module_name: for submodule_name in module_name.split('.')[1:]: module_object = getattr(module_object, submodule_name, None) return module_object
Imports a Python module. Args: module_name (str): name of the module. Returns: module: Python module or None if the module cannot be imported.
juraj-google-style
def _force_edges_active_move(self, state: _STATE) -> _STATE: for _ in range(self._rand.randint(1, 4)): state = self._force_edge_active_move(state) return state
Move function which repeats _force_edge_active_move a few times. Args: state: Search state, not mutated. Returns: New search state which consists of incremental changes of the original state.
codesearchnet
class PatchTSMixerForTimeSeriesClassificationOutput(ModelOutput): loss: Optional[torch.FloatTensor] = None prediction_outputs: Optional[torch.FloatTensor] = None last_hidden_state: Optional[torch.FloatTensor] = None hidden_states: Optional[Tuple[torch.FloatTensor]] = None
Output type of [`PatchTSMixerForTimeSeriesClassification`]. Args: prediction_outputs (`torch.FloatTensor` of shape `(batch_size, num_labels)`): Prediction output from the classification head. last_hidden_state (`torch.FloatTensor` of shape `(batch_size, num_input_channels, num_patches, d_model)`): Backbone embeddings before passing through the head. hidden_states (`tuple(torch.FloatTensor)`, *optional*): Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. loss (*optional*, returned when `y` is provided, `torch.FloatTensor` of shape `()`): Total loss.
github-repos
def table_field_to_avro_field(table_field: Dict[str, Any], namespace: str) -> Dict[str, Any]: assert 'type' in table_field, 'Unable to get type for table field {}'.format(table_field) assert table_field['type'] in BIG_QUERY_TO_AVRO_TYPES, 'Unable to map BigQuery field type {} to avro type'.format(table_field['type']) avro_type = BIG_QUERY_TO_AVRO_TYPES[table_field['type']] if avro_type == 'record': element_type = get_record_schema_from_dict_table_schema(table_field['name'], table_field, namespace='.'.join((namespace, table_field['name']))) else: element_type = avro_type field_mode = table_field.get('mode', 'NULLABLE') if field_mode in (None, 'NULLABLE'): field_type = ['null', element_type] elif field_mode == 'REQUIRED': field_type = element_type elif field_mode == 'REPEATED': field_type = {'type': 'array', 'items': element_type} else: raise ValueError('Unknown BigQuery field mode: {}'.format(field_mode)) avro_field = {'type': field_type, 'name': table_field['name']} doc = table_field.get('description') if doc: avro_field['doc'] = doc return avro_field
Convert a BigQuery field to an avro field. Args: table_field (Dict[str, Any]): A BigQuery field in dict form. Returns: Dict[str, Any]: An equivalent Avro field in dict form.
github-repos
def get_application_configuration(name): _check() rc = _ec.get_application_configuration(name) if (rc is False): raise ValueError('Application configuration {0} not found.'.format(name)) return rc
Get a named application configuration. An application configuration is a named set of securely stored properties where each key and its value in the property set is a string. An application configuration object is used to store information that IBM Streams applications require, such as: * Database connection data * Credentials that your applications need to use to access external systems * Other data, such as the port numbers or URLs of external systems Arguments: name(str): Name of the application configuration. Returns: dict: Dictionary containing the property names and values for the application configuration. Raises: ValueError: Application configuration does not exist.
codesearchnet
def update_parameters(parameters, grads, learning_rate=1.2): W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] dW1 = grads["dW1"] db1 = grads["db1"] dW2 = grads["dW2"] db2 = grads["db2"] W1 -= learning_rate * dW1 b1 -= learning_rate * db1 W2 -= learning_rate * dW2 b2 -= learning_rate * db2 parameters = {"W1": W1, "b1": b1, "W2": W2, "b2": b2} return parameters
Updates parameters using the gradient descent update rule given above Arguments: parameters -- python dictionary containing your parameters grads -- python dictionary containing your gradients Returns: parameters -- python dictionary containing your updated parameters
juraj-google-style
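A runnable NumPy example of the same vanilla gradient-descent update, with tiny made-up parameter and gradient dictionaries:

import numpy as np

parameters = {"W1": np.ones((2, 2)), "b1": np.zeros((2, 1)),
              "W2": np.ones((1, 2)), "b2": np.zeros((1, 1))}
grads = {"dW1": np.full((2, 2), 0.1), "db1": np.full((2, 1), 0.2),
         "dW2": np.full((1, 2), 0.1), "db2": np.full((1, 1), 0.2)}

learning_rate = 1.2
for name in ("W1", "b1", "W2", "b2"):
    # theta <- theta - learning_rate * d_theta
    parameters[name] = parameters[name] - learning_rate * grads["d" + name]

print(parameters["W1"])  # every entry is 1 - 1.2 * 0.1 = 0.88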
def _ParseLogLine(self, parser_mediator, structure): if not self._xchat_year: return time_elements_tuple = self._GetTimeElementsTuple(structure) try: date_time = dfdatetime_time_elements.TimeElements( time_elements_tuple=time_elements_tuple) date_time.is_local_time = True except ValueError: parser_mediator.ProduceExtractionWarning( 'invalid date time value: {0!s}'.format(structure.date_time)) return self._last_month = time_elements_tuple[1] event_data = XChatLogEventData() event_data.nickname = structure.nickname event_data.text = ' '.join(structure.text.split()) event = time_events.DateTimeValuesEvent( date_time, definitions.TIME_DESCRIPTION_ADDED, time_zone=parser_mediator.timezone) parser_mediator.ProduceEventWithEventData(event, event_data)
Parses a log line. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. structure (pyparsing.ParseResults): structure of tokens derived from a line of a text file.
juraj-google-style
def _find_metric_value(session_or_group, metric_name): for metric_value in session_or_group.metric_values: if ((metric_value.name.tag == metric_name.tag) and (metric_value.name.group == metric_name.group)): return metric_value
Returns the metric_value for a given metric in a session or session group. Args: session_or_group: A Session protobuffer or SessionGroup protobuffer. metric_name: A MetricName protobuffer. The metric to search for. Returns: A MetricValue protobuffer representing the value of the given metric or None if no such metric was found in session_or_group.
codesearchnet
def __init__(self, callback, callback_lock, options, queue_item): threading.Thread.__init__(self) self.__callback = callback self.__callback_lock = callback_lock self.__options = options self.__queue_item = queue_item
Constructs a crawler thread instance Args: callback (obj): The method to call when finished callback_lock (bool): The callback lock that prevents race conditions. options (:class:`nyawc.Options`): The settings/options object. queue_item (:class:`nyawc.QueueItem`): The queue item containing a request to execute.
juraj-google-style
def _make_flow(request, scopes, return_url=None): csrf_token = hashlib.sha256(os.urandom(1024)).hexdigest() request.session[_CSRF_KEY] = csrf_token state = json.dumps({ 'csrf_token': csrf_token, 'return_url': return_url, }) flow = client.OAuth2WebServerFlow( client_id=django_util.oauth2_settings.client_id, client_secret=django_util.oauth2_settings.client_secret, scope=scopes, state=state, redirect_uri=request.build_absolute_uri( urlresolvers.reverse("google_oauth:callback"))) flow_key = _FLOW_KEY.format(csrf_token) request.session[flow_key] = jsonpickle.encode(flow) return flow
Creates a Web Server Flow Args: request: A Django request object. scopes: the request oauth2 scopes. return_url: The URL to return to after the flow is complete. Defaults to the path of the current request. Returns: An OAuth2 flow object that has been stored in the session.
juraj-google-style
def add_tile(self, tile_source, **kw): tile_renderer = TileRenderer(tile_source=tile_source, **kw) self.renderers.append(tile_renderer) return tile_renderer
Adds new ``TileRenderer`` into ``Plot.renderers`` Args: tile_source (TileSource) : a tile source instance which contain tileset configuration Keyword Arguments: Additional keyword arguments are passed on as-is to the tile renderer Returns: TileRenderer : TileRenderer
codesearchnet
def _prep_binary_content(self): if ((not self.data) and (not self.location) and ('Content-Location' not in self.resource.headers.keys())): raise Exception('creating/updating NonRDFSource requires content from self.binary.data, self.binary.location, or the Content-Location header') elif ('Content-Location' in self.resource.headers.keys()): logger.debug('Content-Location header found, using') self.delivery = 'header' elif ('Content-Location' not in self.resource.headers.keys()): if self.location: self.resource.headers['Content-Location'] = self.location self.delivery = 'header' elif self.data: if isinstance(self.data, io.BufferedIOBase): logger.debug('detected file-like object') self.delivery = 'payload' else: logger.debug('detected bytes') self.delivery = 'payload'
Sets delivery method of either payload or header. Favors the Content-Location header if set. Args: None Returns: None: sets attributes in self.binary and headers
codesearchnet
def framesToFrameRange(frames, sort=True, zfill=0, compress=False): if compress: frames = unique(set(), frames) frames = list(frames) if not frames: return '' if len(frames) == 1: return pad(frames[0], zfill) if sort: frames.sort() return ','.join(FrameSet.framesToFrameRanges(frames, zfill))
Converts an iterator of frames into a frame range string. Args: frames (collections.Iterable): sequence of frames to process sort (bool): sort the sequence before processing zfill (int): width for zero padding compress (bool): remove any duplicates before processing Returns: str: the frame range string
juraj-google-style
def marquee(text='', width=78, mark='*'):
    if not text:
        return (mark * width)[:width]
    # Number of mark repetitions on each side so "<marks> <text> <marks>" fills the width.
    nmark = (width - len(text) - 2) // (len(mark) * 2)
    if nmark < 0:
        nmark = 0
    marks = mark * nmark
    return '%s %s %s' % (marks, text, marks)
Return the input string centered in a 'marquee'. Args: text (str): Input string width (int): Width of final output string. mark (str): Character used to fill string. :Examples: >>> marquee('A test', width=40) '**************** A test ****************' >>> marquee('A test', width=40, mark='-') '---------------- A test ----------------' >>> marquee('A test', 40, ' ') ' A test '
codesearchnet
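Usage matching the examples in the docstring, assuming the marquee function shown above is in scope:

print(marquee('A test', width=40))             # 16 asterisks, ' A test ', 16 asterisks
print(marquee('A test', width=40, mark='-'))   # same layout with dashes
print(marquee(width=10))                       # '**********'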
def _get_connection(self): if (not getattr(self, '_connection', None)): logger.debug('Creating new connection.\n dsn: {}'.format(self._dsn)) d = parse_url_to_dict(self._dsn) self._connection = psycopg2.connect(database=d['path'].strip('/'), user=d['username'], password=d['password'], port=d['port'], host=d['hostname']) return self._connection
Returns a connection to the postgres database. Returns: connection to the postgres database that stores mpr data.
codesearchnet
def optimize(node): node = dead_code_elimination(node) node = constant_folding(node) node = assignment_propagation(node) return node
Perform a series of optimization passes. This function performs a series of optimizations (dead code elimination, constant folding, variable folding) on the given AST. It optimizes the code repeatedly until reaching a fixed point. The fixed point is determined roughly by checking whether the number of lines of generated source code changed after the latest pass. Args: node: The AST to optimize. Returns: The optimized AST.
codesearchnet
def _ReceiveItemOnActivity(self, zmq_socket): events = zmq_socket.poll( self._ZMQ_SOCKET_RECEIVE_TIMEOUT_MILLISECONDS) if events: try: received_object = self._zmq_socket.recv_pyobj() return received_object except zmq.error.Again: logger.error( '{0:s}. Failed to receive item in time.'.format( self.name)) raise except zmq.error.ZMQError as exception: if exception.errno == errno.EINTR: logger.error( 'ZMQ syscall interrupted in {0:s}. Queue aborting.'.format( self.name)) raise raise errors.QueueEmpty
Attempts to receive an item from a ZeroMQ socket. Args: zmq_socket (zmq.Socket): used to receive the item. Returns: object: item from the socket. Raises: QueueEmpty: if no item could be received within the timeout. zmq.error.ZMQError: if an error occurs in ZeroMQ.
juraj-google-style
def verify_docker_image_sha(chain, link): cot = link.cot task = link.task errors = [] if isinstance(task['payload'].get('image'), dict): docker_image_task_id = task['extra']['chainOfTrust']['inputs']['docker-image'] log.debug("Verifying {} {} against docker-image {}".format( link.name, link.task_id, docker_image_task_id )) if docker_image_task_id != task['payload']['image']['taskId']: errors.append("{} {} docker-image taskId isn't consistent!: {} vs {}".format( link.name, link.task_id, docker_image_task_id, task['payload']['image']['taskId'] )) else: path = task['payload']['image']['path'] image_hash = cot['environment']['imageArtifactHash'] alg, sha = image_hash.split(':') docker_image_link = chain.get_link(docker_image_task_id) upstream_sha = docker_image_link.cot['artifacts'].get(path, {}).get(alg) if upstream_sha is None: errors.append("{} {} docker-image docker sha {} is missing! {}".format( link.name, link.task_id, alg, docker_image_link.cot['artifacts'][path] )) elif upstream_sha != sha: errors.append("{} {} docker-image docker sha doesn't match! {} {} vs {}".format( link.name, link.task_id, alg, sha, upstream_sha )) else: log.debug("Found matching docker-image sha {}".format(upstream_sha)) else: prebuilt_task_types = chain.context.config['prebuilt_docker_image_task_types'] if prebuilt_task_types != "any" and link.task_type not in prebuilt_task_types: errors.append( "Task type {} not allowed to use a prebuilt docker image!".format( link.task_type ) ) raise_on_errors(errors)
Verify that built docker shas match the artifact. Args: chain (ChainOfTrust): the chain we're operating on. link (LinkOfTrust): the task link we're checking. Raises: CoTError: on failure.
juraj-google-style
def pop_stack(stack, op_id): if __debug__: (pushed_stack, pushed_op_id) = stack.pop() assert (pushed_op_id == op_id), ('Wanted %s, got %s' % (op_id, pushed_op_id)) else: pushed_stack = stack.pop() return pushed_stack
Proxy of pop, where we know we're popping a stack off of a stack. We know that we don't need to differentiate through this. See pop() for more. Args: stack: The stack to pop from. op_id: A unique variable that is also passed into the matching push. Allows optimization passes to track pairs of pushes and pops. Returns: The last value.
codesearchnet
def resize_positional_embeddings(positional_embeddings: torch.Tensor, spatial_shapes: torch.LongTensor, max_length: int) -> torch.Tensor: batch_size = spatial_shapes.shape[0] embed_dim = positional_embeddings.shape[-1] source_dtype = positional_embeddings.dtype resulted_positional_embeddings = torch.empty((batch_size, max_length, embed_dim), device=positional_embeddings.device, dtype=source_dtype) positional_embeddings = positional_embeddings.permute(2, 0, 1).unsqueeze(0) if positional_embeddings.device.type == 'cpu': positional_embeddings = positional_embeddings.to(torch.float32) for i in range(batch_size): height, width = spatial_shapes[i] resized_embeddings = F.interpolate(positional_embeddings, size=(height, width), mode='bilinear', align_corners=False, antialias=True) resized_embeddings = resized_embeddings.reshape(embed_dim, height * width).transpose(0, 1) resized_embeddings = resized_embeddings.to(source_dtype) resulted_positional_embeddings[i, :height * width] = resized_embeddings resulted_positional_embeddings[i, height * width:] = resized_embeddings[0] return resulted_positional_embeddings
Resize positional embeddings to image-specific size and pad to a fixed size. Args: positional_embeddings (`torch.Tensor`): Position embeddings of shape (height, width, embed_dim) spatial_shapes (`torch.LongTensor`): Spatial shapes of shape (batch_size, 2) to resize the positional embeddings to max_length (`int`): Maximum length of the positional embeddings to pad resized positional embeddings to Returns: `torch.Tensor`: Embeddings of shape (batch_size, max_length, embed_dim)
github-repos
def transfer(self, payment_id, data={}, **kwargs): url = "{}/{}/transfers".format(self.base_url, payment_id) return self.post_url(url, data, **kwargs)
Create Transfer for given Payment Id Args: payment_id : Id of the payment to be transferred Returns: Payment dict after getting transferred
juraj-google-style
def forward(self, hidden_states: torch.FloatTensor, p_mask: Optional[torch.FloatTensor]=None) -> torch.FloatTensor: x = self.dense(hidden_states).squeeze(-1) if p_mask is not None: if p_mask.dtype == torch.float16: x = x * (1 - p_mask) - 65500 * p_mask else: x = x * (1 - p_mask) - 1e+30 * p_mask return x
Args: hidden_states (`torch.FloatTensor` of shape `(batch_size, seq_len, hidden_size)`): The final hidden states of the model. p_mask (`torch.FloatTensor` of shape `(batch_size, seq_len)`, *optional*): Mask for tokens at invalid position, such as query and special symbols (PAD, SEP, CLS). 1.0 means token should be masked. Returns: `torch.FloatTensor`: The start logits for SQuAD.
github-repos
def graph_distances(start, edges, distances): adj = {x: [] for x in range(len(distances))} for n1, n2 in edges: adj[n1].append(n2) adj[n2].append(n1) to_visit = [] new_dist = {} for n in adj[start]: heapq.heappush(to_visit, (distances[start, n], n)) while to_visit: d, next_node = heapq.heappop(to_visit) if next_node not in new_dist: new_dist[next_node] = d for n in adj[next_node]: if n not in new_dist: heapq.heappush(to_visit, (d + distances[next_node, n], n)) return new_dist
Given an undirected adjacency list and a pairwise distance matrix between all nodes: calculates distances along graph from start node. Args: start (int): start node edges (list): adjacency list of tuples distances (array): 2d array of distances between nodes Returns: dict of node to distance from start
juraj-google-style
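A small runnable check, assuming graph_distances above is in scope; only entries of the distance matrix that share an edge are ever read:

import numpy as np

edges = [(0, 1), (1, 2)]
distances = np.array([[0.0, 1.0, 0.0],
                      [1.0, 0.0, 2.0],
                      [0.0, 2.0, 0.0]])  # symmetric; non-adjacent entries are unused

result = graph_distances(0, edges, distances)
print(result)
# distances to nodes 1, 0 and 2 are 1.0, 2.0 and 3.0; note the start node
# reappears with a round-trip distance because it is never marked visited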
def extract_grid(self, longmin, longmax, latmin, latmax): (sample_min, sample_max) = map(int, map(self.sample_id, [longmin, longmax])) (line_min, line_max) = map(int, map(self.line_id, [latmax, latmin])) X = np.array(map(self.long_id, range(sample_min, sample_max, 1))) Y = np.array(map(self.lat_id, range(line_min, (line_max + 1), 1))) for (i, line) in enumerate(range(int(line_min), (int(line_max) + 1))): start = (((line - 1) * int(self.SAMPLE_LAST_PIXEL)) + sample_min) chunk_size = int((sample_max - sample_min)) Za = self.array(chunk_size, start, self.bytesize) if (i == 0): Z = Za else: Z = np.vstack((Z, Za)) (X, Y) = np.meshgrid(X, Y) return (X, Y, Z)
Extract part of the image ``img`` Args: longmin (float): Minimum longitude of the window longmax (float): Maximum longitude of the window latmin (float): Minimum latitude of the window latmax (float): Maximum latitude of the window Returns: A tuple of three arrays ``(X,Y,Z)`` where ``X`` contains the longitudes, ``Y`` contains the latitudes and ``Z`` the values extracted from the window. Note: All returned arrays have the same size. All coordinates are in degrees.
codesearchnet
def _install_signal_handler(self, signal_number, signal_name): old_signal_handler = None def handler(handled_signal_number, frame): signal.signal(signal_number, signal.SIG_DFL) sys.stderr.write(('TensorBoard caught %s; exiting...\n' % signal_name)) if (old_signal_handler not in (signal.SIG_IGN, signal.SIG_DFL)): old_signal_handler(handled_signal_number, frame) sys.exit(0) old_signal_handler = signal.signal(signal_number, handler)
Set a signal handler to gracefully exit on the given signal. When this process receives the given signal, it will run `atexit` handlers and then exit with `0`. Args: signal_number: The numeric code for the signal to handle, like `signal.SIGTERM`. signal_name: The human-readable signal name.
codesearchnet
def print_terminal_table(headers, data_list, parse_row_fn): data_iter = iter(data_list) try: example = next(data_iter) example_row = parse_row_fn(example) data_iter = itertools.chain([example], data_iter) except StopIteration: example_row = ([''] * len(headers)) format_string = format_terminal_row(headers, example_row) top_row = format_string.format(*headers) print((top_row[0:(- 3)] if top_row.endswith('...') else top_row)) for data in data_iter: print(format_string.format(*parse_row_fn(data)))
Uses a set of headers, raw data, and a row parsing function, to print data to the terminal in a table of rows and columns. Args: headers (tuple of strings): The headers for each column of data data_list (list of dicts): Raw response data from the validator parse_row_fn (function): Parses a dict of data into a tuple of columns Expected args: data (dict): A single response object from the validator Expected return: cols (tuple): The properties to display in each column
codesearchnet
def send_raw_tx(self, serialized_tx, id=None, endpoint=None): return self._call_endpoint(SEND_TX, params=[serialized_tx], id=id, endpoint=endpoint)
Submits a serialized tx to the network Args: serialized_tx: (str) a hexlified string of a transaction id: (int, optional) id to use for response tracking endpoint: (RPCEndpoint, optional) endpoint to specify to use Returns: bool: whether the tx was accepted or not
juraj-google-style
def GetStatusInformation(self): status = processing_status.TasksStatus() with self._lock: status.number_of_abandoned_tasks = len(self._tasks_abandoned) status.number_of_queued_tasks = len(self._tasks_queued) status.number_of_tasks_pending_merge = (len(self._tasks_pending_merge) + len(self._tasks_merging)) status.number_of_tasks_processing = len(self._tasks_processing) status.total_number_of_tasks = self._total_number_of_tasks return status
Retrieves status information about the tasks. Returns: TasksStatus: tasks status information.
codesearchnet
def round_f1_macro(y_true, y_predicted): try: predictions = [np.round(x) for x in y_predicted] except TypeError: predictions = y_predicted return f1_score(np.array(y_true), np.array(predictions), average='macro')
Calculates F1 macro measure. Args: y_true: list of true values y_predicted: list of predicted values Returns: F1 score
codesearchnet
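A self-contained check of the same computation using scikit-learn directly; the rounding mirrors what the helper above does with probability-like predictions:

import numpy as np
from sklearn.metrics import f1_score

y_true = [0, 1, 1, 0]
y_scores = [0.2, 0.8, 0.4, 0.1]

y_pred = [int(np.round(s)) for s in y_scores]  # [0, 1, 0, 0]
print(f1_score(np.array(y_true), np.array(y_pred), average='macro'))  # ~0.733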
def sheets_write(config, auth, sheet_url_or_name, sheet_tab, sheet_range, data, append=False, valueInputOption='RAW'): if config.verbose: print('SHEETS WRITE', sheet_url_or_name, sheet_tab, sheet_range) sheet_id = sheets_id(config, auth, sheet_url_or_name) range = sheets_tab_range(sheet_tab, sheet_range) body = {'values': list(data)} if append: API_Sheets(config, auth).spreadsheets().values().append(spreadsheetId=sheet_id, range=range, body=body, valueInputOption=valueInputOption, insertDataOption='OVERWRITE').execute() else: API_Sheets(config, auth).spreadsheets().values().update(spreadsheetId=sheet_id, range=range, body=body, valueInputOption=valueInputOption).execute()
Write to sheets for specified range. Args: config - see starthinker/util/configuration.py auth - user or service sheet_url_or_name - one of: URL, document title, or id sheet_tab - name of tab to get id for sheet_range - A1 notation or blank if whole sheet data - list of lists representing rows. append - if true, data will be added after last row with data. valueInputOption - see API docs. No Return
github-repos
def SyncSleep(delay, name=None): return examples_sync_sleep(delay=delay, name=name)
Pause for `delay` seconds (which need not be an integer). This is a synchronous (blocking) version of a sleep op. Its purpose is to be contrasted with Examples>AsyncSleep. Args: delay: tf.Tensor which is a scalar of type float. name: An optional name for the op. Returns: The `delay` value.
github-repos
def remove_all(self, filter, force=False, timeout=(- 1)): return self._client.delete_all(filter=filter, force=force, timeout=timeout)
Deletes the set of datacenters according to the specified parameters. A filter is required to identify the set of resources to be deleted. Args: filter: A general filter/query string to narrow the list of items that will be removed. force: If set to true, the operation completes despite any problems with network connectivity or errors on the resource itself. The default is false. timeout: Timeout in seconds. Wait for task completion by default. The timeout does not abort the operation in OneView; it just stops waiting for its completion. Returns: bool: operation success
codesearchnet
def get_pixel_coordinates(self, point, ccdnum): hdulist_index = self.get_hdulist_idx(ccdnum) if isinstance(point[0], Quantity) and isinstance(point[1], Quantity): pix_point = point[0].value, point[1].value else: pix_point = point if self.reading.inverted: pix_point = self.reading.obs.naxis1 - pix_point[0] +1 , self.reading.obs.naxis2 - pix_point[1] + 1 (x, y) = self.hdulist[hdulist_index].converter.convert(pix_point) return x, y, hdulist_index
Retrieves the pixel location of a point within the current HDUList given the location in the original FITS image. This takes into account that the image may be a cutout of a larger original. Args: point: tuple(float, float), (x, y) in the original image. ccdnum: the CCD number from the original Mosaic that the x/y coordinates are from. Returns: (x, y, hdulist_index): the pixel coordinates in this image and the index of the matching HDU.
juraj-google-style
def get_folder_items(self, folder_id, limit=100, offset=0, fields_list=None): qs = {'limit': limit, 'offset': offset} if fields_list: qs['fields'] = ','.join(fields_list) return self.__request('GET', ('folders/%s/items' % (folder_id,)), querystring=qs)
Get files and folders inside a given folder Args: folder_id (int): Where to get files and folders info. limit (int): The number of items to return. offset (int): The item at which to begin the response. fields_list (list): List of attributes to get. All attributes if None. Returns: dict. Response from Box. Raises: BoxError: An error response is returned from Box (status_code >= 400). BoxHttpResponseError: Response from Box is malformed. requests.exceptions.*: Any connection related problem.
codesearchnet
def loadfn(fname): if (fnmatch(fname, "*POSCAR*") or fnmatch(fname, "*CONTCAR*") or ".cif" in fname.lower()) or fnmatch(fname, "*.vasp"): return Structure.from_file(fname) elif fnmatch(fname, "*vasprun*"): from pymatgen.io.vasp import Vasprun return Vasprun(fname) elif fnmatch(fname, "*.json*"): from monty.serialization import loadfn return loadfn(fname)
Convenience method to perform quick loading of data from a filename. The type of object returned depends on the file type. Note that fname is matched using unix-style patterns, i.e., fnmatch. Args: fname (string): A filename. Returns: (Structure) if *POSCAR*/*CONTCAR*/*.cif/*.vasp, (Vasprun) if *vasprun*, (obj) if *json* (passthrough to monty.serialization.loadfn)
juraj-google-style
def __init__(self, scope, parent, name): CodeControlFlow.__init__(self, scope, parent, name) self.declarations = None self.increment = None
Constructor for loops. Args: scope (CodeEntity): The program scope where this object belongs. parent (CodeEntity): This object's parent in the program tree. name (str): The name of the loop statement in the program.
juraj-google-style
def get_unique_families(hkls): def is_perm(hkl1, hkl2): h1 = np.abs(hkl1) h2 = np.abs(hkl2) return all([(i == j) for (i, j) in zip(sorted(h1), sorted(h2))]) unique = collections.defaultdict(list) for hkl1 in hkls: found = False for hkl2 in unique.keys(): if is_perm(hkl1, hkl2): found = True unique[hkl2].append(hkl1) break if (not found): unique[hkl1].append(hkl1) pretty_unique = {} for (k, v) in unique.items(): pretty_unique[sorted(v)[(- 1)]] = len(v) return pretty_unique
Returns unique families of Miller indices. Families must be permutations of each other. Args: hkls ([h, k, l]): List of Miller indices. Returns: {hkl: multiplicity}: A dict with unique hkl and multiplicity.
codesearchnet
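A self-contained sketch of the grouping idea: Miller indices belong to the same family when their sorted absolute values match. This is a simplification of the helper above for illustration, not pymatgen's implementation:

from collections import defaultdict

hkls = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (0, 1, 1)]

families = defaultdict(list)
for hkl in hkls:
    key = tuple(sorted(abs(i) for i in hkl))
    families[key].append(hkl)

# Representative = the largest member of each family, as in sorted(v)[-1] above.
unique = {sorted(v)[-1]: len(v) for v in families.values()}
print(unique)  # {(1, 0, 0): 3, (1, 1, 0): 2}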
def convert(self, y): if y is None: return None if isinstance(y, sparse_tensor.SparseTensor): return self._convert_sparse(y) assert isinstance(y, (tensor_lib.Tensor, ops.Operation)), y output = self._convert_helper(y) if isinstance(output, WrappedTensor): assert isinstance(y, tensor_lib.Tensor) return self._unwrap_or_tile(output) else: assert isinstance(y, ops.Operation) assert not y.outputs assert isinstance(output, ops.Operation) return output
Returns the converted value corresponding to y. Args: y: A Tensor or a ops.Operation object. If latter, y should not have any outputs. Returns: If y does not need to be converted, it returns y as is. Else it returns the "converted value" corresponding to y.
github-repos
def table(text): def table_bar(col_lengths): return "+-%s-+%s" % ( "-+-".join(["-" * length for length in col_lengths]), os.linesep, ) rows = [] for line in text.splitlines(): rows.append([part.strip() for part in line.split("|")]) max_cols = max(map(len, rows)) col_lengths = [0] * max_cols for row in rows: cols = len(row) if cols < max_cols: row.extend([""] * (max_cols - cols)) for i, col in enumerate(row): col_length = len(col) if col_length > col_lengths[i]: col_lengths[i] = col_length text = table_bar(col_lengths) for i, row in enumerate(rows): cols = [] for i, col in enumerate(row): cols.append(col.ljust(col_lengths[i])) text += "| %s |%s" % (" | ".join(cols), os.linesep) text += table_bar(col_lengths) return text
Format the text as a table. Text in format: first | second row 2 col 1 | 4 Will be formatted as:: +-------------+--------+ | first | second | +-------------+--------+ | row 2 col 1 | 4 | +-------------+--------+ Args: text (str): Text that needs to be formatted. Returns: str: Formatted string.
juraj-google-style
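Sample input and output, assuming the table() helper above is in scope; the result matches the docstring example:

text = "first | second\nrow 2 col 1 | 4"
print(table(text))
# +-------------+--------+
# | first       | second |
# +-------------+--------+
# | row 2 col 1 | 4      |
# +-------------+--------+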
def ParseRow(self, parser_mediator, query, row, **unused_kwargs): query_hash = hash(query) event_data = AndroidWebViewCacheEventData() event_data.content_length = self._GetRowValue( query_hash, row, 'contentlength') event_data.query = query event_data.url = self._GetRowValue(query_hash, row, 'url') timestamp = self._GetRowValue(query_hash, row, 'expires') if timestamp is not None: date_time = dfdatetime_java_time.JavaTime(timestamp=timestamp) event = time_events.DateTimeValuesEvent( date_time, definitions.TIME_DESCRIPTION_EXPIRATION) parser_mediator.ProduceEventWithEventData(event, event_data) timestamp = self._GetRowValue(query_hash, row, 'lastmodify') if timestamp is not None: date_time = dfdatetime_java_time.JavaTime(timestamp=timestamp) event = time_events.DateTimeValuesEvent( date_time, definitions.TIME_DESCRIPTION_MODIFICATION) parser_mediator.ProduceEventWithEventData(event, event_data)
Parses a row from the database. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. query (str): query that created the row. row (sqlite3.Row): row.
juraj-google-style
def alternatives(self, Class=None, set=None): for e in self.select(AlternativeLayers, None, True, ['Original', 'Suggestion']): if (Class is None): (yield e) elif (len(e) >= 1): for e2 in e: try: if isinstance(e2, Class): try: if ((set is None) or (e2.set == set)): (yield e) break except AttributeError: continue except AttributeError: continue
Generator over alternatives, either all or only of a specific annotation type, and possibly restricted also by set. Arguments: * ``Class`` - The Class you want to retrieve (e.g. PosAnnotation). Or set to None to select all alternatives regardless of what type they are. * ``set`` - The set you want to retrieve (defaults to None, which selects regardless of set) Returns: Generator over Alternative elements
codesearchnet
def entropy(rho: Density, base: float = None) -> float: op = asarray(rho.asoperator()) probs = np.linalg.eigvalsh(op) probs = np.maximum(probs, 0.0) return scipy.stats.entropy(probs, base=base)
Returns the von-Neumann entropy of a mixed quantum state. Args: rho: A density matrix base: Optional logarithm base. Default is base e, and entropy is measured in nats. For bits set base to 2. Returns: The von-Neumann entropy of rho
juraj-google-style
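A runnable NumPy/SciPy check of the same calculation for the maximally mixed single-qubit state, which has exactly one bit of entropy:

import numpy as np
import scipy.stats

rho = np.eye(2) / 2.0                      # maximally mixed qubit density matrix
probs = np.maximum(np.linalg.eigvalsh(rho), 0.0)

print(scipy.stats.entropy(probs, base=2))  # 1.0
print(scipy.stats.entropy(probs))          # ln(2) ~ 0.693 nats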
def _trackable_needs_to_be_saved(obj): if hasattr(obj, '__dict__'): if '_serialize_to_tensors' in obj.__dict__ or '_gather_saveables_for_checkpoint' in obj.__dict__ or '_copy_trackable_to_cpu' in obj.__dict__: return True for t in type(obj).mro(): if t is base.Trackable: continue elif '_serialize_to_tensors' in t.__dict__ or '_gather_saveables_for_checkpoint' in t.__dict__ or '_copy_trackable_to_cpu' in t.__dict__: return True return False
Returns whether a trackable needs to be saved. Returns a bool to indicate whether obj's class has `_serialize_to_tensors`, `gather_saveables_for_checkpoint`, or `_copy_trackable_to_cpu` defined. Args: obj: A Trackable object.
github-repos
def verify_manylinux_compliance(auditwheel_log: str, compliance_tag: str) -> None: regex = 'following platform tag:\\s+"{}"'.format(compliance_tag) alt_regex = regex.replace('2014', '_2_17') if not (re.search(regex, auditwheel_log) or re.search(alt_regex, auditwheel_log)): raise RuntimeError('The wheel is not compliant with the tag {tag}.\n{result}'.format(tag=compliance_tag, result=auditwheel_log))
Verify manylinux compliance. Args: auditwheel_log: "auditwheel show" execution results compliance_tag: manyLinux compliance tag Raises: RuntimeError: if the wheel is not manyLinux compliant.
github-repos
def add_asset(self, asset, asset_name, asset_type): if not self.can_update(): self._tcex.handle_error(910, [self.type]) if asset == 'PHONE': return self.tc_requests.add_victim_phone_asset(self.unique_id, asset_name) if asset == 'EMAIL': return self.tc_requests.add_victim_email_asset(self.unique_id, asset_name, asset_type) if asset == 'NETWORK': return self.tc_requests.add_victim_network_asset(self.unique_id, asset_name, asset_type) if asset == 'SOCIAL': return self.tc_requests.add_victim_social_asset(self.unique_id, asset_name, asset_type) if asset == 'WEB': return self.tc_requests.add_victim_web_asset(self.unique_id, asset_name) self._tcex.handle_error( 925, ['asset_type', 'add_asset', 'asset_type', 'asset_type', asset_type] ) return None
Adds an asset to the Victim Valid asset_type: + PHONE + EMAIL + NETWORK + SOCIAL + WEB Args: asset: asset_name: asset_type: PHONE, EMAIL, NETWORK, SOCIAL, or WEB Returns:
juraj-google-style
def compatible_with(value, logical_value): if isinstance(value, abstract.List) and (not value.is_concrete): return True elif isinstance(value, abstract.Dict) and (not value.is_concrete): return not logical_value or bool(value.get_instance_type_parameter(abstract_utils.K).bindings) elif isinstance(value, abstract.LazyConcreteDict): return value.is_empty() != logical_value elif isinstance(value, abstract.PythonConstant): return bool(value.pyval) == logical_value elif isinstance(value, abstract.Instance): name = value.full_name if logical_value and name in _CONTAINER_NAMES: ret = value.has_instance_type_parameter(abstract_utils.T) and bool(value.get_instance_type_parameter(abstract_utils.T).bindings) return ret elif name == 'builtins.NoneType': return not logical_value elif name in NUMERIC: return True elif isinstance(value.cls, abstract.Class) and (not value.cls.overrides_bool): if getattr(value.cls, 'template', None): return True return logical_value return True elif isinstance(value, (abstract.Function, abstract.Class)): return logical_value else: return True
Returns whether the value could evaluate to the given logical_value. Args: value: An abstract value. logical_value: Either True or False. Returns: False: If the value could not evaluate to logical_value under any circumstance (e.g. value is the empty list and logical_value is True). True: If it is possible for the value to evaluate to logical_value, and any ambiguity cannot be resolved by additional bindings.
github-repos
def confirm(statement): prompt = '{statement} [y/n]'.format(statement=statement) answer = _ask(prompt, limited_to=['yes', 'no', 'y', 'n']) return (answer and answer.startswith('y'))
Ask the user for confirmation about the specified statement. Args: statement (unicode): statement to ask the user confirmation about. Returns: bool: whether or not specified statement was confirmed.
codesearchnet
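Call shape only; _ask blocks on interactive input, so this sketch is not something a test suite would run unattended, and the prompt text is a placeholder.

if confirm("Delete all generated files?"):
    print("deleting...")
else:
    print("aborted")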
def normalize_cluster_spec(cluster_spec): if isinstance(cluster_spec, (dict, cluster_pb2.ClusterDef)): return server_lib.ClusterSpec(cluster_spec) elif not isinstance(cluster_spec, server_lib.ClusterSpec): raise ValueError("`cluster_spec' should be dict or a `tf.train.ClusterSpec` or a `tf.train.ClusterDef` object") return cluster_spec
Makes `cluster_spec` into a `ClusterSpec` object. Args: cluster_spec: a dict, ClusterDef or ClusterSpec object specifying the cluster configurations. Returns: a `ClusterSpec` object. Raises: ValueError: if `cluster_spec` is not a dict or a `ClusterSpec` or a `ClusterDef`.
github-repos
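A small check of the promotion behaviour: a plain dict comes back as a tf.train.ClusterSpec, and an existing spec is returned unchanged. The addresses are placeholders.

import tensorflow as tf

spec = normalize_cluster_spec({"worker": ["localhost:12345", "localhost:23456"]})
assert isinstance(spec, tf.train.ClusterSpec)
assert spec.num_tasks("worker") == 2
assert normalize_cluster_spec(spec) is spec    # already a ClusterSpec: passthrough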
def patch_addContext(self, patch, text): if len(text) == 0: return pattern = text[patch.start2 : patch.start2 + patch.length1] padding = 0 while (text.find(pattern) != text.rfind(pattern) and (self.Match_MaxBits == 0 or len(pattern) < self.Match_MaxBits - self.Patch_Margin - self.Patch_Margin)): padding += self.Patch_Margin pattern = text[max(0, patch.start2 - padding) : patch.start2 + patch.length1 + padding] padding += self.Patch_Margin prefix = text[max(0, patch.start2 - padding) : patch.start2] if prefix: patch.diffs[:0] = [(self.DIFF_EQUAL, prefix)] suffix = text[patch.start2 + patch.length1 : patch.start2 + patch.length1 + padding] if suffix: patch.diffs.append((self.DIFF_EQUAL, suffix)) patch.start1 -= len(prefix) patch.start2 -= len(prefix) patch.length1 += len(prefix) + len(suffix) patch.length2 += len(prefix) + len(suffix)
Increase the context until it is unique, but don't let the pattern expand beyond Match_MaxBits. Args: patch: The patch to grow. text: Source text.
juraj-google-style
def _ParseFile(self, file_obj, line_parser): lines = [ l.strip() for l in utils.ReadFileBytesAsUnicode(file_obj).splitlines() ] try: for index, line in enumerate(lines): if line: line_parser(line) except (IndexError, KeyError) as e: raise parser.ParseError("Invalid file at line %d: %s" % (index + 1, e))
Process a file line by line. Args: file_obj: The file to parse. line_parser: The parser method used to process and store line content. Raises: parser.ParseError if the parser is unable to process the line.
juraj-google-style
def parse_json_path(self, jsonpath): if (jsonpath not in self.parsed): try: self.parsed[jsonpath] = self.parser(jsonpath) except Exception: self.log(('Invalid Json Path: ' + jsonpath), 'error') raise InvalidJsonPathError('Invalid Json Path') return self.parsed[jsonpath]
Parse a jsonpath Args: jsonpath: str Returns: a parsed json path
codesearchnet
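Illustrative use, assuming self.parser is something like jsonpath_ng.parse and self.parsed is a plain dict acting as a cache; handler is a hypothetical instance of the owning class.

expr = handler.parse_json_path("$.items[*].id")    # parsed once and cached
again = handler.parse_json_path("$.items[*].id")   # served from the cache
assert expr is again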
def GetContainingXLAContext(ctxt): while ctxt: if ctxt.IsXLAContext(): return ctxt ctxt = ctxt.outer_context return None
Returns the first ancestor XLAContext of `ctxt`. Returns `ctxt` if `ctxt` is an XLAContext, or None if `ctxt` is not in a while loop. Args: ctxt: ControlFlowContext Returns: `ctxt` if `ctxt` is an XLAContext, the most nested XLAContext containing `ctxt`, or None if `ctxt` is not in a while loop.
github-repos
def format(obj, options): formatters = {float_types: (lambda x: '{:.{}g}'.format(x, options.digits))} for (_types, fmtr) in formatters.items(): if isinstance(obj, _types): return fmtr(obj) try: if (six.PY2 and isinstance(obj, six.string_types)): return str(obj.encode('utf-8')) return str(obj) except: return 'OBJECT'
Return a string representation of the Python object Args: obj: The Python object options: Format options
codesearchnet
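A worked call against this module-level format (not the Python builtin), with a throwaway options object standing in for whatever the library normally passes; only the digits attribute is consulted for floats.

class _Options:
    digits = 3

print(format(3.14159, _Options()))   # "3.14"  (3 significant digits)
print(format("text", _Options()))    # non-float: falls through to str()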
def _collect_process_tree(starting_pid): ret = [] stack = [starting_pid] while stack: pid = stack.pop() if platform.system() == 'Darwin': command = ['pgrep', '-P', str(pid)] else: command = ['ps', '-o', 'pid', '--ppid', str(pid), '--noheaders'] try: ps_results = subprocess.check_output(command).decode().strip() except subprocess.CalledProcessError: continue children_pid_list = [int(p.strip()) for p in ps_results.split('\n')] stack.extend(children_pid_list) ret.extend(children_pid_list) return ret
Collects PID list of the descendant processes from the given PID. This function is only available on Unix-like systems. Args: starting_pid: The PID from which to recursively traverse. Returns: A list of PIDs of the descendant processes.
github-repos
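Usage sketch (POSIX only, since it shells out to pgrep/ps): list every descendant of the current process.

import os

descendants = _collect_process_tree(os.getpid())
print(descendants)    # e.g. [] when the current process has no children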
class DepthAnythingReassembleStage(nn.Module): def __init__(self, config): super().__init__() self.config = config self.layers = nn.ModuleList() for channels, factor in zip(config.neck_hidden_sizes, config.reassemble_factors): self.layers.append(DepthAnythingReassembleLayer(config, channels=channels, factor=factor)) def forward(self, hidden_states: List[torch.Tensor], patch_height=None, patch_width=None) -> List[torch.Tensor]: out = [] for i, hidden_state in enumerate(hidden_states): hidden_state = hidden_state[:, 1:] batch_size, _, num_channels = hidden_state.shape hidden_state = hidden_state.reshape(batch_size, patch_height, patch_width, num_channels) hidden_state = hidden_state.permute(0, 3, 1, 2).contiguous() hidden_state = self.layers[i](hidden_state) out.append(hidden_state) return out
This class reassembles the hidden states of the backbone into image-like feature representations at various resolutions. This happens in 3 stages: 1. Take the patch embeddings and reshape them to image-like feature representations. 2. Project the channel dimension of the hidden states according to `config.neck_hidden_sizes`. 3. Resize the spatial dimensions (height, width). Args: config (`[DepthAnythingConfig]`): Model configuration class defining the model architecture.
github-repos
def client_credentials(self, client_id, client_secret, audience, grant_type='client_credentials'): return self.post('https:
Client credentials grant This is the OAuth 2.0 grant that server processes utilize in order to access an API. Use this endpoint to directly request an access_token by using the Application Credentials (a Client Id and a Client Secret). Args: grant_type (str): Denotes the flow you're using. For client credentials use client_credentials client_id (str): your application's client Id client_secret (str): your application's client Secret audience (str): The unique identifier of the target API you want to access. Returns: access_token
codesearchnet
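Call shape only (the POST body and token URL live in the method, which is truncated above); auth is a hypothetical instance of the wrapping client and every credential and audience below is a placeholder.

token_response = auth.client_credentials(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    audience="https://api.example.com/")
print(token_response)    # expected to carry an access_token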
def user_bounded_trie(namespace, name, metric, ptransform=None): labels = create_labels(ptransform=ptransform, namespace=namespace, name=name) return create_monitoring_info(USER_BOUNDED_TRIE_URN, BOUNDED_TRIE_TYPE, metric.to_proto().SerializeToString(), labels)
Return the bounded trie monitoring info for the URN, metric and labels. Args: namespace: User-defined namespace of BoundedTrie. name: Name of BoundedTrie. metric: The BoundedTrieData representing the metrics. ptransform: The ptransform id used as a label.
github-repos
def _ParseFSMVariables(self, template): self.values = [] for line in template: self._line_num += 1 line = line.rstrip() if (not line): return if self.comment_regex.match(line): continue if line.startswith('Value '): try: value = TextFSMValue(fsm=self, max_name_len=self.MAX_NAME_LEN, options_class=self._options_cls) value.Parse(line) except TextFSMTemplateError as error: raise TextFSMTemplateError(('%s Line %s.' % (error, self._line_num))) if (value.name in self.header): raise TextFSMTemplateError(("Duplicate declarations for Value '%s'. Line: %s." % (value.name, self._line_num))) try: self._ValidateOptions(value) except TextFSMTemplateError as error: raise TextFSMTemplateError(('%s Line %s.' % (error, self._line_num))) self.values.append(value) self.value_map[value.name] = value.template elif (not self.values): raise TextFSMTemplateError('No Value definitions found.') else: raise TextFSMTemplateError(('Expected blank line after last Value entry. Line: %s.' % self._line_num))
Extracts Variables from start of template file. Values are expected as a contiguous block at the head of the file. These will be line separated from the State definitions that follow. Args: template: Valid template file, with Value definitions at the top. Raises: TextFSMTemplateError: If syntax or semantic errors are found.
codesearchnet
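For orientation, the block this method consumes is the Value section at the top of a TextFSM template, terminated by a blank line before the state definitions. The made-up template below uses the standalone textfsm package, which exposes the same template syntax; parsing it runs the Value handling shown above.

import io
import textfsm

template = io.StringIO(
    "Value INTERFACE (\\S+)\n"
    "Value ADDRESS (\\d+\\.\\d+\\.\\d+\\.\\d+)\n"
    "\n"
    "Start\n"
    "  ^${INTERFACE}\\s+${ADDRESS} -> Record\n")
fsm = textfsm.TextFSM(template)
print(fsm.header)    # ['INTERFACE', 'ADDRESS']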
def assert_no_title(self, title, **kwargs): query = TitleQuery(title, **kwargs) @self.synchronize(wait=query.wait) def assert_no_title(): if query.resolves_for(self): raise ExpectationNotMet(query.negative_failure_message) return True return assert_no_title()
Asserts that the page doesn't have the given title. Args: title (str | RegexObject): The string that the title should include. **kwargs: Arbitrary keyword arguments for :class:`TitleQuery`. Returns: True Raises: ExpectationNotMet: If the assertion hasn't succeeded during the wait time.
juraj-google-style
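Hedged usage with a capybara-py style session object; session, the URL, and the title are placeholders.

session.visit("https://example.com/")
session.assert_no_title("Some Other Title")   # True once the wait passes; raises ExpectationNotMet otherwise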
def get_nc_attrs(nc): meta = { 'experiment': nc.experiment_id, 'frequency': nc.frequency, 'institute': nc.institute_id, 'model': nc.model_id, 'modeling_realm': nc.modeling_realm, 'ensemble_member': 'r{}i{}p{}'.format(nc.realization, nc.initialization_method, nc.physics_version), } variable_name = get_var_name(nc) if variable_name: meta.update({'variable_name': variable_name}) return meta
Gets netCDF file metadata attributes. Arguments: nc (netCDF4.Dataset): an open NetCDF4 Dataset to pull attributes from. Returns: dict: Metadata as extracted from the netCDF file.
juraj-google-style
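Sketch against a CMIP5-style file whose global attributes carry the fields read above; the filename is a placeholder.

import netCDF4

with netCDF4.Dataset("tas_Amon_MODEL_rcp45_r1i1p1_200601-210012.nc") as nc:
    meta = get_nc_attrs(nc)
print(meta["ensemble_member"])    # e.g. "r1i1p1"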
def allconcat(self, x, mesh_axis, concat_axis, stack=False): x = x.to_laid_out_tensor() coord = self.laid_out_pcoord(mesh_axis) t = x.one_slice old_shape = t.shape.as_list() num_parts = self.shape[mesh_axis].size t = tf.expand_dims(t, concat_axis) t *= tf.reshape( tf.one_hot(coord.one_slice, num_parts, dtype=t.dtype), [num_parts if i == concat_axis else 1 for i in xrange(len(old_shape) + 1)]) if not stack: new_shape = old_shape[:] new_shape[concat_axis] *= num_parts t = tf.reshape(t, new_shape) return self.allreduce(self.LaidOutTensor([t]), [mesh_axis], "SUM")
Grouped allconcat (like MPI allgather followed by concat). TODO(noam): inefficient - replace with an XLA allconcat when available Args: x: a LaidOutTensor mesh_axis: an integer - the mesh axis along which to group concat_axis: an integer (the Tensor axis along which to concatenate) stack: a boolean - whether to stack instead of concat Returns: a LaidOutTensor
juraj-google-style
def processPhoneList(platformNames=[], numbers=[], excludePlatformNames=[]): platforms = platform_selection.getPlatformsByName(platformNames, mode="phonefy", excludePlatformNames=excludePlatformNames) results = [] for num in numbers: for pla in platforms: entities = pla.getInfo(query=num, process=True, mode="phonefy") if entities != {}: results+=json.loads(entities) return results
Method to perform searches on a series of phone numbers. Args: ----- platformNames: List of names of the platforms. numbers: List of numbers to be queried. excludePlatformNames: A list of platforms not to be searched. Return: ------- A list of verified results for the queried numbers.
juraj-google-style
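Usage sketch; the platform name and the number are placeholders, and results depend on which phonefy-capable platforms the framework has installed.

hits = processPhoneList(
    platformNames=["infobel"],        # placeholder: any phonefy-capable platform name
    numbers=["+34600000000"])         # placeholder number
for entity in hits:
    print(entity)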
def prune_volumes(self, filters=None): params = {} if filters: params['filters'] = utils.convert_filters(filters) url = self._url('/volumes/prune') return self._result(self._post(url, params=params), True)
Delete unused volumes Args: filters (dict): Filters to process on the prune list. Returns: (dict): A dict containing a list of deleted volume names and the amount of disk space reclaimed in bytes. Raises: :py:class:`docker.errors.APIError` If the server returns an error.
juraj-google-style
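Hedged sketch against the low-level docker-py API client; a reachable Docker daemon is required and the label filter is illustrative.

import docker

client = docker.APIClient(base_url="unix://var/run/docker.sock")
report = client.prune_volumes(filters={"label": ["environment=ci"]})
print(report.get("VolumesDeleted"), report.get("SpaceReclaimed"))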
def upload(self, content, content_type, filename=None): try: response = self.api.media_upload(content, content_type, filename) if "content_uri" in response: return response["content_uri"] else: raise MatrixUnexpectedResponse( "The upload was successful, but content_uri wasn't found." ) except MatrixRequestError as e: raise MatrixRequestError( code=e.code, content="Upload failed: %s" % e )
Upload content to the home server and receive an MXC URL. Args: content (bytes): The data of the content. content_type (str): The mimetype of the content. filename (str): Optional. Filename of the content. Raises: MatrixUnexpectedResponse: If the homeserver gave a strange response MatrixRequestError: If the upload failed for some reason.
juraj-google-style
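Illustrative call; client stands for a connected MatrixClient-style object and the file is a placeholder.

with open("avatar.png", "rb") as handle:
    mxc_uri = client.upload(handle.read(), "image/png", filename="avatar.png")
print(mxc_uri)    # e.g. "mxc://example.org/AbCdEf123"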
def output_classes(self): return nest.map_structure(lambda component_spec: component_spec._to_legacy_output_classes(), self._element_spec)
Returns the class of each component of an element of this iterator. The expected values are `tf.Tensor` and `tf.SparseTensor`. Returns: A nested structure of Python `type` objects corresponding to each component of an element of this dataset.
github-repos
def _GetTfRecordEntries(self, path, max_entries, is_sequence, iterator_options): return self._GetEntries([path], max_entries, partial(tf.python_io.tf_record_iterator, options=iterator_options), is_sequence)
Extracts TFRecord examples into a dictionary of feature values. Args: path: The path to the TFRecord file(s). max_entries: The maximum number of examples to load. is_sequence: True if the input data from 'path' are tf.SequenceExamples, False if tf.Examples. Defaults to false. iterator_options: Options to pass to the iterator that reads the examples. Defaults to None. Returns: A tuple with two elements: - A dictionary of all features parsed thus far and arrays of their values. - The number of examples parsed.
codesearchnet