code: string (lengths 20 to 4.93k)
docstring: string (lengths 33 to 1.27k)
source: string (3 classes)
def getAll(self, event_name): raw_events = self._event_client.eventGetAll(self._id, event_name) return [snippet_event.from_dict(msg) for msg in raw_events]
Gets all the events of a certain name that have been received so far. This is a non-blocking call. Args: callback_id: The id of the callback. event_name: string, the name of the event to get. Returns: A list of SnippetEvent, each representing an event from the Java side.
juraj-google-style
def kill(self, exit_code: Any = None): self._force_kill.set() if exit_code is not None: self._exit_code = exit_code logger.info("Killing behavior {0} with exit code: {1}".format(self, exit_code))
Stops the behaviour Args: exit_code (object, optional): the exit code of the behaviour (Default value = None)
juraj-google-style
def __getitem__(self, k): chain = ChainMap(self.scopes, self.globals) return chain.__getitem__(k)
Look up a variable. Args: k (str): The name of the variable to look up. Returns: LispVal: The value assigned to the variable. Raises: KeyError: If the variable has not been assigned to.
juraj-google-style
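Illustrative aside (not a dataset row): a minimal sketch of the ChainMap lookup order this method relies on, assuming plain dicts stand in for the instance's scopes and globals attributes.

from collections import ChainMap

scopes = {"x": 1}             # local bindings take priority
globals_ = {"x": 99, "y": 2}  # fallback bindings
chain = ChainMap(scopes, globals_)

print(chain["x"])  # 1 -- resolved from scopes first
print(chain["y"])  # 2 -- falls back to globals
# chain["z"] raises KeyError, matching the documented behaviour.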
def load(cls, campaign_dir): if (not Path(campaign_dir).is_absolute()): raise ValueError('Path is not absolute') if (not Path(campaign_dir).exists()): raise ValueError('Directory does not exist') filename = ('%s.json' % os.path.split(campaign_dir)[1]) filepath = os.path.join(campaign_dir, filename) try: tinydb = TinyDB(filepath) assert (set(tinydb.table('config').all()[0].keys()) == set(['script', 'params', 'commit'])) except: os.remove(filepath) raise ValueError('Specified campaign directory seems corrupt') return cls(tinydb, campaign_dir)
Initialize from an existing database. It is assumed that the database json file has the same name as its containing folder. Args: campaign_dir (str): The path to the campaign directory.
codesearchnet
def objects_get(self, bucket, key, projection='noAcl'): args = {} if projection is not None: args['projection'] = projection url = Api._ENDPOINT + (Api._OBJECT_PATH % (bucket, Api._escape_key(key))) return datalab.utils.Http.request(url, args=args, credentials=self._credentials)
Issues a request to retrieve information about an object. Args: bucket: the name of the bucket. key: the key of the object within the bucket. projection: the projection of the object to retrieve. Returns: A parsed object information dictionary. Raises: Exception if there is an error performing the operation.
juraj-google-style
def __call__(self, data): if _is_mutable_sequence_like(data) and len(data) > 0 and _is_sequence_like(data[0]): return '\n'.join([_CsvSerializer._serialize_row(row) for row in data]) return _CsvSerializer._serialize_row(data)
Take data of various data formats and serialize them into CSV. Args: data (object): Data to be serialized. Returns: object: Sequence of bytes to be used for the request body.
juraj-google-style
def expression(self, rbp=0): prev_token = self.consume() left = prev_token.nud(context=self) while (rbp < self.current_token.lbp): prev_token = self.consume() left = prev_token.led(left, context=self) return left
Extract an expression from the flow of tokens. Args: rbp (int): the "right binding power" of the previous token. This represents the (right) precedence of the previous token, and will be compared to the (left) precedence of next tokens. Returns: Whatever the led/nud functions of tokens returned.
codesearchnet
def _el_orb_tuple(string): el_orbs = [] for split in string.split(','): splits = split.split('.') el = splits[0] if len(splits) == 1: el_orbs.append(el) else: el_orbs.append((el, tuple(splits[1:]))) return el_orbs
Parse the element and orbital argument strings. The presence of an element without any orbitals means that we want to plot all of its orbitals. Args: string (`str`): The selected elements and orbitals in the form: `"Sn.s.p,O"`. Returns: A list of tuples specifying which elements/orbitals to plot. The output for the above example would be: `[('Sn', ('s', 'p')), 'O']`
juraj-google-style
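Illustrative aside: the function above is pure Python, so a standalone run (body copied verbatim) reproduces the documented output.

def _el_orb_tuple(string):
    el_orbs = []
    for split in string.split(','):
        splits = split.split('.')
        el = splits[0]
        if len(splits) == 1:
            el_orbs.append(el)  # bare element: plot all of its orbitals
        else:
            el_orbs.append((el, tuple(splits[1:])))
    return el_orbs

print(_el_orb_tuple("Sn.s.p,O"))  # [('Sn', ('s', 'p')), 'O']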
def mapped_repr(obj: Any, attributes: List[Tuple[str, str]], with_addr: bool = False, joiner: str = COMMA_SPACE) -> str: elements = ["{}={}".format(init_param_name, repr(getattr(obj, attr_name))) for attr_name, init_param_name in attributes] return repr_result(obj, elements, with_addr=with_addr, joiner=joiner)
Convenience function for :func:`__repr__`. Takes attribute names and corresponding initialization parameter names (parameters to :func:`__init__`). Args: obj: object to display attributes: list of tuples, each ``(attr_name, init_param_name)``. with_addr: include the memory address of ``obj`` joiner: string with which to join the elements Returns: string: :func:`repr`-style representation
juraj-google-style
def in_array_list(array_list, a, tol=1e-5): if len(array_list) == 0: return False axes = tuple(range(1, a.ndim + 1)) if not tol: return np.any(np.all(np.equal(array_list, a[None, :]), axes)) else: return np.any(np.sum(np.abs(array_list - a[None, :]), axes) < tol)
Extremely efficient nd-array comparison using numpy's broadcasting. This function checks if a particular array a, is present in a list of arrays. It works for arrays of any size, e.g., even matrix searches. Args: array_list ([array]): A list of arrays to compare to. a (array): The test array for comparison. tol (float): The tolerance. Defaults to 1e-5. If 0, an exact match is done. Returns: (bool)
juraj-google-style
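Illustrative aside: a small NumPy check of the broadcasting trick used above, with the tolerance branch inlined and hypothetical inputs.

import numpy as np

array_list = np.array([np.eye(3), np.ones((3, 3))])
a = np.eye(3) + 1e-8  # close to the first entry, within tolerance
axes = tuple(range(1, a.ndim + 1))
match = np.any(np.sum(np.abs(array_list - a[None, :]), axes) < 1e-5)
print(match)  # True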
def fix_image_flip_shape(image, result): image_shape = image.get_shape() if image_shape == tensor_shape.unknown_shape(): result.set_shape([None, None, None]) else: result.set_shape(image_shape) return result
Set the shape to 3 dimensional if we don't know anything else. Args: image: original image size result: flipped or transformed image Returns: An image whose shape is at least (None, None, None).
github-repos
def to_diff_dict(self) -> Dict[str, Any]: config_dict = self.to_dict() default_config_dict = CompressedTensorsConfig().to_dict() serializable_config_dict = {} for key, value in config_dict.items(): if key not in default_config_dict or value != default_config_dict[key]: serializable_config_dict[key] = value return serializable_config_dict
Removes all attributes from config which correspond to the default config attributes for better readability and serializes to a Python dictionary. Returns: `Dict[str, Any]`: Dictionary of all the attributes that make up this configuration instance,
github-repos
def truepath(path, real=False): path = expanduser(path) path = expandvars(path) if real: path = realpath(path) else: path = abspath(path) path = normpath(path) return path
Normalizes a string representation of a path and does shell-like expansion. Args: path (PathLike): string representation of a path real (bool): if True, all symbolic links are followed. (default: False) Returns: PathLike : normalized path Note: This function is similar to the composition of expanduser, expandvars, normpath, and (realpath if `real` else abspath). However, on windows backslashes are then replaced with forward slashes to offer a consistent unix-like experience across platforms. On windows expanduser will expand environment variables formatted as %name%, whereas on unix, this will not occur. CommandLine: python -m ubelt.util_path truepath Example: >>> import ubelt as ub >>> assert ub.truepath('~/foo') == join(ub.userhome(), 'foo') >>> assert ub.truepath('~/foo') == ub.truepath('~/foo/bar/..') >>> assert ub.truepath('~/foo', real=True) == ub.truepath('~/foo')
codesearchnet
def port_get_tag(port): cmd = 'ovs-vsctl get port {0} tag'.format(port) result = __salt__['cmd.run_all'](cmd) retcode = result['retcode'] stdout = result['stdout'] return _stdout_list_split(retcode, stdout)
Lists tags of the port. Args: port: A string - port name. Returns: List of tags (or empty list), False on failure. .. versionadded:: 2016.3.0 CLI Example: .. code-block:: bash salt '*' openvswitch.port_get_tag tap0
juraj-google-style
def ParseEnum(field, value): enum_descriptor = field.enum_type try: number = int(value, 0) except ValueError: enum_value = enum_descriptor.values_by_name.get(value, None) if (enum_value is None): raise ValueError(('Enum type "%s" has no value named %s.' % (enum_descriptor.full_name, value))) else: enum_value = enum_descriptor.values_by_number.get(number, None) if (enum_value is None): raise ValueError(('Enum type "%s" has no value with number %d.' % (enum_descriptor.full_name, number))) return enum_value.number
Parse an enum value. The value can be specified by a number (the enum value), or by a string literal (the enum name). Args: field: Enum field descriptor. value: String value. Returns: Enum value number. Raises: ValueError: If the enum value could not be parsed.
codesearchnet
def _decorator(func): opname = func.__name__ func.__doc__ = '\n Assert the condition `x {sym} y` holds element-wise.\n\n This condition holds if for every pair of (possibly broadcast) elements\n `x[i]`, `y[i]`, we have `x[i] {sym} y[i]`.\n If both `x` and `y` are empty, this is trivially satisfied.\n\n When running in graph mode, you should add a dependency on this operation\n to ensure that it runs. Example of adding a dependency to an operation:\n\n ```python\n with tf.control_dependencies([tf.compat.v1.{opname}(x, y)]):\n output = tf.reduce_sum(x)\n ```\n\n Args:\n x: Numeric `Tensor`.\n y: Numeric `Tensor`, same dtype as and broadcastable to `x`.\n data: The tensors to print out if the condition is False. Defaults to\n error message and first few entries of `x`, `y`.\n summarize: Print this many entries of each tensor.\n message: A string to prefix to the default message.\n name: A name for this operation (optional). Defaults to "{opname}".\n\n Returns:\n Op that raises `InvalidArgumentError` if `x {sym} y` is False.\n\n Raises:\n InvalidArgumentError: if the check can be performed immediately and\n `x {sym} y` is False. The check can be performed immediately during\n eager execution or if `x` and `y` are statically known.\n\n @compatibility(TF2)\n `tf.compat.v1.{opname}` is compatible with eager execution and\n `tf.function`.\n Please use `tf.debugging.{opname}` instead when migrating to TF2. Apart\n from `data`, all arguments are supported with the same argument name.\n\n If you want to ensure the assert statements run before the\n potentially-invalid computation, please use `tf.control_dependencies`,\n as tf.function auto-control dependencies are insufficient for assert\n statements.\n ' return func
Generated decorator that adds the appropriate docstring to the function for symbol `sym`. Args: func: Function for a TensorFlow op Returns: A version of `func` with documentation attached.
github-repos
def average_datetimes(dt_list): if sys.version_info < (3, 3): import time def timestamp_func(dt): return time.mktime(dt.timetuple()) else: timestamp_func = datetime.timestamp total = [timestamp_func(dt) for dt in dt_list] return datetime.fromtimestamp(sum(total) / len(total))
Average a series of datetime objects. .. note:: This function assumes all datetime objects are naive and in the same time zone (UTC). Args: dt_list (iterable): Datetime objects to average Returns: Average datetime as a datetime object
juraj-google-style
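Illustrative aside: on Python 3 the branch reduces to datetime.timestamp; a quick sketch with two naive datetimes (the note about a shared time zone still applies).

from datetime import datetime

dt_list = [datetime(2020, 1, 1), datetime(2020, 1, 3)]
total = [datetime.timestamp(dt) for dt in dt_list]
print(datetime.fromtimestamp(sum(total) / len(total)))  # 2020-01-02 00:00:00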
def __init__(self, path, ignoreErrors=True): self._name = path self._members = {} self._pendingError = None try: self._members = self._readZipDirectory(fileObj=open(path, 'rb')) except Exception: debug.logger & debug.flagReader and debug.logger( 'ZIP file %s open failure: %s' % (self._name, sys.exc_info()[1])) if not ignoreErrors: self._pendingError = error.PySmiError('file %s access error: %s' % (self._name, sys.exc_info()[1]))
Create an instance of *ZipReader* serving a ZIP archive. Args: path (str): path to ZIP archive containing MIB files Keyword Args: ignoreErrors (bool): ignore ZIP archive access errors
juraj-google-style
def get_all_links_in_chain(self): if (self.is_decision() and self.get_link(self.task_id)): return self.links return ([self] + self.links)
Return all links in the chain of trust, including the target task. By default, we're checking a task and all its dependencies back to the tree, so the full chain is ``self.links`` + ``self``. However, we also support checking the decision task itself. In that case, we populate the decision task as a link in ``self.links``, and we don't need to add another check for ``self``. Returns: list: of all ``LinkOfTrust``s to verify.
codesearchnet
def extract_stack(stacklevel=1): thread_key = _get_thread_key() return _tf_stack.extract_stack(_source_mapper_stacks[thread_key][-1].internal_map, _source_filter_stacks[thread_key][-1].internal_set, stacklevel)
An eager-friendly alternative to traceback.extract_stack. Args: stacklevel: number of initial frames to skip when producing the stack. Returns: A list-like FrameSummary containing StackFrame-like objects, which are namedtuple-like objects with the following fields: filename, lineno, name, line, meant to masquerade as traceback.FrameSummary objects.
github-repos
def _export_files(self, bq): job_labels = self._get_bq_metadata().add_additional_bq_job_labels(self.bigquery_job_labels) export_job_name = bigquery_tools.generate_bq_job_name(self._job_name, self._source_uuid, bigquery_tools.BigQueryJobTypes.EXPORT, '%s_%s' % (int(time.time()), random.randint(0, 1000))) temp_location = self.options.view_as(GoogleCloudOptions).temp_location gcs_location = bigquery_export_destination_uri(self.gcs_location, temp_location, self._source_uuid) try: if self.use_json_exports: job_ref = bq.perform_extract_job([gcs_location], export_job_name, self.table_reference, bigquery_tools.FileFormat.JSON, project=self._get_project(), job_labels=job_labels, include_header=False) else: job_ref = bq.perform_extract_job([gcs_location], export_job_name, self.table_reference, bigquery_tools.FileFormat.AVRO, project=self._get_project(), include_header=False, job_labels=job_labels, use_avro_logical_types=True) bq.wait_for_bq_job(job_ref) except Exception as exn: logging.warning('Error exporting table: %s. Note that external tables cannot be exported: https:', exn) raise metadata_list = FileSystems.match([gcs_location])[0].metadata_list if isinstance(self.table_reference, vp.ValueProvider): table_ref = bigquery_tools.parse_table_reference(self.table_reference.get(), project=self.project) else: table_ref = self.table_reference table = bq.get_table(table_ref.projectId, table_ref.datasetId, table_ref.tableId) return (table.schema, metadata_list)
Runs a BigQuery export job. Returns: bigquery.TableSchema instance, a list of FileMetadata instances
github-repos
def set_metadata(self, token, data): req = requests.post(self.meta_url(('metadata/ocp/set/' + token)), json=data, verify=False) if (req.status_code != 200): raise RemoteDataUploadError(('Could not upload metadata: ' + req.json()['message'])) return req.json()
Insert new metadata into the OCP metadata database. Arguments: token (str): Token of the datum to set data (str): A dictionary to insert as metadata. Include `secret`. Returns: json: Info of the inserted ID (convenience) or an error message. Throws: RemoteDataUploadError: If the token is already populated, or if there is an issue with your specified `secret` key.
codesearchnet
def clean_dataframes(dfs): if isinstance(dfs, (list)): for df in dfs: df = clean_dataframe(df) return dfs else: return [clean_dataframe(dfs)]
Fill NaNs with the previous value, the next value or if all are NaN then 1.0 TODO: Linear interpolation and extrapolation Arguments: dfs (list of dataframes): list of dataframes that contain NaNs to be removed Returns: list of dataframes: list of dataframes with NaNs replaced by interpolated values
juraj-google-style
def _MergeMessageField(self, tokenizer, message, field): is_map_entry = _IsMapEntry(field) if tokenizer.TryConsume('<'): end_token = '>' else: tokenizer.Consume('{') end_token = '}' if (field.message_type.full_name == _ANY_FULL_TYPE_NAME and tokenizer.TryConsume('[')): packed_type_name = self._ConsumeAnyTypeUrl(tokenizer) tokenizer.Consume(']') tokenizer.TryConsume(':') if tokenizer.TryConsume('<'): expanded_any_end_token = '>' else: tokenizer.Consume('{') expanded_any_end_token = '}' if not self.descriptor_pool: raise ParseError('Descriptor pool required to parse expanded Any field') expanded_any_sub_message = _BuildMessageFromTypeName(packed_type_name, self.descriptor_pool) if not expanded_any_sub_message: raise ParseError('Type %s not found in descriptor pool' % packed_type_name) while not tokenizer.TryConsume(expanded_any_end_token): if tokenizer.AtEnd(): raise tokenizer.ParseErrorPreviousToken('Expected "%s".' % (expanded_any_end_token,)) self._MergeField(tokenizer, expanded_any_sub_message) if field.label == descriptor.FieldDescriptor.LABEL_REPEATED: any_message = getattr(message, field.name).add() else: any_message = getattr(message, field.name) any_message.Pack(expanded_any_sub_message) elif field.label == descriptor.FieldDescriptor.LABEL_REPEATED: if field.is_extension: sub_message = message.Extensions[field].add() elif is_map_entry: sub_message = getattr(message, field.name).GetEntryClass()() else: sub_message = getattr(message, field.name).add() else: if field.is_extension: sub_message = message.Extensions[field] else: sub_message = getattr(message, field.name) sub_message.SetInParent() while not tokenizer.TryConsume(end_token): if tokenizer.AtEnd(): raise tokenizer.ParseErrorPreviousToken('Expected "%s".' % (end_token,)) self._MergeField(tokenizer, sub_message) if is_map_entry: value_cpptype = field.message_type.fields_by_name['value'].cpp_type if value_cpptype == descriptor.FieldDescriptor.CPPTYPE_MESSAGE: value = getattr(message, field.name)[sub_message.key] value.MergeFrom(sub_message.value) else: getattr(message, field.name)[sub_message.key] = sub_message.value
Merges a single scalar field into a message. Args: tokenizer: A tokenizer to parse the field value. message: The message of which field is a member. field: The descriptor of the field to be merged. Raises: ParseError: In case of text parsing problems.
juraj-google-style
def __add_min_max_value(parser, basename, default_min, default_max, initial, help_template): help_template = Template(help_template) parser.add('--{0}-min'.format(basename), default=default_min, type=float, required=False, help=help_template.substitute(mmi='min', name=basename)) parser.add('--{0}-max'.format(basename), default=default_max, type=float, required=False, help=help_template.substitute(mmi='max', name=basename)) parser.add('--{0}'.format(basename), default=initial, type=float, required=False, help=help_template.substitute(mmi='initial', name=basename))
Generates parser entries for options with a min, max, and default value. Args: parser: the parser to use. basename: the base option name. Generated options will have flags --basename-min, --basename-max, and --basename. default_min: the default min value default_max: the default max value initial: the default initial value help_template: the help string template. $mmi will be replaced with min, max, or initial. $name will be replaced with basename.
codesearchnet
def __init__(self, directory, jinja2_environment, logger=None, raise_exception_on_warning=False): super(Generator, self).__init__() self.__logger = logger self.__raise_exception_on_warning = raise_exception_on_warning if not os.path.isdir(directory): self.log_error('Main directory \'%s\' does not exists!' % directory) self.__root_directory = os.path.abspath(directory) self.__jinja2_environment = jinja2_environment self.__jinja2_predefined_filters = self.__jinja2_environment.filters.keys() self.__extensions = {} self.__actions = TreeMap() self.__default_action = None
Constructor of a :program:`cygenja` template machine. Args: directory (str): Absolute or relative base directory. Everything happens in that directory and sub-directories. jinja2_environment: :program:`Jinja2` environment. logger: A logger (from the standard ``logging``) or ``None`` is no logging is wanted. raise_exception_on_warning (bool): If set to ``True``, raise a ``RuntimeError`` when logging a warning.
juraj-google-style
def process_fidelity(channel1, channel2, require_cptp=True): is_cptp1 = None is_cptp2 = None if isinstance(channel1, (list, np.ndarray)): channel1 = Operator(channel1) if require_cptp: is_cptp1 = channel1.is_unitary() if isinstance(channel2, (list, np.ndarray)): channel2 = Operator(channel2) if require_cptp: is_cptp2 = channel2.is_unitary() s1 = SuperOp(channel1) s2 = SuperOp(channel2) if require_cptp: if (is_cptp1 is None): is_cptp1 = s1.is_cptp() if (not is_cptp1): raise QiskitError('channel1 is not CPTP') if (is_cptp2 is None): is_cptp2 = s2.is_cptp() if (not is_cptp2): raise QiskitError('channel2 is not CPTP') (input_dim1, output_dim1) = s1.dim (input_dim2, output_dim2) = s2.dim if ((input_dim1 != output_dim1) or (input_dim2 != output_dim2)): raise QiskitError('Input channels must have same size input and output dimensions.') if (input_dim1 != input_dim2): raise QiskitError('Input channels have different dimensions.') fidelity = (np.trace(s1.compose(s2.adjoint()).data) / (input_dim1 ** 2)) return fidelity
Return the process fidelity between two quantum channels. This is given by F_p(E1, E2) = Tr[S2^dagger.S1])/dim^2 where S1 and S2 are the SuperOp matrices for channels E1 and E2, and dim is the dimension of the input output statespace. Args: channel1 (QuantumChannel or matrix): a quantum channel or unitary matrix. channel2 (QuantumChannel or matrix): a quantum channel or unitary matrix. require_cptp (bool): require input channels to be CPTP [Default: True]. Returns: array_like: The state fidelity F(state1, state2). Raises: QiskitError: if inputs channels do not have the same dimensions, have different input and output dimensions, or are not CPTP with `require_cptp=True`.
codesearchnet
def all(self, predicate=bool): if self.closed(): raise ValueError('Attempt to call all() on a closed Queryable.') if (not is_callable(predicate)): raise TypeError('all() parameter predicate={0} is not callable'.format(repr(predicate))) return all(self.select(predicate))
Determine if all elements in the source sequence satisfy a condition. All of the source sequence will be consumed. Note: This method uses immediate execution. Args: predicate (callable): An optional single argument function used to test each elements. If omitted, the bool() function is used resulting in the elements being tested directly. Returns: True if all elements in the sequence meet the predicate condition, otherwise False. Raises: ValueError: If the Queryable is closed() TypeError: If predicate is not callable.
codesearchnet
def coco_to_pascal_voc(bboxes: np.ndarray) -> np.ndarray: bboxes[:, 2] = bboxes[:, 2] + bboxes[:, 0] - 1 bboxes[:, 3] = bboxes[:, 3] + bboxes[:, 1] - 1 return bboxes
Converts bounding boxes from the COCO format to the Pascal VOC format. In other words, converts from (top_left_x, top_left_y, width, height) format to (top_left_x, top_left_y, bottom_right_x, bottom_right_y). Args: bboxes (`np.ndarray` of shape `(batch_size, 4)`): Bounding boxes in COCO format. Returns: `np.ndarray` of shape `(batch_size, 4)` in Pascal VOC format.
github-repos
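Illustrative aside: the conversion is pure array arithmetic, so a hypothetical single box shows the corner convention (inclusive bottom-right corner, hence the -1).

import numpy as np

boxes = np.array([[10, 20, 30, 40]])        # (top_left_x, top_left_y, width, height)
boxes[:, 2] = boxes[:, 2] + boxes[:, 0] - 1
boxes[:, 3] = boxes[:, 3] + boxes[:, 1] - 1
print(boxes)  # [[10 20 39 59]] -> (x1, y1, x2, y2)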
def _model_ready(self, sess: session.Session) -> Tuple[bool, Optional[str]]: return _ready(self._ready_op, sess, 'Model not ready')
Checks if the model is ready or not. Args: sess: A `Session`. Returns: A tuple (is_ready, msg), where is_ready is True if ready and False otherwise, and msg is `None` if the model is ready, a `String` with the reason why it is not ready otherwise.
github-repos
def languages(self, **kwargs): path = ('/projects/%s/languages' % self.get_id()) return self.manager.gitlab.http_get(path, **kwargs)
Get languages used in the project with percentage value. Args: **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabGetError: If the server failed to perform the request
codesearchnet
def kill(self, container, signal=None): url = self._url("/containers/{0}/kill", container) params = {} if signal is not None: if not isinstance(signal, six.string_types): signal = int(signal) params['signal'] = signal res = self._post(url, params=params) self._raise_for_status(res)
Kill a container or send a signal to a container. Args: container (str): The container to kill signal (str or int): The signal to send. Defaults to ``SIGKILL`` Raises: :py:class:`docker.errors.APIError` If the server returns an error.
juraj-google-style
def _write_init_models(self, filenames): self.write(destination=self.output_directory, filename="__init__.py", template_name="__init_model__.py.tpl", filenames=self._prepare_filenames(filenames), class_prefix=self._class_prefix, product_accronym=self._product_accronym, header=self.header_content)
Write init file Args: filenames (dict): dict of filename and classes
juraj-google-style
def begin_stream(self, command: Command) -> Reply: (yield from self._control_stream.write_command(command)) reply = (yield from self._control_stream.read_reply()) self.raise_if_not_match('Begin stream', (ReplyCodes.file_status_okay_about_to_open_data_connection, ReplyCodes.data_connection_already_open_transfer_starting), reply) return reply
Start sending content on the data stream. Args: command: A command that tells the server to send data over the data connection. Coroutine. Returns: The begin reply.
codesearchnet
def _get_js_files(cls, extra_files): return cls._get_media_files(packager=Packager(), media_packages=getattr(cls, 'js_packages', {}), media_type='js', extra_files=extra_files)
Return all JavaScript files from the Media class. Args: extra_files (list): The contents of the Media class's original :py:attr:`js` attribute, if one was provided. Returns: list: The JavaScript files to return for the :py:attr:`js` attribute.
codesearchnet
def get_models(self, uniprot_acc): if uniprot_acc in self.all_models: return self.all_models[uniprot_acc] else: log.error('{}: no SWISS-MODELs available'.format(uniprot_acc)) return None
Return all available models for a UniProt accession number. Args: uniprot_acc (str): UniProt ACC/ID Returns: dict: All available models in SWISS-MODEL for this UniProt entry
juraj-google-style
def _insert_stack(stack, sample_count, call_tree): curr_level = call_tree for func in stack: next_level_index = { node['stack']: node for node in curr_level['children']} if func not in next_level_index: new_node = {'stack': func, 'children': [], 'sampleCount': 0} curr_level['children'].append(new_node) curr_level = new_node else: curr_level = next_level_index[func] curr_level['sampleCount'] = sample_count
Inserts stack into the call tree. Args: stack: Call stack. sample_count: Sample count of call stack. call_tree: Call tree.
juraj-google-style
def sample(self, num_rows): sampled_values = [] for i in range(num_rows): sampled_values.append(self._sample_row()) return pd.DataFrame(sampled_values, columns=self.columns)
Sample new rows. Args: num_rows(int): Number of rows to sample Returns: pandas.DataFrame
codesearchnet
def create_knowledge_base(project_id, display_name): import dialogflow_v2beta1 as dialogflow client = dialogflow.KnowledgeBasesClient() project_path = client.project_path(project_id) knowledge_base = dialogflow.types.KnowledgeBase(display_name=display_name) response = client.create_knowledge_base(project_path, knowledge_base) print('Knowledge Base created:\n') print('Display Name: {}\n'.format(response.display_name)) print('Knowledge ID: {}\n'.format(response.name))
Creates a Knowledge base. Args: project_id: The GCP project linked with the agent. display_name: The display name of the Knowledge base.
codesearchnet
def get_alarms(zone=None): if zone is None: zone = discovery.any_soco() response = zone.alarmClock.ListAlarms() alarm_list = response['CurrentAlarmList'] tree = XML.fromstring(alarm_list.encode('utf-8')) alarms = tree.findall('Alarm') result = set() for alarm in alarms: values = alarm.attrib alarm_id = values['ID'] if Alarm._all_alarms.get(alarm_id): instance = Alarm._all_alarms.get(alarm_id) else: instance = Alarm(None) instance._alarm_id = alarm_id Alarm._all_alarms[instance._alarm_id] = instance instance.start_time = datetime.strptime( values['StartTime'], "%H:%M:%S").time() instance.duration = None if values['Duration'] == '' else\ datetime.strptime(values['Duration'], "%H:%M:%S").time() instance.recurrence = values['Recurrence'] instance.enabled = values['Enabled'] == '1' instance.zone = next((z for z in zone.all_zones if z.uid == values['RoomUUID']), None) if instance.zone is None: continue instance.program_uri = None if values['ProgramURI'] ==\ "x-rincon-buzzer:0" else values['ProgramURI'] instance.program_metadata = values['ProgramMetaData'] instance.play_mode = values['PlayMode'] instance.volume = values['Volume'] instance.include_linked_zones = values['IncludeLinkedZones'] == '1' result.add(instance) return result
Get a set of all alarms known to the Sonos system. Args: zone (`SoCo`, optional): a SoCo instance to query. If None, a random instance is used. Defaults to `None`. Returns: set: A set of `Alarm` instances Note: Any existing `Alarm` instance will have its attributes updated to those currently stored on the Sonos system.
juraj-google-style
def evaluate_cut(uncut_subsystem, cut, unpartitioned_ces): log.debug('Evaluating %s...', cut) cut_subsystem = uncut_subsystem.apply_cut(cut) if config.ASSUME_CUTS_CANNOT_CREATE_NEW_CONCEPTS: mechanisms = unpartitioned_ces.mechanisms else: mechanisms = set((unpartitioned_ces.mechanisms + list(cut_subsystem.cut_mechanisms))) partitioned_ces = ces(cut_subsystem, mechanisms) log.debug('Finished evaluating %s.', cut) phi_ = ces_distance(unpartitioned_ces, partitioned_ces) return SystemIrreducibilityAnalysis(phi=phi_, ces=unpartitioned_ces, partitioned_ces=partitioned_ces, subsystem=uncut_subsystem, cut_subsystem=cut_subsystem)
Compute the system irreducibility for a given cut. Args: uncut_subsystem (Subsystem): The subsystem without the cut applied. cut (Cut): The cut to evaluate. unpartitioned_ces (CauseEffectStructure): The cause-effect structure of the uncut subsystem. Returns: SystemIrreducibilityAnalysis: The |SystemIrreducibilityAnalysis| for that cut.
codesearchnet
def _launch_flow(self, client, name, args): flow = self._check_approval_wrapper( client, client.CreateFlow, name=name, args=args) flow_id = flow.flow_id print('{0:s}: Scheduled'.format(flow_id)) if self.keepalive: keepalive_flow = client.CreateFlow( name='KeepAlive', args=flows_pb2.KeepAliveArgs()) print('KeepAlive Flow:{0:s} scheduled'.format(keepalive_flow.flow_id)) return flow_id
Create specified flow, setting KeepAlive if requested. Args: client: GRR Client object on which to launch the flow. name: string containing flow name. args: proto (*FlowArgs) for type of flow, as defined in GRR flow proto. Returns: string containing ID of launched flow
juraj-google-style
def _get_endpoint(self, sub_domain): storage_parameters = (self._storage_parameters or dict()) account_name = storage_parameters.get('account_name') if (not account_name): raise ValueError('"account_name" is required for Azure storage') suffix = storage_parameters.get('endpoint_suffix', 'core.windows.net') self._endpoint = ('http%s:') return (account_name, suffix.replace('.', '\\.'))
Get endpoint information from storage parameters. Update system with endpoint information and return information required to define roots. Args: self (pycosio._core.io_system.SystemBase subclass): System. sub_domain (str): Azure storage sub-domain. Returns: tuple of str: account_name, endpoint_suffix
codesearchnet
def replace_vars(config, env): if isinstance(config, dict): for k, v in list(config.items()): if isinstance(v, dict) or isinstance(v, list) or isinstance(v, tuple): replace_vars(v, env) elif isinstance(v, basestring): config[k] = expand_var(v, env) elif isinstance(config, list): for i, v in enumerate(config): if isinstance(v, dict) or isinstance(v, list) or isinstance(v, tuple): replace_vars(v, env) elif isinstance(v, basestring): config[i] = expand_var(v, env) elif isinstance(config, tuple): for v in config: if isinstance(v, dict) or isinstance(v, list) or isinstance(v, tuple): replace_vars(v, env)
Replace variable references in config using the supplied env dictionary. Args: config: the config to parse. Can be a tuple, list or dict. env: user supplied dictionary. Raises: Exception if any variable references are not found in env.
juraj-google-style
def try_pick_piece_of_work(self, worker_id, submission_id=None): client = self._datastore_client unclaimed_work_ids = None if submission_id: unclaimed_work_ids = [ k for k, v in iteritems(self.work) if is_unclaimed(v) and (v['submission_id'] == submission_id) ] if not unclaimed_work_ids: unclaimed_work_ids = [k for k, v in iteritems(self.work) if is_unclaimed(v)] if unclaimed_work_ids: next_work_id = random.choice(unclaimed_work_ids) else: return None try: with client.transaction() as transaction: work_key = client.key(KIND_WORK_TYPE, self._work_type_entity_id, KIND_WORK, next_work_id) work_entity = client.get(work_key, transaction=transaction) if not is_unclaimed(work_entity): return None work_entity['claimed_worker_id'] = worker_id work_entity['claimed_worker_start_time'] = get_integer_time() transaction.put(work_entity) except Exception: return None return next_work_id
Tries pick next unclaimed piece of work to do. Attempt to claim work piece is done using Cloud Datastore transaction, so only one worker can claim any work piece at a time. Args: worker_id: ID of current worker submission_id: if not None then this method will try to pick piece of work for this submission Returns: ID of the claimed work piece
juraj-google-style
def is_attribute_deprecated(self, attribute): rule_set = self._attribute_rule_sets.get(attribute) if rule_set.version_deprecated: if self._version >= rule_set.version_deprecated: return True else: return False else: return False
Check if the attribute is deprecated by the current KMIP version. Args: attribute (string): The name of the attribute (e.g., 'Unique Identifier'). Required.
juraj-google-style
def References(self): if (self.__references is None): refs = {} for (hash, group) in groupby(self.inputs, (lambda x: x.PrevHash)): (tx, height) = GetBlockchain().GetTransaction(hash.ToBytes()) if (tx is not None): for input in group: refs[input] = tx.outputs[input.PrevIndex] self.__references = refs return self.__references
Get all references. Returns: dict: Key (UInt256): input PrevHash Value (TransactionOutput): object.
codesearchnet
def HandleBlockHeadersReceived(self, inventory): try: inventory = IOHelper.AsSerializableWithType(inventory, 'neo.Network.Payloads.HeadersPayload.HeadersPayload') if inventory is not None: logger.debug(f"{self.prefix} received headers") self.heart_beat(HEARTBEAT_HEADERS) BC.Default().AddHeaders(inventory.Headers) except Exception as e: logger.debug(f"Error handling Block headers {e}")
Process a block header inventory payload. Args: inventory (neo.Network.Inventory):
juraj-google-style
def upload_to_metta(train_features_path, train_labels_path, test_features_path, test_labels_path, train_quarter, test_quarter, num_dimensions): train_config = metta_config(train_quarter, num_dimensions) test_config = metta_config(test_quarter, num_dimensions) X_train = pd.read_csv(train_features_path, sep=',') X_train.columns = [('doc2vec_' + str(i)) for i in range(X_train.shape[1])] Y_train = pd.read_csv(train_labels_path) Y_train.columns = ['onet_soc_code'] train = pd.concat([X_train, Y_train], axis=1) X_test = pd.read_csv(test_features_path, sep=',') X_test.columns = [('doc2vec_' + str(i)) for i in range(X_test.shape[1])] Y_test = pd.read_csv(test_labels_path) Y_test.columns = ['onet_soc_code'] test = pd.concat([X_test, Y_test], axis=1) metta.archive_train_test(train_config, X_train, test_config, X_test, directory='wdi')
Store train and test matrices using metta Args: train_features_path (str) Path to matrix with train features train_labels_path (str) Path to matrix with train labels test_features_path (str) Path to matrix with test features test_labels_path (str) Path to matrix with test labels train_quarter (str) Quarter of train matrix test_quarter (str) Quarter of test matrix num_dimensions (int) Number of features
codesearchnet
def _CompositeFoldByteStream( self, mapped_value, context=None, **unused_kwargs): context_state = getattr(context, 'state', {}) attribute_index = context_state.get('attribute_index', 0) subcontext = context_state.get('context', None) if not subcontext: subcontext = DataTypeMapContext(values={ type(mapped_value).__name__: mapped_value}) data_attributes = [] for attribute_index in range(attribute_index, self._number_of_attributes): attribute_name = self._attribute_names[attribute_index] data_type_map = self._data_type_maps[attribute_index] member_value = getattr(mapped_value, attribute_name, None) if data_type_map is None or member_value is None: continue member_data = data_type_map.FoldByteStream( member_value, context=subcontext) if member_data is None: return None data_attributes.append(member_data) if context: context.state = {} return b''.join(data_attributes)
Folds the data type into a byte stream. Args: mapped_value (object): mapped value. context (Optional[DataTypeMapContext]): data type map context. Returns: bytes: byte stream. Raises: FoldingError: if the data type definition cannot be folded into the byte stream.
juraj-google-style
def get_rprof(step, var): if (var in step.rprof.columns): rprof = step.rprof[var] rad = None if (var in phyvars.RPROF): meta = phyvars.RPROF[var] else: meta = phyvars.Varr(var, None, '1') elif (var in phyvars.RPROF_EXTRA): meta = phyvars.RPROF_EXTRA[var] (rprof, rad) = meta.description(step) meta = phyvars.Varr(misc.baredoc(meta.description), meta.kind, meta.dim) else: raise UnknownRprofVarError(var) (rprof, _) = step.sdat.scale(rprof, meta.dim) if (rad is not None): (rad, _) = step.sdat.scale(rad, 'm') return (rprof, rad, meta)
Extract or compute and rescale requested radial profile. Args: step (:class:`~stagpy.stagyydata._Step`): a step of a StagyyData instance. var (str): radial profile name, a key of :data:`stagpy.phyvars.RPROF` or :data:`stagpy.phyvars.RPROF_EXTRA`. Returns: tuple of :class:`numpy.array` and :class:`stagpy.phyvars.Varr`: rprof, rad, meta rprof is the requested profile, rad the radial position at which it is evaluated (set to None if it is the position of profiles output by StagYY), and meta is a :class:`stagpy.phyvars.Varr` instance holding metadata of the requested variable.
codesearchnet
def combine_slices(self, slices, tensor_shape, device=None): if tensor_shape.ndims == 0: return slices[0] ret = slices[:] tensor_layout = self.tensor_layout(tensor_shape) for mesh_dim, tensor_axis in zip( self.shape, tensor_layout.mesh_axis_to_tensor_axis(self.ndims)): slice_size = len(ret) if tensor_axis is None: ret = ret[:slice_size] else: if device: devices = [device] * slice_size else: devices = [ret[i].device for i in xrange(slice_size)] concat_inputs = [] for i in xrange(slice_size): concat_inputs.append( [ret[i + slice_size * j] for j in xrange(mesh_dim.size)]) ret = parallel( devices, tf.concat, concat_inputs, axis=[tensor_axis] * len(devices)) assert len(ret) == 1 return ret[0]
Turns a set of slices into a single tensor. Args: slices: list of tf.Tensor with length self.size. tensor_shape: Shape. device: optional str. If absent, we use the devices of the slices. Returns: tf.Tensor.
juraj-google-style
def reset_network(roles, extra_vars=None): logger.debug('Reset the constraints') if not extra_vars: extra_vars = {} tmpdir = os.path.join(os.getcwd(), TMP_DIRNAME) _check_tmpdir(tmpdir) utils_playbook = os.path.join(ANSIBLE_DIR, 'utils.yml') options = {'enos_action': 'tc_reset', 'tc_output_dir': tmpdir} options.update(extra_vars) run_ansible([utils_playbook], roles=roles, extra_vars=options)
Reset the network constraints (latency, bandwidth ...) Remove any filter that have been applied to shape the traffic. Args: roles (dict): role->hosts mapping as returned by :py:meth:`enoslib.infra.provider.Provider.init` inventory (str): path to the inventory
juraj-google-style
def get_relative_modpath(module_fpath): modsubdir_list = get_module_subdir_list(module_fpath) (_, ext) = splitext(module_fpath) rel_modpath = (join(*modsubdir_list) + ext) rel_modpath = ensure_crossplat_path(rel_modpath) return rel_modpath
Returns path to module relative to the package root Args: module_fpath (str): module filepath Returns: str: modname Example: >>> # ENABLE_DOCTEST >>> from utool.util_path import * # NOQA >>> import utool as ut >>> module_fpath = ut.util_path.__file__ >>> rel_modpath = ut.get_relative_modpath(module_fpath) >>> rel_modpath = rel_modpath.replace('.pyc', '.py') # allow pyc or py >>> result = ensure_crossplat_path(rel_modpath) >>> print(result) utool/util_path.py
codesearchnet
def delete_duplicates(seq): seen = set() seen_add = seen.add return [x for x in seq if not (x in seen or seen_add(x))]
Remove duplicates from an iterable, preserving the order. Args: seq: Iterable of various type. Returns: list: List of unique objects.
juraj-google-style
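Illustrative aside: the seen/seen_add idiom keeps the first occurrence of each element while preserving order.

seq = [3, 1, 3, 2, 1]
seen = set()
seen_add = seen.add
print([x for x in seq if not (x in seen or seen_add(x))])  # [3, 1, 2]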
def fwd(self, x_data): x_data = numpy.asfarray(x_data) shape = x_data.shape x_data = x_data.reshape(len(self), -1) lower, upper = evaluation.evaluate_bound(self, x_data) q_data = numpy.zeros(x_data.shape) indices = x_data > upper q_data[indices] = 1 indices = ~indices & (x_data >= lower) q_data[indices] = numpy.clip(evaluation.evaluate_forward( self, x_data), a_min=0, a_max=1)[indices] q_data = q_data.reshape(shape) return q_data
Forward Rosenblatt transformation. Args: x_data (numpy.ndarray): Location for the distribution function. ``x_data.shape`` must be compatible with distribution shape. Returns: (numpy.ndarray): Evaluated distribution function values, where ``out.shape==x_data.shape``.
juraj-google-style
def __init__(self, requests, expert_capacity): self._requests = tf.to_float(requests) self._expert_capacity = expert_capacity expert_capacity_f = tf.to_float(expert_capacity) self._batch, self._length, self._num_experts = tf.unstack( tf.shape(self._requests), num=3) position_in_expert = tf.cumsum(self._requests, axis=1, exclusive=True) self._gates = self._requests * tf.to_float( tf.less(position_in_expert, expert_capacity_f)) batch_index = tf.reshape( tf.to_float(tf.range(self._batch)), [self._batch, 1, 1]) length_index = tf.reshape( tf.to_float(tf.range(self._length)), [1, self._length, 1]) expert_index = tf.reshape( tf.to_float(tf.range(self._num_experts)), [1, 1, self._num_experts]) flat_position = ( position_in_expert + batch_index * (tf.to_float(self._num_experts) * expert_capacity_f) + expert_index * expert_capacity_f) self._indices = tf.unsorted_segment_sum( data=tf.reshape((length_index + 1.0) * self._gates, [-1]), segment_ids=tf.to_int32(tf.reshape(flat_position, [-1])), num_segments=self._batch * self._num_experts * expert_capacity) self._indices = tf.reshape( self._indices, [self._batch, self._num_experts, expert_capacity]) self._nonpadding = tf.minimum(self._indices, 1.0) self._indices = tf.nn.relu(self._indices - 1.0) self._flat_indices = tf.to_int32( self._indices + (tf.reshape(tf.to_float(tf.range(self._batch)), [-1, 1, 1]) * tf.to_float(self._length))) self._indices = tf.to_int32(self._indices)
Create a TruncatingDispatcher. Args: requests: a boolean `Tensor` of shape `[batch, length, num_experts]`. Alternatively, a float or int Tensor containing zeros and ones. expert_capacity: a Scalar - maximum number of examples per expert per batch element. Returns: a TruncatingDispatcher
juraj-google-style
def seek(self, offset, whence=Seek.set): _whence = int(whence) if (_whence == Seek.current): offset += self._pos if ((_whence == Seek.current) or (_whence == Seek.set)): if (offset < 0): raise ValueError('Negative seek position {}'.format(offset)) elif (_whence == Seek.end): if (offset > 0): raise ValueError('Positive seek position {}'.format(offset)) offset += self._end else: raise ValueError('Invalid whence ({}, should be {}, {} or {})'.format(_whence, Seek.set, Seek.current, Seek.end)) if (offset < self._pos): self._f = self._zip.open(self.name) self._pos = 0 self.read((offset - self._pos)) return self._pos
Change stream position. Change the stream position to the given byte offset. The offset is interpreted relative to the position indicated by ``whence``. Arguments: offset (int): the offset to the new position, in bytes. whence (int): the position reference. Possible values are: * `Seek.set`: start of stream (the default). * `Seek.current`: current position; offset may be negative. * `Seek.end`: end of stream; offset must be negative. Returns: int: the new absolute position. Raises: ValueError: when ``whence`` is not known, or ``offset`` is invalid. Note: Zip compression does not support seeking, so the seeking is emulated. Seeking somewhere else than the current position will need to either: * reopen the file and restart decompression * read and discard data to advance in the file
codesearchnet
def model_from_yaml(yaml_string, custom_objects=None): raise RuntimeError('Method `model_from_yaml()` has been removed due to security risk of arbitrary code execution. Please use `Model.to_json()` and `model_from_json()` instead.')
Parses a yaml model configuration file and returns a model instance. Note: Since TF 2.6, this method is no longer supported and will raise a RuntimeError. Args: yaml_string: YAML string or open file encoding a model configuration. custom_objects: Optional dictionary mapping names (strings) to custom classes or functions to be considered during deserialization. Returns: A Keras model instance (uncompiled). Raises: RuntimeError: announces that the method poses a security risk
github-repos
def prune_layer(layer: nn.Linear | Conv1D, index: torch.LongTensor, dim: int | None=None) -> nn.Linear | Conv1D: if isinstance(layer, nn.Linear): return prune_linear_layer(layer, index, dim=0 if dim is None else dim) elif isinstance(layer, Conv1D): return prune_conv1d_layer(layer, index, dim=1 if dim is None else dim) else: raise ValueError(f"Can't prune layer of class {layer.__class__}")
Prune a Conv1D or linear layer to keep only entries in index. Used to remove heads. Args: layer (`Union[torch.nn.Linear, Conv1D]`): The layer to prune. index (`torch.LongTensor`): The indices to keep in the layer. dim (`int`, *optional*): The dimension on which to keep the indices. Returns: `torch.nn.Linear` or [`~pytorch_utils.Conv1D`]: The pruned layer as a new layer with `requires_grad=True`.
github-repos
def pixelate(x, severity=1): c = [0.6, 0.5, 0.4, 0.3, 0.25][severity - 1] shape = x.shape x = tfds.core.lazy_imports.PIL_Image.fromarray(x.astype(np.uint8)) x = x.resize((int(shape[1] * c), int(shape[0] * c))) x = x.resize((shape[1], shape[0])) return np.asarray(x)
Pixelate images. Conduct pixelating corruptions to images by first shrinking the images and then resizing to original size. Args: x: numpy array, uncorrupted image, assumed to have uint8 pixel in [0,255]. severity: integer, severity of corruption. Returns: numpy array, image with uint8 pixels in [0,255]. Applied pixelating corruption.
juraj-google-style
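Illustrative aside: a rough standalone sketch of the same shrink-then-resize corruption, going through Pillow directly instead of the tfds lazy import (Pillow is assumed to be available; the input is a made-up random image).

import numpy as np
from PIL import Image

def pixelate_sketch(x, severity=1):
    c = [0.6, 0.5, 0.4, 0.3, 0.25][severity - 1]
    h, w = x.shape[0], x.shape[1]
    img = Image.fromarray(x.astype(np.uint8))
    img = img.resize((int(w * c), int(h * c)))  # shrink
    img = img.resize((w, h))                    # resize back -> blocky artifacts
    return np.asarray(img)

corrupted = pixelate_sketch(np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8), severity=3)
print(corrupted.shape, corrupted.dtype)  # (32, 32, 3) uint8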
def project_texture_on_surface(texture, surface, angle=DEFAULT_ANGLE): projected_surface = project_surface(surface, angle) texture_x, _ = texture texture_y = map_texture_to_surface(texture, projected_surface) return texture_x, texture_y
Maps a texture onto a surface, then projects to 2D and returns a layer. Args: texture (texture): the texture to project surface (surface): the surface to project onto angle (float): the projection angle in degrees (0 = top-down, 90 = side view) Returns: layer: A layer.
juraj-google-style
def turb44(msg): d = hex2bin(data(msg)) if d[46] == '0': return None turb = bin2int(d[47:49]) return turb
Turbulence. Args: msg (String): 28 bytes hexadecimal message string Returns: int: turbulence level. 0=NIL, 1=Light, 2=Moderate, 3=Severe
juraj-google-style
def visualize_conv_weights(filters, name): with tf.name_scope('visualize_w_' + name): filters = tf.transpose(filters, (3, 2, 0, 1)) filters = tf.unstack(filters) filters = tf.concat(filters, 1) filters = tf.unstack(filters) filters = tf.concat(filters, 1) filters = tf.expand_dims(filters, 0) filters = tf.expand_dims(filters, -1) tf.summary.image('visualize_w_' + name, filters)
Visualize the weights used in convolution filters. Args: filters: tensor containing the weights [H,W,Cin,Cout] name: label for tensorboard Returns: image of all weights
juraj-google-style
def set(cls, values): cls.mrc_out_el.text = values.get("mrc", "") cls.oai_out_el.text = values.get("oai", "") cls.dc_out_el.text = values.get("dc", "") cls.filename = values.get("fn", "fn") cls.values = values
Set the elements from the data obtained from REST API. Args: values (dict): Dict with ``mrc``, ``oai``, ``dc`` and ``fn`` keys.
juraj-google-style
def get_by(self, field, value): if ((field == 'userName') or (field == 'name')): return self._client.get(((self.URI + '/') + value)) elif (field == 'role'): value = value.replace(' ', '%20') return self._client.get(((self.URI + '/roles/users/') + value))['members'] else: raise HPOneViewException('Only userName, name and role can be queried for this resource.')
Gets all Users that match the filter. The search is case-insensitive. Args: field: Field name to filter. Accepted values: 'name', 'userName', 'role' value: Value to filter. Returns: list: A list of Users.
codesearchnet
def create_test_suite(cls, name: str, path: str): return type(name, (unittest.TestCase,), dict(cls.parse_test_methods(path)))
Dynamically creates a unittest.TestCase subclass with generated tests. This method takes a suite name and a path (or glob pattern). It uses `parse_test_methods` to find YAML files at the given path and generate individual test methods for each. These generated test methods are then added as attributes to a new class, which is a subclass of `unittest.TestCase`. Args: name: The desired name for the dynamically created test suite class. path: A string representing the path or glob pattern to search for YAML example files, which will be used to generate test methods. Returns: A new class, subclass of `unittest.TestCase`, containing dynamically generated test methods based on the YAML files found at the given path.
github-repos
def get_variables(scope=None, suffix=None): candidates = tf.get_collection(MODEL_VARIABLES, scope)[:] if suffix is not None: candidates = [var for var in candidates if var.op.name.endswith(suffix)] return candidates
Gets the list of variables, filtered by scope and/or suffix. Args: scope: an optional scope for filtering the variables to return. suffix: an optional suffix for filtering the variables to return. Returns: a copied list of variables with scope and suffix.
juraj-google-style
def _merge_section(original, to_merge): if not original: return to_merge or '' if not to_merge: return original or '' try: index = original.index(':') + 1 except ValueError: index = original.index('\n') name = original[:index].strip() section = '\n '.join( (original[index + 1:].lstrip(), to_merge[index + 1:].lstrip()) ).rstrip() return '{name}\n {section}'.format(name=name, section=section)
Merge two sections together. Args: original: The source of header and initial section lines. to_merge: The source for the additional section lines to append. Returns: A new section string that uses the header of the original argument and the section lines from both.
juraj-google-style
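Illustrative aside: a condensed re-run of the same string surgery on two hypothetical "Args:" sections, mirroring the single-space join used in the code as written.

original = "Args:\n x: first argument."
to_merge = "Args:\n y: second argument."

index = original.index(':') + 1
name = original[:index].strip()
section = '\n '.join(
    (original[index + 1:].lstrip(), to_merge[index + 1:].lstrip())
).rstrip()
print('{name}\n {section}'.format(name=name, section=section))
# Args:
#  x: first argument.
#  y: second argument.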
def parseFloat(self, words): def pointFloat(words): m = re.search('(.*) point (.*)', words) if m: whole = m.group(1) frac = m.group(2) total = 0.0 coeff = 0.1 for digit in frac.split(' '): total += (coeff * self.parse(digit)) coeff /= 10.0 return (self.parseInt(whole) + total) return None def fractionFloat(words): m = re.search('(.*) and (.*)', words) if m: whole = self.parseInt(m.group(1)) frac = m.group(2) frac = re.sub('(\\w+)s(\\b)', '\\g<1>\\g<2>', frac) frac = re.sub('(\\b)a(\\b)', '\\g<1>one\\g<2>', frac) split = frac.split(' ') num = split[:1] denom = split[1:] while denom: try: num_value = self.parse(' '.join(num)) denom_value = self.parse(' '.join(denom)) return (whole + (float(num_value) / denom_value)) except: num += denom[:1] denom = denom[1:] return None result = pointFloat(words) if result: return result result = fractionFloat(words) if result: return result return self.parseInt(words)
Convert a floating-point number described in words to a double. Supports two kinds of descriptions: those with a 'point' (e.g., "one point two five") and those with a fraction (e.g., "one and a quarter"). Args: words (str): Description of the floating-point number. Returns: A double representation of the words.
codesearchnet
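Illustrative aside: the "point" branch in isolation, with a tiny digit-word table standing in for the class's parse helpers (purely hypothetical, digits only).

import re

DIGITS = {'zero': 0, 'one': 1, 'two': 2, 'three': 3, 'four': 4,
          'five': 5, 'six': 6, 'seven': 7, 'eight': 8, 'nine': 9}

def point_float(words):
    m = re.search('(.*) point (.*)', words)
    whole, frac = m.group(1), m.group(2)
    total, coeff = 0.0, 0.1
    for digit in frac.split(' '):
        total += coeff * DIGITS[digit]
        coeff /= 10.0
    return DIGITS[whole] + total

print(point_float('one point two five'))  # 1.25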
def media_download(self, mxcurl, allow_remote=True): query_params = {} if (not allow_remote): query_params['allow_remote'] = False if mxcurl.startswith('mxc://'): return self._send('GET', mxcurl[6:], api_path='/_matrix/media/r0/download/', query_params=query_params, return_json=False) else: raise ValueError(("MXC URL '%s' did not begin with 'mxc://'" % (mxcurl,)))
Download raw media from provided mxc URL. Args: mxcurl (str): mxc media URL. allow_remote (bool): indicates to the server that it should not attempt to fetch the media if it is deemed remote. Defaults to true if not provided.
codesearchnet
def setdim(P, dim=None): P = P.copy() ldim = P.dim if (not dim): dim = (ldim + 1) if (dim == ldim): return P P.dim = dim if (dim > ldim): key = numpy.zeros(dim, dtype=int) for lkey in P.keys: key[:ldim] = lkey P.A[tuple(key)] = P.A.pop(lkey) else: key = numpy.zeros(dim, dtype=int) for lkey in P.keys: if ((not sum(lkey[(ldim - 1):])) or (not sum(lkey))): P.A[lkey[:dim]] = P.A.pop(lkey) else: del P.A[lkey] P.keys = sorted(P.A.keys(), key=sort_key) return P
Adjust the dimensions of a polynomial. Output the results into Poly object Args: P (Poly) : Input polynomial dim (int) : The dimensions of the output polynomial. If omitted, increase polynomial with one dimension. If the new dim is smaller then P's dimensions, variables with cut components are all cut. Examples: >>> x,y = chaospy.variable(2) >>> P = x*x-x*y >>> print(chaospy.setdim(P, 1)) q0^2
codesearchnet
def to_dict(self): output = copy.deepcopy(self.__dict__) output['semantic_config'] = self.semantic_config.to_dict() output['coarse_acoustics_config'] = self.coarse_acoustics_config.to_dict() output['fine_acoustics_config'] = self.fine_acoustics_config.to_dict() output['model_type'] = self.__class__.model_type return output
Serializes this instance to a Python dictionary. Override the default [`~PretrainedConfig.to_dict`]. Returns: `Dict[str, any]`: Dictionary of all the attributes that make up this configuration instance,
github-repos
def write_temporary_file(content, prefix='', suffix=''): temp = tempfile.NamedTemporaryFile(prefix=prefix, suffix=suffix, mode='w+t', delete=False) temp.writelines(content) temp.close() return temp.name
Generating a temporary file with content. Args: content (str): file content (usually a script, Dockerfile, playbook or config file) prefix (str): the filename starts with this prefix (default: no prefix) suffix (str): the filename ends with this suffix (default: no suffix) Returns: str: name of the temporary file Note: You are responsible for the deletion of the file.
codesearchnet
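Illustrative aside: the helper is a thin wrapper around tempfile.NamedTemporaryFile; a quick round trip, with the caller doing the cleanup as the note requires.

import os
import tempfile

temp = tempfile.NamedTemporaryFile(prefix='demo_', suffix='.sh', mode='w+t', delete=False)
temp.writelines('echo hello\n')
temp.close()
print(open(temp.name).read())  # echo hello
os.remove(temp.name)           # caller is responsible for deleting the file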
def revnet(name, x, hparams, reverse=True): with tf.variable_scope(name, reuse=tf.AUTO_REUSE): steps = np.arange(hparams.depth) if reverse: steps = steps[::-1] objective = 0.0 for step in steps: x, curr_obj = revnet_step( "revnet_step_%d" % step, x, hparams, reverse=reverse) objective += curr_obj return x, objective
'hparams.depth' steps of generative flow. Args: name: variable scope for the revnet block. x: 4-D Tensor, shape=(NHWC). hparams: HParams. reverse: bool, forward or backward pass. Returns: x: 4-D Tensor, shape=(NHWC). objective: float.
juraj-google-style
def _separate_words(string): words = [] separator = '' i = 1 s = 0 p = string[0:1] was_upper = False if string.isupper(): string = string.lower() was_upper = True while (i <= len(string)): c = string[i:(i + 1)] split = False if (i < len(string)): if UPPER.match(c): split = True elif (NOTSEP.match(c) and SEP.match(p)): split = True elif (SEP.match(c) and NOTSEP.match(p)): split = True else: split = True if split: if NOTSEP.match(p): words.append(string[s:i]) else: if (not separator): separator = string[s:(s + 1)] words.append(None) s = i i += 1 p = c return (words, separator, was_upper)
Segment string on separator into list of words. Arguments: string -- the string we want to process Returns: words -- list of words the string got minced to separator -- the separator char intersecting words was_upper -- whether string happened to be upper-case
codesearchnet
def _Commit(self): if not self.temp_cache_file.closed: self.temp_cache_file.flush() os.fsync(self.temp_cache_file.fileno()) self.temp_cache_file.close() else: self.log.debug('temp cache file was already closed before Commit') try: shutil.copymode(self.GetCompatFilename(), self.temp_cache_filename) stat_info = os.stat(self.GetCompatFilename()) uid = stat_info.st_uid gid = stat_info.st_gid os.chown(self.temp_cache_filename, uid, gid) except OSError as e: if e.errno == errno.ENOENT: if self.map_name == 'sshkey': os.chmod(self.temp_cache_filename, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH) else: os.chmod(self.temp_cache_filename, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP | stat.S_IROTH) self.log.debug('committing temporary cache file %r to %r', self.temp_cache_filename, self.GetCacheFilename()) os.rename(self.temp_cache_filename, self.GetCacheFilename()) return True
Ensure the cache is now the active data source for NSS. Perform an atomic rename on the cache file to the location expected by the NSS module. No verification of database validity or consistency is performed here. Returns: Always returns True
github-repos
def prediction_step(self, model: nn.Module, inputs: dict[str, Union[torch.Tensor, Any]], prediction_loss_only: bool, ignore_keys: Optional[list[str]]=None) -> tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]: inputs = self._prepare_inputs(inputs) gen_kwargs = {'max_length': self.data_args.val_max_target_length if self.data_args is not None else self.config.max_length, 'num_beams': self.data_args.eval_beams if self.data_args is not None else self.config.num_beams} if self.args.predict_with_generate and (not self.args.prediction_loss_only): generated_tokens = self.model.generate(inputs['input_ids'], attention_mask=inputs['attention_mask'], **gen_kwargs) if generated_tokens.shape[-1] < gen_kwargs['max_length']: generated_tokens = self._pad_tensors_to_max_len(generated_tokens, gen_kwargs['max_length']) labels = inputs.pop('labels') with torch.no_grad(): loss, logits = self._compute_loss(model, inputs, labels) loss = loss.mean().detach() if self.args.prediction_loss_only: return (loss, None, None) logits = generated_tokens if self.args.predict_with_generate else logits if labels.shape[-1] < gen_kwargs['max_length']: labels = self._pad_tensors_to_max_len(labels, gen_kwargs['max_length']) return (loss, logits, labels)
Perform an evaluation step on :obj:`model` using obj:`inputs`. Subclass and override to inject custom behavior. Args: model (:obj:`nn.Module`): The model to evaluate. inputs (:obj:`Dict[str, Union[torch.Tensor, Any]]`): The inputs and targets of the model. The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument :obj:`labels`. Check your model's documentation for all accepted arguments. prediction_loss_only (:obj:`bool`): Whether or not to return the loss only. Return: Tuple[Optional[float], Optional[torch.Tensor], Optional[torch.Tensor]]: A tuple with the loss, logits and labels (each being optional).
github-repos
def _do_sampling(self, logits, num_samples, sampler): with test_util.use_gpu(): random_seed.set_random_seed(1618) op = sampler(constant_op.constant(logits), num_samples) d = self.evaluate(op) batch_size, num_classes = logits.shape freqs_mat = [] for i in range(batch_size): cnts = dict(collections.Counter(d[i, :])) self.assertLess(max(cnts.keys()), num_classes) self.assertGreaterEqual(min(cnts.keys()), 0) freqs = [cnts[k] * 1.0 / num_samples if k in cnts else 0 for k in range(num_classes)] freqs_mat.append(freqs) return freqs_mat
Samples using the supplied sampler and inputs. Args: logits: Numpy ndarray of shape [batch_size, num_classes]. num_samples: Int; number of samples to draw. sampler: A sampler function that takes (1) a [batch_size, num_classes] Tensor, (2) num_samples and returns a [batch_size, num_samples] Tensor. Returns: Frequencies from sampled classes; shape [batch_size, num_classes].
github-repos
class TFConvNextV2Layer(keras.layers.Layer): def __init__(self, config: ConvNextV2Config, dim: int, drop_path: float=0.0, **kwargs): super().__init__(**kwargs) self.dim = dim self.config = config self.dwconv = keras.layers.Conv2D(filters=dim, kernel_size=7, padding='same', groups=dim, kernel_initializer=get_initializer(config.initializer_range), bias_initializer=keras.initializers.Zeros(), name='dwconv') self.layernorm = keras.layers.LayerNormalization(epsilon=1e-06, name='layernorm') self.pwconv1 = keras.layers.Dense(units=4 * dim, kernel_initializer=get_initializer(config.initializer_range), bias_initializer=keras.initializers.Zeros(), name='pwconv1') self.act = get_tf_activation(config.hidden_act) self.grn = TFConvNextV2GRN(config, 4 * dim, dtype=tf.float32, name='grn') self.pwconv2 = keras.layers.Dense(units=dim, kernel_initializer=get_initializer(config.initializer_range), bias_initializer=keras.initializers.Zeros(), name='pwconv2') self.drop_path = TFConvNextV2DropPath(drop_path, name='drop_path') if drop_path > 0.0 else keras.layers.Activation('linear', name='drop_path') def call(self, hidden_states, training=False): input = hidden_states x = self.dwconv(hidden_states) x = self.layernorm(x) x = self.pwconv1(x) x = self.act(x) x = self.grn(x) x = self.pwconv2(x) x = self.drop_path(x, training=training) x = input + x return x def build(self, input_shape=None): if self.built: return self.built = True if getattr(self, 'dwconv', None) is not None: with tf.name_scope(self.dwconv.name): self.dwconv.build([None, None, None, self.dim]) if getattr(self, 'layernorm', None) is not None: with tf.name_scope(self.layernorm.name): self.layernorm.build([None, None, None, self.dim]) if getattr(self, 'pwconv1', None) is not None: with tf.name_scope(self.pwconv1.name): self.pwconv1.build([None, None, self.dim]) if getattr(self, 'grn', None) is not None: with tf.name_scope(self.grn.name): self.grn.build(None) if getattr(self, 'pwconv2', None) is not None: with tf.name_scope(self.pwconv2.name): self.pwconv2.build([None, None, 4 * self.dim]) if getattr(self, 'drop_path', None) is not None: with tf.name_scope(self.drop_path.name): self.drop_path.build(None)
This corresponds to the `Block` class in the original implementation. There are two equivalent implementations: (1) [DwConv, LayerNorm (channels_first), Conv, GELU, 1x1 Conv]; all in (N, C, H, W); (2) [DwConv, Permute to (N, H, W, C), LayerNorm (channels_last), Linear, GELU, Linear]; Permute back. The authors used (2) as they find it slightly faster in PyTorch. Since we already permuted the inputs to follow NHWC ordering, we can just apply the operations straight-away without the permutation. Args: config (`ConvNextV2Config`): Model configuration class. dim (`int`): Number of input channels. drop_path (`float`, *optional*, defaults to 0.0): Stochastic depth rate.
github-repos
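A forward-pass sketch for the layer above. It assumes the referenced helpers (TFConvNextV2GRN, TFConvNextV2DropPath, get_initializer, get_tf_activation) are importable, as they are inside transformers' TF ConvNeXt V2 module; the shapes are illustrative.

import tensorflow as tf
from transformers import ConvNextV2Config

config = ConvNextV2Config()
layer = TFConvNextV2Layer(config, dim=96, drop_path=0.0)
# NHWC input, as the docstring notes; the residual block keeps the shape.
hidden_states = tf.random.normal((1, 56, 56, 96))
output = layer(hidden_states, training=False)
print(output.shape)   # (1, 56, 56, 96)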
def separate_resources(self): self._separate_hdxobjects(self.resources, 'resources', 'name', hdx.data.resource.Resource)
Move contents of resources key in internal dictionary into self.resources Returns: None
codesearchnet
def cluster_sites(mol, tol, give_only_index=False): dists = [[np.linalg.norm(site.coords), 0] for site in mol] import scipy.cluster as spcluster f = spcluster.hierarchy.fclusterdata(dists, tol, criterion='distance') clustered_dists = defaultdict(list) for (i, site) in enumerate(mol): clustered_dists[f[i]].append(dists[i]) avg_dist = {label: np.mean(val) for (label, val) in clustered_dists.items()} clustered_sites = defaultdict(list) origin_site = None for (i, site) in enumerate(mol): if (avg_dist[f[i]] < tol): if give_only_index: origin_site = i else: origin_site = site elif give_only_index: clustered_sites[(avg_dist[f[i]], site.species)].append(i) else: clustered_sites[(avg_dist[f[i]], site.species)].append(site) return (origin_site, clustered_sites)
Cluster sites based on distance and species type. Args: mol (Molecule): Molecule **with origin at center of mass**. tol (float): Tolerance to use. Returns: (origin_site, clustered_sites): origin_site is a site at the center of mass (None if there are no origin atoms). clustered_sites is a dict of {(avg_dist, species_and_occu): [list of sites]}
codesearchnet
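A usage sketch with a small methane molecule; it assumes pymatgen is installed and uses its modern pymatgen.core import path. The coordinates are approximate and only meant to show the grouping by distance from the centre of mass.

from pymatgen.core import Molecule

coords = [[0.000, 0.000, 0.000],
          [0.629, 0.629, 0.629],
          [-0.629, -0.629, 0.629],
          [0.629, -0.629, -0.629],
          [-0.629, 0.629, -0.629]]
mol = Molecule(["C", "H", "H", "H", "H"], coords).get_centered_molecule()

origin_site, clustered = cluster_sites(mol, tol=0.1)
print(origin_site.specie)   # C, the atom sitting at the centre of mass
for (avg_dist, species), sites in clustered.items():
    # One cluster of four hydrogens at roughly the same distance.
    print(species, round(avg_dist, 2), len(sites))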
def new(image): pointer = vips_lib.vips_region_new(image.pointer) if (pointer == ffi.NULL): raise Error('unable to make region') return pyvips.Region(pointer)
Make a region on an image. Returns: A new :class:`.Region`. Raises: :class:`.Error`
codesearchnet
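A minimal usage sketch, assuming pyvips backed by a libvips recent enough to expose regions (8.8+); fetch() returns the raw bytes for the requested window.

import pyvips

image = pyvips.Image.black(64, 64)    # 8-bit, single-band image
region = pyvips.Region.new(image)
patch = region.fetch(0, 0, 16, 16)    # raw pixel bytes for a 16x16 window
print(len(patch))                     # 256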
def ResetSection(self, directive): self._section = self._INITIAL_SECTION self._last_header = '' if (directive in ('if', 'ifdef', 'ifndef')): self.include_list.append([]) elif (directive in ('else', 'elif')): self.include_list[(- 1)] = []
Reset section checking for preprocessor directive. Args: directive: preprocessor directive (e.g. "if", "else").
codesearchnet
def gradients(ys, xs, grad_ys=None): graph = ys[0].graph if (not grad_ys): grad_ys = [Constant(y.mesh, 1.0, y.shape, y.dtype).outputs[0] for y in ys] downstream = set(xs) for op in graph.operations: if op.has_gradient: if (set(op.inputs) & downstream): downstream |= set(op.outputs) tensor_to_gradient = dict(zip(ys, grad_ys)) for op in graph.operations[::(- 1)]: grad_outputs = [tensor_to_gradient.get(out) for out in op.outputs] if (op.has_gradient and any(grad_outputs) and (set(op.inputs) & downstream)): with tf.variable_scope((op.name + '/gradients')): input_grads = op.gradient(grad_outputs) for (inp, grad) in zip(op.inputs, input_grads): if ((inp in downstream) and (grad is not None)): if (inp in tensor_to_gradient): tensor_to_gradient[inp] += grad else: tensor_to_gradient[inp] = grad return [tensor_to_gradient.get(x, None) for x in xs]
Compute gradients in dtf. Args: ys: a list of Tensors xs: a list of Tensors grad_ys: an optional list of Tensors Returns: grad_xs: a list of Tensors
codesearchnet
def parse_node(self, node): spec = super(CamundaProcessParser, self).parse_node(node) spec.data = self._parse_input_data(node) spec.data['lane_data'] = self._get_lane_properties(node) spec.defines = spec.data service_class = node.get(full_attr('assignee')) if service_class: self.parsed_nodes[node.get('id')].service_class = node.get(full_attr('assignee')) return spec
Overrides ProcessParser.parse_node. Parses and attaches the inputOutput tags that are created by Camunda Modeler. Args: node: xml task node Returns: TaskSpec
codesearchnet
def _add_asset_to_metagraph(meta_graph_def, asset_filename, asset_tensor): asset_proto = meta_graph_def.asset_file_def.add() asset_proto.filename = asset_filename asset_proto.tensor_info.name = asset_tensor.name
Builds an asset proto and adds it to the meta graph def. Args: meta_graph_def: The meta graph def to which the asset will be added. asset_filename: The filename of the asset to be added. asset_tensor: The asset tensor used to populate the tensor info of the asset proto.
github-repos
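A small illustration of the helper above. It assumes graph-mode (TF1-style) tensors, since the asset tensor needs a graph name for tensor_info.name to be meaningful.

import tensorflow.compat.v1 as tf
from tensorflow.core.protobuf import meta_graph_pb2

with tf.Graph().as_default():
    asset_tensor = tf.constant("vocab.txt", name="asset_path")
    meta_graph_def = meta_graph_pb2.MetaGraphDef()
    _add_asset_to_metagraph(meta_graph_def, "vocab.txt", asset_tensor)
    # The proto now records both the filename and the tensor that holds it.
    print(meta_graph_def.asset_file_def[0].filename)           # vocab.txt
    print(meta_graph_def.asset_file_def[0].tensor_info.name)   # asset_path:0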
def retrieve_artifacts(self, compose_data, output_data_config, job_name):
    artifacts = os.path.join(self.container_root, 'artifacts')
    compressed_artifacts = os.path.join(self.container_root, 'compressed_artifacts')
    os.mkdir(artifacts)
    model_artifacts = os.path.join(artifacts, 'model')
    output_artifacts = os.path.join(artifacts, 'output')
    artifact_dirs = [model_artifacts, output_artifacts, compressed_artifacts]
    for d in artifact_dirs:
        os.mkdir(d)
    for host in self.hosts:
        volumes = compose_data['services'][str(host)]['volumes']
        for volume in volumes:
            host_dir, container_dir = volume.split(':')
            if container_dir == '/opt/ml/model':
                sagemaker.local.utils.recursive_copy(host_dir, model_artifacts)
            elif container_dir == '/opt/ml/output':
                sagemaker.local.utils.recursive_copy(host_dir, output_artifacts)
    model_files = [os.path.join(model_artifacts, name) for name in os.listdir(model_artifacts)]
    output_files = [os.path.join(output_artifacts, name) for name in os.listdir(output_artifacts)]
    sagemaker.utils.create_tar_file(model_files, os.path.join(compressed_artifacts, 'model.tar.gz'))
    sagemaker.utils.create_tar_file(output_files, os.path.join(compressed_artifacts, 'output.tar.gz'))
    if output_data_config['S3OutputPath'] == '':
        output_data = 'file:
    else:
        output_data = sagemaker.local.utils.move_to_destination(
            compressed_artifacts, output_data_config['S3OutputPath'], job_name, self.sagemaker_session)
    _delete_tree(model_artifacts)
    _delete_tree(output_artifacts)
    return os.path.join(output_data, 'model.tar.gz')
Get the model artifacts from all the container nodes. Used after training completes to gather the data from all the individual containers. As the official SageMaker Training Service, it will override duplicate files if multiple containers have the same file names. Args: compose_data(dict): Docker-Compose configuration in dictionary format. Returns: Local path to the collected model artifacts.
juraj-google-style
def latex(self, aliases=None): self._initialize_latex_array(aliases) self._build_latex_array(aliases) header_1 = '% \\documentclass[preview]{standalone}\n% If the image is too large to fit on this documentclass use\n\\documentclass[draft]{beamer}\n' beamer_line = '\\usepackage[size=custom,height=%d,width=%d,scale=%.1f]{beamerposter}\n' header_2 = '% instead and customize the height and width (in cm) to fit.\n% Large images may run out of memory quickly.\n% To fix this use the LuaLaTeX compiler, which dynamically\n% allocates memory.\n\\usepackage[braket, qm]{qcircuit}\n\\usepackage{amsmath}\n\\pdfmapfile{+sansmathaccent.map}\n% \\usepackage[landscape]{geometry}\n% Comment out the above line if using the beamer documentclass.\n\\begin{document}\n\\begin{equation*}' qcircuit_line = '\n \\Qcircuit @C=%.1fem @R=%.1fem @!R {\n' output = io.StringIO() output.write(header_1) output.write(('%% img_width = %d, img_depth = %d\n' % (self.img_width, self.img_depth))) output.write((beamer_line % self._get_beamer_page())) output.write(header_2) output.write((qcircuit_line % (self.column_separation, self.row_separation))) for i in range(self.img_width): output.write('\t \t') for j in range((self.img_depth + 1)): cell_str = self._latex[i][j] if ('barrier' in cell_str): output.write(cell_str) else: cell_str = re.sub('[-+]?\\d*\\.\\d{2,}|\\d{2,}', _truncate_float, cell_str) output.write(cell_str) if (j != self.img_depth): output.write(' & ') else: output.write(('\\\\' + '\n')) output.write('\t }\n') output.write('\\end{equation*}\n\n') output.write('\\end{document}') contents = output.getvalue() output.close() return contents
Return LaTeX string representation of circuit. This method uses the LaTeX qcircuit package to create a graphical representation of the circuit. Returns: string: for writing to a LaTeX file.
codesearchnet
def removeChild(self, child, end_tag_too=True): if _is_iterable(child): for x in child: self.removeChild(child=x, end_tag_too=end_tag_too) return if not self.childs: return end_tag = None if end_tag_too: end_tag = child.endtag for e in self.childs: if e != child: e.removeChild(child, end_tag_too) continue if end_tag_too and end_tag in self.childs: self.childs.remove(end_tag) self.childs.remove(e)
Remove subelement (`child`) specified by reference. Note: This can't be used for removing subelements by value! If you want to do such thing, try:: for e in dom.find("value"): dom.removeChild(e) Args: child (obj): :class:`HTMLElement` instance which will be removed from this element. end_tag_too (bool, default True): Remove also `child` endtag.
juraj-google-style
def _multi_request(self, verb, urls, query_params, data, to_json=True, send_as_file=False): if (not urls): raise InvalidRequestError('No URL supplied') request_params = self._zip_request_params(urls, query_params, data) batch_of_params = [request_params[pos:(pos + self._max_requests)] for pos in range(0, len(request_params), self._max_requests)] all_responses = [] for param_batch in batch_of_params: if self._rate_limiter: self._rate_limiter.make_calls(num_calls=len(param_batch)) prepared_requests = [self._create_request(verb, url, query_params=query_param, data=datum, send_as_file=send_as_file) for (url, query_param, datum) in param_batch] responses = self._wait_for_response(prepared_requests) for response in responses: if response: all_responses.append((self._convert_to_json(response) if to_json else response)) else: all_responses.append(None) return all_responses
Issues multiple batches of simultaneous HTTP requests and waits for responses. Args: verb - MultiRequest._VERB_POST or MultiRequest._VERB_GET urls - A string URL or list of string URLs query_params - None, a dict, or a list of dicts representing the query params data - None, a dict or string, or a list of dicts and strings representing the data body. to_json - A boolean, should the responses be returned as JSON blobs Returns: If multiple requests are made - a list of dicts if to_json, a list of requests responses otherwise If a single request is made, the return is not a list Raises: InvalidRequestError - if no URL is supplied or if any of the requests returns 403 Access Forbidden response
codesearchnet
def getqualifiedname(namespace, object_, max_depth=5, visited=None): if visited is None: visited = set() namespace = dict(namespace) for name in namespace: if object_ is namespace[name]: return name parent = tf_inspect.getmodule(object_) if parent is not None and parent is not object_ and (parent is not namespace): parent_name = getqualifiedname(namespace, parent, max_depth=0, visited=visited) if parent_name is not None: name_in_parent = getqualifiedname(parent.__dict__, object_, max_depth=0, visited=visited) assert name_in_parent is not None, 'An object should always be found in its owner module' return '{}.{}'.format(parent_name, name_in_parent) if max_depth: for name in namespace.keys(): value = namespace[name] if tf_inspect.ismodule(value) and id(value) not in visited: visited.add(id(value)) name_in_module = getqualifiedname(value.__dict__, object_, max_depth - 1, visited) if name_in_module is not None: return '{}.{}'.format(name, name_in_module) return None
Returns the name by which a value can be referred to in a given namespace. If the object defines a parent module, the function attempts to use it to locate the object. This function will recurse inside modules, but it will not search objects for attributes. The recursion depth is controlled by max_depth. Args: namespace: Dict[str, Any], the namespace to search into. object_: Any, the value to search. max_depth: Optional[int], a limit to the recursion depth when searching inside modules. visited: Optional[Set[int]], ID of modules to avoid visiting. Returns: Union[str, None], the fully-qualified name that resolves to the value o, or None if it couldn't be found.
github-repos
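A short illustration of the lookup order: direct names first, then a recursive search inside modules. It assumes the tf_inspect helper used by the function is importable alongside it.

import os

target = object()
namespace = {'os': os, 'target': target}
print(getqualifiedname(namespace, target))    # 'target'  (direct hit)
print(getqualifiedname(namespace, os.path))   # 'os.path' (found inside the os module)
print(getqualifiedname(namespace, object()))  # None      (not reachable from namespace)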
def set_nsxcontroller_port(self, **kwargs): name = kwargs.pop('name') port = str(kwargs.pop('port')) port_args = dict(name=name, port=port) method_name = 'nsx_controller_connection_addr_port' method_class = self._brocade_tunnels nsxcontroller_attr = getattr(method_class, method_name) config = nsxcontroller_attr(**port_args) output = self._callback(config) return output
Set the NSX Controller port on the switch Args: name (str): Name of the NSX Controller connection. port (int): 1 to 65535. callback (function): A function executed upon completion of the method. Returns: Return value of `callback`. Raises: None
codesearchnet
def __init__(self, dev): self._dev = dev self._dev_handle = None self._scanchain = None self._jtagon = False self._speed = None
Initialize general controller driver values with defaults. Args: dev (usb1.USBDevice) - Device entry the driver will control.
juraj-google-style
def get_create_batch_env_fun(batch_env_fn, time_limit): def create_env_fun(game_name=None, sticky_actions=None): del game_name, sticky_actions batch_env = batch_env_fn(in_graph=False) batch_env = ResizeBatchObservation(batch_env) batch_env = DopamineBatchEnv(batch_env, max_episode_steps=time_limit) return batch_env return create_env_fun
Factory for dopamine environment initialization function. Args: batch_env_fn: function(in_graph: bool) -> batch environment. time_limit: time steps limit for environment. Returns: function (with optional, unused parameters) initializing environment.
codesearchnet
def find(self, package, **kwargs): for finder in self.finders: package_spec = finder.find(package, **kwargs) if package_spec: return package_spec return None
Find a package using package finders. Return the first package found. Args: package (str): package to find. **kwargs (): additional keyword arguments used by finders. Returns: PackageSpec: if package found, else None
juraj-google-style
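A sketch of the finder chain with hypothetical stub finders; the real finder classes and the PackageSpec return type are not shown in the snippet, and calling the method directly with a stand-in self is only for illustration.

import types

class DummyFinder:
    """Hypothetical finder: only the find() protocol matters here."""
    def __init__(self, known):
        self.known = known
    def find(self, package, **kwargs):
        return {'name': package} if package in self.known else None

resolver = types.SimpleNamespace(finders=[DummyFinder({'requests'}),
                                          DummyFinder({'numpy'})])
print(find(resolver, 'numpy'))     # {'name': 'numpy'}  (second finder matches)
print(find(resolver, 'missing'))   # None               (no finder produced a spec)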
def get_balance(self, asset_hash, id=None, endpoint=None): return self._call_endpoint(GET_BALANCE, params=[asset_hash], id=id, endpoint=endpoint)
Get balance by asset hash Args: asset_hash: (str) asset to lookup, example would be 'c56f33fc6ecfcd0c225c4ab356fee59390af8560be0e930faebe74a6daff7c9b' id: (int, optional) id to use for response tracking endpoint: (RPCEndpoint, optional) endpoint to specify to use Returns: json object of the result or the error encountered in the RPC call
juraj-google-style
def cumsum(x, axis=0, exclusive=False): if not is_xla_compiled(): return tf.cumsum(x, axis=axis, exclusive=exclusive) x_shape = shape_list(x) rank = len(x_shape) length = x_shape[axis] my_range = tf.range(length) comparator = tf.less if exclusive else tf.less_equal mask = tf.cast( comparator(tf.expand_dims(my_range, 1), tf.expand_dims(my_range, 0)), x.dtype) ret = tf.tensordot(x, mask, axes=[[axis], [0]]) if axis != rank - 1: ret = tf.transpose( ret, list(range(axis)) + [rank - 1] + list(range(axis, rank - 1))) return ret
TPU hack for tf.cumsum. This is equivalent to tf.cumsum and is faster on TPU as of 04/2018 unless the axis dimension is very large. Args: x: a Tensor axis: an integer exclusive: a boolean Returns: Tensor of the same shape as x.
juraj-google-style
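The same mask-and-tensordot trick reproduced in NumPy to show why it equals a cumulative sum; this mirrors the logic above rather than the TPU code path itself.

import numpy as np

x = np.array([[1., 2., 3., 4.]])
r = np.arange(x.shape[1])
# mask[i, j] = 1 where i <= j, so (x @ mask)[b, j] sums x[b, :j+1].
mask = (r[:, None] <= r[None, :]).astype(x.dtype)
print(x @ mask)                                        # [[ 1.  3.  6. 10.]]
mask_exclusive = (r[:, None] < r[None, :]).astype(x.dtype)
print(x @ mask_exclusive)                              # [[0. 1. 3. 6.]]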
def _CountStoredAttributeContainers(self, container_type): if not container_type in self._CONTAINER_TYPES: raise ValueError('Attribute container type {0:s} is not supported'.format( container_type)) if not self._HasTable(container_type): return 0 query = 'SELECT MAX(_ROWID_) FROM {0:s} LIMIT 1'.format(container_type) self._cursor.execute(query) row = self._cursor.fetchone() if not row: return 0 return row[0] or 0
Counts the number of attribute containers of the given type. Args: container_type (str): attribute container type. Returns: int: number of attribute containers of the given type. Raises: ValueError: if an unsupported container_type is provided.
juraj-google-style
def get_type(self, index): if ((index < 0) or (index >= len(self._types))): raise ValueError('Index for getting order parameter type out-of-bounds!') return self._types[index]
Return type of order parameter at the index provided and represented by a short string. Args: index (int): index of order parameter for which type is to be returned. Returns: str: OP type.
codesearchnet