Columns: code (string, 20–4.93k chars) · docstring (string, 33–1.27k chars) · source (3 classes)
def _get_contexts_for_squash(self, batch_signature):
    batch = self._batches_by_id[batch_signature].batch
    index = self._batches.index(batch)
    contexts = []
    txns_added_predecessors = []
    for b in self._batches[index::-1]:
        batch_is_valid = True
        contexts_from_batch = []
        for txn in b.transactions[::-1]:
            result = self._txn_results[txn.header_signature]
            if not result.is_valid:
                batch_is_valid = False
                break
            else:
                txn_id = txn.header_signature
                if txn_id not in txns_added_predecessors:
                    txns_added_predecessors.append(
                        self._txn_predecessors[txn_id])
                    contexts_from_batch.append(result.context_id)
        if batch_is_valid:
            contexts.extend(contexts_from_batch)
    return contexts
Starting with the batch referenced by batch_signature, iterate back through the batches and for each valid batch collect the context_id. At the end remove contexts for txns that are other txns' predecessors. Args: batch_signature (str): The batch to start from, moving back through the batches in the scheduler Returns: (list): Context ids that haven't previously been base contexts.
juraj-google-style
def VerifyStructure(self, parser_mediator, line):
    try:
        structure = self._LINE.parseString(line)
    except pyparsing.ParseException:
        logger.debug('Not a SkyDrive old log file')
        return False
    day_of_month, month, year, hours, minutes, seconds, milliseconds = (
        structure.date_time)
    time_elements_tuple = (
        year, month, day_of_month, hours, minutes, seconds, milliseconds)
    try:
        dfdatetime_time_elements.TimeElementsInMilliseconds(
            time_elements_tuple=time_elements_tuple)
    except ValueError:
        logger.debug(
            'Not a SkyDrive old log file, invalid date and time: {0!s}'.format(
                structure.date_time))
        return False
    return True
Verify that this file is a SkyDrive old log file. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. line (str): line from a text file. Returns: bool: True if the line is in the expected format, False if not.
juraj-google-style
def filter_children(self, ctype: ContentType=None) -> List[SchemaNode]:
    if (ctype is None):
        ctype = self.content_type()
    return [c for c in self.children
            if ((not isinstance(c, (RpcActionNode, NotificationNode))) and
                ((c.content_type().value & ctype.value) != 0))]
Return receiver's children based on content type. Args: ctype: Content type.
codesearchnet
def _process_book(html_chunk):
    title, url = _parse_title_url(html_chunk)
    book_format, pages, isbn = _parse_format_pages_isbn(html_chunk)
    pub = Publication(
        title=title,
        authors=_parse_authors(html_chunk),
        price=_parse_price(html_chunk),
        publisher="Grada"
    )
    pub.optionals.URL = url
    pub.optionals.ISBN = isbn
    pub.optionals.pages = pages
    pub.optionals.format = book_format
    pub.optionals.sub_title = _parse_subtitle(html_chunk)
    pub.optionals.description = _parse_description(html_chunk)
    return pub
Parse available information about the book from the book details page. Args: html_chunk (obj): HTMLElement containing slice of the page with details. Returns: obj: :class:`structures.Publication` instance with book details.
juraj-google-style
def warning_handler(self, handler):
    if (not self.opened()):
        handler = (handler or util.noop)
        self._warning_handler = enums.JLinkFunctions.LOG_PROTOTYPE(handler)
        self._dll.JLINKARM_SetWarnOutHandler(self._warning_handler)
Setter for the warning handler function. If the DLL is open, this function is a no-op, so it should be called prior to calling ``open()``. Args: self (JLink): the ``JLink`` instance handler (function): function to call on warning messages Returns: ``None``
codesearchnet
def circuit_to_instruction(circuit):
    instruction = Instruction(name=circuit.name,
                              num_qubits=sum([qreg.size for qreg in circuit.qregs]),
                              num_clbits=sum([creg.size for creg in circuit.cregs]),
                              params=[])
    instruction.control = None

    def find_bit_position(bit):
        'find the index of a given bit (Register, int) within\n a flat ordered list of bits of the circuit\n '
        if isinstance(bit[0], QuantumRegister):
            ordered_regs = circuit.qregs
        else:
            ordered_regs = circuit.cregs
        reg_index = ordered_regs.index(bit[0])
        return (sum([reg.size for reg in ordered_regs[:reg_index]]) + bit[1])

    definition = circuit.data.copy()
    if (instruction.num_qubits > 0):
        q = QuantumRegister(instruction.num_qubits, 'q')
    if (instruction.num_clbits > 0):
        c = ClassicalRegister(instruction.num_clbits, 'c')
    definition = list(map(
        (lambda x: (x[0],
                    list(map((lambda y: (q, find_bit_position(y))), x[1])),
                    list(map((lambda y: (c, find_bit_position(y))), x[2])))),
        definition))
    instruction.definition = definition
    return instruction
Build an ``Instruction`` object from a ``QuantumCircuit``. The instruction is anonymous (not tied to a named quantum register), and so can be inserted into another circuit. The instruction will have the same string name as the circuit. Args: circuit (QuantumCircuit): the input circuit. Return: Instruction: an instruction equivalent to the action of the input circuit. Upon decomposition, this instruction will yield the components comprising the original circuit.
codesearchnet
def random_string_generator(size=6, chars=string.ascii_uppercase):
    try:
        return ''.join((random.choice(chars) for _ in range(size)))
    except:
        (line, filename, synerror) = trace()
        raise ArcRestHelperError({'function': 'random_string_generator',
                                  'line': line,
                                  'filename': filename,
                                  'synerror': synerror})
    finally:
        pass
Generates a random string from a set of characters. Args: size (int): The length of the resultant string. Defaults to 6. chars (str): The characters to be used by :py:func:`random.choice`. Defaults to :py:const:`string.ascii_uppercase`. Returns: str: The randomly generated string. Examples: >>> arcresthelper.common.random_string_generator() 'DCNYWU' >>> arcresthelper.common.random_string_generator(12, "arcREST") 'cESaTTEacTES'
codesearchnet
def wait_for_job(self, job, poll=5):
    desc = _wait_until_training_done(
        (lambda last_desc: _train_done(self.sagemaker_client, job, last_desc)), None, poll)
    self._check_job_status(job, desc, 'TrainingJobStatus')
    return desc
Wait for an Amazon SageMaker training job to complete. Args: job (str): Name of the training job to wait for. poll (int): Polling interval in seconds (default: 5). Returns: (dict): Return value from the ``DescribeTrainingJob`` API. Raises: ValueError: If the training job fails.
codesearchnet
def get_push_pop_stack():
    push = copy.deepcopy(PUSH_STACK)
    pop = copy.deepcopy(POP_STACK)
    anno.setanno(push, 'pop', pop)
    anno.setanno(push, 'gen_push', True)
    anno.setanno(pop, 'push', push)
    op_id = _generate_op_id()
    return (push, pop, op_id)
Create pop and push nodes for substacks that are linked. Returns: A push and pop node which have `push_func` and `pop_func` annotations respectively, identifying them as such. They also have a `pop` and `push` annotation respectively, which links the push node to the pop node and vice versa.
codesearchnet
def get_locations_list(self, lower_bound=0, upper_bound=None):
    real_upper_bound = upper_bound
    if (upper_bound is None):
        real_upper_bound = self.nbr_of_sub_locations()
    try:
        return self._locations_list[lower_bound:real_upper_bound]
    except:
        return list()
Return a slice of the internal location list. Args: lower_bound (int): Index of the first location to return. Defaults to 0. upper_bound (int): Index just past the last location to return. Defaults to None, meaning all remaining locations. Returns: list: The requested slice of the internal location list, or an empty list if the slice cannot be taken.
codesearchnet
def serialize_to_display(self, doc_format='pretty-xml', *args, **kwargs):
    return super(ResourceMap, self).serialize(
        *args, format=doc_format, encoding=None, **kwargs).decode('utf-8')
Serialize ResourceMap to an XML doc that is pretty printed for display. Args: doc_format: str One of: ``xml``, ``n3``, ``turtle``, ``nt``, ``pretty-xml``, ``trix``, ``trig`` and ``nquads``. args and kwargs: Optional arguments forwarded to rdflib.ConjunctiveGraph.serialize(). Returns: str: Pretty printed Resource Map XML doc Note: Only the default, "xml", is automatically indexed by DataONE.
codesearchnet
def _print_tensor_info(tensor_info, indent=0):
    indent_str = ' ' * indent

    def in_print(s):
        print(indent_str + s)

    in_print(' dtype: ' + {value: key for key, value in types_pb2.DataType.items()}[tensor_info.dtype])
    if tensor_info.tensor_shape.unknown_rank:
        shape = 'unknown_rank'
    else:
        dims = [str(dim.size) for dim in tensor_info.tensor_shape.dim]
        shape = ', '.join(dims)
        shape = '(' + shape + ')'
    in_print(' shape: ' + shape)
    in_print(' name: ' + tensor_info.name)
Prints details of the given tensor_info. Args: tensor_info: TensorInfo object to be printed. indent: How far (in increments of 2 spaces) to indent each line output
github-repos
def create_stub(generated_create_stub, channel=None, service_path=None,
                service_port=None, credentials=None, scopes=None,
                ssl_credentials=None):
    if (channel is None):
        target = '{}:{}'.format(service_path, service_port)
        if (credentials is None):
            credentials = _grpc_google_auth.get_default_credentials(scopes)
        channel = _grpc_google_auth.secure_authorized_channel(
            credentials, target, ssl_credentials=ssl_credentials)
    return generated_create_stub(channel)
Creates a gRPC client stub. Args: generated_create_stub (Callable): The generated gRPC method to create a stub. channel (grpc.Channel): A Channel object through which to make calls. If None, a secure channel is constructed. If specified, all remaining arguments are ignored. service_path (str): The domain name of the API remote host. service_port (int): The port on which to connect to the remote host. credentials (google.auth.credentials.Credentials): The authorization credentials to attach to requests. These credentials identify your application to the service. scopes (Sequence[str]): The OAuth scopes for this service. This parameter is ignored if a credentials is specified. ssl_credentials (grpc.ChannelCredentials): gRPC channel credentials used to create a secure gRPC channel. If not specified, SSL credentials will be created using default certificates. Returns: grpc.Client: A gRPC client stub.
codesearchnet
def query(self, minhash, k):
    if k <= 0:
        raise ValueError("k must be positive")
    if len(minhash) < self.k*self.l:
        raise ValueError("The num_perm of MinHash out of range")
    results = set()
    r = self.k
    while r > 0:
        for key in self._query(minhash, r, self.l):
            results.add(key)
            if len(results) >= k:
                return list(results)
        r -= 1
    return list(results)
Return the approximate top-k keys that have the highest Jaccard similarities to the query set. Args: minhash (datasketch.MinHash): The MinHash of the query set. k (int): The maximum number of keys to return. Returns: `list` of at most k keys.
juraj-google-style
def add_region_feature(self, start_resnum, end_resnum, feat_type=None, feat_id=None, qualifiers=None):
    if self.feature_file:
        raise ValueError('Feature file associated with sequence, please remove file association to append '
                         'additional features.')
    if not feat_type:
        feat_type = 'Manually added protein sequence region feature'
    newfeat = SeqFeature(location=FeatureLocation(start_resnum-1, end_resnum),
                         type=feat_type,
                         id=feat_id,
                         qualifiers=qualifiers)
    self.features.append(newfeat)
Add a feature to the features list describing a region of the protein sequence. Args: start_resnum (int): Start residue number of the protein sequence feature end_resnum (int): End residue number of the protein sequence feature feat_type (str, optional): Optional description of the feature type (ie. 'binding domain') feat_id (str, optional): Optional ID of the feature type (ie. 'TM1')
juraj-google-style
def get_concept_item_mapping(self, concepts=None, lang=None):
    if (concepts is None):
        concepts = self.filter(active=True)
        if (lang is not None):
            concepts = concepts.filter(lang=lang)
    if (lang is None):
        languages = set([concept.lang for concept in concepts])
        if (len(languages) > 1):
            raise Exception('Concepts has multiple languages')
        lang = list(languages)[0]
    item_lists = Item.objects.filter_all_reachable_leaves_many(
        [json.loads(concept.query) for concept in concepts], lang)
    return dict(zip([c.pk for c in concepts], item_lists))
Get mapping of concepts to items belonging to concept. Args: concepts (list of Concept): Defaults to None meaning all concepts lang (str): language of the concepts; if None, the language is taken from the given concepts Returns: dict: concept (int) -> list of item ids (int)
codesearchnet
def convert_selu(params, w_name, scope_name, inputs, layers, weights, names):
    print('Converting selu ...')
    if names == 'short':
        tf_name = 'SELU' + random_string(4)
    elif names == 'keep':
        tf_name = w_name
    else:
        tf_name = w_name + str(random.random())
    selu = keras.layers.Activation('selu', name=tf_name)
    layers[scope_name] = selu(layers[inputs[0]])
Convert selu layer. Args: params: dictionary with layer parameters w_name: name prefix in state_dict scope_name: pytorch scope name inputs: pytorch node inputs layers: dictionary with keras tensors weights: pytorch state_dict names: use short names for keras layers
juraj-google-style
def spherical_vert(script, radius=1.0, center_pt=(0.0, 0.0, 0.0)):
    function = 'sqrt((x-{})^2+(y-{})^2+(z-{})^2)<={}'.format(
        center_pt[0], center_pt[1], center_pt[2], radius)
    vert_function(script, function=function)
    return None
Select all vertices within a spherical radius Args: radius (float): radius of the sphere center_pt (3 coordinate tuple or list): center point of the sphere Layer stack: No impacts MeshLab versions: 2016.12 1.3.4BETA
codesearchnet
def recipe_bigquery_function(config, auth, function, dataset):
    bigquery(config, {'auth': auth, 'function': function, 'to': {'dataset': dataset}})
Add a custom function or table to a dataset. Args: auth (authentication) - Credentials used for writing function. function (choice) - Function or table to create. dataset (string) - Existing BigQuery dataset.
github-repos
def apply_grads(self, grads, variables):
    ops = []
    for (grad, var) in zip(grads, variables):
        ops.extend(self.apply_grad(grad, var))
    if (not ops):
        return ops
    return variables[0].graph.combine_assignments(ops)
Apply gradients to variables. Call this function externally instead of apply_grad(). This causes the operations to be combined, which is necessary for stacking variables see mtf.rewrite_stack_variables(). Args: grads: a list of Tensor variables: a list of Variables Returns: a list of Operations
codesearchnet
def merge(self, dataset):
    def merge_data(source, dest):
        for (key, value) in source.items():
            if isinstance(value, dict):
                merge_data(value, dest.setdefault(key, {}))
            else:
                dest[key] = value
        return dest

    merge_data(dataset.data, self._data)
    for h in dataset.task_history:
        if (h not in self._task_history):
            self._task_history.append(h)
Merge the specified dataset on top of the existing data. This replaces all values in the existing dataset with the values from the given dataset. Args: dataset (TaskData): A reference to the TaskData object that should be merged on top of the existing object.
codesearchnet
def score_intersect(self, term1, term2, **kwargs):
    t1_kde = self.kde(term1, **kwargs)
    t2_kde = self.kde(term2, **kwargs)
    overlap = np.minimum(t1_kde, t2_kde)
    return np.trapz(overlap)
Compute the geometric area of the overlap between the kernel density estimates of two terms. Args: term1 (str) term2 (str) Returns: float
codesearchnet
def call(self, hidden_states: tf.Tensor, attention_mask: np.ndarray | tf.Tensor | None, layer_head_mask: tf.Tensor | None, training: Optional[bool]=False) -> tf.Tensor:
    residual = hidden_states
    hidden_states, self_attn_weights, _ = self.self_attn(hidden_states=hidden_states, attention_mask=attention_mask, layer_head_mask=layer_head_mask)
    tf.debugging.assert_equal(shape_list(hidden_states), shape_list(residual), message=f'Self attn modified the shape of query {shape_list(residual)} to {shape_list(hidden_states)}')
    hidden_states = self.dropout(hidden_states, training=training)
    hidden_states = residual + hidden_states
    hidden_states = self.self_attn_layer_norm(hidden_states)
    residual = hidden_states
    hidden_states = self.activation_fn(self.fc1(hidden_states))
    hidden_states = self.activation_dropout(hidden_states, training=training)
    hidden_states = self.fc2(hidden_states)
    hidden_states = self.dropout(hidden_states, training=training)
    hidden_states = residual + hidden_states
    hidden_states = self.final_layer_norm(hidden_states)
    return (hidden_states, self_attn_weights)
Args: hidden_states (`tf.Tensor`): input to the layer of shape `(batch, seq_len, embed_dim)` attention_mask (`tf.Tensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. layer_head_mask (`tf.Tensor`): mask for attention heads in a given layer of size `(encoder_attention_heads,)`
github-repos
def _ParseFileHeader(self, file_object):
    file_header_map = self._GetDataTypeMap('chrome_cache_data_block_file_header')
    try:
        (file_header, _) = self._ReadStructureFromFileObject(file_object, 0, file_header_map)
    except (ValueError, errors.ParseError) as exception:
        raise errors.ParseError('Unable to parse data block file header with error: {0!s}'.format(exception))
    if (file_header.signature != self._FILE_SIGNATURE):
        raise errors.ParseError('Unsupported data block file signature')
    format_version = '{0:d}.{1:d}'.format(file_header.major_version, file_header.minor_version)
    if (format_version not in ('2.0', '2.1')):
        raise errors.ParseError('Unsupported data block file format version: {0:s}'.format(format_version))
    if (file_header.block_size not in (256, 1024, 4096)):
        raise errors.ParseError('Unsupported data block file block size: {0:d}'.format(file_header.block_size))
Parses the file header. Args: file_object (dfvfs.FileIO): a file-like object to parse. Raises: ParseError: if the file header cannot be read.
codesearchnet
def get_authorization_url(self, client_id=None, instance_id=None, redirect_uri=None, region=None, scope=None, state=None):
    client_id = client_id or self.client_id
    instance_id = instance_id or self.instance_id
    redirect_uri = redirect_uri or self.redirect_uri
    region = region or self.region
    scope = scope or self.scope
    state = state or str(uuid.uuid4())
    self.state = state
    return Request(
        'GET',
        self.auth_base_url,
        params={
            'client_id': client_id,
            'instance_id': instance_id,
            'redirect_uri': redirect_uri,
            'region': region,
            'response_type': 'code',
            'scope': scope,
            'state': state
        }
    ).prepare().url, state
Generate authorization URL. Args: client_id (str): OAuth2 client ID. Defaults to ``None``. instance_id (str): App Instance ID. Defaults to ``None``. redirect_uri (str): Redirect URI. Defaults to ``None``. region (str): App Region. Defaults to ``None``. scope (str): Permissions. Defaults to ``None``. state (str): UUID to detect CSRF. Defaults to ``None``. Returns: str, str: Auth URL, state
juraj-google-style
def get_type(self):
    raise NotImplementedError('Base class should not be called directly!')
This function returns the type of the sniffer. Returns: The type (string) of the sniffer. Corresponds to the 'Type' key of the sniffer configuration.
github-repos
def create_heroku_connect_schema(using=DEFAULT_DB_ALIAS):
    connection = connections[using]
    with connection.cursor() as cursor:
        cursor.execute(_SCHEMA_EXISTS_QUERY, [settings.HEROKU_CONNECT_SCHEMA])
        schema_exists = cursor.fetchone()[0]
        if schema_exists:
            return False
        cursor.execute('CREATE SCHEMA %s;', [AsIs(settings.HEROKU_CONNECT_SCHEMA)])
    with connection.schema_editor() as editor:
        for model in get_heroku_connect_models():
            editor.create_model(model)
        editor.execute('CREATE EXTENSION IF NOT EXISTS "hstore";')
        from heroku_connect.models import TriggerLog, TriggerLogArchive
        for cls in [TriggerLog, TriggerLogArchive]:
            editor.create_model(cls)
    return True
Create Heroku Connect schema. Note: This function is only meant to be used for local development. In a production environment the schema will be created by Heroku Connect. Args: using (str): Alias for database connection. Returns: bool: ``True`` if the schema was created, ``False`` if the schema already exists.
codesearchnet
def parse_date(date_string, ignoretz=True):
    try:
        return parser.parse(date_string, ignoretz=ignoretz)
    except TypeError:
        return None
Parse a string as a date. If the string fails to parse, `None` will be returned instead >>> parse_date('2017-08-15T18:24:31') datetime.datetime(2017, 8, 15, 18, 24, 31) Args: date_string (`str`): Date in string format to parse ignoretz (`bool`): If set ``True``, ignore time zones and return a naive :class:`datetime` object. Returns: `datetime`, `None`
juraj-google-style
def isubset(self, *keys):
    return ww.g((key, self[key]) for key in keys)
Return key, self[key] as generator for key in keys. Raise KeyError if a key does not exist Args: keys: Iterable containing keys Example: >>> from ww import d >>> list(d({1: 1, 2: 2, 3: 3}).isubset(1, 3)) [(1, 1), (3, 3)]
juraj-google-style
def init_cache(self, batch_size, max_length, encoder_outputs):
    decoder_input_ids = jnp.ones((batch_size, max_length), dtype='i4')
    decoder_attention_mask = jnp.ones_like(decoder_input_ids)
    decoder_position_ids = jnp.broadcast_to(jnp.arange(jnp.atleast_2d(decoder_input_ids).shape[-1]), decoder_input_ids.shape)

    def _decoder_forward(module, decoder_input_ids, decoder_attention_mask, decoder_position_ids, **kwargs):
        decoder_module = module._get_decoder_module()
        return decoder_module(decoder_input_ids, decoder_attention_mask, decoder_position_ids, **kwargs)

    init_variables = self.module.init(jax.random.PRNGKey(0), decoder_input_ids=decoder_input_ids, decoder_attention_mask=decoder_attention_mask, decoder_position_ids=decoder_position_ids, encoder_hidden_states=encoder_outputs[0], init_cache=True, method=_decoder_forward)
    return unfreeze(init_variables['cache'])
Args: batch_size (`int`): batch_size used for fast auto-regressive decoding. Defines the batch size of the initialized cache. max_length (`int`): maximum possible length for auto-regressive decoding. Defines the sequence length of the initialized cache. encoder_outputs (`Union[FlaxBaseModelOutput, tuple(tuple(jnp.ndarray)]`): `encoder_outputs` consists of (`last_hidden_state`, *optional*: `hidden_states`, *optional*: `attentions`). `last_hidden_state` of shape `(batch_size, sequence_length, hidden_size)`, *optional*) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder.
github-repos
def beam_sql(self, line: str, cell: Optional[str]=None) -> Optional[PValue]: input_str = line if cell: input_str += ' ' + cell parsed = self._parser.parse(input_str.strip().split()) if not parsed: return output_name = parsed.output_name verbose = parsed.verbose query = parsed.query runner = parsed.runner if output_name and (not output_name.isidentifier()) or keyword.iskeyword(output_name): on_error('The output_name "%s" is not a valid identifier. Please supply a valid identifier that is not a Python keyword.', line) return if not query: on_error('Please supply the SQL query to be executed.') return if runner and runner not in _SUPPORTED_RUNNERS: on_error('Runner "%s" is not supported. Supported runners are %s.', runner, _SUPPORTED_RUNNERS) return query = ' '.join(query) found = find_pcolls(query, pcoll_by_name(), verbose=verbose) schemas = set() main_session = importlib.import_module('__main__') for _, pcoll in found.items(): if not match_is_named_tuple(pcoll.element_type): on_error('PCollection %s of type %s is not a NamedTuple. See https: return register_coder_for_schema(pcoll.element_type, verbose=verbose) if hasattr(main_session, pcoll.element_type.__name__): schemas.add(pcoll.element_type) if runner in ('DirectRunner', None): collect_data_for_local_run(query, found) output_name, output, chain = apply_sql(query, output_name, found) chain.current.schemas = schemas cache_output(output_name, output) return output output_name, current_node, chain = apply_sql(query, output_name, found, False) current_node.schemas = schemas if runner == 'DataflowRunner': _ = chain.to_pipeline() _ = DataflowOptionsForm(output_name, pcoll_by_name()[output_name], verbose).display_for_input() return None else: raise ValueError('Unsupported runner %s.', runner)
The beam_sql line/cell magic that executes a Beam SQL. Args: line: the string on the same line after the beam_sql magic. cell: everything else in the same notebook cell as a string. If None, beam_sql is used as line magic. Otherwise, cell magic. Returns None if running into an error or waiting for user input (running on a selected runner remotely), otherwise a PValue as if a SqlTransform is applied.
github-repos
def get_experiment_from_key(self, experiment_key):
    experiment = self.experiment_key_map.get(experiment_key)
    if experiment:
        return experiment
    self.logger.error(('Experiment key "%s" is not in datafile.' % experiment_key))
    self.error_handler.handle_error(exceptions.InvalidExperimentException(enums.Errors.INVALID_EXPERIMENT_KEY_ERROR))
    return None
Get experiment for the provided experiment key. Args: experiment_key: Experiment key for which experiment is to be determined. Returns: Experiment corresponding to the provided experiment key.
codesearchnet
def get_type_info(obj):
    if isinstance(obj, primitive_types):
        return ('primitive', type(obj).__name__)
    if isinstance(obj, sequence_types):
        return ('sequence', type(obj).__name__)
    if isinstance(obj, array_types):
        return ('array', type(obj).__name__)
    if isinstance(obj, key_value_types):
        return ('key-value', type(obj).__name__)
    if isinstance(obj, types.ModuleType):
        return ('module', type(obj).__name__)
    if isinstance(obj, (types.FunctionType, types.MethodType)):
        return ('function', type(obj).__name__)
    if isinstance(obj, type):
        if hasattr(obj, '__dict__'):
            return ('class', obj.__name__)
    if isinstance(type(obj), type):
        if hasattr(obj, '__dict__'):
            cls_name = type(obj).__name__
            if cls_name == 'classobj':
                cls_name = obj.__name__
                return ('class', '{}'.format(cls_name))
            if cls_name == 'instance':
                cls_name = obj.__class__.__name__
            return ('instance', '{} instance'.format(cls_name))
    return ('unknown', type(obj).__name__)
Get type information for a Python object Args: obj: The Python object Returns: tuple: (object type "category", object type name)
juraj-google-style
async def freeze(self, *args, **kwargs): uid = kwargs.get('uid', 0) coinid = kwargs.get('coinid') amount = kwargs.get('amount') address = kwargs.get('address') try: coinid = coinid.replace('TEST', '') except: pass try: uid = int(uid) except: return (await self.error_400('User id must be integer. ')) try: amount = int(amount) except: return (await self.error_400('Amount must be integer. ')) try: assert (amount > 0) except: return (await self.error_400('Amount must be positive integer. ')) if ((not uid) and address): uid = (await self.get_uid_by_address(address=address, coinid=coinid)) if isinstance(uid, dict): return uid database = self.client[self.collection] collection = database[coinid] balance = (await collection.find_one({'uid': uid})) if (not balance): return (await self.error_404(('Freeze. Balance with uid:%s and type:%s not found.' % (uid, coinid)))) difference = (int(balance['amount_active']) - int(amount)) if (difference < 0): return (await self.error_403('Freeze. Insufficient amount in the account')) amount_frozen = (int(balance['amount_frozen']) + int(amount)) (await collection.find_one_and_update({'uid': uid}, {'$set': {'amount_active': str(difference), 'amount_frozen': str(amount_frozen)}})) result = (await collection.find_one({'uid': uid})) result['amount_frozen'] = int(result['amount_frozen']) result['amount_active'] = int(result['amount_active']) del result['_id'] return result
Freeze a user's balance Accepts: - uid [integer] (users id from main server) - coinid [string] (blockchain type in uppercase) - amount [integer] (amount for freezing) Returns: - uid [integer] (users id from main server) - coinid [string] (blockchain type in uppercase) - amount_active [integer] (active users amount) - amount_frozen [integer] (frozen users amount)
codesearchnet
def delete_panel(self, panel_obj):
    res = self.panel_collection.delete_one({'_id': panel_obj['_id']})
    LOG.warning(('Deleting panel %s, version %s' % (panel_obj['panel_name'], panel_obj['version'])))
    return res
Delete a panel by '_id'. Args: panel_obj(dict) Returns: res(pymongo.DeleteResult)
codesearchnet
def unique_timestamps(self: EventSetOrNode) -> EventSetOrNode:
    from temporian.core.operators.unique_timestamps import unique_timestamps
    return unique_timestamps(self)
Removes events with duplicated timestamps from an [`EventSet`][temporian.EventSet]. Returns a feature-less EventSet where each timestamp from the original one only appears once. If the input is indexed, the unique operation is applied independently for each index. Usage example: ```python >>> a = tp.event_set(timestamps=[5, 9, 9, 16], features={'f': [1,2,3,4]}) >>> b = a.unique_timestamps() >>> b indexes: [] features: [] events: (3 events): timestamps: [ 5. 9. 16.] ... ``` Returns: EventSet without features with unique timestamps in the input.
github-repos
def group_by_mimetype(content: ProcessorContent) -> dict[str, ProcessorContent]:
    grouped_content = {}
    for mimetype, part in content.items():
        if mimetype not in grouped_content:
            grouped_content[mimetype] = ProcessorContent()
        grouped_content[mimetype] += part
    return grouped_content
Groups content by mimetype. The order of parts within each mimetype grouping is preserved, maintaining the same order as they appeared in the original input `content`. Args: content: The content to group. Returns: A dictionary mapping mimetypes to ProcessorContent objects, with the same order as in the original input `content`.
github-repos
def compare_versions(ver1='', oper='==', ver2=''):
    if not ver1:
        raise SaltInvocationError('compare_version, ver1 is blank')
    if not ver2:
        raise SaltInvocationError('compare_version, ver2 is blank')
    if ver1 == 'latest':
        ver1 = six.text_type(sys.maxsize)
    if ver2 == 'latest':
        ver2 = six.text_type(sys.maxsize)
    if ver1 == 'Not Found':
        ver1 = '0.0.0.0.0'
    if ver2 == 'Not Found':
        ver2 = '0.0.0.0.0'
    return salt.utils.versions.compare(ver1, oper, ver2, ignore_epoch=True)
Compare software package versions Args: ver1 (str): A software version to compare oper (str): The operand to use to compare ver2 (str): A software version to compare Returns: bool: True if the comparison is valid, otherwise False CLI Example: .. code-block:: bash salt '*' pkg.compare_versions 1.2 >= 1.3
juraj-google-style
def colored(cls, color, message):
    return getattr(cls, color.upper()) + message + cls.DEFAULT
Small function to wrap a string around a color Args: color (str): name of the color to wrap the string with, must be one of the class properties message (str): String to wrap with the color Returns: str: the colored string
juraj-google-style
def commit_author(sha1=''):
    with conf.within_proj_dir():
        cmd = 'git show -s --format="%an||%ae" {}'.format(sha1)
        result = shell.run(cmd, capture=True, never_pretend=True).stdout
        (name, email) = result.split('||')
        return Author(name, email)
Return the author of the given commit. Args: sha1 (str): The sha1 of the commit to query. If not given, it will return the sha1 for the current commit. Returns: Author: A named tuple ``(name, email)`` with the commit author details.
codesearchnet
def remove_config(self, id):
    url = self._url('/configs/{0}', id)
    res = self._delete(url)
    self._raise_for_status(res)
    return True
Remove a config Args: id (string): Full ID of the config to remove Returns (boolean): True if successful Raises: :py:class:`docker.errors.NotFound` if no config with that ID exists
juraj-google-style
def _transpose_batch_time(x):
    x_static_shape = x.get_shape()
    if x_static_shape.rank is not None and x_static_shape.rank < 2:
        return x
    x_rank = array_ops.rank(x)
    x_t = array_ops.transpose(x, array_ops.concat(([1, 0], math_ops.range(2, x_rank)), axis=0))
    x_t.set_shape(tensor_shape.TensorShape([x_static_shape.dims[1].value, x_static_shape.dims[0].value]).concatenate(x_static_shape[2:]))
    return x_t
Transposes the batch and time dimensions of a Tensor. If the input tensor has rank < 2 it returns the original tensor. Retains as much of the static shape information as possible. Args: x: A Tensor. Returns: x transposed along the first two dimensions.
github-repos
def _forward_over_back_hessian(f, params, use_pfor, dtype=None):
    return _vectorize_parameters(functools.partial(_hvp, f, params), params, use_pfor=use_pfor, dtype=dtype)
Computes the full Hessian matrix for the scalar-valued f(*params). Args: f: A function taking `params` and returning a scalar. params: A possibly nested structure of tensors. use_pfor: If true, uses `tf.vectorized_map` calls instead of looping. dtype: Required if `use_pfor=False`. A possibly nested structure of dtypes (e.g. `tf.float32`) matching the structure of `f`'s returns. Returns: A possibly nested structure of matrix slices corresponding to `params`. Each slice has shape [P, p_s] where `p_s` is the number of parameters (`tf.size`) in the corresponding element of `params` and `P` is the total number of parameters (`sum_s(p_s)`). The full matrix can be obtained by concatenating along the second axis.
github-repos
def get(self, *, search, limit=0, headers=None):
    return self.transport.forward_request(
        method='GET',
        path=self.path,
        params={'search': search, 'limit': limit},
        headers=headers
    )
Retrieves the assets that match a given text search string. Args: search (str): Text search string. limit (int): Limit the number of returned documents. Defaults to zero meaning that it returns all the matching assets. headers (dict): Optional headers to pass to the request. Returns: :obj:`list` of :obj:`dict`: List of assets that match the query.
juraj-google-style
def delete_asset(self, asset_id, asset_type):
    return self.asset(asset_id, asset_type=asset_type, action='DELETE')
Delete the asset with the provided asset_id. Args: asset_id: The id of the asset. asset_type: The asset type. Returns: The API response for the DELETE action.
juraj-google-style
def __new__(cls, name, parents, dct):
    newClass = super(CommandMeta, cls).__new__(cls, name, parents, dct)
    if name != 'Command':
        for attribute in ['name', 'description', 'help']:
            if attribute not in dct or dct[attribute] is None:
                raise ValueError('%s cannot be None.' % attribute)
        CommandMeta.registry[name] = newClass
    return newClass
Creates a new Command class and validates it. Args: cls (Class): the class object being created name (name): the name of the class being created parents (list): list of parent classes dct (dictionary): class attributes Returns: ``Class``
juraj-google-style
def wrap_in_placeholder(self, arg, shape_info):
    if shape_info == 'known':
        return arg
    if isinstance(arg, ragged_tensor.RaggedTensor):
        return arg.with_flat_values(self.wrap_in_placeholder(arg.flat_values, shape_info))
    if isinstance(arg, tensor_shape.TensorShape):
        if arg.ndims is None:
            return arg
        arg = constant_op.constant(arg.as_list())
    if shape_info == 'unknown_rank':
        return array_ops.placeholder_with_default(arg, None)
    if shape_info == 'unknown_dims':
        return array_ops.placeholder_with_default(arg, [None] * arg.shape.rank)
    raise AssertionError('Unexpected shape_info %r' % shape_info)
Wraps `arg` in a placeholder to limit static shape info. Args: arg: The value to wrap. A Tensor, RaggedTensor, or TensorShape. shape_info: One of ['known', 'unknown_dims', 'unknown_rank']. Returns: * If shape_info is 'known': returns `arg`. * If shape_info is 'unknown_dims': returns a placeholder wrapping `arg` where the dimension sizes are unknown. If `arg` is a TensorShape, then convert it to a vector first. If `arg` is a RaggedTensor, then wrap the flat_values. * If shape_info is 'unknown_rank': returns a placeholder wrapping `arg` where the rank is unknown. If `arg` is a TensorShape, then convert it to a vector first. If `arg` is a RaggedTensor, then wrap the flat_values.
github-repos
def tensor_summary(name, tensor, summary_description=None, collections=None, summary_metadata=None, family=None, display_name=None):
    if summary_metadata is None:
        summary_metadata = _SummaryMetadata()
    if summary_description is not None:
        summary_metadata.summary_description = summary_description
    if display_name is not None:
        summary_metadata.display_name = display_name
    serialized_summary_metadata = summary_metadata.SerializeToString()
    if _distribute_summary_op_util.skip_summary():
        return _constant_op.constant('')
    with _summary_op_util.summary_scope(name, family, values=[tensor]) as (tag, scope):
        val = _gen_logging_ops.tensor_summary_v2(tensor=tensor, tag=tag, name=scope, serialized_summary_metadata=serialized_summary_metadata)
        _summary_op_util.collect(val, collections, [_ops.GraphKeys.SUMMARIES])
    return val
Outputs a `Summary` protocol buffer with a serialized tensor.proto. Args: name: A name for the generated node. If display_name is not set, it will also serve as the tag name in TensorBoard. (In that case, the tag name will inherit tf name scopes.) tensor: A tensor of any type and shape to serialize. summary_description: A long description of the summary sequence. Markdown is supported. collections: Optional list of graph collections keys. The new summary op is added to these collections. Defaults to `[GraphKeys.SUMMARIES]`. summary_metadata: Optional SummaryMetadata proto (which describes which plugins may use the summary value). family: Optional; if provided, used as the prefix of the summary tag, which controls the name used for display on TensorBoard when display_name is not set. display_name: A string used to name this data in TensorBoard. If this is not set, then the node name will be used instead. Returns: A scalar `Tensor` of type `string`. The serialized `Summary` protocol buffer.
github-repos
def _update_graph(self, vertex_dict=None, edge_dict=None):
    def set_attrs(ref, attrs):
        for attr_name, attr_val in attrs.items():
            ref.set(attr_name, attr_val)

    with self._lock:
        if vertex_dict:
            for vertex, vertex_attrs in vertex_dict.items():
                set_attrs(self._vertex_refs[vertex], vertex_attrs)
        if edge_dict:
            for edge, edge_attrs in edge_dict.items():
                if isinstance(edge, tuple):
                    set_attrs(self._edge_refs[edge], edge_attrs)
                else:
                    for vertex_pair in self._edge_to_vertex_pairs[edge]:
                        set_attrs(self._edge_refs[vertex_pair], edge_attrs)
Updates the pydot.Dot object with the given attribute update Args: vertex_dict: (Dict[str, Dict[str, str]]) maps vertex names to attributes edge_dict: This should be Either (Dict[str, Dict[str, str]]) which maps edge names to attributes Or (Dict[(str, str), Dict[str, str]]) which maps vertex pairs to edge attributes
github-repos
def invite_by_email(self, email, user, organization, **kwargs):
    try:
        invitee = self.user_model.objects.get(email__iexact=email)
    except self.user_model.DoesNotExist:
        invitee = None
    user_invitation = self.invitation_model.objects.create(
        invitee=invitee,
        invitee_identifier=email.lower(),
        invited_by=user,
        organization=organization,
    )
    self.send_invitation(user_invitation)
    return user_invitation
Primary interface method by which one user invites another to join Args: email: email address of the user to invite user: the user sending the invitation organization: the organization the invitee is invited to join **kwargs: Returns: an invitation instance Raises: MultipleObjectsReturned if multiple matching users are found
juraj-google-style
def add_presence_listener(self, callback):
    listener_uid = uuid4()
    self.presence_listeners[listener_uid] = callback
    return listener_uid
Add a presence listener that will send a callback when the client receives a presence update. Args: callback (func(roomchunk)): Callback called when a presence update arrives. Returns: uuid.UUID: Unique id of the listener, can be used to identify the listener.
codesearchnet
def position(msg0, msg1, t0, t1, lat_ref=None, lon_ref=None):
    tc0 = typecode(msg0)
    tc1 = typecode(msg1)
    if (5<=tc0<=8 and 5<=tc1<=8):
        if (not lat_ref) or (not lon_ref):
            raise RuntimeError("Surface position encountered, a reference \
                               position lat/lon required. Location of \
                               receiver can be used.")
        else:
            return surface_position(msg0, msg1, t0, t1, lat_ref, lon_ref)
    elif (9<=tc0<=18 and 9<=tc1<=18):
        return airborne_position(msg0, msg1, t0, t1)
    elif (20<=tc0<=22 and 20<=tc1<=22):
        return airborne_position(msg0, msg1, t0, t1)
    else:
        raise RuntimeError("incorrect or inconsistant message types")
Decode position from a pair of even and odd position message (works with both airborne and surface position messages) Args: msg0 (string): even message (28 bytes hexadecimal string) msg1 (string): odd message (28 bytes hexadecimal string) t0 (int): timestamps for the even message t1 (int): timestamps for the odd message Returns: (float, float): (latitude, longitude) of the aircraft
juraj-google-style
def hex_to_name(hexx):
    for (n, h) in defaults.COLOURS.items():
        if ((len(n) > 1) and (h == hexx.upper())):
            return n.lower()
    return None
Convert hex to a color name, using matplotlib's colour names. Args: hexx (str): A hexadecimal colour, starting with '#'. Returns: str: The name of the colour, or None if not found.
codesearchnet
def save_summaries(frames, keys, selected_summaries, batch_dir, batch_name):
    if not frames:
        logger.info("Could save summaries - no summaries to save!")
        logger.info("You have no frames - aborting")
        return None
    if not keys:
        logger.info("Could save summaries - no summaries to save!")
        logger.info("You have no keys - aborting")
        return None
    selected_summaries_dict = create_selected_summaries_dict(selected_summaries)
    summary_df = pd.concat(frames, keys=keys, axis=1)
    for key, value in selected_summaries_dict.items():
        _summary_file_name = os.path.join(batch_dir, "summary_%s_%s.csv" % (
            key, batch_name))
        _summary_df = summary_df.iloc[:, summary_df.columns.get_level_values(1) == value]
        _header = _summary_df.columns
        _summary_df.to_csv(_summary_file_name, sep=";")
        logger.info(
            "saved summary (%s) to:\n %s" % (key, _summary_file_name))
    logger.info("finished saving summaries")
    return summary_df
Writes the summaries to csv-files Args: frames: list of ``cellpy`` summary DataFrames keys: list of indexes (typically run-names) for the different runs selected_summaries: list defining which summary data to save batch_dir: directory to save to batch_name: the batch name (will be used for making the file-name(s)) Returns: a pandas DataFrame with your selected summaries.
juraj-google-style
def check_num_tasks(chain, task_count):
    errors = []
    min_decision_tasks = 1
    if (task_count['decision'] < min_decision_tasks):
        errors.append('{} decision tasks; we must have at least {}!'.format(task_count['decision'], min_decision_tasks))
    raise_on_errors(errors)
Make sure there are a specific number of specific task types. Currently we only check decision tasks. Args: chain (ChainOfTrust): the chain we're operating on task_count (dict): mapping task type to the number of links. Raises: CoTError: on failure.
codesearchnet
def load_institute(adapter, internal_id, display_name, sanger_recipients=None):
    institute_obj = build_institute(
        internal_id=internal_id,
        display_name=display_name,
        sanger_recipients=sanger_recipients
    )
    log.info("Loading institute {0} with display name {1}" \
             " into database".format(internal_id, display_name))
    adapter.add_institute(institute_obj)
Load a institute into the database Args: adapter(MongoAdapter) internal_id(str) display_name(str) sanger_recipients(list(email))
juraj-google-style
def _Identity(tensor, name=None):
    tensor = ops.internal_convert_to_tensor_or_composite(tensor, as_ref=True)
    tensor = variable_utils.convert_variables_to_tensors(tensor)
    if isinstance(tensor, tensor_lib.Tensor):
        if tensor.dtype._is_ref_dtype:
            return gen_array_ops.ref_identity(tensor, name=name)
        else:
            return array_ops.identity(tensor, name=name)
    elif isinstance(tensor, composite_tensor.CompositeTensor):
        return nest.map_structure(_Identity, tensor, expand_composites=True)
    else:
        raise TypeError(f"'tensor' must be a Tensor or CompositeTensor. Received: {type(tensor)}.")
Return a tensor with the same shape and contents as the input tensor. Args: tensor: A Tensor. name: A name for this operation (optional). Returns: A Tensor with the same type and value as the input Tensor.
github-repos
def __init__(self, input_reader=None, output_writer=None):
    super(Log2TimelineTool, self).__init__(
        input_reader=input_reader, output_writer=output_writer)
    self._command_line_arguments = None
    self._enable_sigsegv_handler = False
    self._number_of_extraction_workers = 0
    self._storage_serializer_format = definitions.SERIALIZER_FORMAT_JSON
    self._source_type = None
    self._status_view = status_view.StatusView(self._output_writer, self.NAME)
    self._status_view_mode = status_view.StatusView.MODE_WINDOW
    self._stdout_output_writer = isinstance(
        self._output_writer, tools.StdoutOutputWriter)
    self._worker_memory_limit = None
    self.dependencies_check = True
    self.list_hashers = False
    self.list_parsers_and_plugins = False
    self.list_profilers = False
    self.show_info = False
    self.show_troubleshooting = False
Initializes a log2timeline CLI tool. Args: input_reader (Optional[InputReader]): input reader, where None indicates that the stdin input reader should be used. output_writer (Optional[OutputWriter]): output writer, where None indicates that the stdout output writer should be used.
juraj-google-style
def save_attributes_to_hdf5_group(group, name, data):
    bad_attributes = [x for x in data if len(x) > HDF5_OBJECT_HEADER_LIMIT]
    if bad_attributes:
        raise RuntimeError('The following attributes cannot be saved to HDF5 file because they are larger than %d bytes: %s' % (HDF5_OBJECT_HEADER_LIMIT, ', '.join(bad_attributes)))
    data_npy = np.asarray(data)
    num_chunks = 1
    chunked_data = np.array_split(data_npy, num_chunks)
    while any((x.nbytes > HDF5_OBJECT_HEADER_LIMIT for x in chunked_data)):
        num_chunks += 1
        chunked_data = np.array_split(data_npy, num_chunks)
    if num_chunks > 1:
        for chunk_id, chunk_data in enumerate(chunked_data):
            group.attrs['%s%d' % (name, chunk_id)] = chunk_data
    else:
        group.attrs[name] = data
Saves attributes (data) of the specified name into the HDF5 group. This method deals with an inherent problem of HDF5 file which is not able to store data larger than HDF5_OBJECT_HEADER_LIMIT bytes. Args: group: A pointer to a HDF5 group. name: A name of the attributes to save. data: Attributes data to store. Raises: RuntimeError: If any single attribute is too large to be saved.
github-repos
def create_from_binary(cls, load_dataruns, binary_view):
    (attr_type, attr_len, non_resident, name_len, name_offset, flags, attr_id,
     start_vcn, end_vcn, rl_offset, compress_usize, alloc_sstream, curr_sstream,
     init_sstream) = cls._REPR.unpack(binary_view[:cls._REPR.size])
    if name_len:
        name = binary_view[name_offset:(name_offset + (2 * name_len))].tobytes().decode('utf_16_le')
    else:
        name = None
    nw_obj = cls((AttrTypes(attr_type), attr_len, bool(non_resident), AttrFlags(flags), attr_id, name),
                 (start_vcn, end_vcn, rl_offset, compress_usize, alloc_sstream, curr_sstream, init_sstream))
    if load_dataruns:
        nw_obj.data_runs = DataRuns.create_from_binary(binary_view[nw_obj.rl_offset:])
    _MOD_LOGGER.debug('NonResidentAttrHeader object created successfully')
    return nw_obj
Creates a new object NonResidentAttrHeader from a binary stream. The binary stream can be represented by a byte string, bytearray or a memoryview of the bytearray. Args: load_dataruns (bool) - Indicates if the dataruns are to be loaded binary_view (memoryview of bytearray) - A binary stream with the information of the attribute non_resident_offset (int) - The offset where the non resident header begins Returns: NonResidentAttrHeader: New object using hte binary stream as source
codesearchnet
def gen_encoder_output_proposals(self, enc_output, padding_mask, spatial_shapes):
    batch_size = enc_output.shape[0]
    proposals = []
    _cur = 0
    level_ids = []
    for level, (height, width) in enumerate(spatial_shapes):
        mask_flatten_ = padding_mask[:, _cur:_cur + height * width].view(batch_size, height, width, 1)
        valid_height = torch.sum(~mask_flatten_[:, :, 0, 0], 1)
        valid_width = torch.sum(~mask_flatten_[:, 0, :, 0], 1)
        grid_y, grid_x = meshgrid(torch.linspace(0, height - 1, height, dtype=torch.float32, device=enc_output.device), torch.linspace(0, width - 1, width, dtype=torch.float32, device=enc_output.device), indexing='ij')
        grid = torch.cat([grid_x.unsqueeze(-1), grid_y.unsqueeze(-1)], -1)
        scale = torch.cat([valid_width.unsqueeze(-1), valid_height.unsqueeze(-1)], 1).view(batch_size, 1, 1, 2)
        grid = (grid.unsqueeze(0).expand(batch_size, -1, -1, -1) + 0.5) / scale
        width_height = torch.ones_like(grid) * 0.05 * 2.0 ** level
        proposal = torch.cat((grid, width_height), -1).view(batch_size, -1, 4)
        proposals.append(proposal)
        _cur += height * width
        level_ids.append(grid.new_ones(height * width, dtype=torch.long) * level)
    output_proposals = torch.cat(proposals, 1)
    output_proposals_valid = ((output_proposals > 0.01) & (output_proposals < 0.99)).all(-1, keepdim=True)
    output_proposals = torch.log(output_proposals / (1 - output_proposals))
    output_proposals = output_proposals.masked_fill(padding_mask.unsqueeze(-1), float('inf'))
    output_proposals = output_proposals.masked_fill(~output_proposals_valid, float('inf'))
    object_query = enc_output
    object_query = object_query.masked_fill(padding_mask.unsqueeze(-1), float(0))
    object_query = object_query.masked_fill(~output_proposals_valid, float(0))
    object_query = self.enc_output_norm(self.enc_output(object_query))
    level_ids = torch.cat(level_ids)
    return (object_query, output_proposals, level_ids)
Generate the encoder output proposals from encoded enc_output. Args: enc_output (Tensor[batch_size, sequence_length, hidden_size]): Output of the encoder. padding_mask (Tensor[batch_size, sequence_length]): Padding mask for `enc_output`. spatial_shapes (Tensor[num_feature_levels, 2]): Spatial shapes of the feature maps. Returns: `tuple(torch.FloatTensor)`: A tuple of feature map and bbox prediction. - object_query (Tensor[batch_size, sequence_length, hidden_size]): Object query features. Later used to directly predict a bounding box. (without the need of a decoder) - output_proposals (Tensor[batch_size, sequence_length, 4]): Normalized proposals, after an inverse sigmoid.
github-repos
def login_with_password(self, username, password, limit=10):
    warn('login_with_password is deprecated. Use login with sync=True.', DeprecationWarning)
    return self.login(username, password, limit, sync=True)
Deprecated. Use ``login`` with ``sync=True``. Login to the homeserver. Args: username (str): Account username password (str): Account password limit (int): Deprecated. How many messages to return when syncing. This will be replaced by a filter API in a later release. Returns: str: Access token Raises: MatrixRequestError
codesearchnet
def get(self, record_id):
    record_url = self.record_url(record_id)
    return self._get(record_url)
Retrieves a record by its id >>> record = airtable.get('recwPQIfs4wKPyc9D') Args: record_id(``str``): Airtable record id Returns: record (``dict``): Record
juraj-google-style
def parse_config(self, config):
    prefix = self.argument_prefix
    self.sources = config.get_sources(prefix)
    self.smart_sources = [self._get_smart_filename(s) for s in self.sources]
    self.index = config.get_index(prefix)
    self.source_roots = OrderedSet(config.get_paths(('%s_source_roots' % prefix)))
    for (arg, dest) in list(self.paths_arguments.items()):
        val = config.get_paths(arg)
        setattr(self, dest, val)
    for (arg, dest) in list(self.path_arguments.items()):
        val = config.get_path(arg)
        setattr(self, dest, val)
    self.formatter.parse_config(config)
Override this, making sure to chain up first, if your extension adds its own custom command line arguments, or you want to do any further processing on the automatically added arguments. The default implementation will set attributes on the extension: - 'sources': a set of absolute paths to source files for this extension - 'index': absolute path to the index for this extension Additionally, it will set an attribute for each argument added with `Extension.add_path_argument` or `Extension.add_paths_argument`, with the extension's `Extension.argument_prefix` stripped, and dashes changed to underscores. Args: config: a `config.Config` instance
codesearchnet
def calculate_uncertainty(self, logits: torch.Tensor) -> torch.Tensor:
    uncertainty_scores = -torch.abs(logits)
    return uncertainty_scores
In Mask2Former paper, uncertainty is estimated as L1 distance between 0.0 and the logit prediction in 'logits' for the foreground class in `classes`. Args: logits (`torch.Tensor`): A tensor of shape (R, 1, ...) for class-specific or class-agnostic, where R is the total number of predicted masks in all images and C is: the number of foreground classes. The values are logits. Returns: scores (`torch.Tensor`): A tensor of shape (R, 1, ...) that contains uncertainty scores with the most uncertain locations having the highest uncertainty score.
github-repos
def __similarity(s1, s2, ngrams_fn, n=3):
    (ngrams1, ngrams2) = (set(ngrams_fn(s1, n=n)), set(ngrams_fn(s2, n=n)))
    matches = ngrams1.intersection(ngrams2)
    return ((2 * len(matches)) / (len(ngrams1) + len(ngrams2)))
The fraction of n-grams matching between two sequences Args: s1: a string s2: another string n: an int for the n in n-gram Returns: float: the fraction of n-grams matching
codesearchnet
def find_all_template(im_source, im_search, threshold=0.5, maxcnt=0, rgb=False, bgremove=False):
    method = cv2.TM_CCOEFF_NORMED
    if rgb:
        s_bgr = cv2.split(im_search)
        i_bgr = cv2.split(im_source)
        weight = (0.3, 0.3, 0.4)
        resbgr = [0, 0, 0]
        for i in range(3):
            resbgr[i] = cv2.matchTemplate(i_bgr[i], s_bgr[i], method)
        res = resbgr[0]*weight[0] + resbgr[1]*weight[1] + resbgr[2]*weight[2]
    else:
        s_gray = cv2.cvtColor(im_search, cv2.COLOR_BGR2GRAY)
        i_gray = cv2.cvtColor(im_source, cv2.COLOR_BGR2GRAY)
        if bgremove:
            s_gray = cv2.Canny(s_gray, 100, 200)
            i_gray = cv2.Canny(i_gray, 100, 200)
        res = cv2.matchTemplate(i_gray, s_gray, method)
    w, h = im_search.shape[1], im_search.shape[0]
    result = []
    while True:
        min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
        if method in [cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED]:
            top_left = min_loc
        else:
            top_left = max_loc
        if DEBUG:
            print('templmatch_value(thresh:%.1f) = %.3f' %(threshold, max_val))
        if max_val < threshold:
            break
        middle_point = (top_left[0]+w/2, top_left[1]+h/2)
        result.append(dict(
            result=middle_point,
            rectangle=(top_left, (top_left[0], top_left[1] + h), (top_left[0] + w, top_left[1]), (top_left[0] + w, top_left[1] + h)),
            confidence=max_val
        ))
        if maxcnt and len(result) >= maxcnt:
            break
        cv2.floodFill(res, None, max_loc, (-1000,), max_val-threshold+0.1, 1, flags=cv2.FLOODFILL_FIXED_RANGE)
    return result
Locate image position with cv2.templateFind Use pixel match to find pictures. Args: im_source(string): 图像、素材 im_search(string): 需要查找的图片 threshold: 阈值,当相识度小于该阈值的时候,就忽略掉 Returns: A tuple of found [(point, score), ...] Raises: IOError: when file read error
juraj-google-style
def get_membership(self, uuid=None):
    group_id = self.get_group_id(uuid=uuid)
    uri = 'group/{group_id}/member'
    mbr_data = self.get(uri.format(group_id=group_id), params=None)
    return mbr_data
Get membership data based on uuid. Args: uuid (str): optional uuid. defaults to self.cuuid Raises: PyLmodUnexpectedData: No data was returned. requests.RequestException: Exception connection error Returns: dict: membership json
codesearchnet
def mktemp(self, container: Container) -> str:
    logger.debug('creating a temporary file inside container %s', container.uid)
    response = self.command(container, 'mktemp')
    if (response.code != 0):
        msg = 'failed to create temporary file for container {}: [{}] {}'
        msg = msg.format(uid, response.code, response.output)
        logger.error(msg)
        raise Exception(msg)
    assert (response.code == 0), 'failed to create temporary file'
    fn = response.output.strip()
    logger.debug('created temporary file inside container %s: %s', container.uid, fn)
    return fn
Creates a named temporary file within a given container. Returns: the absolute path to the created temporary file.
codesearchnet
def masks_to_boxes(masks: np.ndarray) -> np.ndarray:
    if masks.size == 0:
        return np.zeros((0, 4))
    h, w = masks.shape[-2:]
    y = np.arange(0, h, dtype=np.float32)
    x = np.arange(0, w, dtype=np.float32)
    y, x = np.meshgrid(y, x, indexing='ij')
    x_mask = masks * np.expand_dims(x, axis=0)
    x_max = x_mask.reshape(x_mask.shape[0], -1).max(-1)
    x = np.ma.array(x_mask, mask=~np.array(masks, dtype=bool))
    x_min = x.filled(fill_value=100000000.0)
    x_min = x_min.reshape(x_min.shape[0], -1).min(-1)
    y_mask = masks * np.expand_dims(y, axis=0)
    y_max = y_mask.reshape(x_mask.shape[0], -1).max(-1)
    y = np.ma.array(y_mask, mask=~np.array(masks, dtype=bool))
    y_min = y.filled(fill_value=100000000.0)
    y_min = y_min.reshape(y_min.shape[0], -1).min(-1)
    return np.stack([x_min, y_min, x_max, y_max], 1)
Compute the bounding boxes around the provided panoptic segmentation masks. Args: masks: masks in format `[number_masks, height, width]` where N is the number of masks Returns: boxes: bounding boxes in format `[number_masks, 4]` in xyxy format
github-repos
def session_new(self, **kwargs):
    path = self._get_path('session_new')
    response = self._GET(path, kwargs)
    self._set_attrs_to_values(response)
    return response
Generate a session id for user based authentication. A session id is required in order to use any of the write methods. Args: request_token: The token you generated for the user to approve. The token needs to be approved before being used here. Returns: A dict representation of the JSON returned from the API.
juraj-google-style
def DeregisterPlugin(cls, plugin_class): name = getattr( plugin_class, 'ARTIFACT_DEFINITION_NAME', plugin_class.__name__) name = name.lower() if name not in cls._plugins: raise KeyError( 'Artifact plugin class not set for name: {0:s}.'.format(name)) del cls._plugins[name] if name in cls._file_system_plugins: del cls._file_system_plugins[name] if name in cls._knowledge_base_plugins: del cls._knowledge_base_plugins[name] if name in cls._windows_registry_plugins: del cls._windows_registry_plugins[name]
Deregisters a preprocess plugin class.

Args:
    plugin_class (type): preprocess plugin class.

Raises:
    KeyError: if plugin class is not set for the corresponding name.
    TypeError: if the source type of the plugin class is not supported.
juraj-google-style
def smartupgrade(self, restart=True, dependencies=False, prerelease=False): if not self.check(): if self.verbose: print("Package {} already up-to-date!".format(self.pkg)) return if self.verbose: print("Upgrading {} ...".format(self.pkg)) self.upgrade(dependencies, prerelease, force=False) if restart: self.restart()
Upgrade the package if there is a later version available. Args: restart: restart app if True dependencies: update package dependencies if True (see pip --no-deps) prerelease: update to pre-release and development versions
juraj-google-style
def save_config(config, logdir=None): if logdir: with config.unlocked: config.logdir = logdir message = 'Start a new run and write summaries and checkpoints to {}.' tf.logging.info(message.format(config.logdir)) tf.gfile.MakeDirs(config.logdir) config_path = os.path.join(config.logdir, 'config.yaml') with tf.gfile.FastGFile(config_path, 'w') as file_: yaml.dump(config, file_, default_flow_style=False) else: message = ( 'Start a new run without storing summaries and checkpoints since no ' 'logging directory was specified.') tf.logging.info(message) return config
Save a new configuration by name.

If a logging directory is specified, it will be created and the configuration
will be stored there. Otherwise, a log message will be printed.

Args:
    config: Configuration object.
    logdir: Location for writing summaries and checkpoints if specified.

Returns:
    Configuration object.
juraj-google-style
def run(argv=None, save_main_session=True, test_pipeline=None) -> PipelineResult:
    known_args, pipeline_args = parse_known_args(argv)
    pipeline_options = PipelineOptions(pipeline_args)
    pipeline_options.view_as(SetupOptions).save_main_session = save_main_session
    milk_quality_data = pandas.read_csv(known_args.pipeline_input_data)
    start = time.mktime(time.strptime('2023/06/29 10:00:00', '%Y/%m/%d %H:%M:%S'))
    test_stream = TestStream()
    test_stream.advance_watermark_to(start)
    samples = [milk_quality_data.iloc[i:i + 1] for i in range(len(milk_quality_data))]
    for watermark_offset, sample in enumerate(samples, 1):
        test_stream.advance_watermark_to(start + watermark_offset)
        test_stream.add_elements([sample])
    test_stream.advance_watermark_to_infinity()
    model_handler = XGBoostModelHandlerPandas(
        model_class=xgboost.XGBClassifier, model_state=known_args.model_state)
    with beam.Pipeline() as p:
        _ = (
            p
            | test_stream
            | 'window' >> beam.WindowInto(window.SlidingWindows(30, 5))
            | 'RunInference' >> RunInference(model_handler)
            | 'Count number of elements in window' >> beam.CombineGlobally(
                AggregateMilkQualityResults()).without_defaults()
            | 'Print' >> beam.Map(print))
Args: argv: Command line arguments defined for this example. save_main_session: Used for internal testing. test_pipeline: Used for internal testing.
github-repos
def easeInElastic(n, amplitude=1, period=0.3): _checkRange(n) return (1 - easeOutElastic((1 - n), amplitude=amplitude, period=period))
An elastic tween function that begins with an increasing wobble and then snaps into the destination.

Args:
    n (float): The time progress, starting at 0.0 and ending at 1.0.
    amplitude (float): How far the wobble overshoots. Defaults to 1.
    period (float): How quickly the wobble oscillates. Defaults to 0.3.

Returns:
    (float) The line progress, starting at 0.0 and ending at 1.0. Suitable for passing to getPointOnLine().
codesearchnet
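A short usage sketch for easeInElastic, assuming the module's _checkRange and easeOutElastic helpers (not shown in this excerpt) are available; it samples the tween to show the growing wobble that snaps to 1.0 at the end.

for i in range(11):
    n = i / 10.0
    print('%.1f -> %+.3f' % (n, easeInElastic(n)))
# values start near 0.0, oscillate with increasing amplitude, and end at exactly 1.0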
def text(self, tag, textdata, step=None): if (step is None): step = self._step else: self._step = step smd = SummaryMetadata(plugin_data=SummaryMetadata.PluginData(plugin_name='text')) if isinstance(textdata, (str, bytes)): tensor = tf.make_tensor_proto(values=[textdata.encode(encoding='utf_8')], shape=(1,)) else: textdata = onp.array(textdata) datashape = onp.shape(textdata) if (len(datashape) == 1): tensor = tf.make_tensor_proto(values=[td.encode(encoding='utf_8') for td in textdata], shape=(datashape[0],)) elif (len(datashape) == 2): tensor = tf.make_tensor_proto(values=[td.encode(encoding='utf_8') for td in onp.reshape(textdata, (- 1))], shape=(datashape[0], datashape[1])) summary = Summary(value=[Summary.Value(tag=tag, metadata=smd, tensor=tensor)]) self.add_summary(summary, step)
Saves a text summary. Args: tag: str: label for this data textdata: string, or 1D/2D list/numpy array of strings step: int: training step Note: markdown formatting is rendered by tensorboard.
codesearchnet
def _get_tf2_flags(parser): input_file_group = parser.add_mutually_exclusive_group() input_file_group.add_argument('--saved_model_dir', type=str, help='Full path of the directory containing the SavedModel.') input_file_group.add_argument('--keras_model_file', type=str, help='Full filepath of HDF5 file containing tf.Keras model.') parser.add_argument('--saved_model_tag_set', type=str, help='Comma-separated set of tags identifying the MetaGraphDef within the SavedModel to analyze. All tags must be present. In order to pass in an empty tag set, pass in "". (default "serve")') parser.add_argument('--saved_model_signature_key', type=str, help='Key identifying the SignatureDef containing inputs and outputs. (default DEFAULT_SERVING_SIGNATURE_DEF_KEY)') parser.add_argument('--enable_v1_converter', action='store_true', help='Enables the TensorFlow V1 converter in 2.0')
Adds the TensorFlow 2.0 flags for tflite_convert to the given parser.

Args:
    parser: ArgumentParser to receive the flags.
github-repos
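A hypothetical wiring example showing how a command-line entry point might use the helper above; the flag values are placeholders.

import argparse

parser = argparse.ArgumentParser(description='tflite_convert (TF 2.x flags)')
_get_tf2_flags(parser)
args = parser.parse_args(['--saved_model_dir', '/tmp/saved_model'])
print(args.saved_model_dir, args.enable_v1_converter)  # /tmp/saved_model False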
def _write_input(self, input_dir="."): with open(os.path.join(input_dir, self.input_file), 'wt', encoding="utf-8") as inp: for k, v in self.control_params.items(): inp.write('{} {}\n'.format(k, self._format_param_val(v))) for idx, mol in enumerate(self.mols): filename = os.path.join( input_dir, '{}.{}'.format( idx, self.control_params["filetype"])).encode("ascii") if self.control_params["filetype"] == "pdb": self.write_pdb(mol, filename, num=idx+1) else: a = BabelMolAdaptor(mol) pm = pb.Molecule(a.openbabel_mol) pm.write(self.control_params["filetype"], filename=filename, overwrite=True) inp.write("\n") inp.write( "structure {}.{}\n".format( os.path.join(input_dir, str(idx)), self.control_params["filetype"])) for k, v in self.param_list[idx].items(): inp.write(' {} {}\n'.format(k, self._format_param_val(v))) inp.write('end structure\n')
Write the packmol input file to the input directory. Args: input_dir (string): path to the input directory
juraj-google-style
def get_samples_live_last(self, sensor_id): url = "https: headers = self.__gen_headers() headers["Content-Type"] = "application/json" params = { "sensorId": sensor_id } url = self.__append_url_params(url, params) r = requests.get(url, headers=headers) return r.json()
Get the last sample recorded by the sensor. Args: sensor_id (string): hexadecimal id of the sensor to query, e.g. ``0x0013A20040B65FAD`` Returns: list: dictionary objects containing sample data
juraj-google-style
def recode(self, table: pd.DataFrame, validate=False) -> pd.DataFrame: return self._recode_output(self._recode_input(table, validate=validate), validate=validate)
Pass the appropriate columns through each recoder function sequentially and return the final result. Args: table (pd.DataFrame): A dataframe on which to apply recoding logic. validate (bool): If ``True``, recoded table must pass validation tests.
juraj-google-style
def read_cs_raw_symmetrized_tensors(self): header_pattern = '\\s+-{50,}\\s+\\s+Absolute Chemical Shift tensors\\s+\\s+-{50,}$' first_part_pattern = '\\s+UNSYMMETRIZED TENSORS\\s+$' row_pattern = '\\s+'.join((['([-]?\\d+\\.\\d+)'] * 3)) unsym_footer_pattern = '^\\s+SYMMETRIZED TENSORS\\s+$' with zopen(self.filename, 'rt') as f: text = f.read() unsym_table_pattern_text = (((header_pattern + first_part_pattern) + '(?P<table_body>.+)') + unsym_footer_pattern) table_pattern = re.compile(unsym_table_pattern_text, (re.MULTILINE | re.DOTALL)) rp = re.compile(row_pattern) m = table_pattern.search(text) if m: table_text = m.group('table_body') micro_header_pattern = 'ion\\s+\\d+' micro_table_pattern_text = (((micro_header_pattern + '\\s*^(?P<table_body>(?:\\s*') + row_pattern) + ')+)\\s+') micro_table_pattern = re.compile(micro_table_pattern_text, (re.MULTILINE | re.DOTALL)) unsym_tensors = [] for mt in micro_table_pattern.finditer(table_text): table_body_text = mt.group('table_body') tensor_matrix = [] for line in table_body_text.rstrip().split('\n'): ml = rp.search(line) processed_line = [float(v) for v in ml.groups()] tensor_matrix.append(processed_line) unsym_tensors.append(tensor_matrix) self.data['unsym_cs_tensor'] = unsym_tensors else: raise ValueError('NMR UNSYMMETRIZED TENSORS is not found')
Parse the matrix form of the NMR tensors before they are corrected and tabulated.

Returns:
    List of unsymmetrized tensors, in the order of the atoms.
codesearchnet
def save(self, target, format=None, encoding=None, **options): if (encoding is None): encoding = config.DEFAULT_ENCODING if (format is None): (_, format) = helpers.detect_scheme_and_format(target) writer_class = self.__custom_writers.get(format) if (writer_class is None): if (format not in config.WRITERS): message = ('Format "%s" is not supported' % format) raise exceptions.FormatError(message) writer_class = helpers.import_attribute(config.WRITERS[format]) writer_options = helpers.extract_options(options, writer_class.options) if options: message = 'Not supported options "%s" for format "%s"' message = (message % (', '.join(options), format)) raise exceptions.TabulatorException(message) writer = writer_class(**writer_options) writer.write(self.iter(), target, headers=self.headers, encoding=encoding)
Save stream to the local filesystem. Args: target (str): Path where to save the stream. format (str, optional): The format the stream will be saved as. If None, detects from the ``target`` path. Defaults to None. encoding (str, optional): Saved file encoding. Defaults to ``config.DEFAULT_ENCODING``. **options: Extra options passed to the writer.
codesearchnet
def single_device(cl_device_type='GPU', platform=None, fallback_to_any_device_type=False): if isinstance(cl_device_type, str): cl_device_type = device_type_from_string(cl_device_type) device = None if (platform is None): platforms = cl.get_platforms() else: platforms = [platform] for platform in platforms: devices = platform.get_devices(device_type=cl_device_type) for dev in devices: if device_supports_double(dev): try: env = CLEnvironment(platform, dev) return [env] except cl.RuntimeError: pass if (not device): if fallback_to_any_device_type: return cl.get_platforms()[0].get_devices() else: raise ValueError('No devices of the specified type ({}) found.'.format(cl.device_type.to_string(cl_device_type))) raise ValueError('No suitable OpenCL device found.')
Get a list containing a single device environment, for a device of the given type on the given platform.

This will only fetch devices that support double precision (possibly only double with a pragma
defined, but still, it should support double).

Args:
    cl_device_type (cl.device_type.* or string): The type of the device we want, can be an OpenCL
        device type or a string matching 'GPU', 'CPU' or 'ALL'.
    platform (opencl platform): The OpenCL platform to select the devices from.
    fallback_to_any_device_type (boolean): If True, try to fall back to any possible device in the system.

Returns:
    list of CLEnvironment: List with one element, the CL runtime environment requested.
codesearchnet
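The device_supports_double helper is not shown in this excerpt; a common way to test for double-precision support in PyOpenCL is to look for the cl_khr_fp64 extension, sketched below as an assumption about what that helper might do.

import pyopencl as cl

def _supports_double(device):
    # double support is advertised through the cl_khr_fp64 extension string
    return 'cl_khr_fp64' in device.extensions

for platform in cl.get_platforms():
    for dev in platform.get_devices(device_type=cl.device_type.GPU):
        print(dev.name, _supports_double(dev))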
def emit_region(self, timestamp: int, duration: int, pid: int, tid: int, category: str, name: str, args: Dict[str, Any]) -> None: event = self._create_event('X', category, name, pid, tid, timestamp) event['dur'] = duration event['args'] = args self._events.append(event)
Adds a region event to the trace. Args: timestamp: The start timestamp of this region as a long integer. duration: The duration of this region as a long integer. pid: Identifier of the process generating this event as an integer. tid: Identifier of the thread generating this event as an integer. category: The event category as a string. name: The event name as a string. args: A JSON-compatible dictionary of event arguments.
github-repos
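A minimal sketch of the surrounding tracer class, assuming a _create_event helper and an _events list as hinted by the method above; the field names follow the Chrome trace-event format ('X' marks a complete event with a duration).

import json
from typing import Any, Dict, List

class TraceSketch:
    def __init__(self) -> None:
        self._events: List[Dict[str, Any]] = []

    def _create_event(self, phase: str, category: str, name: str,
                      pid: int, tid: int, timestamp: int) -> Dict[str, Any]:
        return {'ph': phase, 'cat': category, 'name': name,
                'pid': pid, 'tid': tid, 'ts': timestamp}

    def emit_region(self, timestamp, duration, pid, tid, category, name, args):
        event = self._create_event('X', category, name, pid, tid, timestamp)
        event['dur'] = duration
        event['args'] = args
        self._events.append(event)

tracer = TraceSketch()
tracer.emit_region(1000, 250, pid=1, tid=7, category='compute', name='matmul', args={'n': 64})
print(json.dumps(tracer._events))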
def readMonthTariffs(self, months_type): self.setContext("readMonthTariffs") try: req_type = binascii.hexlify(str(months_type).zfill(1)) req_str = "01523102303031" + req_type + "282903" work_table = self.m_mons if months_type == ReadMonths.kWhReverse: work_table = self.m_rev_mons self.request(False) req_crc = self.calc_crc16(req_str[2:].decode("hex")) req_str += req_crc self.m_serial_port.write(req_str.decode("hex")) raw_ret = self.m_serial_port.getResponse(self.getContext()) self.serialPostEnd() unpacked_read = self.unpackStruct(raw_ret, work_table) self.convertData(unpacked_read, work_table, self.m_kwh_precision) return_crc = self.calc_crc16(raw_ret[1:-2]) if str(return_crc) == str(work_table["crc16"][MeterData.StringValue]): ekm_log("Months CRC success, type = " + str(req_type)) self.setContext("") return True except: ekm_log(traceback.format_exc(sys.exc_info())) self.setContext("") return False
Serial call to read month tariffs block into meter object buffer. Args: months_type (int): A :class:`~ekmmeters.ReadMonths` value. Returns: bool: True on completion.
juraj-google-style
def accuracy_score(gold, pred, ignore_in_gold=[], ignore_in_pred=[]): gold, pred = _preprocess(gold, pred, ignore_in_gold, ignore_in_pred) if len(gold) and len(pred): acc = np.sum(gold == pred) / len(gold) else: acc = 0 return acc
Calculate (micro) accuracy. Args: gold: A 1d array-like of gold labels pred: A 1d array-like of predicted labels (assuming abstain = 0) ignore_in_gold: A list of labels for which elements having that gold label will be ignored. ignore_in_pred: A list of labels for which elements having that pred label will be ignored. Returns: A float, the (micro) accuracy score
juraj-google-style
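A toy check of accuracy_score, assuming _preprocess only coerces the inputs to arrays when both ignore lists are left empty.

import numpy as np

gold = np.array([1, 2, 2, 1, 1])
pred = np.array([1, 2, 1, 1, 1])
print(accuracy_score(gold, pred))  # 4 of 5 labels agree -> 0.8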
def reassign_label(cls, destination_cluster, label): conn = Qubole.agent(version=Cluster.api_version) data = {'destination_cluster': destination_cluster, 'label': label} return conn.put((cls.rest_entity_path + '/reassign-label'), data)
Reassign a label from one cluster to another. Args: `destination_cluster`: id/label of the cluster to move the label to `label`: label to be moved from the source cluster
codesearchnet
async def runCmdLine(self, line): if self.echoline: self.outp.printf(f'{self.cmdprompt}{line}') ret = None name = line.split(None, 1)[0] cmdo = self.getCmdByName(name) if (cmdo is None): self.printf(('cmd not found: %s' % (name,))) return try: ret = (await cmdo.runCmdLine(line)) except s_exc.CliFini: (await self.fini()) except asyncio.CancelledError: self.printf('Cmd cancelled') except Exception as e: exctxt = traceback.format_exc() self.printf(exctxt) self.printf(('error: %s' % e)) return ret
Run a single command line. Args: line (str): Line to execute. Examples: Execute the 'woot' command with the 'help' switch: await cli.runCmdLine('woot --help') Returns: object: Arbitrary data from the cmd class.
codesearchnet
def sub_location(self, nbr):
    assert nbr > -1, 'Sub location number must be greater or equal to 0!'
    assert nbr < self.nbr_of_sub_locations() - 1, \
        'Sub location number must be lower than %d!' % (self.nbr_of_sub_locations() - 1)
    return self._locations_list[nbr]
Return a given sub location, 0-based.

Args:
    nbr (int): 0-based index of the sub location to return.

Returns:
    The sub location at the given index.
codesearchnet
def read_avg_core_poten(self): def pairwise(iterable): 's -> (s0,s1), (s1,s2), (s2, s3), ...' a = iter(iterable) return zip(a, a) with zopen(self.filename, 'rt') as foutcar: line = foutcar.readline() aps = [] while (line != ''): line = foutcar.readline() if ('the norm of the test charge is' in line): ap = [] while (line != ''): line = foutcar.readline() if ('E-fermi' in line): aps.append(ap) break data = line.split() for (i, pot) in pairwise(data): ap.append(float(pot)) return aps
Read the core potential at each ionic step. Returns: A list for each ionic step containing a list of the average core potentials for each atom: [[avg core pot]]. Example: The average core potential of the 2nd atom of the structure at the last ionic step is: [-1][1]
codesearchnet
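The nested pairwise helper groups a flat token stream two items at a time (site index, potential); a standalone illustration of that idiom with made-up values:

tokens = ['1', '-83.52', '2', '-83.41', '3', '-82.97']
it = iter(tokens)
pairs = list(zip(it, it))
print(pairs)                             # [('1', '-83.52'), ('2', '-83.41'), ('3', '-82.97')]
print([float(pot) for _, pot in pairs])  # the per-atom average core potentials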
def disconnect(self, container, *args, **kwargs): if isinstance(container, Container): container = container.id return self.client.api.disconnect_container_from_network( container, self.id, *args, **kwargs )
Disconnect a container from this network. Args: container (str): Container to disconnect from this network, as either an ID, name, or :py:class:`~docker.models.containers.Container` object. force (bool): Force the container to disconnect from a network. Default: ``False`` Raises: :py:class:`docker.errors.APIError` If the server returns an error.
juraj-google-style
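A hypothetical usage with the Docker SDK for Python; the network and container names are placeholders.

import docker

client = docker.from_env()
network = client.networks.get('app-net')
network.disconnect('web-1', force=True)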
def WriteEvent(self, event): self.WriteEventStart() try: self.WriteEventBody(event) except errors.NoFormatterFound as exception: error_message = 'unable to retrieve formatter with error: {0!s}'.format( exception) self._ReportEventError(event, error_message) except errors.WrongFormatter as exception: error_message = 'wrong formatter with error: {0!s}'.format(exception) self._ReportEventError(event, error_message) self.WriteEventEnd()
Writes the event to the output. Args: event (EventObject): event.
juraj-google-style
def get_enumerations_from_bit_mask(enumeration, mask): return [x for x in enumeration if ((x.value & mask) == x.value)]
A utility function that creates a list of enumeration values from a bit mask for a specific mask enumeration class. Args: enumeration (class): The enumeration class from which to draw enumeration values. mask (int): The bit mask from which to identify enumeration values. Returns: list: A list of enumeration values corresponding to the bit mask.
codesearchnet
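A self-contained illustration using a standard-library IntEnum: for mask 0b101 only the members whose bits are fully contained in the mask are returned.

import enum

class Perm(enum.IntEnum):
    READ = 1
    WRITE = 2
    EXEC = 4

print(get_enumerations_from_bit_mask(Perm, 0b101))  # [<Perm.READ: 1>, <Perm.EXEC: 4>]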
def _process_worker(call_queue, result_queue, shutdown): while True: try: call_item = call_queue.get(block=True, timeout=0.1) except queue.Empty: if shutdown.is_set(): return else: try: r = call_item() except BaseException as e: result_queue.put(_ResultItem(call_item.work_id, exception=e)) else: result_queue.put(_ResultItem(call_item.work_id, result=r))
Evaluates calls from call_queue and places the results in result_queue.

This worker is run in a separate process.

Args:
    call_queue: A multiprocessing.Queue of _CallItems that will be read and
        evaluated by the worker.
    result_queue: A multiprocessing.Queue of _ResultItems that will be written
        to by the worker.
    shutdown: A multiprocessing.Event that will be set as a signal to the
        worker that it should exit when call_queue is empty.
juraj-google-style
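A minimal wiring sketch for the worker above. _CallItem and _ResultItem are not shown in this excerpt, so picklable stand-ins are defined here purely for illustration; a real pool would also need the module to be importable by the child process.

import multiprocessing
import queue  # the worker above expects this module to be imported

class _CallItem:
    def __init__(self, work_id, fn, args=()):
        self.work_id, self.fn, self.args = work_id, fn, args
    def __call__(self):
        return self.fn(*self.args)

class _ResultItem:
    def __init__(self, work_id, exception=None, result=None):
        self.work_id, self.exception, self.result = work_id, exception, result

if __name__ == '__main__':
    call_q, result_q = multiprocessing.Queue(), multiprocessing.Queue()
    shutdown = multiprocessing.Event()
    worker = multiprocessing.Process(target=_process_worker, args=(call_q, result_q, shutdown))
    worker.start()
    call_q.put(_CallItem(0, pow, (2, 10)))
    print(result_q.get().result)  # 1024
    shutdown.set()
    worker.join()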
def _infer_shape(self, dimensions):
    n = np.prod(dimensions)
    m = np.prod(abs(np.array(self._shape)))
    v = np.array(self._shape)
    v[v == -1] = n 
    return tuple(v)
Replaces the -1 wildcard in the output shape vector. This function infers the correct output shape given the input dimensions. Args: dimensions: List of input non-batch dimensions. Returns: Tuple of non-batch output dimensions.
codesearchnet
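A standalone NumPy sketch of the same wildcard inference for a stored shape of (-1, 3) and input dimensions (4, 6): n = 24, m = |-1| * 3 = 3, so the -1 becomes 24 // 3 = 8.

import numpy as np

shape = np.array([-1, 3])
dimensions = (4, 6)
n, m = np.prod(dimensions), np.prod(np.abs(shape))
shape[shape == -1] = n // m
print(tuple(int(v) for v in shape))  # (8, 3)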
def get_servo_temperature(self): data = [] data.append(0x09) data.append(self.servoid) data.append(RAM_READ_REQ) data.append(TEMPERATURE_RAM) data.append(BYTE2) send_data(data) rxdata = [] try: rxdata = SERPORT.read(13) return ord(rxdata[9]) except HerkulexError: raise HerkulexError("Could not communicate with motors")
Gets the current temperature of Herkulex

Args:
    none

Returns:
    int: the current temperature register of Herkulex

Raises:
    SerialException: Error occurred while opening serial port
juraj-google-style
def __rmod__(self, other): try: other = as_dimension(other) except (TypeError, ValueError): return NotImplemented return other % self
Returns `other` modulo `self`. Args: other: Another Dimension, or a value accepted by `as_dimension`. Returns: A Dimension whose value is `other` modulo `self`.
juraj-google-style
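A toy illustration of the reflected-operator dispatch that makes this method fire: when the left operand is a plain int, int.__mod__ returns NotImplemented for the unknown type and Python falls back to the right operand's __rmod__. The class below is a stand-in, not the real Dimension.

class Dim:
    def __init__(self, value):
        self.value = value
    def __mod__(self, other):
        return Dim(self.value % int(other))
    def __rmod__(self, other):
        return Dim(int(other) % self.value)
    def __repr__(self):
        return 'Dim(%d)' % self.value

print(7 % Dim(3))  # Dim(1) -- handled by Dim.__rmod__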
def __init__(self, datastore_client, entity_kind_batches, entity_kind_images): self._datastore_client = datastore_client self._entity_kind_batches = entity_kind_batches self._entity_kind_images = entity_kind_images self._data = {}
Initialize ImageBatchesBase. Args: datastore_client: instance of the CompetitionDatastoreClient entity_kind_batches: Cloud Datastore entity kind which is used to store batches of images. entity_kind_images: Cloud Datastore entity kind which is used to store individual images.
juraj-google-style
def generators_from_logdir(logdir): subdirs = io_wrapper.GetLogdirSubdirectories(logdir) generators = [ itertools.chain(*[ generator_from_event_file(os.path.join(subdir, f)) for f in tf.io.gfile.listdir(subdir) if io_wrapper.IsTensorFlowEventsFile(os.path.join(subdir, f)) ]) for subdir in subdirs ] return generators
Returns a list of event generators for subdirectories with event files. The number of generators returned should equal the number of directories within logdir that contain event files. If only logdir contains event files, returns a list of length one. Args: logdir: A log directory that contains event files. Returns: List of event generators for each subdirectory with event files.
juraj-google-style