code: string (lengths 20 to 4.93k)
docstring: string (lengths 33 to 1.27k)
source: string (3 classes)
def read_chunk_header(self): try: chunk_size_hex = (yield from self._connection.readline()) except ValueError as error: raise ProtocolError('Invalid chunk size: {0}'.format(error)) from error if (not chunk_size_hex.endswith(b'\n')): raise NetworkError('Connection closed.') try: chunk_size = int(chunk_size_hex.split(b';', 1)[0].strip(), 16) except ValueError as error: raise ProtocolError('Invalid chunk size: {0}'.format(error)) from error if (chunk_size < 0): raise ProtocolError('Chunk size cannot be negative.') self._chunk_size = self._bytes_left = chunk_size return (chunk_size, chunk_size_hex)
Read a single chunk's header. Returns: tuple: 2-item tuple with the size of the content in the chunk and the raw header byte string. Coroutine.
codesearchnet
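The chunk-size line parsed above follows HTTP/1.1 chunked transfer encoding: a hexadecimal size, optionally followed by ;-separated extensions, terminated by CRLF. A tiny standalone sketch of that rule (the sample header bytes are invented for illustration):

for header in (b'1a3\r\n', b'4;name=value\r\n'):
    size = int(header.split(b';', 1)[0].strip(), 16)  # hex size; extensions after ';' are ignored
    print(size)  # 419, then 4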
def _call_with_structured_signature(self, args, kwargs): bound_args = function_type_utils.canonicalize_function_inputs(args, kwargs, self.function_type) filtered_flat_args = self.function_type.unpack_inputs(bound_args) return self._call_flat(filtered_flat_args, captured_inputs=self.captured_inputs)
Executes the wrapped function with the structured signature. Args: args: Positional arguments to the concrete function. kwargs: Keyword arguments to the concrete function. Returns: The result of applying the function on the Tensors/Variables contained in `args` and `kwargs`. Raises: TypeError: if `args` and `kwargs` do not match the structured signature of this `ConcreteFunction`.
github-repos
def run(self, args): jlink = self.create_jlink(args) erased = jlink.erase() print('Bytes Erased: %d' % erased)
Erases the device connected to the J-Link. Args: self (EraseCommand): the ``EraseCommand`` instance args (Namespace): the arguments passed on the command-line Returns: ``None``
juraj-google-style
def get_ssm_parameter(parameter_name): try: response = boto3.client('ssm').get_parameters( Names=[parameter_name], WithDecryption=True ) return response.get('Parameters', None)[0].get('Value', '') except Exception: pass return ''
Get the decrypted value of an SSM parameter. Args: parameter_name - the name of the stored parameter of interest Return: Value if allowed and present, else an empty string
juraj-google-style
def get_min_instability(self, min_voltage=None, max_voltage=None): data = [] for pair in self._select_in_voltage_range(min_voltage, max_voltage): if (pair.decomp_e_charge is not None): data.append(pair.decomp_e_charge) if (pair.decomp_e_discharge is not None): data.append(pair.decomp_e_discharge) return (min(data) if (len(data) > 0) else None)
The minimum instability along a path for a specific voltage range. Args: min_voltage: The minimum allowable voltage. max_voltage: The maximum allowable voltage. Returns: Minimum decomposition energy of all compounds along the insertion path (a subset of the path can be chosen by the optional arguments)
codesearchnet
def isloaded(self, name): if (name is None): return True if isinstance(name, str): return (name in [x.__module__ for x in self]) if isinstance(name, Iterable): return set(name).issubset([x.__module__ for x in self]) return False
Checks if given hook module has been loaded Args: name (str): The name of the module to check Returns: bool. The return code:: True -- Loaded False -- Not Loaded
codesearchnet
def parse(filename, encoding=None): with open(filename, encoding=encoding) as source: for line in source: for word in line.split(): (yield word)
!DEMO! Simple file parsing generator Args: filename: absolute or relative path to file on disk encoding: encoding string that is passed to open function
codesearchnet
def number_of_shards(self): return self._sharding_policies[0].number_of_shards
Gets the number of shards to use for the InfeedQueue. Returns: Number of shards or None if the number of shards has not been set.
github-repos
def getAsGeoJson(self, session): statement = 'SELECT ST_AsGeoJSON({0}) AS json FROM {1} WHERE id={2};'.format(self.geometryColumnName, self.tableName, self.id) result = session.execute(statement) for row in result: return row.json
Retrieve the geometry in GeoJSON format. This method is a veneer for an SQL query that calls the ``ST_AsGeoJSON()`` function on the geometry column. Args: session (:mod:`sqlalchemy.orm.session.Session`): SQLAlchemy session object bound to PostGIS enabled database. Returns: str: GeoJSON string representation of geometry.
juraj-google-style
def _Open(self, path_spec, mode='rb'): if not path_spec.HasParent(): raise errors.PathSpecError( 'Unsupported path specification without parent.') file_object = resolver.Resolver.OpenFileObject( path_spec.parent, resolver_context=self._resolver_context) self._file_object = file_object
Opens the file system object defined by path specification. Args: path_spec (PathSpec): path specification. mode (Optional[str]): file access mode. The default is 'rb' which represents read-only binary. Raises: AccessError: if the access to open the file was denied. IOError: if the file system object could not be opened. PathSpecError: if the path specification is incorrect. ValueError: if the path specification is invalid.
juraj-google-style
def _get_resized_embeddings(self, old_embeddings, new_num_tokens=None) -> tf.Variable: old_embedding_dim = shape_list(old_embeddings)[1] init_range = getattr(self.config, 'initializer_range', 0.02) embeddings_mask, current_embeddings = init_copy_embeddings(old_embeddings, new_num_tokens) new_embeddings = self.add_weight(name=old_embeddings.name.split(':')[0], shape=[new_num_tokens, old_embedding_dim], initializer=get_initializer(init_range), dtype=tf.float32) init_embeddings = tf.where(embeddings_mask, current_embeddings, new_embeddings.value()) new_embeddings.assign(init_embeddings) return new_embeddings
Build a resized Embedding weights from a provided token Embedding weights. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end Args: old_embeddings (`tf.Variable`): Old embeddings to be resized. new_num_tokens (`int`, *optional*): New number of tokens in the embedding matrix. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end. If not provided or `None`, just returns a pointer to the input tokens `tf.Variable` module of the model without doing anything. Return: `tf.Variable`: Pointer to the resized Embedding Module or the old Embedding Module if `new_num_tokens` is `None`
github-repos
def _CheckFileEntryType(self, file_entry): if (not self._file_entry_types): return None return (self._CheckIsDevice(file_entry) or self._CheckIsDirectory(file_entry) or self._CheckIsFile(file_entry) or self._CheckIsLink(file_entry) or self._CheckIsPipe(file_entry) or self._CheckIsSocket(file_entry))
Checks the file entry type against the find specifications. Args: file_entry (FileEntry): file entry. Returns: bool: True if the file entry matches the find specification, False if not or None if no file entry type specification is defined.
codesearchnet
def apply_operation(self, symmop, fractional=False): if not fractional: self._lattice = Lattice([symmop.apply_rotation_only(row) for row in self._lattice.matrix]) def operate_site(site): new_cart = symmop.operate(site.coords) new_frac = self._lattice.get_fractional_coords(new_cart) return PeriodicSite(site.species, new_frac, self._lattice, properties=site.properties) else: new_latt = np.dot(symmop.rotation_matrix, self._lattice.matrix) self._lattice = Lattice(new_latt) def operate_site(site): return PeriodicSite(site.species, symmop.operate(site.frac_coords), self._lattice, properties=site.properties) self._sites = [operate_site(s) for s in self._sites]
Apply a symmetry operation to the structure and return the new structure. The lattice is operated by the rotation matrix only. Coords are operated in full and then transformed to the new lattice. Args: symmop (SymmOp): Symmetry operation to apply. fractional (bool): Whether the symmetry operation is applied in fractional space. Defaults to False, i.e., symmetry operation is applied in cartesian coordinates.
juraj-google-style
def peek_all(self, model_class): if self._cache: return self._cache.get_records(model_class.__name__) else: return []
Return a list of models from the local cache. Args: model_class (:class:`cinder_data.model.CinderModel`): A subclass of :class:`cinder_data.model.CinderModel` of your chosen model. Returns: list: A list of instances of your model_class or an empty list.
codesearchnet
def get_interpolated_value(self, energy): f = {} for spin in self.densities.keys(): f[spin] = get_linear_interpolated_value(self.energies, self.densities[spin], energy) return f
Returns interpolated density for a particular energy. Args: energy: Energy to return the density for.
codesearchnet
async def vsetup(self, author): if self.vready: logger.warning("Attempt to init voice when already initialised") return if self.state != 'starting': logger.error("Attempt to init from wrong state ('{}'), must be 'starting'.".format(self.state)) return self.logger.debug("Setting up voice") self.vchannel = author.voice.voice_channel if self.vchannel: self.statuslog.info("Connecting to voice") try: self.vclient = await client.join_voice_channel(self.vchannel) except discord.ClientException as e: logger.exception(e) self.statuslog.warning("I'm already connected to a voice channel.") return except discord.opus.OpusNotLoaded as e: logger.exception(e) logger.error("Could not load Opus. This is an error with your FFmpeg setup.") self.statuslog.error("Could not load Opus.") return except discord.DiscordException as e: logger.exception(e) self.statuslog.error("I couldn't connect to the voice channel. Check my permissions.") return except Exception as e: self.statuslog.error("Internal error connecting to voice, disconnecting.") logger.error("Error connecting to voice {}".format(e)) return else: self.statuslog.error("You're not connected to a voice channel.") return self.vready = True
Creates the voice client Args: author (discord.Member): The user that the voice ui will seek
juraj-google-style
def get_variant_by_name(self, name): results = [] try: for info, dosage in self._bgen.get_variant(name): results.append(Genotypes( Variant( info.name, CHROM_STR_ENCODE.get(info.chrom, info.chrom), info.pos, [info.a1, info.a2], ), dosage, reference=info.a1, coded=info.a2, multiallelic=False, )) except ValueError: logging.variant_name_not_found(name) return results
Get the genotype of a marker using its name. Args: name (str): The name of the marker. Returns: list: A list of Genotypes.
juraj-google-style
def __ComputeEndByte(self, start, end=None, use_chunks=True): end_byte = end if ((start < 0) and (not self.total_size)): return end_byte if use_chunks: alternate = ((start + self.chunksize) - 1) if (end_byte is not None): end_byte = min(end_byte, alternate) else: end_byte = alternate if self.total_size: alternate = (self.total_size - 1) if (end_byte is not None): end_byte = min(end_byte, alternate) else: end_byte = alternate return end_byte
Compute the last byte to fetch for this request. This is all based on the HTTP spec for Range and Content-Range. Note that this is potentially confusing in several ways: * the value for the last byte is 0-based, eg "fetch 10 bytes from the beginning" would return 9 here. * if we have no information about size, and don't want to use the chunksize, we'll return None. See the tests for more examples. Args: start: byte to start at. end: (int or None, default: None) Suggested last byte. use_chunks: (bool, default: True) If False, ignore self.chunksize. Returns: Last byte to use in a Range header, or None.
codesearchnet
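Because the end-byte bookkeeping above is easy to get wrong, here is a self-contained sketch of the same rule with a couple of worked calls; the function name and the chunksize/total_size parameters are mine (the original reads them from self), so treat this as an illustration rather than the library's API:

def compute_end_byte(start, end=None, chunksize=10, total_size=None, use_chunks=True):
    # Clamp the suggested last byte by the chunk boundary and by the object
    # size; both are expressed as 0-based last-byte positions.
    end_byte = end
    if start < 0 and not total_size:
        return end_byte
    if use_chunks:
        alternate = start + chunksize - 1
        end_byte = alternate if end_byte is None else min(end_byte, alternate)
    if total_size:
        alternate = total_size - 1
        end_byte = alternate if end_byte is None else min(end_byte, alternate)
    return end_byte

print(compute_end_byte(0))                     # 9 -> "fetch 10 bytes from the beginning"
print(compute_end_byte(0, total_size=5))       # 4 -> clamped by the object size
print(compute_end_byte(-5, use_chunks=False))  # None -> size unknown and chunking disabled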
def is_insert_grad_of_statement(node): tangent_calls = [anno.getanno(item.context_expr, 'func', None) is utils.insert_grad_of for item in node.items] if all(tangent_calls): return True elif any(tangent_calls): raise ValueError else: return False
Check whether a context manager calls `insert_grad_of`. Args: node: The context manager node. Returns: Whether or not this node contains `insert_grad_of` calls. Raises: ValueError: If the `insert_grad_of` calls are mixed with other calls.
juraj-google-style
def add_to_cache(cls, remote_info, container): if (not isinstance(container, cls)): raise TypeError(('%r not an instance of %r, could not be added to cache.' % (container, cls))) if (remote_info in cls.__remote_info_cache): raise KeyError('Cache has collision but should not.') cls.__remote_info_cache[remote_info] = container
Adds a ResourceContainer to a cache tying it to a protorpc method. Args: remote_info: Instance of protorpc.remote._RemoteMethodInfo corresponding to a method. container: An instance of ResourceContainer. Raises: TypeError: if the container is not an instance of cls. KeyError: if the remote method has been referenced by a container before. This created remote method should never occur because a remote method is created once.
codesearchnet
def AddIndex(self, path_segment_index): if (path_segment_index in self._weight_per_index): raise ValueError('Path segment index already set.') self._weight_per_index[path_segment_index] = 0
Adds a path segment index and sets its weight to 0. Args: path_segment_index: an integer containing the path segment index. Raises: ValueError: if the path segment weights already contains the path segment index.
codesearchnet
def _update_unenrolled_list(sailthru_client, email, course_url, unenroll): try: sailthru_response = sailthru_client.api_get("user", {"id": email, "fields": {"vars": 1}}) if not sailthru_response.is_ok(): error = sailthru_response.get_error() logger.error("Error attempting to read user record from Sailthru: %s", error.get_message()) return not can_retry_sailthru_request(error) response_json = sailthru_response.json unenroll_list = [] if response_json and "vars" in response_json and response_json["vars"] \ and "unenrolled" in response_json["vars"]: unenroll_list = response_json["vars"]["unenrolled"] changed = False if unenroll: if course_url not in unenroll_list: unenroll_list.append(course_url) changed = True elif course_url in unenroll_list: unenroll_list.remove(course_url) changed = True if changed: sailthru_response = sailthru_client.api_post( 'user', {'id': email, 'key': 'email', 'vars': {'unenrolled': unenroll_list}}) if not sailthru_response.is_ok(): error = sailthru_response.get_error() logger.error("Error attempting to update user record in Sailthru: %s", error.get_message()) return not can_retry_sailthru_request(error) return True except SailthruClientError as exc: logger.exception("Exception attempting to update user record for %s in Sailthru - %s", email, text_type(exc)) return False
Maintain a list of courses the user has unenrolled from in the Sailthru user record Arguments: sailthru_client (object): SailthruClient email (str): user's email address course_url (str): LMS url for course info page. unenroll (boolean): True if unenrolling, False if enrolling Returns: False if retryable error, else True
juraj-google-style
def write(self, output_stream, kmip_version=enums.KMIPVersion.KMIP_1_0): local_stream = BytearrayStream() if self._wrapping_method: self._wrapping_method.write(local_stream, kmip_version=kmip_version) else: raise ValueError('Invalid struct missing the wrapping method attribute.') if self._encryption_key_information: self._encryption_key_information.write(local_stream, kmip_version=kmip_version) if self._mac_signature_key_information: self._mac_signature_key_information.write(local_stream, kmip_version=kmip_version) if self._attribute_names: for unique_identifier in self._attribute_names: unique_identifier.write(local_stream, kmip_version=kmip_version) if self._encoding_option: self._encoding_option.write(local_stream, kmip_version=kmip_version) self.length = local_stream.length() super(KeyWrappingSpecification, self).write(output_stream, kmip_version=kmip_version) output_stream.write(local_stream.buffer)
Write the data encoding the KeyWrappingSpecification struct to a stream. Args: output_stream (stream): A data stream in which to encode object data, supporting a write method; usually a BytearrayStream object. kmip_version (KMIPVersion): An enumeration defining the KMIP version with which the object will be encoded. Optional, defaults to KMIP 1.0.
codesearchnet
def is_diagonal(matrix: np.ndarray, *, atol: float=1e-08) -> bool: matrix = np.copy(matrix) for i in range(min(matrix.shape)): matrix[(i, i)] = 0 return tolerance.all_near_zero(matrix, atol=atol)
Determines if a matrix is approximately diagonal. A matrix is diagonal if i!=j implies m[i,j]==0. Args: matrix: The matrix to check. atol: The per-matrix-entry absolute tolerance on equality. Returns: Whether the matrix is diagonal within the given tolerance.
codesearchnet
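A minimal NumPy-only sketch of the same check, using np.allclose in place of the library's tolerance helper (that substitution is an assumption; the original delegates to tolerance.all_near_zero):

import numpy as np

def approx_diagonal(matrix, atol=1e-8):
    off_diag = matrix - np.diag(np.diag(matrix))  # zero out the diagonal, keep the rest
    return np.allclose(off_diag, 0, atol=atol)

print(approx_diagonal(np.array([[1.0, 1e-10], [0.0, 2.0]])))  # True: off-diagonal entries within atol
print(approx_diagonal(np.array([[1.0, 0.5], [0.0, 2.0]])))    # False: 0.5 is not near zero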
def validate(self): for schema in (self.headers_schema, Message.headers_schema): _log.debug('Validating message headers "%r" with schema "%r"', self._headers, schema) jsonschema.validate(self._headers, schema) for schema in (self.body_schema, Message.body_schema): _log.debug('Validating message body "%r" with schema "%r"', self.body, schema) jsonschema.validate(self.body, schema)
Validate the headers and body with the message schema, if any. In addition to the user-provided schema, all messages are checked against the base schema which requires certain message headers and that the body be a JSON object. .. warning:: This method should not be overridden by sub-classes. Raises: jsonschema.ValidationError: If either the message headers or the message body are invalid. jsonschema.SchemaError: If either the message header schema or the message body schema are invalid.
codesearchnet
def fetch(self, payment_id, data={}, **kwargs): return super(Payment, self).fetch(payment_id, data, **kwargs)
Fetch Payment for given Id Args: payment_id : Id for which payment object has to be retrieved Returns: Payment dict for given payment Id
juraj-google-style
def switch_window(self, window_id: int): if window_id not in self.tmux_available_window_ids: for i in range(max(self.tmux_available_window_ids)+1, window_id+1): self._run_raw(f'tmux new-window -t {self.tmux_session} -d') tmux_window = self.tmux_session + ':' + str(i) cmd = shlex.quote(f'cd {self.taskdir}') tmux_cmd = f'tmux send-keys -t {tmux_window} {cmd} Enter' self._run_raw(tmux_cmd) self.tmux_available_window_ids.append(i) self.tmux_window_id = window_id
Switches currently active tmux window for given task. 0 is the default window Args: window_id: integer id of tmux window to use
juraj-google-style
def on_channel_open(self, channel): self.in_channel.exchange_declare(exchange='input_exc', type='topic', durable=True) channel.queue_declare(callback=self.on_input_queue_declare, queue=self.INPUT_QUEUE_NAME)
Input channel creation callback. Queue declaration is done here. Args: channel: input channel
juraj-google-style
class SquaredHinge(reduction_metrics.MeanMetricWrapper): def __init__(self, name='squared_hinge', dtype=None): super().__init__(fn=squared_hinge, name=name, dtype=dtype) self._direction = 'down' def get_config(self): return {'name': self.name, 'dtype': self.dtype}
Computes the squared hinge metric between `y_true` and `y_pred`. `y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are provided we will convert them to -1 or 1. Args: name: (Optional) string name of the metric instance. dtype: (Optional) data type of the metric result. Example: >>> m = keras.metrics.SquaredHinge() >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]]) >>> m.result() 1.86 >>> m.reset_state() >>> m.update_state([[0, 1], [0, 0]], [[0.6, 0.4], [0.4, 0.6]], ... sample_weight=[1, 0]) >>> m.result() 1.46
github-repos
def supports_suggested_actions(channel_id: str, button_cnt: int=100) -> bool: max_actions = {Channels.facebook: 10, Channels.skype: 10, Channels.line: 13, Channels.kik: 20, Channels.telegram: 100, Channels.slack: 100, Channels.emulator: 100, Channels.direct_line: 100, Channels.webchat: 100} return ((button_cnt <= max_actions[channel_id]) if (channel_id in max_actions) else False)
Determine if a number of Suggested Actions are supported by a Channel. Args: channel_id (str): The Channel to check whether Suggested Actions are supported in. button_cnt (int, optional): Defaults to 100. The number of Suggested Actions to check for the Channel. Returns: bool: True if the Channel supports the button_cnt total Suggested Actions, False if the Channel does not support that number of Suggested Actions.
codesearchnet
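A few illustrative calls, assuming the Channels constants are plain strings like 'facebook' (that is how they are commonly defined, but it is an assumption here):

print(supports_suggested_actions('facebook', 7))   # True  -> within Facebook's limit of 10
print(supports_suggested_actions('facebook', 11))  # False -> over the limit of 10
print(supports_suggested_actions('sms', 3))        # False -> channel not in the lookup table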
def _add_length_constrain(token_lst: List[Dict], lengths: List) -> List[Dict]: result = [] for a_token in token_lst: for length in lengths: if type(length) == str and length and length.isdigit(): a_token[attrs.LENGTH] = int(length) result.append(copy.deepcopy(a_token)) elif type(length) == int: a_token[attrs.LENGTH] = int(length) result.append(copy.deepcopy(a_token)) return result
Add length constraint for some token type, create cross production Args: token_lst: List[Dict] lengths: List Returns: List[Dict]
juraj-google-style
def memory_write16(self, addr, data, zone=None): return self.memory_write(addr, data, zone, 16)
Writes half-words to memory of a target system. Args: self (JLink): the ``JLink`` instance addr (int): start address to write to data (list): list of half-words to write zone (str): optional memory zone to access Returns: Number of half-words written to target. Raises: JLinkException: on memory access error.
codesearchnet
def create(self, path, mime_type='application/octet-stream', compression_type=CompressionTypes.AUTO) -> BinaryIO: dirname = os.path.dirname(path) if dirname: os.makedirs(os.path.dirname(path), exist_ok=True) return self._path_open(path, 'wb', mime_type, compression_type)
Returns a write channel for the given file path. Args: path: string path of the file object to be written to the system mime_type: MIME type to specify the type of content in the file object compression_type: Type of compression to be used for this object Returns: file handle with a close function for the user to use
github-repos
def __init__(self, component1=None, component2=None): if component1 is None and component2 is not None: component1 = component2 component2 = None self._llhead = None self._lltail = None if isinstance(component1, CompositeBitarray): self._llhead = component1._llhead self._lltail = component1._lltail self._offset = component1._offset self._tailbitsused = component1._tailbitsused self._length = len(component1) else: self._llhead = self._lltail = _DLLNode(component1) self._offset = 0 self._tailbitsused = len(component1) self._length = self._tailbitsused if component2 is not None: oldtail = self._lltail if isinstance(component2, CompositeBitarray): if self._lltail is component2._llhead: if self._tail_end != component2._offset: raise ProteusDataJoinError() if component2._is_single_llnode: self._tailbitsused += component2._tailbitsused else: self._tailbitsused = component2._tailbitsused self._lltail = component2._lltail self._length += len(component2) elif self._lltail.next is component2._llhead and\ self._tailoffset == 0 and\ component2._offset == 0: self._lltail = component2._lltail self._tailbitsused = component2._tailbitsused self._length += len(component2) elif component2._llhead.prev is not None or\ self._lltail.next is not None or\ component2._offset or self._tailoffset or\ self._llhead is component2._lltail: raise ProteusDataJoinError() else: self._length += len(component2) self._lltail.next = component2._llhead self._lltail = component2._lltail self._tailbitsused = component2._tailbitsused else: if self._tailoffset or self._lltail.next is not None: raise ProteusDataJoinError() self._tailbitsused = len(component2) self._length += self._tailbitsused node = _DLLNode(component2) node.prev = self._lltail self._lltail = node if oldtail is not self._llhead or self._offset == 0: self._do_merge(oldtail)
Create a bitarray object that stores its components by reference. Args: *components: Any number of bitarray instances to store in this composition.
juraj-google-style
def NewEvent(type: str, id: UUID=None, data: JsonDict=None, metadata: JsonDict=None) -> NewEventData: return NewEventData((id or uuid4()), type, data, metadata)
Build the data structure for a new event. Args: type: An event type. id: The uuid identifier for the event. data: A dict containing data for the event. These data must be json serializable. metadata: A dict containing metadata about the event. These must be json serializable.
codesearchnet
def crt(self, mp, mq): u = (mq - mp) * self.p_inverse % self.q return mp + (u * self.p)
The Chinese Remainder Theorem as needed for decryption. Returns the solution modulo n=pq. Args: mp(int): the solution modulo p. mq(int): the solution modulo q.
juraj-google-style
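A worked toy example of the same recombination with small primes (p=5, q=7, n=35); it assumes p_inverse is p^-1 mod q, which is what the attribute above appears to hold. pow(p, -1, q) needs Python 3.8+:

p, q = 5, 7
p_inverse = pow(p, -1, q)        # 3, since 5*3 = 15 is congruent to 1 (mod 7)
m = 23                           # value to recover modulo n = 35
mp, mq = m % p, m % q            # 3 and 2

u = (mq - mp) * p_inverse % q    # (2 - 3) * 3 % 7 = 4
print(mp + u * p)                # 3 + 4*5 = 23, the original value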
def save(self, representative_dataset: RepresentativeDatasetMapping) -> Mapping[str, _RepresentativeDatasetFile]: raise NotImplementedError('Method "save" is not implemented.')
Saves the representative dataset. Args: representative_dataset: RepresentativeDatasetMapping which is a signature_def_key -> representative dataset mapping.
github-repos
def handle_subscribed_event(self, event_obj, event_name): handler, args = self.handlers[event_name] self.executor.submit(handler, event_obj, *args)
Execute the registered handler of an event. Retrieve the handler and its arguments, and execute the handler in a new thread. Args: event_obj: Json object of the event. event_name: Name of the event to call handler for.
juraj-google-style
def group_associations_types(self, group_type, api_entity=None, api_branch=None, params=None): if params is None: params = {} if not self.can_update(): self._tcex.handle_error(910, [self.type]) target = self._tcex.ti.group(group_type) for gat in self.tc_requests.group_associations_types( self.api_type, self.api_sub_type, self.unique_id, target, api_entity=api_entity, api_branch=api_branch, owner=self.owner, params=params, ): yield gat
Gets the group association from an Indicator/Group/Victim Args: group_type: api_entity: api_branch: params: Returns:
juraj-google-style
def _list_profile_sort_key(profile_datum, sort_by): if sort_by == SORT_OPS_BY_OP_NAME: return profile_datum.node_exec_stats.node_name elif sort_by == SORT_OPS_BY_OP_TYPE: return profile_datum.op_type elif sort_by == SORT_OPS_BY_LINE: return profile_datum.file_line_func elif sort_by == SORT_OPS_BY_OP_TIME: return profile_datum.op_time elif sort_by == SORT_OPS_BY_EXEC_TIME: return profile_datum.node_exec_stats.all_end_rel_micros else: return profile_datum.node_exec_stats.all_start_micros
Get a profile_datum property to sort by in list_profile command. Args: profile_datum: A `ProfileDatum` object. sort_by: (string) indicates a value to sort by. Must be one of SORT_BY* constants. Returns: profile_datum property to sort by.
github-repos
def _process_counter_example(self, mma, w_string): w_string = self._find_bad_transition(mma, w_string) diff = len(w_string) same = 0 while True: i = (same + diff) // 2 access_string = self._run_in_hypothesis(mma, w_string, i) is_diff = self._check_suffix(w_string, access_string, i) if is_diff: diff = i else: same = i if diff - same == 1: break exp = w_string[diff:] self.observation_table.em_vector.append(exp) for row in self.observation_table.sm_vector + self.observation_table.smi_vector: self._fill_table_entry(row, exp)
Process a counterexample in the Rivest-Schapire way. Args: mma (DFA): The hypothesis automaton w_string (str): The examined string to be consumed Returns: None
juraj-google-style
def raw_sql(cls, cur, query: str, values: tuple): (yield from cur.execute(query, values)) return (yield from cur.fetchall())
Run a raw sql query Args: query : query string to execute values : tuple of values to be used with the query Returns: result of query as list of named tuple
codesearchnet
def pack_sequence_as(structure, flat_sequence): flat_sequence = list(flat_sequence) flattened_structure = nest.flatten(structure, expand_composites=True) if len(flattened_structure) != len(flat_sequence): raise ValueError('Mismatch in element count') for i in range(len(flat_sequence)): if isinstance(flattened_structure[i], tensor_array_ops.TensorArray): flat_sequence[i] = tensor_array_ops.build_ta_with_new_flow(old_ta=flattened_structure[i], flow=flat_sequence[i]) return nest.pack_sequence_as(structure, flat_sequence, expand_composites=True)
Like `nest.pack_sequence_as` but also builds TensorArrays from flows. Args: structure: The structure to pack into. May contain Tensors, CompositeTensors, or TensorArrays. flat_sequence: An iterable containing tensors. Returns: A nested structure. Raises: AssertionError if `structure` and `flat_sequence` are not compatible.
github-repos
def _process_example_section(func_documentation, func, parent_class, class_name, model_name_lowercase, config_class, checkpoint, indent_level): from transformers.models import auto as auto_module example_docstring = '' if func_documentation is not None and (match := re.search('(?m)^([ \\t]*)(?=Example)', func_documentation)): example_docstring = func_documentation[match.start():] example_docstring = '\n' + set_min_indent(example_docstring, indent_level + 4) elif parent_class is None and model_name_lowercase is not None: task = f'({'|'.join(PT_SAMPLE_DOCSTRINGS.keys())})' model_task = re.search(task, class_name) CONFIG_MAPPING = auto_module.configuration_auto.CONFIG_MAPPING if (checkpoint_example := checkpoint) is None: try: checkpoint_example = get_checkpoint_from_config_class(CONFIG_MAPPING[model_name_lowercase]) except KeyError: if model_name_lowercase in HARDCODED_CONFIG_FOR_MODELS: CONFIG_MAPPING_NAMES = auto_module.configuration_auto.CONFIG_MAPPING_NAMES config_class_name = HARDCODED_CONFIG_FOR_MODELS[model_name_lowercase] if config_class_name in CONFIG_MAPPING_NAMES.values(): model_name_for_auto_config = [k for k, v in CONFIG_MAPPING_NAMES.items() if v == config_class_name][0] if model_name_for_auto_config in CONFIG_MAPPING: checkpoint_example = get_checkpoint_from_config_class(CONFIG_MAPPING[model_name_for_auto_config]) if model_task is not None: if checkpoint_example is not None: example_annotation = '' task = model_task.group() example_annotation = PT_SAMPLE_DOCSTRINGS[task].format(model_class=class_name, checkpoint=checkpoint_example, expected_output='...', expected_loss='...', qa_target_start_index=14, qa_target_end_index=15, mask='<mask>') example_docstring = set_min_indent(example_annotation, indent_level + 4) else: print(f"🚨 No checkpoint found for {class_name}.{func.__name__}. Please add a `checkpoint` arg to `auto_docstring` or add one in {config_class}'s docstring") else: for name_model_list_for_task in MODELS_TO_PIPELINE: model_list_for_task = getattr(auto_module.modeling_auto, name_model_list_for_task) if class_name in model_list_for_task.values(): pipeline_name = MODELS_TO_PIPELINE[name_model_list_for_task] example_annotation = PIPELINE_TASKS_TO_SAMPLE_DOCSTRINGS[pipeline_name].format(model_class=class_name, checkpoint=checkpoint_example, expected_output='...', expected_loss='...', qa_target_start_index=14, qa_target_end_index=15) example_docstring = set_min_indent(example_annotation, indent_level + 4) break return example_docstring
Process the example section of the docstring. Args: func_documentation (`str`): Existing function documentation (manually specified in the docstring) func (`function`): Function being processed parent_class (`class`): Parent class of the function class_name (`str`): Name of the class model_name_lowercase (`str`): Lowercase model name config_class (`str`): Config class for the model checkpoint: Checkpoint to use in examples indent_level (`int`): Indentation level
github-repos
def gff3_verifier(entries, line=None): regex = r'^[a-zA-Z0-9.:^*$@!+_?-|]+\t.+\t.+\t\d+\t\d+\t' \ + r'\d*\.?\d*\t[+-.]\t[.0-2]\t.+{0}$'.format(os.linesep) delimiter = r'\t' for entry in entries: try: entry_verifier([entry.write()], regex, delimiter) except FormatError as error: if line: intro = 'Line {0}'.format(str(line)) elif error.part == 0: intro = 'Entry with source {0}'.format(entry.source) else: intro = 'Entry with Sequence ID {0}'.format(entry.seqid) if error.part == 0: msg = '{0} has no Sequence ID'.format(intro) elif error.part == 1: msg = '{0} has no source'.format(intro) elif error.part == 2: msg = '{0} has non-numerical characters in type'.format(intro) elif error.part == 3: msg = '{0} has non-numerical characters in ' \ 'start position'.format(intro) elif error.part == 4: msg = '{0} has non-numerical characters in ' \ 'end position'.format(intro) elif error.part == 5: msg = '{0} has non-numerical characters in score'.format(intro) elif error.part == 6: msg = '{0} strand not in [+-.]'.format(intro) elif error.part == 7: msg = '{0} phase not in [.0-2]'.format(intro) elif error.part == 8: msg = '{0} has no attributes'.format(intro) else: msg = 'Unknown Error: Likely a Bug' raise FormatError(message=msg) if line: line += 1
Raises error if invalid GFF3 format detected Args: entries (list): A list of GFF3Entry instances line (int): Line number of first entry Raises: FormatError: Error when GFF3 format incorrect with descriptive message
juraj-google-style
def parse_object_like_triples(self): self.rdf.triples = SimpleNamespace() for (s, p, o) in self.rdf.graph: (ns_prefix, ns_uri, predicate) = self.rdf.graph.compute_qname(p) if (not hasattr(self.rdf.triples, ns_prefix)): setattr(self.rdf.triples, ns_prefix, SimpleNamespace()) if (not hasattr(getattr(self.rdf.triples, ns_prefix), predicate)): setattr(getattr(self.rdf.triples, ns_prefix), predicate, []) getattr(getattr(self.rdf.triples, ns_prefix), predicate).append(o)
method to parse triples from self.rdf.graph for object-like access Args: None Returns: None: sets self.rdf.triples
codesearchnet
def match_next_flag(tt_flags, pos): match = _FLAG_DOUBLE_QUOTE_PAT.match(tt_flags, pos) if match: return (match, True) match = _FLAG_SINGLE_QUOTE_PAT.match(tt_flags, pos) if match: return (match, True) match = _FLAG_NO_QUOTE_PAT.match(tt_flags, pos) if match: return (match, True) match = _FLAG_NO_EQUAL_PAT.match(tt_flags, pos) if match: return (match, False) return (None, False)
Returns the match for the next TensorTracer flag. Args: tt_flags: a string that contains the flags. pos: where in flags to start the search. Returns: A pair where the first element is the regular-expression match found and the second element indicates if the match has a value.
github-repos
def _AnsiCmd(command_list): if not isinstance(command_list, list): raise ValueError('Invalid list: %s' % command_list) for sgr in command_list: if sgr.lower() not in SGR: raise ValueError('Invalid or unsupported SGR name: %s' % sgr) command_str = [str(SGR[x.lower()]) for x in command_list] return '\033[%sm' % (';'.join(command_str))
Takes a list of SGR values and formats them as an ANSI escape sequence. Args: command_list: List of strings, each string represents an SGR value. e.g. 'fg_blue', 'bg_yellow' Returns: The ANSI escape sequence. Raises: ValueError: if a member of command_list does not map to a valid SGR value.
juraj-google-style
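For illustration, the kind of escape string this builds, with a tiny assumed subset of the SGR table ('fg_blue' -> 34, 'bg_yellow' -> 43 are the standard SGR codes, but the real table's contents are not shown above):

SGR = {'reset': 0, 'fg_blue': 34, 'bg_yellow': 43}  # illustrative subset, not the module's table

def ansi_cmd(names):
    return '\033[%sm' % ';'.join(str(SGR[n.lower()]) for n in names)

print(repr(ansi_cmd(['fg_blue', 'bg_yellow'])))  # '\x1b[34;43m'
print(ansi_cmd(['fg_blue']) + 'blue text' + ansi_cmd(['reset']))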
def charges(self, num, charge_id=None, **kwargs): baseuri = (self._BASE_URI + 'company/{}/charges'.format(num)) if (charge_id is not None): baseuri += '/{}'.format(charge_id) res = self.session.get(baseuri, params=kwargs) else: res = self.session.get(baseuri, params=kwargs) self.handle_http_error(res) return res
Search for charges against a company by company number. Args: num (str): Company number to search on. transaction (Optional[str]): Filing record number. kwargs (dict): additional keywords passed into requests.session.get params keyword.
codesearchnet
def write(self, destination, filename, content): if (not os.path.exists(destination)): try: os.makedirs(destination) except: pass filepath = ('%s/%s' % (destination, filename)) f = open(filepath, 'w+') f.write(content) f.close()
Write a file at the specific destination with the content. Args: destination (string): the destination location filename (string): the filename that will be written content (string): the content of the filename
codesearchnet
def get_key_flags_for_module(self, module): if not isinstance(module, str): module = module.__name__ key_flags = self._get_flags_defined_by_module(module) for flag in self.key_flags_by_module_dict().get(module, []): if flag not in key_flags: key_flags.append(flag) return key_flags
Returns the list of key flags for a module. Args: module: module|str, the module to get key flags from. Returns: [Flag], a new list of Flag instances. Caller may update this list as desired: none of those changes will affect the internals of this FlagValue instance.
juraj-google-style
def contains(self, sub): sub = sub.lower() found_words = set() res = cgaddag.gdg_contains(self.gdg, sub.encode(encoding="ascii")) tmp = res while tmp: word = tmp.contents.str.decode("ascii") found_words.add(word) tmp = tmp.contents.next cgaddag.gdg_destroy_result(res) return list(found_words)
Find all words containing a substring. Args: sub: A substring to be searched for. Returns: A list of all words found.
juraj-google-style
def get(self, txn_id): if (txn_id not in self._receipt_db): raise KeyError('Unknown transaction id {}'.format(txn_id)) txn_receipt_bytes = self._receipt_db[txn_id] txn_receipt = TransactionReceipt() txn_receipt.ParseFromString(txn_receipt_bytes) return txn_receipt
Returns the TransactionReceipt Args: txn_id (str): the id of the transaction for which the receipt should be retrieved. Returns: TransactionReceipt: The receipt for the given transaction id. Raises: KeyError: if the transaction id is unknown.
codesearchnet
def mt_excel_files(store, case_obj, temp_excel_dir): today = datetime.datetime.now().strftime('%Y-%m-%d') samples = case_obj.get('individuals') query = {'chrom': 'MT'} mt_variants = list(store.variants(case_id=case_obj['_id'], query=query, nr_of_variants=(- 1), sort_key='position')) written_files = 0 for sample in samples: sample_id = sample['individual_id'] sample_lines = export_mt_variants(variants=mt_variants, sample_id=sample_id) document_name = ('.'.join([case_obj['display_name'], sample_id, today]) + '.xlsx') workbook = Workbook(os.path.join(temp_excel_dir, document_name)) Report_Sheet = workbook.add_worksheet() row = 0 for (col, field) in enumerate(MT_EXPORT_HEADER): Report_Sheet.write(row, col, field) for (row, line) in enumerate(sample_lines, 1): for (col, field) in enumerate(line): Report_Sheet.write(row, col, field) workbook.close() if os.path.exists(os.path.join(temp_excel_dir, document_name)): written_files += 1 return written_files
Collect MT variants and format line of a MT variant report to be exported in excel format Args: store(adapter.MongoAdapter) case_obj(models.Case) temp_excel_dir(os.Path): folder where the temp excel files are written to Returns: written_files(int): the number of files written to temp_excel_dir
codesearchnet
def DisableInterfaces(interface): set_tested_versions = ['vista', '2008'] set_args = ['/c', 'netsh', 'set', 'interface', interface, 'DISABLED'] host_version = platform.platform().lower() for version in set_tested_versions: if (host_version.find(version) != (- 1)): res = client_utils_common.Execute('cmd', set_args, time_limit=(- 1), bypass_whitelist=True) return res return ('', 'Command not available for this version.', 99, '')
Tries to disable an interface. Only works on Vista and 7. Args: interface: Name of the interface to disable. Returns: res which is a tuple of (stdout, stderr, exit_status, time_taken).
codesearchnet
def get_resize_output_image_size(input_image: np.ndarray, size: Union[int, Tuple[int, int], List[int]], max_size: Optional[int]=None, input_data_format: Optional[Union[str, ChannelDimension]]=None) -> Tuple[int, int]: image_size = get_image_size(input_image, input_data_format) if isinstance(size, (list, tuple)): return size return get_size_with_aspect_ratio(image_size, size, max_size)
Computes the output image size given the input image size and the desired output size. If the desired output size is a tuple or list, the output image size is returned as is. If the desired output size is an integer, the output image size is computed by keeping the aspect ratio of the input image size. Args: input_image (`np.ndarray`): The image to resize. size (`int` or `Tuple[int, int]` or `List[int]`): The desired output size. max_size (`int`, *optional*): The maximum allowed output size. input_data_format (`ChannelDimension` or `str`, *optional*): The channel dimension format of the input image. If not provided, it will be inferred from the input image.
github-repos
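The aspect-ratio branch above delegates to a helper that is not shown; a common formulation of that rule (an assumption, not necessarily this library's exact behaviour) scales the shorter side to size and caps the longer side at max_size:

def size_with_aspect_ratio(height, width, size, max_size=None):
    # Scale so the shorter side becomes `size`; optionally cap the longer side.
    short, long_side = min(height, width), max(height, width)
    new_short, new_long = size, round(size * long_side / short)
    if max_size is not None and new_long > max_size:
        new_long, new_short = max_size, round(max_size * short / long_side)
    return (new_short, new_long) if height <= width else (new_long, new_short)

print(size_with_aspect_ratio(480, 640, 300))                # (300, 400)
print(size_with_aspect_ratio(480, 640, 300, max_size=350))  # (262, 350)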
def apply_operation(self, symmop): def operate_site(site): new_cart = symmop.operate(site.coords) return Site(site.species, new_cart, properties=site.properties) self._sites = [operate_site(s) for s in self._sites]
Apply a symmetry operation to the molecule. Args: symmop (SymmOp): Symmetry operation to apply.
juraj-google-style
def chdir(self, target_directory): target_directory = self.filesystem.resolve_path( target_directory, allow_fd=True) self.filesystem.confirmdir(target_directory) directory = self.filesystem.resolve(target_directory) if not is_root() and not directory.st_mode & PERM_EXE: self.filesystem.raise_os_error(errno.EACCES, directory) self.filesystem.cwd = target_directory
Change current working directory to target directory. Args: target_directory: The path to new current working directory. Raises: OSError: if user lacks permission to enter the argument directory or if the target is not a directory.
juraj-google-style
def _resolve_non_literal_route(self, method, path): for route_dict in (self._wildcard, self._regex): if method in route_dict: for route in reversed(route_dict[method]): callback_data = route.match(path) if callback_data is not None: return callback_data return None
Resolve a request to a wildcard or regex route handler. Arguments: method (str): HTTP method name, e.g. GET, POST, etc. path (str): Request path Returns: tuple or None: A tuple of three items: 1. Route handler (callable) 2. Positional arguments (list) 3. Keyword arguments (dict) ``None`` if no route matches the request.
juraj-google-style
def message(self, tree, spins, subtheta, auxvars): energy_sources = set() for v, children in tree.items(): aux = auxvars[v] assert all(u in spins for u in self._ancestors[v]) def energy_contributions(): yield subtheta.linear[v] for u, bias in subtheta.adj[v].items(): if u in spins: yield SpinTimes(spins[u], bias) plus_energy = Plus(energy_contributions()) minus_energy = SpinTimes(-1, plus_energy) if children: spins[v] = 1 plus_energy = Plus(plus_energy, self.message(children, spins, subtheta, auxvars)) spins[v] = -1 minus_energy = Plus(minus_energy, self.message(children, spins, subtheta, auxvars)) del spins[v] m = FreshSymbol(REAL) ancestor_aux = {auxvars[u] if spins[u] > 0 else Not(auxvars[u]) for u in self._ancestors[v]} plus_aux = And({aux}.union(ancestor_aux)) minus_aux = And({Not(aux)}.union(ancestor_aux)) self.assertions.update({LE(m, plus_energy), LE(m, minus_energy), Implies(plus_aux, GE(m, plus_energy)), Implies(minus_aux, GE(m, minus_energy)) }) energy_sources.add(m) return Plus(energy_sources)
Determine the energy of the elimination tree. Args: tree (dict): The current elimination tree spins (dict): The current fixed spins subtheta (dict): Theta with spins fixed. auxvars (dict): The auxiliary variables for the given spins. Returns: The formula for the energy of the tree.
juraj-google-style
def create_streaming_endpoint(access_token, name, description="New Streaming Endpoint", \ scale_units="1"): path = '/StreamingEndpoints' endpoint = ''.join([ams_rest_endpoint, path]) body = '{ \ "Id":null, \ "Name":"' + name + '", \ "Description":"' + description + '", \ "Created":"0001-01-01T00:00:00", \ "LastModified":"0001-01-01T00:00:00", \ "State":null, \ "HostName":null, \ "ScaleUnits":"' + scale_units + '", \ "CrossSiteAccessPolicies":{ \ "ClientAccessPolicy":"<access-policy><cross-domain-access><policy><allow-from http-request-headers=\\"*\\"><domain uri=\\"http: "CrossDomainPolicy":"<?xml version=\\"1.0\\"?><!DOCTYPE cross-domain-policy SYSTEM \\"http: } \ }' return do_ams_post(endpoint, path, body, access_token)
Create Media Service Streaming Endpoint. Args: access_token (str): A valid Azure authentication token. name (str): A Media Service Streaming Endpoint Name. description (str): A Media Service Streaming Endpoint Description. scale_units (str): A Media Service Scale Units Number. Returns: HTTP response. JSON body.
juraj-google-style
def as_dict(self, verbosity=0): species_list = [] for spec, occu in self._species.items(): d = spec.as_dict() del d["@module"] del d["@class"] d["occu"] = occu species_list.append(d) d = {"species": species_list, "abc": [float(c) for c in self._frac_coords], "lattice": self._lattice.as_dict(verbosity=verbosity), "@module": self.__class__.__module__, "@class": self.__class__.__name__} if verbosity > 0: d["xyz"] = [float(c) for c in self.coords] d["label"] = self.species_string d["properties"] = self.properties return d
Json-serializable dict representation of PeriodicSite. Args: verbosity (int): Verbosity level. Default of 0 only includes the matrix representation. Set to 1 for more details such as cartesian coordinates, etc.
juraj-google-style
def init_database(connection=None, dbname=None): connection = connection or connect() dbname = dbname or bigchaindb.config['database']['name'] create_database(connection, dbname) create_tables(connection, dbname)
Initialize the configured backend for use with BigchainDB. Creates a database with :attr:`dbname` with any required tables and supporting indexes. Args: connection (:class:`~bigchaindb.backend.connection.Connection`): an existing connection to use to initialize the database. Creates one if not given. dbname (str): the name of the database to create. Defaults to the database name given in the BigchainDB configuration.
juraj-google-style
def save_imgs(x, fname): n = x.shape[0] fig = figure.Figure(figsize=(n, 1), frameon=False) canvas = backend_agg.FigureCanvasAgg(fig) for i in range(n): ax = fig.add_subplot(1, n, (i + 1)) ax.imshow(x[i].squeeze(), interpolation='none', cmap=cm.get_cmap('binary')) ax.axis('off') canvas.print_figure(fname, format='png') print(('saved %s' % fname))
Helper method to save a grid of images to a PNG file. Args: x: A numpy array of shape [n_images, height, width]. fname: The filename to write to (including extension).
codesearchnet
def CreateSharedBudget(client): budget_service = client.GetService('BudgetService', version='v201809') budget = {'name': 'Shared Interplanetary Budget'} operation = {'operator': 'ADD', 'operand': budget} response = budget_service.mutate([operation]) return response['value'][0]
Creates an explicit budget to be used only to create the Campaign. Args: client: AdWordsClient the client to run the example with. Returns: dict An object representing a shared budget.
codesearchnet
def map_into_course(self, course_key): return self.replace(usage_key=self.usage_key.map_into_course(course_key))
Return a new :class:`UsageKey` or :class:`AssetKey` representing this usage inside the course identified by the supplied :class:`CourseKey`. It returns the same type as `self` Args: course_key (:class:`CourseKey`): The course to map this object into. Returns: A new :class:`CourseObjectMixin` instance.
juraj-google-style
def generate_mediation_matrix(dsm): cat = dsm.categories ent = dsm.entities size = dsm.size[0] if (not cat): cat = (['appmodule'] * size) packages = [e.split('.')[0] for e in ent] mediation_matrix = [[0 for _ in range(size)] for _ in range(size)] for i in range(0, size): for j in range(0, size): if (cat[i] == 'framework'): if (cat[j] == 'framework'): mediation_matrix[i][j] = (- 1) else: mediation_matrix[i][j] = 0 elif (cat[i] == 'corelib'): if ((cat[j] in ('framework', 'corelib')) or ent[i].startswith((packages[j] + '.')) or (i == j)): mediation_matrix[i][j] = (- 1) else: mediation_matrix[i][j] = 0 elif (cat[i] == 'applib'): if ((cat[j] in ('framework', 'corelib', 'applib')) or ent[i].startswith((packages[j] + '.')) or (i == j)): mediation_matrix[i][j] = (- 1) else: mediation_matrix[i][j] = 0 elif (cat[i] == 'appmodule'): if ((cat[j] in ('framework', 'corelib', 'applib', 'broker', 'data')) or ent[i].startswith((packages[j] + '.')) or (i == j)): mediation_matrix[i][j] = (- 1) else: mediation_matrix[i][j] = 0 elif (cat[i] == 'broker'): if ((cat[j] in ('appmodule', 'corelib', 'framework')) or ent[i].startswith((packages[j] + '.')) or (i == j)): mediation_matrix[i][j] = (- 1) else: mediation_matrix[i][j] = 0 elif (cat[i] == 'data'): if ((cat[j] == 'framework') or (i == j)): mediation_matrix[i][j] = (- 1) else: mediation_matrix[i][j] = 0 else: raise DesignStructureMatrixError(('Mediation matrix value NOT generated for %s:%s' % (i, j))) return mediation_matrix
Generate the mediation matrix of the given matrix. Rules for mediation matrix generation: Set -1 for items NOT to be considered Set 0 for items which MUST NOT be present Set 1 for items which MUST be present Each module has optional dependencies to itself. - Framework has optional dependency to all framework items (-1), and to nothing else. - Core libraries have dependencies to framework. Dependencies to other core libraries are tolerated. - Application libraries have dependencies to framework. Dependencies to other core or application libraries are tolerated. No dependencies to application modules. - Application modules have dependencies to framework and libraries. Dependencies to other application modules should be mediated over a broker. Dependencies to data are tolerated. - Data have no dependencies at all (but framework/libraries would be tolerated). Args: dsm (:class:`DesignStructureMatrix`): the DSM to generate the mediation matrix for.
codesearchnet
def coordinate_tensor(shape, axis): if axis < 0: axis = tf.size(shape) + axis r = tf.range(shape[axis]) r_shape = tf.one_hot( axis, tf.size(shape), on_value=-1, off_value=1, dtype=tf.int32) return tf.zeros(shape, dtype=tf.int32) + tf.reshape(r, r_shape)
Return a tensor with given shape containing coordinate along given axis. Args: shape: a Tensor representing the shape of the output Tensor axis: an integer Returns: A tensor with shape shape and type tf.int32, where each elements its coordinate along the given axis.
juraj-google-style
def duplicated_initializer(tc, init, graph_seed, shape=None): if shape is None: shape = [100] with tc.test_session(graph=ops.Graph()): random_seed.set_random_seed(graph_seed) t1 = init(shape).eval() t2 = init(shape).eval() return np.allclose(t1, t2, rtol=1e-15, atol=1e-15)
Tests duplicated random initializer within the same graph. This test generates two random kernels from the same initializer to the same graph, and checks if the results are close enough. Even given the same global, seed, two different instances of random kernels should generate different results. Args: tc: An instance of TensorFlowTestCase. init: An Initializer that generates a tensor of a given shape graph_seed: A graph-level seed to use. shape: Shape of the tensor to initialize or `None` to use a vector of length 100. Returns: True or False as determined by test.
github-repos
def calc_control_outputs(self, graph): control_outputs = {} for op in graph.get_operations(): for control_input in op.control_inputs: if control_input not in control_outputs: control_outputs[control_input] = set() control_outputs[control_input].add(op) return control_outputs
Returns the map of control_outputs for a given graph. Args: graph: The graph to parse. Returns: A map of the control outputs.
github-repos
def generate_token(key, user_id, action_id='', when=None): digester = hmac.new(_helpers._to_bytes(key, encoding='utf-8')) digester.update(_helpers._to_bytes(str(user_id), encoding='utf-8')) digester.update(DELIMITER) digester.update(_helpers._to_bytes(action_id, encoding='utf-8')) digester.update(DELIMITER) when = _helpers._to_bytes(str((when or int(time.time()))), encoding='utf-8') digester.update(when) digest = digester.digest() token = base64.urlsafe_b64encode(((digest + DELIMITER) + when)) return token
Generates a URL-safe token for the given user, action, time tuple. Args: key: secret key to use. user_id: the user ID of the authenticated user. action_id: a string identifier of the action they requested authorization for. when: the time in seconds since the epoch at which the user was authorized for this action. If not set the current time is used. Returns: A string XSRF protection token.
codesearchnet
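A usage sketch: verification with this scheme is typically done by regenerating the token for the same (user, action, when) tuple and comparing in constant time. The key and ids below are invented for illustration, not part of the original module:

import hmac

key = 'server-side-secret'        # hypothetical secret
issued_at = 1700000000            # fix `when` so the token is reproducible

token = generate_token(key, user_id=42, action_id='delete-post', when=issued_at)

# Later, when the form comes back, rebuild the expected token and compare.
expected = generate_token(key, user_id=42, action_id='delete-post', when=issued_at)
assert hmac.compare_digest(token, expected)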
def get_completions(self, context_word, prefix): if context_word not in self._comp_dict: return (None, None) comp_items = self._comp_dict[context_word] comp_items = sorted([item for item in comp_items if item.startswith(prefix)]) return (comp_items, self._common_prefix(comp_items))
Get the tab completions given a context word and a prefix. Args: context_word: The context word. prefix: The prefix of the incomplete word. Returns: (1) None if no registered context matches the context_word. A list of str for the matching completion items. Can be an empty list of a matching context exists, but no completion item matches the prefix. (2) Common prefix of all the words in the first return value. If the first return value is None, this return value will be None, too. If the first return value is not None, i.e., a list, this return value will be a str, which can be an empty str if there is no common prefix among the items of the list.
github-repos
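An illustrative call, assuming a completer instance whose registered context dictionary looks like the commented-out line below (the registration API itself is not shown above, so this shape is an assumption):

# Hypothetical internal state: context word -> candidate completions.
# completer._comp_dict = {'print': ['tensor_1/Add', 'tensor_1/MatMul', 'tensor_2/Relu']}
items, common = completer.get_completions('print', 'tensor_1/')
# items  == ['tensor_1/Add', 'tensor_1/MatMul']
# common == 'tensor_1/'   (the longest prefix shared by the matching items)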
def sap_sid_nr(broker): insts = broker[DefaultSpecs.saphostctrl_listinstances].content hn = broker[DefaultSpecs.hostname].content[0].split('.')[0].strip() results = set() for ins in insts: ins_splits = ins.split(' - ') if (ins_splits[2].strip() == hn): results.add((ins_splits[0].split()[(- 1)].lower(), ins_splits[1].strip())) return list(results)
Get the SID and Instance Number Typical output of saphostctrl_listinstances:: # /usr/sap/hostctrl/exe/saphostctrl -function ListInstances Inst Info : SR1 - 01 - liuxc-rhel7-hana-ent - 749, patch 418, changelist 1816226 Returns: (list): List of tuple of SID and Instance Number.
codesearchnet
def Serialize(self, writer): writer.WriteUInt256(self.PrevHash) writer.WriteUInt16(self.PrevIndex)
Serialize object. Args: writer (neo.IO.BinaryWriter):
juraj-google-style
def make_block(cls, header: str='', content: str | dict[str, Any] | list[Any] | tuple[Any, ...]=(), *, braces: Union[str, tuple[str, str]]='(', equal: str='=', limit: int=20) -> str: if isinstance(braces, str): braces = _BRACE_TO_BRACES[braces] brace_start, brace_end = braces if isinstance(content, str): content = [content] if isinstance(content, dict): parts = [f'{k}{equal}{pretty_repr(v)}' for k, v in content.items()] elif isinstance(content, (list, tuple)): parts = [pretty_repr(v) for v in content] else: raise TypeError(f'Invalid fields {type(content)}') collapse = len(parts) <= 1 if any(('\n' in p for p in parts)): collapse = False elif sum((len(p) for p in parts)) <= limit: collapse = True lines = cls() lines += f'{header}{brace_start}' with lines.indent(): if collapse: lines += ', '.join(parts) else: for p in parts: lines += f'{p},' lines += f'{brace_end}' return lines.join(collapse=collapse)
Util function to create a code block. Example: ```python epy.Lines.make_block('A', {}) == 'A()' epy.Lines.make_block('A', {'x': '1'}) == 'A(x=1)' epy.Lines.make_block('A', {'x': '1', 'y': '2'}) == '''A( x=1, y=2, )''' ``` Pattern is as: ``` {header}{braces[0]} {k}={v}, ... {braces[1]} ``` Args: header: Prefix before the brace content: Dict of key to values. One line will be displayed per item if `len(content) > 1`. Otherwise the code is collapsed braces: Brace type (`(`, `[`, `{`), can be tuple for custom open/close. equal: The separator (`=`, `: `) limit: Strings smaller than this will be collapsed Returns: The block string
github-repos
def reduce_concat(self, x): return self.reduce(lambda y: y, x)
Performs a concat reduction on `x` across pfor iterations. Note that this currently may not work inside a control flow construct. Args: x: an unvectorized Tensor. Returns: A Tensor that has rank one higher than `x`. The value is the vectorized version of `x`, i.e. stacking the value of `x` across different pfor iterations.
github-repos
def store_state(node, reaching, defined, stack): defs = [def_ for def_ in reaching if (not isinstance(def_[1], gast.arguments))] if (not len(defs)): return node (reaching, original_defs) = zip(*defs) assignments = [] for id_ in (set(reaching) - defined): assignments.append(quoting.quote('{} = None'.format(id_))) store = [] load = [] for (id_, def_) in zip(reaching, original_defs): if (isinstance(def_, gast.Assign) and ('tangent.Stack()' in quoting.unquote(def_.value))): (push, pop, op_id) = get_push_pop_stack() else: (push, pop, op_id) = get_push_pop() store.append(template.replace('push(_stack, val, op_id)', push=push, val=id_, _stack=stack, op_id=op_id)) load.append(template.replace('val = pop(_stack, op_id)', pop=pop, val=id_, _stack=stack, op_id=op_id)) (body, return_) = (node.body[0].body[:(- 1)], node.body[0].body[(- 1)]) node.body[0].body = (((assignments + body) + store) + [return_]) node.body[1].body = (load[::(- 1)] + node.body[1].body) return node
Push the final state of the primal onto the stack for the adjoint. Python's scoping rules make it possible for variables to not be defined in certain blocks based on the control flow path taken at runtime. In order to make sure we don't try to push non-existing variables onto the stack, we defined these variables explicitly (by assigning `None` to them) at the beginning of the function. All the variables that reach the return statement are pushed onto the stack, and in the adjoint they are popped off in reverse order. Args: node: A module with the primal and adjoint function definitions as returned by `reverse_ad`. reaching: The variable definitions that reach the end of the primal. defined: The variables defined at the end of the primal. stack: The stack node to use for storing and restoring state. Returns: node: A node with the requisite pushes and pops added to make sure that state is transferred between primal and adjoint split motion calls.
codesearchnet
def new(self, user_id, tokens=None, user_data=None, valid_until=None, client_ip=None, encoding='utf-8'): if (valid_until is None): valid_until = (int(time.time()) + TicketFactory._DEFAULT_TIMEOUT) else: valid_until = int(valid_until) user_id = ulp.quote(user_id) token_str = '' if tokens: token_str = ','.join((ulp.quote(t) for t in tokens)) user_str = ('' if (not user_data) else ulp.quote(user_data)) ip = (self._DEFAULT_IP if (client_ip is None) else ip_address(client_ip)) data0 = ((bytes([ip.version]) + ip.packed) + pack('>I', valid_until)) data1 = '\x00'.join((user_id, token_str, user_str)).encode(encoding) digest = self._hexdigest(data0, data1) parts = ('{0}{1:08x}{2}'.format(digest, valid_until, user_id), token_str, user_str) return '!'.join(parts)
Creates a new authentication ticket.

Args:
    user_id: User id to store in ticket (stored in plain text).
    tokens: Optional sequence of token strings to store in the ticket (stored in plain text).
    user_data: Optional user data to store in the ticket (string like object stored in plain text).
    valid_until: Expiration time of ticket as an integer (typically time.time() + seconds).
    client_ip: Optional string or ip_address.IPAddress of the client.
    encoding: Optional encoding type that is used when hashing the strings passed to the function.

Returns:
    A ticket string that can later be used to identify the user
codesearchnet
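A small sketch of the ticket string layout the method above produces, using a placeholder digest (the real value comes from `self._hexdigest` over the packed IP/expiry and user fields); all field values are illustrative.

```python
# Sketch of the ticket wire format: '<digest><8-hex-digit expiry><user_id>!<tokens>!<user_data>'.
import time
import urllib.parse as ulp

digest = 'ab' * 20                        # placeholder for the real hex digest
valid_until = int(time.time()) + 3600
user_id = ulp.quote('alice')
tokens = ','.join(ulp.quote(t) for t in ('admin', 'editor'))
user_data = ulp.quote('theme=dark')

ticket = '!'.join(('{0}{1:08x}{2}'.format(digest, valid_until, user_id),
                   tokens, user_data))
print(ticket)  # 'abab...ab<expiry hex>alice!admin,editor!theme%3Ddark'
```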
def _truncate_float(matchobj, format_str='0.2g'): if matchobj.group(0): return format(float(matchobj.group(0)), format_str) return ''
Truncate long floats Args: matchobj (re.Match): contains original float format_str (str): format specifier Returns: str: returns truncated float
codesearchnet
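A usage sketch passing the function as a `re.sub` callback, assuming `_truncate_float` is in scope as defined above; the regex pattern is illustrative.

```python
import re

text = 'loss=0.123456, lr=0.00012999'
print(re.sub(r'\d+\.\d+', _truncate_float, text))
# loss=0.12, lr=0.00013
```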
def rtt_control(self, command, config): config_byref = (ctypes.byref(config) if (config is not None) else None) res = self._dll.JLINK_RTTERMINAL_Control(command, config_byref) if (res < 0): raise errors.JLinkRTTException(res) return res
Issues an RTT Control command. All RTT control is done through a single API call which expects specifically laid-out configuration structures. Args: self (JLink): the ``JLink`` instance command (int): the command to issue (see enums.JLinkRTTCommand) config (ctypes type): the configuration to pass by reference. Returns: An integer containing the result of the command.
codesearchnet
def _Open(self, path_spec=None, mode='rb'): if not path_spec: raise ValueError('Missing path specification.') if not path_spec.HasParent(): raise errors.PathSpecError( 'Unsupported path specification without parent.') self._gzip_file_object = resolver.Resolver.OpenFileObject( path_spec.parent, resolver_context=self._resolver_context) file_size = self._gzip_file_object.get_size() self._gzip_file_object.seek(0, os.SEEK_SET) uncompressed_data_offset = 0 next_member_offset = 0 while next_member_offset < file_size: member = gzipfile.GzipMember( self._gzip_file_object, next_member_offset, uncompressed_data_offset) uncompressed_data_offset = ( uncompressed_data_offset + member.uncompressed_data_size) self._members_by_end_offset[uncompressed_data_offset] = member self.uncompressed_data_size += member.uncompressed_data_size next_member_offset = member.member_end_offset
Opens the file-like object defined by path specification. Args: path_spec (Optional[PathSpec]): path specification. mode (Optional[str]): file access mode. Raises: AccessError: if the access to open the file was denied. IOError: if the file-like object could not be opened. OSError: if the file-like object could not be opened. PathSpecError: if the path specification is incorrect. ValueError: if the path specification is invalid.
juraj-google-style
def get_overlaps(self, offset, length): if (''.join([chunk.word for chunk in self])[offset] == ' '): offset += 1 index = 0 result = ChunkList() for chunk in self: if ((offset < (index + len(chunk.word))) and (index < (offset + length))): result.append(chunk) index += len(chunk.word) return result
Returns chunks overlapped with the given range. Args: offset (int): Begin offset of the range. length (int): Length of the range. Returns: Overlapped chunks. (:obj:`budou.chunk.ChunkList`)
codesearchnet
def GetLogdirSubdirectories(path): if not tf.io.gfile.exists(path): return () if not tf.io.gfile.isdir(path): raise ValueError('GetLogdirSubdirectories: path exists and is not a ' 'directory, %s' % path) if IsCloudPath(path): logger.info( 'GetLogdirSubdirectories: Starting to list directories via glob-ing.') traversal_method = ListRecursivelyViaGlobbing else: logger.info( 'GetLogdirSubdirectories: Starting to list directories via walking.') traversal_method = ListRecursivelyViaWalking return ( subdir for (subdir, files) in traversal_method(path) if any(IsTensorFlowEventsFile(f) for f in files) )
Obtains all subdirectories with events files. The order of the subdirectories returned is unspecified. The internal logic that determines order varies by scenario. Args: path: The path to a directory under which to find subdirectories. Returns: A tuple of absolute paths of all subdirectories each with at least 1 events file directly within the subdirectory. Raises: ValueError: If the path passed to the method exists and is not a directory.
juraj-google-style
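A hypothetical call, assuming the function is importable; the log directory path is illustrative, and each yielded subdirectory contains at least one events file.

```python
for run_dir in GetLogdirSubdirectories('/tmp/tensorboard_logs'):
    print(run_dir)
```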
def get_lambda_arn(app, account, region): session = boto3.Session(profile_name=account, region_name=region) lambda_client = session.client('lambda') lambda_arn = None paginator = lambda_client.get_paginator('list_functions') for lambda_functions in paginator.paginate(): for lambda_function in lambda_functions['Functions']: if (lambda_function['FunctionName'] == app): lambda_arn = lambda_function['FunctionArn'] LOG.debug('Lambda ARN for lambda function %s is %s.', app, lambda_arn) break if lambda_arn: break if (not lambda_arn): LOG.fatal('Lambda function with name %s not found in %s %s', app, account, region) raise LambdaFunctionDoesNotExist('Lambda function with name {0} not found in {1} {2}'.format(app, account, region)) return lambda_arn
Get the Lambda ARN.

Args:
    app (str): Lambda function name.
    account (str): AWS account name.
    region (str): Region name, e.g. us-east-1.

Returns:
    str: ARN of the requested Lambda function.
codesearchnet
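A hypothetical call, assuming local AWS credentials exist for a profile named 'dev'; the function name, profile, and region are illustrative, and the call raises `LambdaFunctionDoesNotExist` if no match is found.

```python
arn = get_lambda_arn('myapp', 'dev', 'us-east-1')
print(arn)  # e.g. arn:aws:lambda:us-east-1:123456789012:function:myapp
```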
def register_recipe(cls, recipe): recipe_name = recipe.contents['name'] cls._recipe_classes[recipe_name] = (recipe.contents, recipe.args, recipe.__doc__)
Registers a dftimewolf recipe. Args: recipe: imported python module representing the recipe.
codesearchnet
def index_of(self, file_path, line_number, called_function_name, called_file_path, called_function_start_line): location_key = (file_path, called_function_name, line_number) if location_key in self._location_key_to_location: location = self._location_key_to_location[location_key] return location.id else: location_index = len(self._location_key_to_location) + 1 location = profile_pb2.Location() location.id = location_index self._location_key_to_location[location_key] = location line = location.line.add() line.function_id = self._functions.index_of(called_file_path, called_function_name, called_function_start_line) line.line = line_number return location_index
Returns index of the location, adding the location if needed. Args: file_path: (string) Path to file that makes the call. line_number: (integer) Call line number. called_function_name: (string) Function name of the function called at `file_path` and `line_number`. called_file_path: (string) Path to file where the called function is defined. called_function_start_line: (integer) Start line number of called function definition in `called_file_path` file. Returns: Index of location.
github-repos
def on_value_event(self, event): if not event.summary.value: logger.info('The summary of the event lacks a value.') return None watch_key = event.summary.value[0].node_name tensor_value = debug_data.load_tensor_from_event(event) device_name = _extract_device_name_from_event(event) node_name, output_slot, debug_op = ( event.summary.value[0].node_name.split(':')) maybe_base_expanded_node_name = ( self._run_states.get_maybe_base_expanded_node_name(node_name, self._run_key, device_name)) self._tensor_store.add(watch_key, tensor_value) self._outgoing_channel.put(_comm_tensor_data( device_name, node_name, maybe_base_expanded_node_name, output_slot, debug_op, tensor_value, event.wall_time)) logger.info('on_value_event(): waiting for client ack (tensors)...') self._incoming_channel.get() logger.info('on_value_event(): client ack received (tensor).') if self._is_debug_node_in_breakpoints(event.summary.value[0].node_name): logger.info('Sending empty EventReply for breakpoint: %s', event.summary.value[0].node_name) return debug_service_pb2.EventReply() return None
Records the summary values based on an updated message from the debugger. Logs an error message if writing the event to disk fails. Args: event: The Event proto to be processed.
juraj-google-style
def create(self, name, description='', whitelisted_container_task_types=None, whitelisted_executable_task_types=None): if (whitelisted_container_task_types is None): whitelisted_container_task_types = [] if (whitelisted_executable_task_types is None): whitelisted_executable_task_types = [] request_url = (self._client.base_api_url + self.list_url) data_to_post = {'name': name, 'description': description, 'whitelisted_container_task_types': whitelisted_container_task_types, 'whitelisted_executable_task_types': whitelisted_executable_task_types} response = self._client.session.post(request_url, data=data_to_post) self.validate_request_success(response_text=response.text, request_url=request_url, status_code=response.status_code, expected_status_code=HTTP_201_CREATED) return self.response_data_to_model_instance(response.json())
Create a task whitelist. Args: name (str): The name of the task whitelist. description (str, optional): A description of the task whitelist. whitelisted_container_task_types (list, optional): A list of whitelisted container task type IDs. whitelisted_executable_task_types (list, optional): A list of whitelisted executable task type IDs. Returns: :class:`saltant.models.task_whitelist.TaskWhitelist`: A task whitelist model instance representing the task whitelist just created.
codesearchnet
def __eq__(self, other): if isinstance(other, DocumentReference): return self._client == other._client and self._path == other._path else: return NotImplemented
Equality check against another instance. Args: other (Any): A value to compare against. Returns: Union[bool, NotImplementedType]: Indicating if the values are equal.
juraj-google-style
def create_profiler_ui(graph, run_metadata, ui_type='readline', on_ui_exit=None, config=None): del config analyzer = ProfileAnalyzer(graph, run_metadata) cli = ui_factory.get_ui(ui_type, on_ui_exit=on_ui_exit) cli.register_command_handler('list_profile', analyzer.list_profile, analyzer.get_help('list_profile'), prefix_aliases=['lp']) cli.register_command_handler('print_source', analyzer.print_source, analyzer.get_help('print_source'), prefix_aliases=['ps']) return cli
Create an instance of ReadlineUI based on a `tf.Graph` and `RunMetadata`. Args: graph: Python `Graph` object. run_metadata: A `RunMetadata` protobuf object. ui_type: (str) requested UI type, e.g., "readline". on_ui_exit: (`Callable`) the callback to be called when the UI exits. config: An instance of `cli_config.CLIConfig`. Returns: (base_ui.BaseUI) A BaseUI subtype object with a set of standard analyzer commands and tab-completions registered.
github-repos
def exit_hook(callable, once=True):
    if once and callable in ExitHooks:
        return
    ExitHooks.append(callable)
A decorator that makes the decorated function run when ec exits.

Args:
  callable (callable): The target callable.
  once (bool): Avoids adding a func to the hooks, if it has been added already. Defaults to True.

Note:
  Hooks are processed in a LIFO order.
juraj-google-style
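A usage sketch registering two cleanup callables, assuming the module-level `ExitHooks` list exists as the code above implies; per the LIFO note, the hook registered last runs first.

```python
def close_connections():
    print('closing connections')

def flush_logs():
    print('flushing logs')

exit_hook(close_connections)
exit_hook(flush_logs)
exit_hook(flush_logs)  # ignored: once=True skips hooks that are already registered
```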
def generate(cache_fn):
    if not os.path.exists(cache_fn):
        print("Can't access `%s`!" % cache_fn, file=sys.stderr)
        sys.exit(1)
    with SqliteDict(cache_fn) as db:
        for item in _pick_keywords(db):
            yield item
Go through `cache_fn` and filter keywords. Store them in `keyword_list.json`.

Args:
    cache_fn (str): Path to the file with the cache.

Returns:
    list: List of :class:`KeywordInfo` objects.
codesearchnet
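A hypothetical usage, assuming a cache database produced by the same project exists at the path shown (otherwise the function prints an error and exits):

```python
for keyword_info in generate('/tmp/cache.sqlite'):
    print(keyword_info)
```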
def definition_package(cls): outer_definition = cls.message_definition() if (not outer_definition): return util.get_package_for_module(cls.__module__) return outer_definition.definition_package()
Helper method for creating the package of a definition.

    Returns:
      Name of the package that the definition belongs to.
codesearchnet
def shift_time(start_time, mins) -> str: s_time = pd.Timestamp(start_time) e_time = (s_time + (np.sign(mins) * pd.Timedelta(f'00:{abs(mins)}:00'))) return e_time.strftime('%H:%M')
Shift start time by mins Args: start_time: start time in terms of HH:MM string mins: number of minutes (+ / -) Returns: end time in terms of HH:MM string
codesearchnet
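A quick usage sketch, assuming pandas and numpy are available as in the function above.

```python
print(shift_time('09:30', 45))    # '10:15'
print(shift_time('09:30', -45))   # '08:45'
```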
def findall_operations_with_gate_type(self, gate_type: Type[T_DESIRED_GATE_TYPE]) -> Iterable[Tuple[(int, ops.GateOperation, T_DESIRED_GATE_TYPE)]]: result = self.findall_operations((lambda operation: bool(ops.op_gate_of_type(operation, gate_type)))) for (index, op) in result: gate_op = cast(ops.GateOperation, op) (yield (index, gate_op, cast(T_DESIRED_GATE_TYPE, gate_op.gate)))
Find the locations of all gate operations of a given type.

Args:
    gate_type: The type of gate to find, e.g. XPowGate or MeasurementGate.

Returns:
    An iterator of (index, operation, gate) tuples for operations with the given gate type.
codesearchnet
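A hypothetical usage with cirq, assuming a version where `Circuit` accepts an op tree directly and exposes this method; it locates the measurement in a two-moment circuit.

```python
import cirq

q = cirq.LineQubit(0)
circuit = cirq.Circuit([cirq.X(q), cirq.measure(q, key='m')])
for index, op, gate in circuit.findall_operations_with_gate_type(cirq.MeasurementGate):
    print(index, gate.key)  # 1 m
```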
def image_data_format(): return _IMAGE_DATA_FORMAT
Return the default image data format convention. Returns: A string, either `'channels_first'` or `'channels_last'`. Example: >>> keras.config.image_data_format() 'channels_last'
github-repos
def create_write_transform(self) -> beam.PTransform[Chunk, Any]: raise NotImplementedError(type(self))
Creates a PTransform that writes embeddings to the vector database. Returns: A PTransform that accepts PCollection[Chunk] and writes the chunks' embeddings and metadata to the configured vector database. The transform should handle: - Converting Chunk format to database schema - Setting up database connection/client - Writing with appropriate batching/error handling
github-repos
def load(cls, campaign_dir, ns_path=None, runner_type='Auto', optimized=True, check_repo=True): if (ns_path is not None): ns_path = os.path.abspath(ns_path) campaign_dir = os.path.abspath(campaign_dir) db = DatabaseManager.load(campaign_dir) script = db.get_script() runner = None if (ns_path is not None): runner = CampaignManager.create_runner(ns_path, script, runner_type, optimized) return cls(db, runner, check_repo)
Load an existing simulation campaign. Note that specifying an ns-3 installation is not compulsory when using this method: existing results will be available, but in order to run additional simulations it will be necessary to specify a SimulationRunner object, and assign it to the CampaignManager. Args: campaign_dir (str): path to the directory in which to save the simulation campaign database. ns_path (str): path to the ns-3 installation to employ in this campaign. runner_type (str): implementation of the SimulationRunner to use. Value can be: SimulationRunner (for running sequential simulations locally), ParallelRunner (for running parallel simulations locally), GridRunner (for running simulations using a DRMAA-compatible parallel task scheduler). optimized (bool): whether to configure the runner to employ an optimized ns-3 build.
codesearchnet
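A hypothetical call reopening an existing campaign and attaching an ns-3 build so further simulations can be run; both paths and the runner type are illustrative.

```python
campaign = CampaignManager.load('/home/user/results/wifi-campaign',
                                ns_path='/home/user/ns-3-dev',
                                runner_type='ParallelRunner')
print(campaign)
```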
def ReadFromFile(self, path): self._definitions = {} with open(path, 'r') as file_object: for preset_definition in self._ReadPresetsFromFileObject(file_object): self._definitions[preset_definition.name] = preset_definition
Reads parser and parser plugin presets from a file.

Args:
    path (str): path of the file that contains the parser and parser plugin presets configuration.

Raises:
    MalformedPresetError: if one or more plugin preset definitions are malformed.
codesearchnet
def _invalid_triple_quote(self, quote, row, col=None): self.add_message( 'invalid-triple-quote', line=row, args=(quote, TRIPLE_QUOTE_OPTS.get(self.config.triple_quote)), **self.get_offset(col) )
Add a message for an invalid triple quote. Args: quote: The quote characters that were found. row: The row number the quote characters were found on. col: The column the quote characters were found on.
juraj-google-style