Dataset columns: code (string, 20–4.93k characters), docstring (string, 33–1.27k characters), source (string, 3 classes).
def GetMatchingTransitions(self, transitions): return [t for t in transitions if self._MatchWithTransition(t)]
Return a list of state.Transition's compatible with this state. A transition is compatible with this state when the transition's pre_states is compatible with this state, i.e. the transition can be executed from this state. Args: transitions: List of state.Transitions among which it finds ones compatible with this state. Returns: List of state.Transition's compatible with this state.
github-repos
def check_server_proc_running(self):
Checks whether the server is still running. If the server is not running, it throws an error. As this function is called each time the client tries to send an RPC, this should be a quick check without affecting performance. Otherwise it is fine to not check anything. Raises: errors.ServerDiedError: if the server died.
github-repos
def parse_individuals(samples):
    individuals = []
    if len(samples) == 0:
        raise PedigreeError("No samples could be found")

    ind_ids = set()
    for sample_info in samples:
        parsed_ind = parse_individual(sample_info)
        individuals.append(parsed_ind)
        ind_ids.add(parsed_ind['individual_id'])

    for parsed_ind in individuals:
        father = parsed_ind['father']
        if father and father != '0':
            if father not in ind_ids:
                raise PedigreeError('father %s does not exist in family' % father)
        mother = parsed_ind['mother']
        if mother and mother != '0':
            if mother not in ind_ids:
                raise PedigreeError('mother %s does not exist in family' % mother)

    return individuals
Parse the individual information Reformat sample information to proper individuals Args: samples(list(dict)) Returns: individuals(list(dict))
juraj-google-style
def combine(path1, path2):
    if not path1:
        return path2.lstrip()
    return '{}/{}'.format(path1.rstrip('/'), path2.lstrip('/'))
Join two paths together. This is faster than :func:`~fs.path.join`, but only works when the second path is relative, and there are no back references in either path. Arguments: path1 (str): A PyFilesytem path. path2 (str): A PyFilesytem path. Returns: str: The joint path. Example: >>> combine("foo/bar", "baz") 'foo/bar/baz'
codesearchnet
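A quick usage sketch of the combine helper above; this assumes the definition is in scope (the docstring suggests it corresponds to fs.path.combine in pyfilesystem2).

# Assumes the combine() definition above is in scope.
print(combine("foo/bar", "baz"))    # -> 'foo/bar/baz'
print(combine("foo/bar/", "/baz"))  # -> 'foo/bar/baz'
print(combine("", "baz"))           # -> 'baz'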
def notify_program_learners(cls, enterprise_customer, program_details, users): program_name = program_details.get('title') program_branding = program_details.get('type') program_uuid = program_details.get('uuid') lms_root_url = get_configuration_value_for_site(enterprise_customer.site, 'LMS_ROOT_URL', settings.LMS_ROOT_URL) program_path = urlquote('/dashboard/programs/{program_uuid}/?tpa_hint={tpa_hint}'.format(program_uuid=program_uuid, tpa_hint=enterprise_customer.identity_provider)) destination_url = '{site}/{login_or_register}?next={program_path}'.format(site=lms_root_url, login_or_register='{login_or_register}', program_path=program_path) program_type = 'program' program_start = get_earliest_start_date_from_program(program_details) with mail.get_connection() as email_conn: for user in users: login_or_register = ('register' if isinstance(user, PendingEnterpriseCustomerUser) else 'login') destination_url = destination_url.format(login_or_register=login_or_register) send_email_notification_message(user=user, enrolled_in={'name': program_name, 'url': destination_url, 'type': program_type, 'start': program_start, 'branding': program_branding}, enterprise_customer=enterprise_customer, email_connection=email_conn)
Notify learners about a program in which they've been enrolled. Args: enterprise_customer: The EnterpriseCustomer being linked to program_details: Details about the specific program the learners were enrolled in users: An iterable of the users or pending users who were enrolled
codesearchnet
def format_import(self, source_module_name, source_name, dest_name):
    if self._lazy_loading:
        return " '%s': ('%s', '%s')," % (dest_name, source_module_name, source_name)
    elif source_module_name:
        if source_name == dest_name:
            return 'from %s import %s' % (source_module_name, source_name)
        else:
            return 'from %s import %s as %s' % (source_module_name, source_name, dest_name)
    elif source_name == dest_name:
        return 'import %s' % source_name
    else:
        return 'import %s as %s' % (source_name, dest_name)
Formats import statement. Args: source_module_name: (string) Source module to import from. source_name: (string) Source symbol name to import. dest_name: (string) Destination alias name. Returns: An import statement string.
github-repos
def find_customer(cls, session, mailbox, customer): return cls( '/mailboxes/%d/customers/%s/conversations.json' % ( mailbox.id, customer.id, ), session=session, )
Return conversations for a specific customer in a mailbox. Args: session (requests.sessions.Session): Authenticated session. mailbox (helpscout.models.Mailbox): Mailbox to search. customer (helpscout.models.Customer): Customer to search for. Returns: RequestPaginator(output_type=helpscout.models.Conversation): Conversations iterator.
juraj-google-style
def _GetVisitSource(self, visit_identifier, cache, database):
    sync_cache_results = cache.GetResults('sync')
    if not sync_cache_results:
        result_set = database.Query(self._SYNC_CACHE_QUERY)
        cache.CacheQueryResults(result_set, 'sync', 'id', ('source',))
        sync_cache_results = cache.GetResults('sync')

    if sync_cache_results and visit_identifier:
        results = sync_cache_results.get(visit_identifier, None)
        if results:
            return results[0]

    return None
Retrieves a visit source type based on the identifier. Args: visit_identifier (str): identifier from the visits table for the particular record. cache (SQLiteCache): cache which contains cached results from querying the visit_source table. database (SQLiteDatabase): database. Returns: int: visit source type or None if no visit source type was found for the identifier.
codesearchnet
def plot(self, event_names, x_axis='step'):
    if isinstance(event_names, six.string_types):
        event_names = [event_names]

    events_list = self.get_events(event_names)
    for event_name, dir_event_dict in zip(event_names, events_list):
        for dir, df in six.iteritems(dir_event_dict):
            label = event_name + ':' + dir
            x_column = df['step'] if x_axis == 'step' else df['time']
            plt.plot(x_column, df['value'], label=label)
    plt.legend(loc='best')
    plt.show()
Plots a list of events. Each event (a dir+event_name) is represented as a line in the graph. Args: event_names: A list of events to plot. Each event_name may correspond to multiple events, each in a different directory. x_axis: whether to use step or time as x axis.
juraj-google-style
def update_environmental_configuration(self, configuration, timeout=-1):
    uri = '{}/environmentalConfiguration'.format(self.data['uri'])
    return self._helper.do_put(uri, configuration, timeout, None)
Sets the calibrated max power of an unmanaged or unsupported enclosure. Args: configuration: Configuration timeout: Timeout in seconds. Wait for task completion by default. The timeout does not abort the operation in OneView; it just stops waiting for its completion. Returns: Settings that describe the environmental configuration.
juraj-google-style
def is_end_node(node): return (isinstance(node, ast.Expr) and isinstance(node.value, ast.Name) and (node.value.id == 'end'))
Checks if a node is the "end" keyword. Args: node: AST node. Returns: True if the node is the "end" keyword, otherwise False.
codesearchnet
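A small check of is_end_node above; this assumes the definition is in scope and uses the standard library ast module.

import ast

# Assumes the is_end_node() definition above is in scope.
end_stmt = ast.parse("end").body[0]      # ast.Expr wrapping ast.Name(id='end')
other_stmt = ast.parse("x = 1").body[0]  # ast.Assign

print(is_end_node(end_stmt))    # True
print(is_end_node(other_stmt))  # False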
def RegisterOutput(cls, output_class, disabled=False):
    output_name = output_class.NAME.lower()

    if disabled:
        class_dict = cls._disabled_output_classes
    else:
        class_dict = cls._output_classes

    if output_name in class_dict:
        raise KeyError(
            'Output class already set for name: {0:s}.'.format(output_class.NAME))

    class_dict[output_name] = output_class
Registers an output class. The output classes are identified based on their NAME attribute. Args: output_class (type): output module class. disabled (Optional[bool]): True if the output module is disabled due to the module not loading correctly or not. Raises: KeyError: if output class is already set for the corresponding name.
juraj-google-style
def __savorize(self, node: yaml.Node, expected_type: Type) -> yaml.Node:
    logger.debug('Savorizing node assuming type {}'.format(expected_type.__name__))

    for base_class in expected_type.__bases__:
        if base_class in self._registered_classes.values():
            node = self.__savorize(node, base_class)

    if hasattr(expected_type, 'yatiml_savorize'):
        logger.debug('Calling {}.yatiml_savorize()'.format(expected_type.__name__))
        cnode = Node(node)
        expected_type.yatiml_savorize(cnode)
        node = cnode.yaml_node

    return node
Removes syntactic sugar from the node. This calls yatiml_savorize(), first on the class's base classes, then on the class itself. Args: node: The node to modify. expected_type: The type to assume this type is.
juraj-google-style
def _ListFileEntry(self, file_system, file_entry, parent_full_path, output_writer):
    full_path = file_system.JoinPath([parent_full_path, file_entry.name])
    if not self._list_only_files or file_entry.IsFile():
        output_writer.WriteFileEntry(full_path)

    for sub_file_entry in file_entry.sub_file_entries:
        self._ListFileEntry(file_system, sub_file_entry, full_path, output_writer)
Lists a file entry. Args: file_system (dfvfs.FileSystem): file system that contains the file entry. file_entry (dfvfs.FileEntry): file entry to list. parent_full_path (str): full path of the parent file entry. output_writer (StdoutWriter): output writer.
juraj-google-style
def lattice_points_in_supercell(supercell_matrix):
    diagonals = np.array(
        [[0, 0, 0], [0, 0, 1], [0, 1, 0], [0, 1, 1],
         [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
    d_points = np.dot(diagonals, supercell_matrix)

    mins = np.min(d_points, axis=0)
    maxes = np.max(d_points, axis=0) + 1

    ar = np.arange(mins[0], maxes[0])[:, None] * np.array([1, 0, 0])[None, :]
    br = np.arange(mins[1], maxes[1])[:, None] * np.array([0, 1, 0])[None, :]
    cr = np.arange(mins[2], maxes[2])[:, None] * np.array([0, 0, 1])[None, :]

    all_points = ar[:, None, None] + br[None, :, None] + cr[None, None, :]
    all_points = all_points.reshape((-1, 3))

    frac_points = np.dot(all_points, np.linalg.inv(supercell_matrix))

    tvects = frac_points[np.all(frac_points < 1 - 1e-10, axis=1)
                         & np.all(frac_points >= -1e-10, axis=1)]
    assert len(tvects) == round(abs(np.linalg.det(supercell_matrix)))
    return tvects
Returns the list of points on the original lattice contained in the supercell in fractional coordinates (with the supercell basis). e.g. [[2,0,0],[0,1,0],[0,0,1]] returns [[0,0,0],[0.5,0,0]] Args: supercell_matrix: 3x3 matrix describing the supercell Returns: numpy array of the fractional coordinates
juraj-google-style
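A usage sketch matching the example in the docstring above; it assumes the definition and numpy are in scope.

import numpy as np

# Assumes the lattice_points_in_supercell() definition above is in scope.
supercell = np.array([[2, 0, 0],
                      [0, 1, 0],
                      [0, 0, 1]])
points = lattice_points_in_supercell(supercell)
print(points)  # expected rows (order may differ): [0, 0, 0] and [0.5, 0, 0]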
def pair_wise_sigmoid_cross_entropy_loss(inputs: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    height_and_width = inputs.shape[1]

    criterion = nn.BCEWithLogitsLoss(reduction='none')
    cross_entropy_loss_pos = criterion(inputs, torch.ones_like(inputs))
    cross_entropy_loss_neg = criterion(inputs, torch.zeros_like(inputs))

    loss_pos = torch.matmul(cross_entropy_loss_pos / height_and_width, labels.T)
    loss_neg = torch.matmul(cross_entropy_loss_neg / height_and_width, (1 - labels).T)
    loss = loss_pos + loss_neg
    return loss
A pairwise version of the cross entropy loss, see `sigmoid_cross_entropy_loss` for usage. Args: inputs (`torch.Tensor`): A tensor representing a mask. labels (`torch.Tensor`): A tensor with the same shape as inputs. Stores the binary classification labels for each element in inputs (0 for the negative class and 1 for the positive class). Returns: loss (`torch.Tensor`): The computed loss between each pair.
github-repos
def get(self, key):
    if key in self._feature_tensors:
        return self._feature_tensors[key]

    if key in self._features:
        feature_tensor = self._get_raw_feature_as_tensor(key)
        self._feature_tensors[key] = feature_tensor
        return feature_tensor

    if isinstance(key, six.string_types):
        raise ValueError('Feature {} is not in features dictionary.'.format(key))

    if not isinstance(key, _FeatureColumn):
        raise TypeError('"key" must be either a "str" or "_FeatureColumn". Provided: {}'.format(key))

    column = key
    logging.debug('Transforming feature_column %s.', column)
    transformed = column._transform_feature(self)
    if transformed is None:
        raise ValueError('Column {} is not supported.'.format(column.name))

    self._feature_tensors[column] = transformed
    return transformed
Returns a `Tensor` for the given key. A `str` key is used to access a base feature (not-transformed). When a `_FeatureColumn` is passed, the transformed feature is returned if it already exists, otherwise the given `_FeatureColumn` is asked to provide its transformed output, which is then cached. Args: key: a `str` or a `_FeatureColumn`. Returns: The transformed `Tensor` corresponding to the `key`. Raises: ValueError: if key is not found or a transformed `Tensor` cannot be computed.
github-repos
def DEFINE_list(name, default, help, flag_values=FLAGS, **args):
    parser = ListParser()
    serializer = CsvListSerializer(',')
    DEFINE(parser, name, default, help, flag_values, serializer, **args)
Registers a flag whose value is a comma-separated list of strings. The flag value is parsed with a CSV parser. Args: name: A string, the flag name. default: The default value of the flag. help: A help string. flag_values: FlagValues object with which the flag will be registered. **args: Dictionary with extra keyword args that are passed to the Flag __init__.
codesearchnet
def _build_rdf(self, data=None):
    self.rdf = SimpleNamespace()
    self.rdf.data = data
    self.rdf.prefixes = SimpleNamespace()
    self.rdf.uris = SimpleNamespace()

    for prefix, uri in self.repo.context.items():
        setattr(self.rdf.prefixes, prefix, rdflib.Namespace(uri))

    self._parse_graph()
Parse incoming rdf as self.rdf.orig_graph, create copy at self.rdf.graph Args: data (): payload from GET request, expected RDF content in various serialization formats Returns: None
juraj-google-style
def LookupClients(self, keywords):
    if isinstance(keywords, string_types):
        raise ValueError(
            "Keywords should be an iterable, not a string (got %s)." % keywords)

    start_time, filtered_keywords = self._AnalyzeKeywords(keywords)

    keyword_map = data_store.REL_DB.ListClientsForKeywords(
        list(map(self._NormalizeKeyword, filtered_keywords)),
        start_time=start_time)

    results = itervalues(keyword_map)
    relevant_set = set(next(results))

    for hits in results:
        relevant_set &= set(hits)
        if not relevant_set:
            return []

    return sorted(relevant_set)
Returns a list of client URNs associated with keywords. Args: keywords: The list of keywords to search by. Returns: A list of client URNs. Raises: ValueError: A string (single keyword) was passed instead of an iterable.
juraj-google-style
def guess_is_tensorflow_py_library(py_file_path):
    if not is_extension_uncompiled_python_source(py_file_path) and (not is_extension_compiled_python_source(py_file_path)):
        return False

    py_file_path = _norm_abs_path(py_file_path)
    return ((py_file_path.startswith(_TENSORFLOW_BASEDIR) or py_file_path.startswith(_ABSL_BASEDIR))
            and not py_file_path.endswith('_test.py')
            and (os.path.normpath('tensorflow/python/debug/examples') not in os.path.normpath(py_file_path)))
Guess whether a Python source file is a part of the tensorflow library. Special cases: 1) Returns False for unit-test files in the library (*_test.py), 2) Returns False for files under python/debug/examples. Args: py_file_path: full path of the Python source file in question. Returns: (`bool`) Whether the file is inferred to be a part of the tensorflow library.
github-repos
def reset(self, indices=None):
    if indices is None:
        indices = np.arange(self.trajectories.batch_size)

    if indices.size == 0:
        tf.logging.warning(
            "`reset` called with empty indices array, this is a no-op.")
        return None

    observations = self._reset(indices)
    processed_observations = self.process_observations(observations)
    self.trajectories.reset(indices, observations)
    return processed_observations
Resets environments at given indices. Subclasses should override _reset to do the actual reset if something other than the default implementation is desired. Args: indices: Indices of environments to reset. If None all envs are reset. Returns: Batch of initial observations of reset environments.
juraj-google-style
def bgr2gray(img, keepdim=False):
    out_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    if keepdim:
        out_img = out_img[..., None]
    return out_img
Convert a BGR image to grayscale image. Args: img (ndarray): The input image. keepdim (bool): If False (by default), then return the grayscale image with 2 dims, otherwise 3 dims. Returns: ndarray: The converted grayscale image.
codesearchnet
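A usage sketch for bgr2gray above; the file path is a placeholder and the definition plus cv2 are assumed to be available.

import cv2

# Assumes the bgr2gray() definition above is in scope; 'photo.jpg' is a hypothetical path.
img = cv2.imread('photo.jpg')           # BGR image, shape (H, W, 3)
gray = bgr2gray(img)                    # shape (H, W)
gray_3d = bgr2gray(img, keepdim=True)   # shape (H, W, 1)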
def glu(x, axis=-1):
    if any_symbolic_tensors((x,)):
        return Glu(axis).symbolic_call(x)
    return backend.nn.glu(x, axis=axis)
Gated Linear Unit (GLU) activation function. It is defined as: `f(x) = a * sigmoid(b)` where `x` is split into `a` and `b` along the given axis. Args: x: Input tensor. axis: The axis along which to split the input tensor. Defaults to `-1`. Returns: A tensor with the same shape as half of the input. Example: >>> x = np.array([-1., 0., 1. , 1.]) >>> x_glu = keras.ops.glu(x) >>> print(x_glu) array([-0.73105858, 0. ], shape=(2,), dtype=float64)
github-repos
def swo_start(self, swo_speed=9600):
    if self.swo_enabled():
        self.swo_stop()

    info = structs.JLinkSWOStartInfo()
    info.Speed = swo_speed
    res = self._dll.JLINKARM_SWO_Control(enums.JLinkSWOCommands.START, ctypes.byref(info))
    if res < 0:
        raise errors.JLinkException(res)

    self._swo_enabled = True
    return None
Starts collecting SWO data. Note: If SWO is already enabled, it will first stop SWO before enabling it again. Args: self (JLink): the ``JLink`` instance swo_speed (int): the frequency in Hz used by the target to communicate Returns: ``None`` Raises: JLinkException: on error
codesearchnet
def _fn(arg0, arg1): return arg0 + arg1
fn doc. Args: arg0: Arg 0. arg1: Arg 1. Returns: Sum of args.
github-repos
def format(self, record):
    if (not FLAGS['showprefixforinfo'].value
            and FLAGS['verbosity'].value == converter.ABSL_INFO
            and record.levelno == logging.INFO
            and _absl_handler.python_handler.stream == sys.stderr):
        prefix = ''
    else:
        prefix = get_absl_log_prefix(record)
    return prefix + super(PythonFormatter, self).format(record)
Appends the message from the record to the results of the prefix. Args: record: logging.LogRecord, the record to be formatted. Returns: The formatted string representing the record.
juraj-google-style
def dvd_lists(self, **kwargs):
    path = self._get_path('dvd_lists')
    response = self._GET(path, kwargs)
    self._set_attrs_to_values(response)
    return response
Gets the dvd lists available from the API. Returns: A dict representation of the JSON returned from the API.
codesearchnet
def box_draw_character(first: Optional[BoxDrawCharacterSet], second: BoxDrawCharacterSet, *, top: int=0, bottom: int=0, left: int=0, right: int=0) -> Optional[str]: if (first is None): first = second sign = (+ 1) combo = None if ((first is NORMAL_BOX_CHARS) and (second is BOLD_BOX_CHARS)): combo = NORMAL_THEN_BOLD_MIXED_BOX_CHARS if ((first is BOLD_BOX_CHARS) and (second is NORMAL_BOX_CHARS)): combo = NORMAL_THEN_BOLD_MIXED_BOX_CHARS sign = (- 1) if (combo is None): choice = (second if ((+ 1) in [top, bottom, left, right]) else first) return choice.char(top=bool(top), bottom=bool(bottom), left=bool(left), right=bool(right)) return combo.char(top=(top * sign), bottom=(bottom * sign), left=(left * sign), right=(right * sign))
Finds a box drawing character based on its connectivity. For example: box_draw_character( NORMAL_BOX_CHARS, BOLD_BOX_CHARS, top=-1, right=+1) evaluates to '┕', which has a normal upward leg and bold rightward leg. Args: first: The character set to use for legs set to -1. If set to None, defaults to the same thing as the second character set. second: The character set to use for legs set to +1. top: Whether the upward leg should be present. bottom: Whether the bottom leg should be present. left: Whether the left leg should be present. right: Whether the right leg should be present. Returns: A box drawing character approximating the desired properties, or None if all legs are set to 0.
codesearchnet
def get_reference_points(spatial_shapes, valid_ratios, device):
    reference_points_list = []
    for level, (height, width) in enumerate(spatial_shapes):
        ref_y, ref_x = meshgrid(
            torch.linspace(0.5, height - 0.5, height, dtype=torch.float32, device=device),
            torch.linspace(0.5, width - 0.5, width, dtype=torch.float32, device=device),
            indexing='ij')
        ref_y = ref_y.reshape(-1)[None] / (valid_ratios[:, None, level, 1] * height)
        ref_x = ref_x.reshape(-1)[None] / (valid_ratios[:, None, level, 0] * width)
        ref = torch.stack((ref_x, ref_y), -1)
        reference_points_list.append(ref)
    reference_points = torch.cat(reference_points_list, 1)
    reference_points = reference_points[:, :, None] * valid_ratios[:, None]
    return reference_points
Get reference points for each feature map. Args: spatial_shapes (`torch.LongTensor` of shape `(num_feature_levels, 2)`): Spatial shapes of each feature map. valid_ratios (`torch.FloatTensor` of shape `(batch_size, num_feature_levels, 2)`): Valid ratios of each feature map. device (`torch.device`): Device on which to create the tensors. Returns: `torch.FloatTensor` of shape `(batch_size, num_queries, num_feature_levels, 2)`
github-repos
def read(self, size=-1): self._check_open() if not self._remaining(): return '' data_list = [] while True: remaining = self._buffer.remaining() if size >= 0 and size < remaining: data_list.append(self._buffer.read(size)) self._offset += size break else: size -= remaining self._offset += remaining data_list.append(self._buffer.read()) if self._buffer_future is None: if size < 0 or size >= self._remaining(): needs = self._remaining() else: needs = size data_list.extend(self._get_segments(self._offset, needs)) self._offset += needs break if self._buffer_future: self._buffer.reset(self._buffer_future.get_result()) self._buffer_future = None if self._buffer_future is None: self._request_next_buffer() return ''.join(data_list)
Read data from RAW file. Args: size: Number of bytes to read as integer. Actual number of bytes read is always equal to size unless EOF is reached. If size is negative or unspecified, read the entire file. Returns: data read as str. Raises: IOError: When this buffer is closed.
juraj-google-style
def startProducing(self, consumer): self._consumer = consumer self._current_deferred = defer.Deferred() self._sent = 0 self._paused = False if not hasattr(self, "_chunk_headers"): self._build_chunk_headers() if self._data: block = "" for field in self._data: block += self._chunk_headers[field] block += self._data[field] block += "\r\n" self._send_to_consumer(block) if self._files: self._files_iterator = self._files.iterkeys() self._files_sent = 0 self._files_length = len(self._files) self._current_file_path = None self._current_file_handle = None self._current_file_length = None self._current_file_sent = 0 result = self._produce() if result: return result else: return defer.succeed(None) return self._current_deferred
Start producing. Args: consumer: Consumer
juraj-google-style
def _hide_parameters(self, file_name): try: in_data = load_b26_file(file_name) except: in_data = {} def set_item_visible(item, is_visible): if isinstance(is_visible, dict): for child_id in range(item.childCount()): child = item.child(child_id) if child.name in is_visible: set_item_visible(child, is_visible[child.name]) else: item.visible = is_visible if "scripts_hidden_parameters" in in_data: if len(list(in_data["scripts_hidden_parameters"].keys())) == self.tree_scripts.topLevelItemCount(): for index in range(self.tree_scripts.topLevelItemCount()): item = self.tree_scripts.topLevelItem(index) set_item_visible(item, in_data["scripts_hidden_parameters"][item.name]) else: print('WARNING: settings for hiding parameters does\'t seem to match other settings')
hide the parameters that had been hidden Args: file_name: config file that has the information about which parameters are hidden
juraj-google-style
def pick_unused_port():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(('localhost', 0))
    (addr, port) = s.getsockname()
    s.close()
    return port
get an unused port on the VM. Returns: An unused port.
codesearchnet
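A quick sanity check for pick_unused_port above; it assumes the definition and the socket import are in scope.

import socket

# Assumes the pick_unused_port() definition above is in scope.
port = pick_unused_port()
assert 0 < port < 65536
print('got free port', port)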
def send_offset_commit_request(self, group, payloads=None, fail_on_error=True,
                               callback=None, group_generation_id=-1,
                               consumer_id=''):
    group = _coerce_consumer_group(group)
    encoder = partial(KafkaCodec.encode_offset_commit_request,
                      group=group,
                      group_generation_id=group_generation_id,
                      consumer_id=consumer_id)
    decoder = KafkaCodec.decode_offset_commit_response
    resps = yield self._send_broker_aware_request(
        payloads, encoder, decoder, consumer_group=group)
    returnValue(self._handle_responses(resps, fail_on_error, callback, group))
Send a list of OffsetCommitRequests to the Kafka broker for the given consumer group. Args: group (str): The consumer group to which to commit the offsets payloads ([OffsetCommitRequest]): List of topic, partition, offsets to commit. fail_on_error (bool): Whether to raise an exception if a response from the Kafka broker indicates an error callback (callable): a function to call with each of the responses before returning the returned value to the caller. group_generation_id (int): Must currently always be -1 consumer_id (str): Must currently always be empty string Returns: [OffsetCommitResponse]: List of OffsetCommitResponse objects. Will raise KafkaError for failed requests if fail_on_error is True
codesearchnet
def HandleInvMessage(self, payload): if self.sync_mode != MODE_MAINTAIN: return inventory = IOHelper.AsSerializableWithType(payload, 'neo.Network.Payloads.InvPayload.InvPayload') if not inventory: return if inventory.Type == InventoryType.BlockInt: ok_hashes = [] for hash in inventory.Hashes: hash = hash.encode('utf-8') if hash not in self.myblockrequests and hash not in BC.Default().BlockRequests: ok_hashes.append(hash) BC.Default().BlockRequests.add(hash) self.myblockrequests.add(hash) if len(ok_hashes): message = Message("getdata", InvPayload(InventoryType.Block, ok_hashes)) self.SendSerializedMessage(message) elif inventory.Type == InventoryType.TXInt: pass elif inventory.Type == InventoryType.ConsensusInt: pass
Process a block header inventory payload. Args: inventory (neo.Network.Payloads.InvPayload):
juraj-google-style
def _GetAttributeContainerByIndex(self, container_type, index): sequence_number = index + 1 query = 'SELECT _data FROM {0:s} WHERE rowid = {1:d}'.format( container_type, sequence_number) try: self._cursor.execute(query) except sqlite3.OperationalError as exception: raise IOError('Unable to query storage file with error: {0!s}'.format( exception)) row = self._cursor.fetchone() if row: identifier = identifiers.SQLTableIdentifier( container_type, sequence_number) if self.compression_format == definitions.COMPRESSION_FORMAT_ZLIB: serialized_data = zlib.decompress(row[0]) else: serialized_data = row[0] if self._storage_profiler: self._storage_profiler.Sample( 'read', container_type, len(serialized_data), len(row[0])) attribute_container = self._DeserializeAttributeContainer( container_type, serialized_data) attribute_container.SetIdentifier(identifier) return attribute_container count = self._CountStoredAttributeContainers(container_type) index -= count serialized_data = self._GetSerializedAttributeContainerByIndex( container_type, index) attribute_container = self._DeserializeAttributeContainer( container_type, serialized_data) if attribute_container: identifier = identifiers.SQLTableIdentifier( container_type, sequence_number) attribute_container.SetIdentifier(identifier) return attribute_container
Retrieves a specific attribute container. Args: container_type (str): attribute container type. index (int): attribute container index. Returns: AttributeContainer: attribute container or None if not available. Raises: IOError: when there is an error querying the storage file. OSError: when there is an error querying the storage file.
juraj-google-style
def render_build_args(options, ns):
    build_args = options.get('buildArgs', {})
    for key, value in build_args.items():
        build_args[key] = value.format(**ns)
    return build_args
Get docker build args dict, rendering any templated args. Args: options (dict): The dictionary for a given image from chartpress.yaml. Fields in `options['buildArgs']` will be rendered and returned, if defined. ns (dict): the namespace used when rendering templated arguments
juraj-google-style
class PatchMixerBlock(nn.Module): def __init__(self, config: PatchTSMixerConfig): super().__init__() self.norm = PatchTSMixerNormLayer(config) self.self_attn = config.self_attn self.gated_attn = config.gated_attn self.mlp = PatchTSMixerMLP(in_features=config.num_patches, out_features=config.num_patches, config=config) if config.gated_attn: self.gating_block = PatchTSMixerGatedAttention(in_size=config.num_patches, out_size=config.num_patches) if config.self_attn: self.self_attn_layer = PatchTSMixerAttention(embed_dim=config.d_model, num_heads=config.self_attn_heads, dropout=config.dropout, config=config) self.norm_attn = PatchTSMixerNormLayer(config) def forward(self, hidden_state): residual = hidden_state hidden_state = self.norm(hidden_state) if self.self_attn: batch_size, n_vars, num_patches, d_model = hidden_state.shape hidden_state_reshaped = hidden_state.reshape(batch_size * n_vars, num_patches, d_model) x_attn, _, _ = self.self_attn_layer(hidden_state_reshaped, output_attentions=False) x_attn = x_attn.reshape(batch_size, n_vars, num_patches, d_model) hidden_state = hidden_state.transpose(2, 3) hidden_state = self.mlp(hidden_state) if self.gated_attn: hidden_state = self.gating_block(hidden_state) hidden_state = hidden_state.transpose(2, 3) if self.self_attn: hidden_state = self.norm_attn(hidden_state + x_attn) out = hidden_state + residual return out
This module mixes the patch dimension. Args: config (`PatchTSMixerConfig`): Configuration.
github-repos
def build(self, var_list):
    if self.built:
        return
    super().build(var_list)
    self._momentums, self._velocities = self.add_optimizer_variables(var_list, ['momentum', 'velocity'])
Initialize optimizer variables. Lamb optimizer has 2 types of variables: momentums and velocities Args: var_list: list of model variables to build Lamb variables on.
github-repos
def clusters_sites_obj(clusters):
    result = {}
    all_clusters = get_all_clusters_sites()
    clusters_sites = {c: s for (c, s) in all_clusters.items() if c in clusters}
    for cluster, site in clusters_sites.items():
        result.update({cluster: get_site_obj(site)})
    return result
Get all the corresponding sites of the passed clusters. Args: clusters(list): list of string uid of sites (e.g 'rennes') Return: dict corresponding to the mapping cluster uid to python-grid5000 site
codesearchnet
def ensure_scheme(url, default_scheme='http'):
    parsed = urlsplit(url, scheme=default_scheme)
    if not parsed.netloc:
        parsed = SplitResult(scheme=parsed.scheme,
                             netloc=parsed.path,
                             path='',
                             query=parsed.query,
                             fragment=parsed.fragment)
    return urlunsplit(parsed)
Adds a scheme to a url if not present. Args: url (string): a url, assumed to start with netloc default_scheme (string): a scheme to be added Returns: string: URL with a scheme
juraj-google-style
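A usage sketch for ensure_scheme above; it assumes the definition and its urllib.parse imports (urlsplit, urlunsplit, SplitResult) are in scope, and the URLs are placeholders.

# Assumes the ensure_scheme() definition above is in scope.
print(ensure_scheme('example.com/path'))     # -> 'http://example.com/path'
print(ensure_scheme('https://example.com'))  # -> 'https://example.com' (unchanged)
print(ensure_scheme('example.com', 'ftp'))   # -> 'ftp://example.com'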
def decode(self, codes):
    assert codes.ndim == 2
    N, M = codes.shape
    assert M == self.M
    assert codes.dtype == self.code_dtype

    vecs = np.empty((N, self.Ds * self.M), dtype=np.float32)
    for m in range(self.M):
        vecs[:, m * self.Ds:(m + 1) * self.Ds] = self.codewords[m][codes[:, m], :]

    return vecs
Given PQ-codes, reconstruct original D-dimensional vectors approximately by fetching the codewords. Args: codes (np.ndarray): PQ-codes with shape=(N, M) and dtype=self.code_dtype. Each row is a PQ-code Returns: np.ndarray: Reconstructed vectors with shape=(N, D) and dtype=np.float32
codesearchnet
def getFilesFromAFolder(path):
    from os import listdir
    from os.path import isfile, join

    onlyFiles = []
    for f in listdir(path):
        if isfile(join(path, f)):
            onlyFiles.append(f)
    return onlyFiles
Getting all the files in a folder. Args: ----- path: The path in which to look for the files Returns: -------- list: The list of filenames found.
juraj-google-style
def parts(path):
    _path = normpath(path)
    components = _path.strip('/')

    _parts = ['/' if _path.startswith('/') else './']
    if components:
        _parts += components.split('/')
    return _parts
Split a path in to its component parts. Arguments: path (str): Path to split in to parts. Returns: list: List of components Example: >>> parts('/foo/bar/baz') ['/', 'foo', 'bar', 'baz']
codesearchnet
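A short complement to the docstring example above, showing the relative-path case; it assumes the parts definition and its normpath import are in scope.

# Assumes the parts() definition above is in scope.
print(parts('/foo/bar/baz'))  # -> ['/', 'foo', 'bar', 'baz']
print(parts('foo/bar'))       # -> ['./', 'foo', 'bar']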
async def run(self, state: ConnectionState) -> None: self._print('%d +++| %s', bytes(socket_info.get())) bad_commands = 0 try: greeting = await self._exec(state.do_greeting()) except ResponseError as exc: resp = exc.get_response(b'*') resp.condition = ResponseBye.condition await self.write_response(resp) return else: await self.write_response(greeting) while True: try: cmd = await self.read_command() except (ConnectionError, EOFError): break except CancelledError: await self.send_error_disconnect() break except Exception: await self.send_error_disconnect() raise else: prev_cmd = current_command.set(cmd) try: if isinstance(cmd, AuthenticateCommand): creds = await self.authenticate(state, cmd.mech_name) response, _ = await self._exec( state.do_authenticate(cmd, creds)) elif isinstance(cmd, IdleCommand): response = await self.idle(state, cmd) else: response = await self._exec(state.do_command(cmd)) except ResponseError as exc: resp = exc.get_response(cmd.tag) await self.write_response(resp) if resp.is_terminal: break except AuthenticationError as exc: msg = bytes(str(exc), 'utf-8', 'surrogateescape') resp = ResponseBad(cmd.tag, msg) await self.write_response(resp) except TimeoutError: resp = ResponseNo(cmd.tag, b'Operation timed out.', ResponseCode.of(b'TIMEOUT')) await self.write_response(resp) except CancelledError: await self.send_error_disconnect() break except Exception: await self.send_error_disconnect() raise else: await self.write_response(response) if response.is_bad: bad_commands += 1 if self.bad_command_limit \ and bad_commands >= self.bad_command_limit: msg = b'Too many errors, disconnecting.' response.add_untagged(ResponseBye(msg)) else: bad_commands = 0 if response.is_terminal: break if isinstance(cmd, StartTLSCommand) and state.ssl_context \ and isinstance(response, ResponseOk): await self.start_tls(state.ssl_context) finally: await state.do_cleanup() current_command.reset(prev_cmd) self._print('%d ---| %s', b'<disconnected>')
Start the socket communication with the IMAP greeting, and then enter the command/response cycle. Args: state: Defines the interaction with the backend plugin.
juraj-google-style
async def send_heartbeat(self, short_name):
    if short_name not in self.services:
        raise ArgumentError("Unknown service name", short_name=short_name)

    self.services[short_name]['state'].heartbeat()
    await self._notify_update(short_name, 'heartbeat')
Post a heartbeat for a service. Args: short_name (string): The short name of the service to query
juraj-google-style
def IsInitializerList(clean_lines, linenum):
    for i in xrange(linenum, 1, -1):
        line = clean_lines.elided[i]
        if i == linenum:
            remove_function_body = Match(r'^(.*)\{\s*$', line)
            if remove_function_body:
                line = remove_function_body.group(1)

        if Search(r'\s:\s*\w+[({]', line):
            return True

        if Search(r'\}\s*,\s*$', line):
            return True

        if Search(r'[{};]\s*$', line):
            return False

    return False
Check if current line is inside constructor initializer list. Args: clean_lines: A CleansedLines instance containing the file. linenum: The number of the line to check. Returns: True if current line appears to be inside constructor initializer list, False otherwise.
juraj-google-style
def not_found(cls, errors=None):
    if cls.expose_status:
        cls.response.content_type = 'application/json'
        cls.response._status_line = '404 Not Found'
    return cls(404, None, errors).to_json
Shortcut API for HTTP 404 `Not found` response. Args: errors (list): Response key/value data. Returns: WSResponse Instance.
codesearchnet
def read_table(fstream): pos = fstream.tell() line = fstream.readline().strip() fragments = line.split(',') fragments = [x for x in fragments if (x is not None)] partition = dict() if (not (len(fragments) >= 4)): return None partition['table'] = fragments[0] partition['group'] = fragments[1] partition['set'] = fragments[2] partition['num_lines'] = fragments[3] struct = None if ((partition is not None) and (partition['table'] == 'TABLE')): num_lines = int(partition['num_lines'].strip()) struct = {} header = fetch_cols(fstream) struct.update({header[0]: header[1:]}) for _ in range(num_lines): cols = fetch_cols(fstream) struct.update({cols[0]: cols[1:]}) else: fstream.seek(pos) return struct
Read a likwid table info from the text stream. Args: fstream: Likwid's filestream. Returns (dict(str: str)): A dict containing likwid's table info as key/value pairs.
codesearchnet
def _init_trace_logging(self, app):
    enabled = not app.config.get(CONF_DISABLE_TRACE_LOGGING, False)
    if not enabled:
        return

    self._trace_log_handler = LoggingHandler(self._key, telemetry_channel=self._channel)
    app.logger.addHandler(self._trace_log_handler)
Sets up trace logging unless ``APPINSIGHTS_DISABLE_TRACE_LOGGING`` is set in the Flask config. Args: app (flask.Flask). the Flask application for which to initialize the extension.
codesearchnet
def from_json_file(cls, json_file: Union[str, os.PathLike]) -> PreTrainedFeatureExtractor:
    with open(json_file, encoding='utf-8') as reader:
        text = reader.read()
    feature_extractor_dict = json.loads(text)
    return cls(**feature_extractor_dict)
Instantiates a feature extractor of type [`~feature_extraction_utils.FeatureExtractionMixin`] from the path to a JSON file of parameters. Args: json_file (`str` or `os.PathLike`): Path to the JSON file containing the parameters. Returns: A feature extractor of type [`~feature_extraction_utils.FeatureExtractionMixin`]: The feature_extractor object instantiated from that JSON file.
github-repos
def unpack(self, buff, offset=0):
    header = UBInt16()
    header.unpack(buff[offset:offset + 2])
    self.tlv_type = header.value >> 9
    length = header.value & 511
    begin, end = offset + 2, offset + 2 + length
    sub_type = UBInt8()
    sub_type.unpack(buff[begin:begin + 1])
    self.sub_type = sub_type.value
    self.sub_value = BinaryData(buff[begin + 1:end])
Unpack a binary message into this object's attributes. Unpack the binary value *buff* and update this object attributes based on the results. Args: buff (bytes): Binary data package to be unpacked. offset (int): Where to begin unpacking. Raises: Exception: If there is a struct unpacking error.
juraj-google-style
def set_result(self, result):
    if self.done():
        raise RuntimeError("set_result can only be called once.")
    self._result = result
    self._trigger()
Set the result of the future to the provided result. Args: result (Any): The result
juraj-google-style
async def stop_tasks(self, address):
    tasks = self._tasks.get(address, [])

    for task in tasks:
        task.cancel()

    # Wait for the cancelled tasks to finish, as the docstring promises.
    await asyncio.gather(*tasks, return_exceptions=True)

    self._tasks[address] = []
Clear all tasks pertaining to a tile. This coroutine will synchronously cancel all running tasks that were attached to the given tile and wait for them to stop before returning. Args: address (int): The address of the tile we should stop.
codesearchnet
def _to_tensor_list(self, value) -> List['core_types.Symbol']: return nest.flatten(self._to_components(value), expand_composites=True)
Encodes `value` as a flat list of `tf.Tensor`. By default, this just flattens `self._to_components(value)` using `nest.flatten`. However, subclasses may override this to return a different tensor encoding for values. In particular, some subclasses of `BatchableTypeSpec` override this method to return a "boxed" encoding for values, which then can be batched or unbatched. See `BatchableTypeSpec` for more details. Args: value: A value with compatible this `TypeSpec`. (Caller is responsible for ensuring compatibility.) Returns: A list of `tf.Tensor`, compatible with `self._flat_tensor_specs`, which can be used to reconstruct `value`.
github-repos
def create_view(operations, operation): operations.execute(('CREATE VIEW %s AS %s' % (operation.target.name, operation.target.sqltext)))
Implements ``CREATE VIEW``. Args: operations: instance of ``alembic.operations.base.Operations`` operation: instance of :class:`.ReversibleOp` Returns: ``None``
codesearchnet
def data_group_type(self, group_data):
    if isinstance(group_data, dict):
        file_content = group_data.pop('fileContent', None)
        if file_content is not None:
            self._files[group_data.get('xid')] = {
                'fileContent': file_content,
                'type': group_data.get('type'),
            }
    else:
        GROUPS_STRINGS_WITH_FILE_CONTENTS = ['Document', 'Report']
        if group_data.data.get('type') in GROUPS_STRINGS_WITH_FILE_CONTENTS:
            self._files[group_data.data.get('xid')] = group_data.file_data
        group_data = group_data.data
    return group_data
Return dict representation of group data. Args: group_data (dict|obj): The group data dict or object. Returns: dict: The group data in dict format.
juraj-google-style
async def get_participants(self, force_update=False) -> list:
    if force_update or self.participants is None:
        res = await self.connection('GET', 'tournaments/{}/participants'.format(self._id))
        self._refresh_participants_from_json(res)
    return self.participants or []
get all participants |methcoro| Args: force_update (default=False): True to force an update to the Challonge API Returns: list[Participant]: Raises: APIException
juraj-google-style
def add_state(self, state_name, initial_state, batch_size=None): state_shape = initial_state.get_shape().as_list() full_shape = ([batch_size] + state_shape) if (not batch_size): shape_proto = self._as_shape_proto(([0] + state_shape)) batch_size = 1 else: shape_proto = self._as_shape_proto(([batch_size] + state_shape)) tiles = ([batch_size] + ([1] * len(initial_state.get_shape()))) feed_op = tf.placeholder_with_default(tf.tile(tf.expand_dims(initial_state, [0]), tiles), shape=full_shape, name=('%s_feed' % state_name)) s = {'feed_op': feed_op, 'feed_type': initial_state.dtype, 'feed_shape': shape_proto} self._states[state_name] = s
Adds a state to the state saver. Args: state_name: The name of this state. initial_state: The initial state vector. Only zeros are supported. batch_size: The batch_size or None for unknown.
codesearchnet
def apply_configs(config): default_enabled = config.get('default_component_enabled', False) delegate_keys = sorted(dr.DELEGATES, key=dr.get_name) for comp_cfg in config.get('configs', []): name = comp_cfg.get('name') for c in delegate_keys: delegate = dr.DELEGATES[c] cname = dr.get_name(c) if cname.startswith(name): dr.ENABLED[c] = comp_cfg.get('enabled', default_enabled) delegate.metadata.update(comp_cfg.get('metadata', {})) delegate.tags = set(comp_cfg.get('tags', delegate.tags)) for (k, v) in delegate.metadata.items(): if hasattr(c, k): log.debug('Setting %s.%s to %s', cname, k, v) setattr(c, k, v) if hasattr(c, 'timeout'): c.timeout = comp_cfg.get('timeout', c.timeout) if (cname == name): break
Configures components. They can be enabled or disabled, have timeouts set if applicable, and have metadata customized. Valid keys are name, enabled, metadata, and timeout. Args: config (list): a list of dictionaries with the following keys: default_component_enabled (bool): default value for whether components are enabled if not specifically declared in the config section packages (list): a list of packages to be loaded. These will be in addition to any packages previously loaded for the `-p` option configs: name, enabled, metadata, and timeout. All keys are optional except name. name is the prefix or exact name of any loaded component. Any component starting with name will have the associated configuration applied. enabled is whether the matching components will execute even if their dependencies are met. Defaults to True. timeout sets the class level timeout attribute of any component so long as the attribute already exists. metadata is any dictionary that you want to attach to the component. The dictionary can be retrieved by the component at runtime.
codesearchnet
def iter_intersecting(self, iterable, key=None, descending=False): return _ContainsVersionIterator(self, iterable, key, descending, mode=_ContainsVersionIterator.MODE_INTERSECTING)
Like `iter_intersect_test`, but returns intersections only. Returns: An iterator that returns items from `iterable` that intersect.
codesearchnet
def parse(input_string, prefix=''):
    tree = parser.parse(input_string)
    visitor = ChatlVisitor(prefix)
    visit_parse_tree(tree, visitor)
    return visitor.parsed
Parses the given DSL string and returns parsed results. Args: input_string (str): DSL string prefix (str): Optional prefix to add to every element name, useful to namespace things Returns: dict: Parsed content
codesearchnet
def is_field_remote(model, field_name):
    if not hasattr(model, '_meta'):
        return False
    model_field = get_model_field(model, field_name)
    return isinstance(model_field, (ManyToManyField, RelatedObject))
Check whether a given model field is a remote field. A remote field is the inverse of a one-to-many or a many-to-many relationship. Arguments: model: a Django model field_name: the name of a field Returns: True if `field_name` is a remote field, False otherwise.
codesearchnet
def image_to_tf_summary_value(image, tag):
    curr_image = np.asarray(image, dtype=np.uint8)
    height, width, n_channels = curr_image.shape
    if n_channels == 1:
        curr_image = np.reshape(curr_image, [height, width])

    s = io.BytesIO()
    matplotlib_pyplot().imsave(s, curr_image, format="png")
    img_sum = tf.Summary.Image(encoded_image_string=s.getvalue(),
                               height=height, width=width,
                               colorspace=n_channels)
    return tf.Summary.Value(tag=tag, image=img_sum)
Converts a NumPy image to a tf.Summary.Value object. Args: image: 3-D NumPy array. tag: name for tf.Summary.Value for display in tensorboard. Returns: image_summary: A tf.Summary.Value object.
juraj-google-style
def next_population(self, population, fitnesses):
    self._probability_vec = _adjust_probability_vec_best(
        population, fitnesses, self._probability_vec, self._adjust_rate)

    _mutate_probability_vec(self._probability_vec, self._mutation_chance,
                            self._mutation_adjust_rate)

    return [
        _sample(self._probability_vec) for _ in range(self._population_size)
    ]
Make a new population after each optimization iteration. Args: population: The current population of solutions. fitnesses: The fitness associated with each solution in the population Returns: list; a list of solutions.
juraj-google-style
def asymmetric_depolarize(p_x: float, p_y: float, p_z: float) -> AsymmetricDepolarizingChannel: return AsymmetricDepolarizingChannel(p_x, p_y, p_z)
r"""Returns a AsymmetricDepolarizingChannel with given parameter. This channel evolves a density matrix via $$ \rho \rightarrow (1 - p_x - p_y - p_z) \rho + p_x X \rho X + p_y Y \rho Y + p_z Z \rho Z $$ Args: p_x: The probability that a Pauli X and no other gate occurs. p_y: The probability that a Pauli Y and no other gate occurs. p_z: The probability that a Pauli Z and no other gate occurs. Raises: ValueError: if the args or the sum of the args are not probabilities.
codesearchnet
def CreateStorageWriter(cls, storage_format, session, path):
    if storage_format == definitions.STORAGE_FORMAT_SQLITE:
        return sqlite_writer.SQLiteStorageFileWriter(session, path)

    return None
Creates a storage writer. Args: session (Session): session the storage changes are part of. path (str): path to the storage file. storage_format (str): storage format. Returns: StorageWriter: a storage writer or None if the storage file cannot be opened or the storage format is not supported.
codesearchnet
def remap_variables(fn):
    def custom_getter(getter, *args, **kwargs):
        v = getter(*args, **kwargs)
        return fn(v)
    return custom_getter_scope(custom_getter)
Use fn to map the output of any variable getter. Args: fn (tf.Variable -> tf.Tensor) Returns: The current variable scope with a custom_getter that maps all the variables by fn. Example: .. code-block:: python with varreplace.remap_variables(lambda var: quantize(var)): x = FullyConnected('fc', x, 1000) # fc/{W,b} will be quantized
juraj-google-style
def get_provider_uri(self, provider_display_name):
    providers = self._provider_client.get_by('displayName', provider_display_name)
    return providers[0]['uri'] if providers else None
Gets uri for a specific provider. Args: provider_display_name: Display name of the provider. Returns: uri
juraj-google-style
def get_product_order_book(self, product_id, level=1):
    params = {'level': level}
    return self._send_message('get', '/products/{}/book'.format(product_id), params=params)
Get a list of open orders for a product. The amount of detail shown can be customized with the `level` parameter: * 1: Only the best bid and ask * 2: Top 50 bids and asks (aggregated) * 3: Full order book (non aggregated) Level 1 and Level 2 are recommended for polling. For the most up-to-date data, consider using the websocket stream. **Caution**: Level 3 is only recommended for users wishing to maintain a full real-time order book using the websocket stream. Abuse of Level 3 via polling will cause your access to be limited or blocked. Args: product_id (str): Product level (Optional[int]): Order book level (1, 2, or 3). Default is 1. Returns: dict: Order book. Example for level 1:: { "sequence": "3", "bids": [ [ price, size, num-orders ], ], "asks": [ [ price, size, num-orders ], ] }
codesearchnet
def metadata_path(self, m_path):
    if not m_path:
        self.metadata_dir = None
        self.metadata_file = None
    else:
        if not op.exists(m_path):
            raise OSError('{}: file does not exist!'.format(m_path))
        if not op.dirname(m_path):
            self.metadata_dir = '.'
        else:
            self.metadata_dir = op.dirname(m_path)
        self.metadata_file = op.basename(m_path)
Provide pointers to the paths of the metadata file Args: m_path: Path to metadata file
juraj-google-style
def get_mapreduce_yaml(parse=parse_mapreduce_yaml):
    mr_yaml_path = find_mapreduce_yaml()
    if not mr_yaml_path:
        raise errors.MissingYamlError()
    mr_yaml_file = open(mr_yaml_path)
    try:
        return parse(mr_yaml_file.read())
    finally:
        mr_yaml_file.close()
Locates mapreduce.yaml, loads and parses its info. Args: parse: Used for testing. Returns: MapReduceYaml object. Raises: errors.BadYamlError: when contents is not a valid mapreduce.yaml file or the file is missing.
juraj-google-style
def set_s3_bucket(self, region, name, bucketName):
    ct = self.session.client('cloudtrail', region_name=region)
    ct.update_trail(Name=name, S3BucketName=bucketName)

    auditlog(event='cloudtrail.set_s3_bucket',
             actor=self.ns,
             data={'account': self.account.account_name, 'region': region})

    self.log.info('Updated S3BucketName to {} for {} in {}/{}'.format(
        bucketName, name, self.account.account_name, region))
Sets the S3 bucket location for logfile delivery Args: region (`str`): Name of the AWS region name (`str`): Name of the CloudTrail Trail bucketName (`str`): Name of the S3 bucket to deliver log files to Returns: `None`
codesearchnet
def parent(self):
    if self._parent is not None:
        return self._parent

    try:
        package = self.repository.get_parent_package(self.resource)
        self._parent = Package(package, context=self.context)
    except AttributeError as e:
        reraise(e, ValueError)

    return self._parent
Get the parent package. Returns: `Package`.
codesearchnet
def get_image_embeddings(self, pixel_values, output_attentions: Optional[bool]=None,
                         output_hidden_states: Optional[bool]=None,
                         return_dict: Optional[bool]=None):
    vision_output = self.vision_encoder(pixel_values=pixel_values,
                                        output_attentions=output_attentions,
                                        output_hidden_states=output_hidden_states,
                                        return_dict=return_dict)
    image_embeddings = vision_output[0]
    intermediate_embeddings = vision_output[1]
    return (image_embeddings, intermediate_embeddings)
Returns the image embeddings by passing the pixel values through the vision encoder. Args: pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): Input pixel values output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. output_hidden_states (`bool`, *optional*): Whether or not to return the hidden states of all layers. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
github-repos
def create_tasks(self, wfk_file, scr_input): assert (len(self) == 0) wfk_file = self.wfk_file = os.path.abspath(wfk_file) shell_manager = self.manager.to_shell_manager(mpi_procs=1) w = Work(workdir=self.tmpdir.path_join('_qptdm_run'), manager=shell_manager) fake_input = scr_input.deepcopy() fake_task = w.register(fake_input) w.allocate() w.build() fake_task.inlink_file(wfk_file) fake_task.set_vars({'nqptdm': (- 1)}) fake_task.start_and_wait() with NetcdfReader(fake_task.outdir.has_abiext('qptdms.nc')) as reader: qpoints = reader.read_value('reduced_coordinates_of_kpoints') for qpoint in qpoints: qptdm_input = scr_input.deepcopy() qptdm_input.set_vars(nqptdm=1, qptdm=qpoint) new_task = self.register_scr_task(qptdm_input, manager=self.manager) if (self.flow.gc is not None): new_task.set_gc(self.flow.gc) self.allocate()
Create the SCR tasks and register them in self. Args: wfk_file: Path to the ABINIT WFK file to use for the computation of the screening. scr_input: Input for the screening calculation.
codesearchnet
def install(self, ref, table_name=None, index_columns=None, logger=None):
    try:
        obj_number = ObjectNumber.parse(ref)
        if isinstance(obj_number, TableNumber):
            table = self._library.table(ref)
            connection = self._backend._get_connection()
            return self._backend.install_table(connection, table, logger=logger)
        else:
            raise NotObjectNumberError
    except NotObjectNumberError:
        partition = self._library.partition(ref)
        connection = self._backend._get_connection()
        return self._backend.install(connection, partition, table_name=table_name,
                                     index_columns=index_columns, logger=logger)
Finds partition by reference and installs it to warehouse db. Args: ref (str): id, vid (versioned id), name or vname (versioned name) of the partition.
codesearchnet
def clinsig_query(self, query, mongo_query): LOG.debug('clinsig is a query parameter') trusted_revision_level = ['mult', 'single', 'exp', 'guideline'] rank = [] str_rank = [] clnsig_query = {} for item in query['clinsig']: rank.append(int(item)) rank.append(CLINSIG_MAP[int(item)]) str_rank.append(CLINSIG_MAP[int(item)]) if query.get('clinsig_confident_always_returned') == True: LOG.debug("add CLINSIG filter with trusted_revision_level") clnsig_query = { "clnsig": { '$elemMatch': { '$or' : [ { '$and' : [ {'value' : { '$in': rank }}, {'revstat': { '$in': trusted_revision_level }} ] }, { '$and': [ {'value' : re.compile('|'.join(str_rank))}, {'revstat' : re.compile('|'.join(trusted_revision_level))} ] } ] } } } else: LOG.debug("add CLINSIG filter for rank: %s" % ', '.join(str(query['clinsig']))) clnsig_query = { "clnsig": { '$elemMatch': { '$or' : [ { 'value' : { '$in': rank }}, { 'value' : re.compile('|'.join(str_rank)) } ] } } } return clnsig_query
Add clinsig filter values to the mongo query object Args: query(dict): a dictionary of query filters specified by the users mongo_query(dict): the query that is going to be submitted to the database Returns: clinsig_query(dict): a dictionary with clinsig key-values
juraj-google-style
def parse(self, data):
    self.binding_var_count = 0
    self.segment_count = 0

    segments = self.parser.parse(data)

    path_wildcard = False
    for segment in segments:
        if segment.kind == _TERMINAL and segment.literal == '**':
            if path_wildcard:
                raise ValidationException(
                    'validation error: path template cannot contain more than one path wildcard')
            path_wildcard = True

    return segments
Returns a list of path template segments parsed from data. Args: data: A path template string. Returns: A list of _Segment.
codesearchnet
def _trychar(char, fallback, asciimode=None):
    if asciimode is True:
        return fallback
    if hasattr(sys.stdout, 'encoding') and sys.stdout.encoding:
        try:
            char.encode(sys.stdout.encoding)
        except Exception:
            pass
        else:
            return char
    return fallback
Logic from IPython timeit to handle terminals that can't show mu Args: char (str): character, typically unicode, to try to use fallback (str): ascii character to use if stdout cannot encode char asciimode (bool): if True, always use fallback Example: >>> char = _trychar('µs', 'us') >>> print('char = {}'.format(char)) >>> assert _trychar('µs', 'us', asciimode=True) == 'us'
codesearchnet
def serialize_to_string(self, name, datas):
    value = datas.get('value', None)
    if value is None:
        msg = ("String reference '{}' lacks of required 'value' variable "
               "or is empty")
        raise SerializerError(msg.format(name))
    return value
Serialize given datas to a string. Simply return the value from required variable``value``. Arguments: name (string): Name only used inside possible exception message. datas (dict): Datas to serialize. Returns: string: Value.
juraj-google-style
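A small usage sketch; `serializer` stands in for an instance of the class defining this method.

# Hypothetical call on a serializer instance.
value = serializer.serialize_to_string('site_name', {'value': 'My site'})
# value == 'My site'; a missing or empty 'value' key raises SerializerError.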
def eager_run(main=None, argv=None) -> NoReturn:
    enable_eager_execution()
    app.run(main, argv)
Runs the program with an optional main function and argv list.

  The program will run with eager execution enabled.

  Example:
  ```python
  import tensorflow as tf
  # Import subject to future changes:
  import tensorflow.contrib.eager as tfe

  def main(_):
    u = tf.constant(6.0)
    v = tf.constant(7.0)
    print(u * v)

  if __name__ == "__main__":
    tfe.run()
  ```

  Args:
    main: the main function to run.
    argv: the arguments to pass to it.
github-repos
def boxify(message, border_color=None): lines = message.split("\n") max_width = max(_visual_width(line) for line in lines) padding_horizontal = 5 padding_vertical = 1 box_size_horizontal = max_width + (padding_horizontal * 2) chars = {"corner": "+", "horizontal": "-", "vertical": "|", "empty": " "} margin = "{corner}{line}{corner}\n".format( corner=chars["corner"], line=chars["horizontal"] * box_size_horizontal ) padding_lines = [ "{border}{space}{border}\n".format( border=colorize(chars["vertical"], color=border_color), space=chars["empty"] * box_size_horizontal, ) * padding_vertical ] content_lines = [ "{border}{space}{content}{space}{border}\n".format( border=colorize(chars["vertical"], color=border_color), space=chars["empty"] * padding_horizontal, content=_visual_center(line, max_width), ) for line in lines ] box_str = "{margin}{padding}{content}{padding}{margin}".format( margin=colorize(margin, color=border_color), padding="".join(padding_lines), content="".join(content_lines), ) return box_str
Put a message inside a box. Args: message (unicode): message to decorate. border_color (unicode): name of the color to outline the box with.
juraj-google-style
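A quick usage sketch, assuming `boxify` is importable from the module shown above:

# Prints the message framed in a '+'/'-'/'|' box; border_color is passed
# through to colorize() and may be omitted.
print(boxify("DVC is up to date", border_color="green"))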
def _construct_operation_id(self, service_name, protorpc_method_name): method_name_camel = util.snake_case_to_headless_camel_case( protorpc_method_name) return '{0}_{1}'.format(service_name, method_name_camel)
Return an operation id for a service method. Args: service_name: The name of the service. protorpc_method_name: The ProtoRPC method name. Returns: A string representing the operation id.
juraj-google-style
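A sketch of the expected behaviour, assuming `snake_case_to_headless_camel_case` turns 'list_items' into 'listItems'; `generator` stands in for an instance of the surrounding class.

# Hypothetical call on the surrounding generator object.
operation_id = generator._construct_operation_id('MyService', 'list_items')
# operation_id == 'MyService_listItems'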
def add_object_to_scope(self, obj): if isinstance(obj, Computer): self.add_object_to_path(obj, "scope/computers") elif isinstance(obj, ComputerGroup): self.add_object_to_path(obj, "scope/computer_groups") elif isinstance(obj, Building): self.add_object_to_path(obj, "scope/buildings") elif isinstance(obj, Department): self.add_object_to_path(obj, "scope/departments") else: raise TypeError
Add an object to the appropriate scope block. Args: obj: JSSObject to add to scope. Accepted subclasses are: Computer ComputerGroup Building Department Raises: TypeError if invalid obj type is provided.
juraj-google-style
def CreateSitelinkFeedItem(feed_items, feed_item_id): site_link_from_feed = feed_items[feed_item_id] site_link_feed_item = { 'sitelinkText': site_link_from_feed['text'], 'sitelinkLine2': site_link_from_feed['line2'], 'sitelinkLine3': site_link_from_feed['line3'], } if 'finalUrls' in site_link_from_feed and site_link_from_feed['finalUrls']: site_link_feed_item['sitelinkFinalUrls'] = { 'urls': site_link_from_feed['finalUrls'] } if 'finalMobileUrls' in site_link_from_feed: site_link_feed_item['sitelinkFinalMobileUrls'] = { 'urls': site_link_from_feed['finalMobileUrls'] } site_link_feed_item['sitelinkTrackingUrlTemplate'] = ( site_link_from_feed['trackingUrlTemplate']) else: site_link_feed_item['sitelinkUrl'] = site_link_from_feed['url'] return site_link_feed_item
Creates a Sitelink Feed Item. Args: feed_items: a list of all Feed Items. feed_item_id: the Id of a specific Feed Item for which a Sitelink Feed Item should be created. Returns: The new Sitelink Feed Item.
juraj-google-style
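A minimal sketch of the expected feed_items shape; the field values are illustrative only.

# Hypothetical feed data keyed by feed item id.
feed_items = {
    1001: {
        'text': 'Store Hours',
        'line2': 'Open 9-5',
        'line3': 'Mon-Fri',
        'finalUrls': ['http://example.com/hours'],
        'trackingUrlTemplate': '{lpurl}',
    }
}
sitelink = CreateSitelinkFeedItem(feed_items, 1001)
# sitelink['sitelinkFinalUrls'] == {'urls': ['http://example.com/hours']}
# and sitelink['sitelinkTrackingUrlTemplate'] == '{lpurl}'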
def _parse_address(self, val): ret = {'type': None, 'value': None} try: ret['type'] = val[1]['type'] except (KeyError, ValueError, TypeError): pass try: ret['value'] = val[1]['label'] except (KeyError, ValueError, TypeError): ret['value'] = '\n'.join(val[3]).strip() try: self.vars['address'].append(ret) except AttributeError: self.vars['address'] = [] self.vars['address'].append(ret)
The function for parsing the vcard address. Args: val (:obj:`list`): The value to parse.
codesearchnet
def ts_to_dt(jwt_dict):
    d = jwt_dict.copy()
    for (k, v) in [v[:2] for v in CLAIM_LIST if v[2]]:
        if k in jwt_dict:
            d[k] = d1_common.date_time.dt_from_ts(jwt_dict[k])
    return d
Convert timestamps in JWT to datetime objects. Args: jwt_dict: dict JWT with some keys containing timestamps. Returns: dict: Copy of input dict where timestamps have been replaced with datetime.datetime() objects.
codesearchnet
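A usage sketch; which claims are actually converted depends on the CLAIM_LIST constant defined elsewhere in the module.

# 'exp' and 'iat' are assumed to be flagged as timestamp claims in CLAIM_LIST.
jwt_claims = {'exp': 1700000000, 'iat': 1690000000, 'sub': 'user-1'}
converted = ts_to_dt(jwt_claims)
# converted['exp'] and converted['iat'] become datetime.datetime objects,
# while non-timestamp claims such as 'sub' are returned unchanged.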
def add_parameter(self, name, min_val, max_val): self.__parameters.append(Parameter(name, min_val, max_val))
Adds a parameter to the Population.

        Args:
            name (str): name of the parameter
            min_val (int or float): minimum value for the parameter
            max_val (int or float): maximum value for the parameter
codesearchnet
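A short usage sketch; `Population` is assumed to be the class that owns this method, and the parameter names and bounds are illustrative.

# Hypothetical Population instance collecting tunable parameters.
population = Population()
population.add_parameter('mutation_rate', 0.0, 1.0)
population.add_parameter('population_size', 10, 500)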
def extract_annotation(data):
    xlabel = None
    xvalues = None
    ylabel = None
    yvalues = None
    if hasattr(data, 'minor_axis'):
        xvalues = data.minor_axis
        if hasattr(data.minor_axis, 'name'):
            xlabel = data.minor_axis.name
    if hasattr(data, 'columns'):
        xvalues = data.columns
        if hasattr(data.columns, 'name'):
            xlabel = data.columns.name
    if hasattr(data, 'major_axis'):
        yvalues = data.major_axis
        if hasattr(data.major_axis, 'name'):
            ylabel = data.major_axis.name
    if hasattr(data, 'index'):
        yvalues = data.index
        if hasattr(data.index, 'name'):
            ylabel = data.index.name
    return (xlabel, xvalues, ylabel, yvalues)
Extract names and values of rows and columns.

    Parameters:
        data : DataFrame | Panel

    Returns:
        col_name, col_values, row_name, row_values
codesearchnet
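A runnable sketch with a labelled pandas DataFrame:

import pandas as pd

df = pd.DataFrame(
    [[1, 2], [3, 4]],
    index=pd.Index(['r1', 'r2'], name='sample'),
    columns=pd.Index(['c1', 'c2'], name='gene'),
)
xlabel, xvalues, ylabel, yvalues = extract_annotation(df)
# xlabel == 'gene', ylabel == 'sample'; xvalues and yvalues are the column
# and row indexes respectively.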
def parse_config(file_path):
    if not os.path.isfile(file_path):
        return {}
    parser = ConfigParser()
    parser.read(file_path)
    for s in parser._sections:
        for v in six.iterkeys(parser._sections[s]):
            # The split delimiter was lost in extraction; '#' is assumed here
            # to strip trailing inline comments from each value.
            parser._sections[s][v] = parser._sections[s][v].split('#')[0]
    return parser._sections
Convert the CISM configuration file to a python dictionary Args: file_path: absolute path to the configuration file Returns: A dictionary representation of the given file
codesearchnet
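A usage sketch; the file path and section name are hypothetical.

# Returns {} if the file does not exist, otherwise a dict of section dicts.
config = parse_config('/path/to/cism.config')
grid_section = config.get('grid', {})  # hypothetical section name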
class Poisson(MeanMetricWrapper): def __init__(self, name='poisson', dtype=None): super(Poisson, self).__init__(poisson, name, dtype=dtype)
Computes the Poisson metric between `y_true` and `y_pred`. `metric = y_pred - y_true * log(y_pred)` Args: name: (Optional) string name of the metric instance. dtype: (Optional) data type of the metric result. Standalone usage: >>> m = tf.keras.metrics.Poisson() >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]]) >>> m.result().numpy() 0.49999997 >>> m.reset_state() >>> m.update_state([[0, 1], [0, 0]], [[1, 1], [0, 0]], ... sample_weight=[1, 0]) >>> m.result().numpy() 0.99999994 Usage with `compile()` API: ```python model.compile(optimizer='sgd', loss='mse', metrics=[tf.keras.metrics.Poisson()]) ```
github-repos
def _build_element_shape(shape): if isinstance(shape, tensor_lib.Tensor): return shape if isinstance(shape, tensor_shape.TensorShape): shape = shape.as_list() if shape else None if shape is None: return -1 if isinstance(shape, (np.ndarray, np.generic)) or not shape: return ops.convert_to_tensor(shape, dtype=dtypes.int32) def convert(val): if val is None: return -1 if isinstance(val, tensor_lib.Tensor): return val if isinstance(val, tensor_shape.Dimension): return val.value if val.value is not None else -1 return val return [convert(d) for d in shape]
Converts shape to a format understood by list_ops for element_shape.

  If `shape` is already a `Tensor` it is returned as-is. We do not perform a
  type check here.

  If shape is None or a TensorShape with unknown rank, -1 is returned.

  If shape is a scalar, an int32 tensor with empty list is returned. Note we
  do directly return an empty list since ops.convert_to_tensor would convert it
  to a float32 which is not a valid type for element_shape.

  If shape is a sequence of dims, None's in the list are replaced with -1. We
  do not check the dtype of the other dims.

  Args:
    shape: Could be None, Tensor, TensorShape or a list of dims (each dim could
      be a None, scalar or Tensor).

  Returns:
    A None-free shape that can be converted to a tensor.
github-repos
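A few illustrative calls for this private helper; the behaviour shown follows directly from the code above.

import tensorflow as tf

_build_element_shape(None)                       # -> -1 (unknown shape)
_build_element_shape(tf.TensorShape(None))       # -> -1 (unknown rank)
_build_element_shape(tf.TensorShape([None, 3]))  # -> [-1, 3]
_build_element_shape([])                         # -> int32 tensor holding []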
def update_headers(self, response):
    if ('expires' in response.headers) and ('cache-control' in response.headers):
        self.msg = self.server_cache_headers
        return response.headers
    else:
        self.msg = self.default_cache_vars
        date = parsedate(response.headers['date'])
        expires = datetime(*date[:6]) + timedelta(0, self.expire_after)
        response.headers.update({
            'expires': formatdate(calendar.timegm(expires.timetuple())),
            'cache-control': 'public'})
        return response.headers
Returns the updated caching headers.

        Args:
            response (HttpResponse): The response from the remote service

        Returns:
            HttpResponse.headers: The updated caching headers.
codesearchnet
def event(self, cuuid, host, euuid, event_data, timestamp, priority):
    response = None
    if host in self.encrypted_hosts:
        logger.debug('Encrypted!')
        client_key = self.registry[cuuid]['encryption']
    else:
        logger.debug('Not encrypted :<')
        client_key = None
    port = host[1]
    host = host[0]
    if not self.is_registered(cuuid, host):
        logger.warning('<%s> Sending BYE EVENT: Client not registered.' % cuuid)
        response = serialize_data(
            {'method': 'BYE EVENT', 'data': 'Not registered'},
            self.compression, self.encryption, client_key)
        return response
    if euuid in self.event_uuids:
        logger.warning('<%s> Event ID is already being processed: %s' % (cuuid, euuid))
        return response
    self.event_uuids[euuid] = 0
    logger.debug('<%s> <euuid:%s> Currently processing events: %s' % (
        cuuid, euuid, str(self.event_uuids)))
    logger.debug('<%s> <euuid:%s> New event being processed' % (cuuid, euuid))
    logger.debug('<%s> <euuid:%s> Event Data: %s' % (cuuid, euuid, pformat(event_data)))
    if self.middleware.event_legal(cuuid, euuid, event_data):
        logger.debug('<%s> <euuid:%s> Event LEGAL. Sending judgement to client.' % (cuuid, euuid))
        response = serialize_data(
            {'method': 'LEGAL', 'euuid': euuid, 'priority': priority},
            self.compression, self.encryption, client_key)
        thread = threading.Thread(
            target=self.middleware.event_execute, args=(cuuid, euuid, event_data))
        thread.start()
    else:
        logger.debug('<%s> <euuid:%s> Event ILLEGAL. Sending judgement to client.' % (cuuid, euuid))
        response = serialize_data(
            {'method': 'ILLEGAL', 'euuid': euuid, 'priority': priority},
            self.compression, self.encryption, client_key)
    self.listener.call_later(
        self.timeout, self.retransmit,
        {'euuid': euuid, 'response': response, 'cuuid': cuuid})
    return response
This function will process event packets and send them to legal checks. Args: cuuid (string): The client uuid that the event came from. host (tuple): The (address, port) tuple of the client. euuid (string): The event uuid of the specific event. event_data (any): The event data that we will be sending to the middleware to be judged and executed. timestamp (string): The client provided timestamp of when the event was created. priority (string): The priority of the event. This is normally set to either "normal" or "high". If an event was sent with a high priority, then the client will not wait for a response from the server before executing the event locally. Returns: A LEGAL/ILLEGAL response to be sent to the client.
codesearchnet
def post_process(self, outputs, target_sizes): logger.warning_once('`post_process` is deprecated and will be removed in v5 of Transformers, please use `post_process_object_detection` instead, with `threshold=0.` for equivalent results.') out_logits, out_bbox = (outputs.logits, outputs.pred_boxes) if len(out_logits) != len(target_sizes): raise ValueError('Make sure that you pass in as many target sizes as the batch dimension of the logits') if target_sizes.shape[1] != 2: raise ValueError('Each element of target_sizes must contain the size (h, w) of each image of the batch') prob = out_logits.sigmoid() topk_values, topk_indexes = torch.topk(prob.view(out_logits.shape[0], -1), 100, dim=1) scores = topk_values topk_boxes = torch.div(topk_indexes, out_logits.shape[2], rounding_mode='floor') labels = topk_indexes % out_logits.shape[2] boxes = center_to_corners_format(out_bbox) boxes = torch.gather(boxes, 1, topk_boxes.unsqueeze(-1).repeat(1, 1, 4)) img_h, img_w = target_sizes.unbind(1) scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1) boxes = boxes * scale_fct[:, None, :] results = [{'scores': s, 'labels': l, 'boxes': b} for s, l, b in zip(scores, labels, boxes)] return results
Converts the raw output of [`DeformableDetrForObjectDetection`] into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch. Args: outputs ([`DeformableDetrObjectDetectionOutput`]): Raw outputs of the model. target_sizes (`torch.Tensor` of shape `(batch_size, 2)`): Tensor containing the size (height, width) of each image of the batch. For evaluation, this must be the original image size (before any data augmentation). For visualization, this should be the image size after data augment, but before padding. Returns: `List[Dict]`: A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model.
github-repos
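Since `post_process` is deprecated, here is a sketch of the recommended replacement call; the checkpoint name and sample image follow the standard Hugging Face docs and are assumptions rather than part of this method.

import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, DeformableDetrForObjectDetection

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr")

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# (height, width) of the original image, as required by the post-processor.
target_sizes = torch.tensor([image.size[::-1]])
results = processor.post_process_object_detection(
    outputs, threshold=0.5, target_sizes=target_sizes)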
def expired(self, cfgstr=None, product=None):
    products = self._rectify_products(product)
    certificate = self._get_certificate(cfgstr=cfgstr)
    if certificate is None:
        is_expired = True
    elif products is None:
        is_expired = False
    elif not all(map(os.path.exists, products)):
        is_expired = True
    else:
        product_file_hash = self._product_file_hash(products)
        certificate_hash = certificate.get('product_file_hash', None)
        is_expired = product_file_hash != certificate_hash
    return is_expired
Check to see if a previously existing stamp is still valid and if the expected result of that computation still exists. Args: cfgstr (str, optional): override the default cfgstr if specified product (PathLike or Sequence[PathLike], optional): override the default product if specified
juraj-google-style
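A hedged usage sketch; the surrounding stamp class, its renew-style companion method, and the rebuild helper are all assumptions.

# `stamp` is assumed to be an instance of the class defining expired(), e.g. a
# CacheStamp-style object; `renew` is a hypothetical method that records a
# fresh certificate once the products have been (re)built.
products = ['out.csv']
if stamp.expired(product=products):
    rebuild_products(products)  # hypothetical user function
    stamp.renew()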
def on_enter(__msg: Optional[Union[Callable, str]] = None) -> Callable:
    def decorator(__func):
        @wraps(__func)
        def wrapper(*args, **kwargs):
            if __msg:
                print(__msg)
            else:
                print('Entering {!r}({!r})'.format(__func.__name__, __func))
            return __func(*args, **kwargs)
        return wrapper
    if callable(__msg):
        return on_enter()(__msg)
    return decorator
Decorator to display a message when entering a function. Args: __msg: Message to display Returns: Wrapped function
juraj-google-style
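A runnable usage sketch covering both decorator forms supported above:

@on_enter('starting work')
def work():
    return 42

@on_enter  # bare form: __msg is the function itself, handled by the callable check
def other():
    return 'done'

work()   # prints 'starting work'
other()  # prints "Entering 'other'(<function other at ...>)"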
def checkpoints(self): return list(self._maybe_delete.keys())
A list of managed checkpoints. Note that checkpoints saved due to `keep_checkpoint_every_n_hours` will not show up in this list (to avoid ever-growing filename lists). Returns: A list of filenames, sorted from oldest to newest.
github-repos
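A short sketch using the public CheckpointManager API this property belongs to; the directory is illustrative.

import tensorflow as tf

ckpt = tf.train.Checkpoint(step=tf.Variable(0))
manager = tf.train.CheckpointManager(ckpt, directory='/tmp/ckpts', max_to_keep=3)
manager.save()
print(manager.checkpoints)  # e.g. ['/tmp/ckpts/ckpt-1'], sorted oldest to newest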