code (string, 20 – 4.93k chars) | docstring (string, 33 – 1.27k chars) | source (3 classes)
def setup(self, *args, **kwargs): pass
Called to prepare an instance for combining. This method can be useful if there is some state that needs to be loaded before executing any of the other methods. The resources can then be disposed of in ``CombineFn.teardown``. If you are using Dataflow, you need to enable Dataflow Runner V2 before using this feature. Args: *args: Additional arguments and side inputs. **kwargs: Additional arguments and side inputs.
github-repos
def put(self, item):
    with self._not_full:
        if self._closed:
            raise QueueClosedError()
        if self._maxsize > 0:
            while len(self._queue) == self._maxsize:
                self._not_full.wait()
                if self._closed:
                    raise QueueClosedError()
        self._queue.append(item)
        self._not_empty.notify()
Put an item into the queue. If the queue is closed, fails immediately. If the queue is full, blocks until space is available or until the queue is closed by a call to close(), at which point this call fails. Args: item: an item to add to the queue Raises: QueueClosedError: if insertion failed because the queue is closed
github-repos
def get_path_str(self, sep=os.path.sep, type_str=None): return sep.join(list(reversed([v.label_str for v in self.parent_gen if (type_str in (None, v.type_str))])))
Get path from root to this node. Args: sep: str One or more characters to insert between each element in the path. Defaults to "/" on Unix and "\" on Windows. type_str: SUBJECT_NODE_TAG, TYPE_NODE_TAG or None. If set, only include information from nodes of that type. Returns: str: String describing the path from the root to this node.
codesearchnet
def _format_param_val(self, param_val):
    if isinstance(param_val, list):
        return ' '.join(str(x) for x in param_val)
    else:
        return str(param_val)
Internal method to format values in the packmol parameter dictionaries Args: param_val: Some object to turn into String Returns: string representation of the object
juraj-google-style
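A minimal usage sketch for _format_param_val above; the instance name pm and the values are assumed for illustration, not taken from the source:
# Assuming `pm` is an instance of the class that defines _format_param_val.
pm._format_param_val([1.0, 2.0, 3.0])  # -> '1.0 2.0 3.0'
pm._format_param_val(42)               # -> '42'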
def iterable_source(iterable, target):
    it = iter(iterable)
    for item in it:
        try:
            target.send(item)
        except StopIteration:
            return prepend(item, it)
    return empty_iter()
Convert an iterable into a stream of events. Args: iterable: A series of items which will be sent to the target one by one. target: The target coroutine or sink. Returns: An iterator over any remaining items.
codesearchnet
def choose_1_from_each(lists):
    if len(lists) == 0:
        yield []
    else:
        for el in lists[0]:
            for next_list in choose_1_from_each(lists[1:]):
                yield [el] + next_list
Takes a list of lists and yields lists containing one item from each input list. The number of yielded lists is the product of the input list lengths: 18 for a list containing lists of lengths 3, 2 and 3. The length of each yielded sub-list equals the number of lists passed in. Args: lists (list of lists): A list of lists Returns: list of lists: combinations built by taking one item from each list in lists.
codesearchnet
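An illustrative call to choose_1_from_each above; the input lists are assumed, not from the source:
combos = list(choose_1_from_each([[1, 2], ['a', 'b']]))
# -> [[1, 'a'], [1, 'b'], [2, 'a'], [2, 'b']]  (2 * 2 = 4 combinations, each of length 2)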
def garbage_collect_exports(export_dir_base, exports_to_keep):
    if exports_to_keep is None:
        return
    version_paths = []
    for filename in tf_v1.gfile.ListDirectory(export_dir_base):
        path = os.path.join(
            tf.compat.as_bytes(export_dir_base),
            tf.compat.as_bytes(filename))
        if len(filename) == 10 and filename.isdigit():
            version_paths.append((int(filename), path))
    oldest_version_path = sorted(version_paths)[:-exports_to_keep]
    for _, path in oldest_version_path:
        try:
            tf_v1.gfile.DeleteRecursively(path)
        except tf.errors.NotFoundError as e:
            logging.warn("Can not delete %s recursively: %s", path, e)
Deletes older exports, retaining only a given number of the most recent. Export subdirectories are assumed to be named with monotonically increasing integers; the most recent are taken to be those with the largest values. Args: export_dir_base: the base directory under which each export is in a versioned subdirectory. exports_to_keep: Number of exports to keep. Older exports will be garbage collected. Set to None to disable.
juraj-google-style
def load(filename):
    if not os.path.exists(filename):
        LOG.error("load object - File '%s' does not exist.", filename)
        return None
    obj = None
    with open(filename, 'rb') as obj_file:
        obj = dill.load(obj_file)
    return obj
Load a pickled obj from the filesystem. You better know what you expect from the given pickle, because we don't check it. Args: filename (str): The filename we load the object from. Returns: The object we were able to unpickle, else None.
juraj-google-style
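A small usage sketch for load above, assuming the module-level dill import and LOG logger the snippet relies on are available; the file name and payload are illustrative:
import dill

with open("model.pkl", "wb") as fh:
    dill.dump({"weights": [1, 2, 3]}, fh)

obj = load("model.pkl")        # -> {'weights': [1, 2, 3]}
missing = load("missing.pkl")  # -> None, and an error is logged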
def forward(self, input_ids: torch.Tensor, cache_position: torch.Tensor) -> torch.Tensor:
    batch_size = input_ids.shape[0]
    position_ids = cache_position.unsqueeze(0).expand(batch_size, -1)
    outputs = self.model(
        input_ids=input_ids,
        attention_mask=None,
        position_ids=position_ids,
        past_key_values=self.cache,
        use_cache=True,
        cache_position=cache_position,
    )
    return outputs.logits
Forward pass of the module, which is compatible with the ExecuTorch llm runner. Args: input_ids (`torch.Tensor`): Tensor representing current input token id to the module. cache_position (`torch.Tensor`): Tensor representing current input position in the cache. Returns: torch.Tensor: Logits output from the model.
github-repos
def lookup_replicas(self, task_id: int, logical_core: int) -> List[int]:
    try:
        return self._task_and_cores_to_replicas[task_id][logical_core]
    except KeyError:
        raise ValueError(
            'Can not find any replica in task: {} contains logical_core: {} '.format(
                task_id, logical_core))
Lookup replica ids by task number and logical core. Args: task_id: TensorFlow task number. logical_core: An integer, identifying a logical core. Returns: A sorted list of the replicas that are attached to that task and logical_core. Raises: ValueError: If no replica exists in the task which contains the logical core.
github-repos
def add_listener(self, callback, event_type=None):
    listener_uid = uuid4()
    self.listeners.append({
        'uid': listener_uid,
        'callback': callback,
        'event_type': event_type,
    })
    return listener_uid
Add a listener that will send a callback when the client receives an event. Args: callback (func(roomchunk)): Callback called when an event arrives. event_type (str): The event_type to filter for. Returns: uuid.UUID: Unique id of the listener, can be used to identify the listener.
codesearchnet
def get_requirements(requirements_file="requirements.txt"):
    with open(requirements_file) as fd:
        lines = fd.readlines()
    dependencies = []
    for line in lines:
        maybe_dep = line.strip()
        if maybe_dep.startswith("#"):
            # Skip pure comment lines.
            continue
        if maybe_dep.startswith("git+"):
            # VCS reference: the dependency name is expected in a trailing comment.
            __, __, maybe_dep = maybe_dep.rpartition("#")
        else:
            # Drop any trailing comment.
            maybe_dep, __, __ = maybe_dep.partition("#")
        maybe_dep = maybe_dep.strip()
        if maybe_dep:
            dependencies.append(maybe_dep)
    return dependencies
Get the contents of a file listing the requirements. Args: requirements_file (str): The path to the requirements file, relative to this file. Returns: list: the list of requirements, or an empty list if ``requirements_file`` could not be opened or read.
juraj-google-style
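An illustrative run of get_requirements above; the requirements.txt contents are assumed for the example:
with open("requirements.txt", "w") as fd:
    fd.write("# build tooling\nrequests>=2.0  # pinned for CI\n\nsix\n")

get_requirements()  # -> ['requests>=2.0', 'six']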
def length_of_overlap(first_start, first_end, second_start, second_end):
    if first_end <= second_start or first_start >= second_end:
        return 0.0
    if first_start < second_start:
        if first_end < second_end:
            return abs(first_end - second_start)
        else:
            return abs(second_end - second_start)
    if first_start > second_start:
        if first_end > second_end:
            return abs(second_end - first_start)
        else:
            return abs(first_end - first_start)
Find the length of the overlapping part of two segments. Args: first_start (float): Start of the first segment. first_end (float): End of the first segment. second_start (float): Start of the second segment. second_end (float): End of the second segment. Return: float: The amount of overlap or 0 if they don't overlap at all.
juraj-google-style
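Worked examples for length_of_overlap above; the values are chosen for illustration:
length_of_overlap(0.0, 5.0, 3.0, 8.0)  # -> 2.0, the overlap is [3, 5]
length_of_overlap(0.0, 2.0, 3.0, 8.0)  # -> 0.0, the segments are disjoint
length_of_overlap(1.0, 9.0, 3.0, 8.0)  # -> 5.0, the second segment lies inside the first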
def fetch_tokens(self, client_id=None, client_secret=None, code=None, redirect_uri=None, **kwargs): client_id = client_id or self.client_id client_secret = client_secret or self.client_secret redirect_uri = redirect_uri or self.redirect_uri data = { 'grant_type': 'authorization_code', 'client_id': client_id, 'client_secret': client_secret, 'code': code, 'redirect_uri': redirect_uri } r = self._httpclient.request( method='POST', url=self.token_url, json=data, path='/api/oauth2/RequestToken', auth=None, **kwargs ) if not r.ok: raise PanCloudError( '%s %s: %s' % (r.status_code, r.reason, r.text) ) try: r_json = r.json() except ValueError as e: raise PanCloudError("Invalid JSON: %s" % e) else: if r.json().get( 'error_description' ) or r.json().get( 'error' ): raise PanCloudError(r.text) self.access_token = r_json.get('access_token') self.jwt_exp = self._decode_exp(self.access_token_) self.refresh_token = r_json.get('refresh_token') self.write_credentials() return r_json
Exchange authorization code for token. Args: client_id (str): OAuth2 client ID. Defaults to ``None``. client_secret (str): OAuth2 client secret. Defaults to ``None``. code (str): Authorization code. Defaults to ``None``. redirect_uri (str): Redirect URI. Defaults to ``None``. Returns: dict: Response from token URL.
juraj-google-style
def subscribe(object_type: str, subscriber: str,
              callback_handler: Callable = None) -> EventQueue:
    key = _keys.subscribers(object_type)
    DB.remove_from_list(key, subscriber)
    DB.append_to_list(key, subscriber)
    return EventQueue(object_type, subscriber, callback_handler)
Subscribe to the specified object type. Returns an EventQueue object which can be used to query events associated with the object type for this subscriber. Args: object_type (str): Object type subscriber (str): Subscriber name callback_handler (function, optional): Callback handler function. Returns: EventQueue, event queue object.
codesearchnet
def ParseFileObject(self, parser_mediator, file_object): win_registry_reader = FileObjectWinRegistryFileReader() try: registry_file = win_registry_reader.Open(file_object) except IOError as exception: parser_mediator.ProduceExtractionWarning( 'unable to open Windows Registry file with error: {0!s}'.format( exception)) return win_registry = dfwinreg_registry.WinRegistry() key_path_prefix = win_registry.GetRegistryFileMapping(registry_file) registry_file.SetKeyPathPrefix(key_path_prefix) root_key = registry_file.GetRootKey() if not root_key: return registry_find_specs = getattr( parser_mediator.artifacts_filter_helper, 'registry_find_specs', None) if not registry_find_specs: try: self._ParseRecurseKeys(parser_mediator, root_key) except IOError as exception: parser_mediator.ProduceExtractionWarning('{0!s}'.format(exception)) else: artifacts_filter_helper = artifact_filters.ArtifactDefinitionsFilterHelper if not artifacts_filter_helper.CheckKeyCompatibility(key_path_prefix): logger.warning(( 'Artifacts filters are not supported for Windows Registry file ' 'with key path prefix: "{0:s}".').format(key_path_prefix)) else: try: win_registry.MapFile(key_path_prefix, registry_file) self._ParseKeysFromFindSpecs( parser_mediator, win_registry, registry_find_specs) except IOError as exception: parser_mediator.ProduceExtractionWarning('{0!s}'.format(exception))
Parses a Windows Registry file-like object. Args: parser_mediator (ParserMediator): parser mediator. file_object (dfvfs.FileIO): a file-like object.
juraj-google-style
def get_conflicting_tools(self, request_only=False):
    from collections import defaultdict
    tool_sets = defaultdict(set)
    tools_dict = self.get_tools(request_only=request_only)
    for variant, tools in tools_dict.itervalues():
        for tool in tools:
            tool_sets[tool].add(variant)
    conflicts = dict((k, v) for k, v in tool_sets.iteritems() if len(v) > 1)
    return conflicts
Returns tools of the same name provided by more than one package. Args: request_only: If True, only return the key from resolved packages that were also present in the request. Returns: Dict of {tool-name: set([Variant])}.
codesearchnet
def scalar_projection(v1, v2): return np.dot(v1, v2) / np.linalg.norm(v2)
compute the scalar projection of v1 upon v2 Args: v1, v2: iterable indices 0, 1, 2 corresponding to cartesian coordinates Returns: float: the scalar projection of v1 onto the direction of v2
juraj-google-style
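A quick numeric check of scalar_projection above; the vectors are assumed for illustration:
import numpy as np

scalar_projection(np.array([3.0, 4.0, 0.0]), np.array([1.0, 0.0, 0.0]))  # -> 3.0
scalar_projection(np.array([3.0, 4.0, 0.0]), np.array([0.0, 2.0, 0.0]))  # -> 4.0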
def setup_engines(client=None): if (not client): try: client = ipyparallel.Client() except: raise DistobClusterError(u"Could not connect to an ipyparallel cluster. Make\n sure a cluster is started (e.g. to use the CPUs of a\n single computer, can type 'ipcluster start')") eids = client.ids if (not eids): raise DistobClusterError(u'No ipyparallel compute engines are available') nengines = len(eids) dv = client[eids] dv.use_dill() with dv.sync_imports(quiet=True): import distob ars = [] for i in eids: dv.targets = i ars.append(dv.apply_async(_remote_setup_engine, i, nengines)) dv.wait(ars) for ar in ars: if (not ar.successful()): raise ar.r if (distob.engine is None): distob.engine = ObjectHub((- 1), client)
Prepare all iPython engines for distributed object processing. Args: client (ipyparallel.Client, optional): If None, will create a client using the default ipyparallel profile.
codesearchnet
def remove_behaviour(self, behaviour):
    if not self.has_behaviour(behaviour):
        raise ValueError("This behaviour is not registered")
    index = self.behaviours.index(behaviour)
    self.behaviours[index].kill()
    self.behaviours.pop(index)
Removes a behaviour from the agent. The behaviour is first killed. Args: behaviour (spade.behaviour.CyclicBehaviour): the behaviour instance to be removed
juraj-google-style
def gallery_section(images, title):
    imgs = []
    while True:
        img = yield marv.pull(images)
        if img is None:
            break
        imgs.append({'src': img.relpath})
    if not imgs:
        return
    widget = {'title': images.title, 'gallery': {'images': imgs}}
    section = {'title': title, 'widgets': [widget]}
    yield marv.push(section)
Create detail section with gallery. Args: title (str): Title to be displayed for detail section. images: stream of marv image files Returns: One detail section.
juraj-google-style
class JetMoeMoA(nn.Module): def __init__(self, config: JetMoeConfig): super(JetMoeMoA, self).__init__() self.num_experts = config.num_local_experts self.input_size = config.hidden_size self.hidden_size = config.kv_channels * config.num_key_value_heads self.top_k = config.num_experts_per_tok self.bias = torch.nn.Parameter(torch.empty(self.input_size)) self.input_linear = JetMoeParallelExperts(self.num_experts, self.input_size, self.hidden_size) self.output_linear = JetMoeParallelExperts(self.num_experts, self.hidden_size, self.input_size) self.router = JetMoeTopKGating(input_size=self.input_size, num_experts=self.num_experts, top_k=self.top_k) def map(self, layer_input): bsz, length, emb_size = layer_input.size() layer_input = layer_input.reshape(-1, emb_size) index_sorted_experts, batch_index, batch_gates, expert_size, router_logits = self.router(layer_input) topo_info = (index_sorted_experts, batch_index, batch_gates, expert_size) expert_inputs = layer_input[batch_index] expert_outputs = self.input_linear(expert_inputs, expert_size) zeros = torch.zeros((bsz * length * self.top_k, self.hidden_size), dtype=expert_outputs.dtype, device=expert_outputs.device) layer_output = zeros.index_add(0, index_sorted_experts, expert_outputs) layer_output = layer_output.view(bsz, length, self.top_k, -1) return (layer_output, router_logits, topo_info) def reduce(self, layer_input, topo_info): bsz, length, k, hidden_size = layer_input.size() layer_input = layer_input.reshape(-1, hidden_size) index_sorted_experts, batch_index, batch_gates, expert_size = topo_info expert_inputs = layer_input[index_sorted_experts] expert_outputs = self.output_linear(expert_inputs, expert_size) expert_outputs = expert_outputs * batch_gates[:, None] zeros = torch.zeros((bsz * length, self.input_size), dtype=expert_outputs.dtype, device=expert_outputs.device) layer_output = zeros.index_add(0, batch_index, expert_outputs) layer_output = layer_output.view(bsz, length, self.input_size) layer_output = layer_output + self.bias return layer_output def forward(self, layer_input): raise NotImplementedError("This module doesn't support call and forward.")
A Sparsely gated mixture of attention layer with pairs of query- and output-projections as experts. Args: config: Configuration object with model hyperparameters.
github-repos
def dot(inputs, axes, normalize=False, **kwargs): return Dot(axes=axes, normalize=normalize, **kwargs)(inputs)
Functional interface to the `Dot` layer. Args: inputs: A list of input tensors (at least 2). axes: Integer or tuple of integers, axis or axes along which to take the dot product. normalize: Whether to L2-normalize samples along the dot product axis before taking the dot product. If set to True, then the output of the dot product is the cosine proximity between the two samples. **kwargs: Standard layer keyword arguments. Returns: A tensor, the dot product of the samples from the inputs.
github-repos
def print_probabilities(state: State, ndigits: int = 4,
                        file: TextIO = None) -> None:
    prob = bk.evaluate(state.probabilities())
    for index, prob in np.ndenumerate(prob):
        prob = round(prob, ndigits)
        if prob == 0.0:
            continue
        ket = "".join([str(n) for n in index])
        print(ket, ":", prob, file=file)
Pretty print state probabilities. Args: state: ndigits: Number of digits of accuracy file: Output stream (Defaults to stdout)
juraj-google-style
def get_summary_dict(self, include_msd_t=False, include_mscd_t=False): d = {'D': self.diffusivity, 'D_sigma': self.diffusivity_std_dev, 'D_charge': self.chg_diffusivity, 'D_charge_sigma': self.chg_diffusivity_std_dev, 'S': self.conductivity, 'S_sigma': self.conductivity_std_dev, 'S_charge': self.chg_conductivity, 'D_components': self.diffusivity_components.tolist(), 'S_components': self.conductivity_components.tolist(), 'D_components_sigma': self.diffusivity_components_std_dev.tolist(), 'S_components_sigma': self.conductivity_components_std_dev.tolist(), 'specie': str(self.specie), 'step_skip': self.step_skip, 'time_step': self.time_step, 'temperature': self.temperature, 'max_framework_displacement': self.max_framework_displacement, 'Haven_ratio': self.haven_ratio} if include_msd_t: d['msd'] = self.msd.tolist() d['msd_components'] = self.msd_components.tolist() d['dt'] = self.dt.tolist() if include_mscd_t: d['mscd'] = self.mscd.tolist() return d
Provides a summary of diffusion information. Args: include_msd_t (bool): Whether to include mean square displacement and time data with the data. include_mscd_t (bool): Whether to include mean square charge displacement and time data with the data. Returns: (dict) of diffusion and conductivity data.
codesearchnet
def get_iso3_country_code(cls, country, use_live=True, exception=None): countriesdata = cls.countriesdata(use_live=use_live) countryupper = country.upper() len_countryupper = len(countryupper) if len_countryupper == 3: if countryupper in countriesdata['countries']: return countryupper elif len_countryupper == 2: iso3 = countriesdata['iso2iso3'].get(countryupper) if iso3 is not None: return iso3 iso3 = countriesdata['countrynames2iso3'].get(countryupper) if iso3 is not None: return iso3 for candidate in cls.expand_countryname_abbrevs(countryupper): iso3 = countriesdata['countrynames2iso3'].get(candidate) if iso3 is not None: return iso3 if exception is not None: raise exception return None
Get ISO3 code for the given country. Only exact matches or None are returned. Args: country (str): Country for which to get ISO3 code use_live (bool): Try to get the latest data from the web rather than the file in the package. Defaults to True. exception (Optional[ExceptionUpperBound]): An exception to raise if country not found. Defaults to None. Returns: Optional[str]: ISO3 country code or None
juraj-google-style
def optimize(
    self,
    re_encoder_grads_list,
    decoder_grads_list,
    encoder_grads_list,
    learning_rate,
    epoch
):
    self.__retrospective_encoder.optimize(re_encoder_grads_list, learning_rate, epoch)
    self.__encoder_decoder_controller.optimize(
        decoder_grads_list,
        encoder_grads_list,
        learning_rate,
        epoch
    )
Back propagation. Args: re_encoder_grads_list: re-encoder's `list` of gradients. decoder_grads_list: decoder's `list` of gradients. encoder_grads_list: encoder's `list` of gradients. learning_rate: Learning rate. epoch: Current epoch.
juraj-google-style
def vrp_solver(path_graph, initial_solution=None, runtime_seconds=60): routing = pywrapcp.RoutingModel(path_graph.num_nodes(), 1, path_graph.ORIGIN) for disjunction in path_graph.iter_disjunctions(): routing.AddDisjunction(disjunction) COST_MULTIPLIER = 1e4 def distance(i, j): return int(path_graph.cost(i, j) * COST_MULTIPLIER) routing.SetArcCostEvaluatorOfAllVehicles(distance) start_time = time() def found_solution(): t = time() - start_time cost = routing.CostVar().Max() / COST_MULTIPLIER print('\rBest solution at {} seconds has cost {} '.format( int(t), cost), end='') routing.AddAtSolutionCallback(found_solution) if not initial_solution: initial_solution = [i for i, _ in path_graph.iter_disjunctions()] initial_assignment = routing.ReadAssignmentFromRoutes([initial_solution], True) search_parameters = pywrapcp.RoutingModel.DefaultSearchParameters() search_parameters.time_limit_ms = runtime_seconds * 1000 search_parameters.local_search_metaheuristic = ( routing_enums_pb2.LocalSearchMetaheuristic.GUIDED_LOCAL_SEARCH) assignment = routing.SolveFromAssignmentWithParameters(initial_assignment, search_parameters) print() solution = [] index = routing.Start(0) while not routing.IsEnd(index): index = assignment.Value(routing.NextVar(index)) node = routing.IndexToNode(index) if node != 0: solution.append(node) return solution
Solve a path using or-tools' Vehicle Routing Problem solver. Params: path_graph the PathGraph representing the problem initial_solution a solution to start with (list of indices, not including the origin) runtime_seconds how long to search before returning Returns: an ordered list of indices in the graph representing a solution.
juraj-google-style
def _publish_actor_class_to_key(self, key, actor_class_info):
    self._worker.redis_client.hmset(key, actor_class_info)
    self._worker.redis_client.rpush('Exports', key)
Push an actor class definition to Redis. This is factored out as a separate function because it is also called on cached actor class definitions when a worker connects for the first time. Args: key: The key to store the actor class info at. actor_class_info: Information about the actor class.
codesearchnet
def _get_operand_name_and_index(self, numeric_verify_name: str) -> Tuple[str, int]:
    tensor_name, tensor_idx = numeric_verify_name.rsplit(':', 1)
    float_tensor_name = tensor_name[len(_NUMERIC_VERIFY_OP_NAME) + 1:]
    if re.match('\\d', float_tensor_name[-1]):
        float_tensor_name = float_tensor_name[:-1]
    return (float_tensor_name, int(tensor_idx))
Gets the index and name of NumericVerify Op's quantized input tensor. Args: numeric_verify_name: name of the NumericVerify op's output tensor. It has format of `NumericVerify/{quantized_tensor_name}:{quantized_tensor_idx}` Returns: Tuple of (tensor_name, tensor_idx) for quantized op's output tensor.
github-repos
def _prepare_init_params_from_job_description(cls, job_details, model_channel_name=None): init_params = super(MXNet, cls)._prepare_init_params_from_job_description(job_details, model_channel_name) image_name = init_params.pop('image') framework, py_version, tag, _ = framework_name_from_image(image_name) if not framework: init_params['image_name'] = image_name return init_params init_params['py_version'] = py_version init_params['framework_version'] = '0.12' if tag == '1.0' else framework_version_from_tag(tag) training_job_name = init_params['base_job_name'] if framework != cls.__framework_name__: raise ValueError("Training job: {} didn't use image for requested framework".format(training_job_name)) return init_params
Convert the job description to init params that can be handled by the class constructor Args: job_details: the returned job details from a describe_training_job API call. model_channel_name (str): Name of the channel where pre-trained model data will be downloaded. Returns: dictionary: The transformed init_params
juraj-google-style
def constant_value_as_shape(tensor): if isinstance(tensor, core.Value): return tensor_shape.TensorShape([dim if dim != -1 else None for dim in tensor.numpy()]) if tensor.get_shape().ndims == 0: value = constant_value(tensor) if value is None: raise ValueError("Received a scalar with unknown value as shape; require a statically known scalar with value '-1' to describe an unknown shape.") if value != -1: raise ValueError(f"Received a scalar value '{value}' as shape; require a statically known scalar with value '-1' to describe an unknown shape.") return tensor_shape.unknown_shape() shape = tensor.get_shape().with_rank(1) if shape == [0]: return tensor_shape.TensorShape([]) elif tensor.op.type == 'Cast': pre_cast = constant_value_as_shape(tensor.op.inputs[0]) if pre_cast.dims is None: return pre_cast cast_dtype = dtypes.as_dtype(tensor.op.get_attr('DstT')) if cast_dtype not in (dtypes.int32, dtypes.int64): return tensor_shape.unknown_shape(shape.dims[0].value) dest_dtype_shape_array = np.array([x if x is not None else -1 for x in pre_cast.as_list()]).astype(cast_dtype.as_numpy_dtype) return tensor_shape.TensorShape([x if x >= 0 else None for x in dest_dtype_shape_array]) elif tensor.op.type == 'Shape': return tensor.op.inputs[0].get_shape() elif tensor.op.type == 'Pack': ret = tensor_shape.TensorShape([]) assert tensor.op.get_attr('axis') == 0 for pack_input in tensor.op.inputs: pack_input_val = constant_value(pack_input) if pack_input_val is None or pack_input_val < 0: new_dim = tensor_shape.Dimension(None) else: new_dim = tensor_shape.Dimension(pack_input_val) ret = ret.concatenate([new_dim]) return ret elif tensor.op.type == 'Concat': ret = tensor_shape.TensorShape([]) for concat_input in tensor.op.inputs[1:]: ret = ret.concatenate(constant_value_as_shape(concat_input)) return ret elif tensor.op.type == 'ConcatV2': ret = tensor_shape.TensorShape([]) for concat_input in tensor.op.inputs[:-1]: ret = ret.concatenate(constant_value_as_shape(concat_input)) return ret elif tensor.op.type == 'StridedSlice': try: begin = constant_value(tensor.op.inputs[1]) end = constant_value(tensor.op.inputs[2]) strides = constant_value(tensor.op.inputs[3]) if begin is not None and end is not None and (strides is not None): begin = begin[0] end = end[0] strides = strides[0] begin_mask = tensor.op.get_attr('begin_mask') if begin_mask == 1: begin = None end_mask = tensor.op.get_attr('end_mask') if end_mask == 1: end = None ellipsis_mask = tensor.op.get_attr('ellipsis_mask') new_axis_mask = tensor.op.get_attr('new_axis_mask') shrink_axis_mask = tensor.op.get_attr('shrink_axis_mask') valid_attributes = not ellipsis_mask and (not new_axis_mask) and (not shrink_axis_mask) and (not begin_mask or begin_mask == 1) and (not end_mask or end_mask == 1) if valid_attributes: prev = constant_value_as_shape(tensor.op.inputs[0]) prev = prev[begin:end:strides] ret = tensor_shape.TensorShape(prev) return ret except ValueError: pass except TypeError: pass elif tensor.op.type == 'Placeholder' and tensor.op.graph.building_function and hasattr(tensor.op.graph, 'internal_captures'): for i, capture in enumerate(tensor.op.graph.internal_captures): if capture is tensor: external_capture = tensor.op.graph.external_captures[i] return constant_value_as_shape(external_capture) ret = tensor_shape.unknown_shape(shape.dims[0].value) value = constant_value(tensor) if value is not None: ret = ret.merge_with(tensor_shape.TensorShape([d if d >= 0 else None for d in value])) return ret
A version of `constant_value()` that returns a `TensorShape`. This version should be used when a constant tensor value is interpreted as a (possibly partial) shape, e.g. in the shape function for `tf.reshape()`. By explicitly requesting a `TensorShape` as the return value, it is possible to represent unknown dimensions; by contrast, `constant_value()` is all-or-nothing. Args: tensor: The rank-0 or rank-1 Tensor to be evaluated. Returns: A `TensorShape` based on the constant value of the given `tensor`. Raises: ValueError: If the shape is rank-0 and is not statically known to be -1.
github-repos
def GetTARInfo(self):
    if not self._tar_info:
        location = getattr(self.path_spec, 'location', None)
        if location is None:
            raise errors.PathSpecError('Path specification missing location.')
        if not location.startswith(self._file_system.LOCATION_ROOT):
            raise errors.PathSpecError('Invalid location in path specification.')
        if len(location) == 1:
            return None
        tar_file = self._file_system.GetTARFile()
        try:
            self._tar_info = tar_file.getmember(location[1:])
        except KeyError:
            pass
    return self._tar_info
Retrieves the TAR info. Returns: tarfile.TARInfo: TAR info or None if it does not exist. Raises: PathSpecError: if the path specification is incorrect.
codesearchnet
def to_hdf(self, path, key, mode='a'):
    pd.DataFrame(self.serialize()).to_hdf(
        path, key, mode=mode, format='table', complib='zlib', complevel=9)
    f = h5py.File(path, 'r+')
    f[key].attrs['microns_per_pixel'] = (
        float(self.microns_per_pixel) if self.microns_per_pixel is not None else np.nan)
    f.close()
Save the CellDataFrame to an hdf5 file. Args: path (str): the path to save to key (str): the name of the location to save it to mode (str): write mode
codesearchnet
def _ParseVValueString( self, parser_mediator, data, user_information_descriptor): data_start_offset = ( user_information_descriptor.offset + self._V_VALUE_STRINGS_OFFSET) data_end_offset = data_start_offset + user_information_descriptor.size descriptor_data = data[data_start_offset:data_end_offset] try: username = descriptor_data.decode('utf-16-le') except (UnicodeDecodeError, UnicodeEncodeError) as exception: username = descriptor_data.decode('utf-16-le', errors='replace') parser_mediator.ProduceExtractionWarning(( 'unable to decode V value string with error: {0!s}. Characters ' 'that cannot be decoded will be replaced with "?" or ' '"\\ufffd".').format(exception)) return username
Parses a V value string. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. data (bytes): Windows Registry V value data. user_information_descriptor (user_information_descriptor): V value user information descriptor. Returns: str: string value stored in the Windows Registry V value data.
juraj-google-style
def get_step(): return _summary_state.step
Returns the default summary step for the current thread. Returns: The step set by `tf.summary.experimental.set_step()` if one has been set, otherwise None.
github-repos
def _merge_assets_key_collection(saved_model_proto, path): for meta_graph in saved_model_proto.meta_graphs: node_asset_map = {} if (tf_v1.saved_model.constants.ASSETS_KEY in meta_graph.collection_def): assets_any_proto = meta_graph.collection_def[tf_v1.saved_model.constants.ASSETS_KEY].any_list.value for asset_any_proto in assets_any_proto: asset_proto = meta_graph_pb2.AssetFileDef() asset_any_proto.Unpack(asset_proto) asset_filename = _get_asset_filename(path, asset_proto.filename) node_asset_map[_get_node_name_from_tensor(asset_proto.tensor_info.name)] = asset_filename del meta_graph.collection_def[tf_v1.saved_model.constants.ASSETS_KEY] for node in meta_graph.graph_def.node: asset_filepath = node_asset_map.get(node.name) if asset_filepath: _check_asset_node_def(node) node.attr['value'].tensor.string_val[0] = asset_filepath
Merges the ASSETS_KEY collection into the GraphDefs in saved_model_proto. Removes the ASSETS_KEY collection from the GraphDefs in the SavedModel and modifies nodes with the assets filenames to point to the assets in `path`. After this transformation, the SavedModel GraphDefs can be used without feeding asset tensors. Args: saved_model_proto: SavedModel proto to be modified. path: path where the SavedModel is being loaded from.
codesearchnet
def lxml(self):
    import lxml.etree
    return lxml.etree.fromstring((e_views.XML_HEADER + self.xml()).encode('utf-8'))
render the record into a lxml document. this is useful for querying data from the record using xpath, etc. note: lxml must be installed. Returns: lxml.etree.ElementTree: the rendered and parsed xml document. Raises: ImportError: if lxml is not installed.
codesearchnet
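A hedged usage sketch for the lxml method above; the record instance and the element name queried are assumptions for illustration, and lxml must be installed:
# Assuming `record` is an instance of the class that defines lxml().
tree = record.lxml()
titles = tree.xpath('//title/text()')  # element name is illustrative only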
def GetConfiguredUsers(self):
    if os.path.exists(self.google_users_file):
        users = open(self.google_users_file).readlines()
    else:
        users = []
    return [user.strip() for user in users]
Retrieve the list of configured Google user accounts. Returns: list, the username strings of users configured by Google.
codesearchnet
def _bind_length_handlers(tids, user_handler, lns): for tid in tids: for ln in lns: type_octet = _gen_type_octet(tid, ln) ion_type = _TID_VALUE_TYPE_TABLE[tid] if ln == 1 and ion_type is IonType.STRUCT: handler = partial(_ordered_struct_start_handler, partial(user_handler, ion_type)) elif ln < _LENGTH_FIELD_FOLLOWS: handler = partial(user_handler, ion_type, ln) else: handler = partial(_var_uint_field_handler, partial(user_handler, ion_type)) _HANDLER_DISPATCH_TABLE[type_octet] = handler
Binds a set of handlers with the given factory. Args: tids (Sequence[int]): The Type IDs to bind to. user_handler (Callable): A function that takes as its parameters :class:`IonType`, ``length``, and the ``ctx`` context returning a co-routine. lns (Sequence[int]): The low-nibble lengths to bind to.
juraj-google-style
def get_self_attention_bias(x):
    x_shape = common_layers.shape_list(x)
    self_attention_bias = common_attention.attention_bias_lower_triangle(x_shape[1])
    return self_attention_bias
Creates masked self attention bias. Args: x: A tensor of shape [batch, length, depth] Returns: self_attention_bias: A tensor of shape [length, length, 1]
juraj-google-style
def execute_script(self, script, *args):
    args = [arg.base if isinstance(arg, Base) else arg for arg in args]
    self.driver.execute_script(script, *args)
Execute the given script, not returning a result. This is useful for scripts that return complex objects, such as jQuery statements. ``execute_script`` should be used over :meth:`evaluate_script` whenever possible. Args: script (str): A string of JavaScript to execute. *args: Variable length argument list to pass to the executed JavaScript string.
juraj-google-style
def convert_strtime_datetime(dt_str):
    dt, _, us = dt_str.partition(".")
    dt = datetime.datetime.strptime(dt, "%Y-%m-%dT%H:%M:%S")
    us = int(us.rstrip("Z"), 10)
    return dt + datetime.timedelta(microseconds=us)
Converts datetime isoformat string to datetime (dt) object Args: :dt_str (str): input string in '2017-12-30T18:48:00.353Z' form or similar Returns: TYPE: datetime object
juraj-google-style
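An example call to convert_strtime_datetime above, using the docstring's sample string:
convert_strtime_datetime("2017-12-30T18:48:00.353Z")
# -> datetime.datetime(2017, 12, 30, 18, 48, 0, 353)
#    (the fractional part is passed to timedelta as microseconds, exactly as written)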
def encoding_specs(self, spec): return spec._component_specs
Returns a list of `TensorSpec`(s) describing the encoding for `spec`. See `encode` for a description of the default encoding. Subclasses may override this default definition, when necessary. Args: spec: The TypeSpec whose encoding should be described. Returns: A nest (as defined by `tf.nest`) of `tf.TypeSpec`, describing the values that are returned by `self.encode(spec, ...)`. All TypeSpecs in this nest must be batchable.
github-repos
def to(self, new_unit): return FloatWithUnit( self * self.unit.get_conversion_factor(new_unit), unit_type=self._unit_type, unit=new_unit)
Conversion to a new_unit. Right now, only supports 1 to 1 mapping of units of each type. Args: new_unit: New unit type. Returns: A FloatWithUnit object in the new units. Example usage: >>> e = Energy(1.1, "eV") >>> e = Energy(1.1, "Ha") >>> e.to("eV") 29.932522246 eV
juraj-google-style
def info(self, code, message, compressed=False): return ''.join([x for x in self.info_gen(code, message, compressed)])
The complete content of an info response. This should only be used for commands that return small or known amounts of data. Returns: The complete content of a textual response.
codesearchnet
def ProcessConfigOverrides(filename):
    abs_filename = os.path.abspath(filename)
    cfg_filters = []
    keep_looking = True
    while keep_looking:
        abs_path, base_name = os.path.split(abs_filename)
        if not base_name:
            break  # Reached the root directory.

        cfg_file = os.path.join(abs_path, "CPPLINT.cfg")
        abs_filename = abs_path
        if not os.path.isfile(cfg_file):
            continue

        try:
            with open(cfg_file) as file_handle:
                for line in file_handle:
                    line, _, _ = line.partition('#')  # Remove comments.
                    if not line.strip():
                        continue

                    name, _, val = line.partition('=')
                    name = name.strip()
                    val = val.strip()
                    if name == 'set noparent':
                        keep_looking = False
                    elif name == 'filter':
                        cfg_filters.append(val)
                    elif name == 'exclude_files':
                        if base_name:
                            pattern = re.compile(val)
                            if pattern.match(base_name):
                                sys.stderr.write('Ignoring "%s": file excluded by "%s". '
                                                 'File path component "%s" matches '
                                                 'pattern "%s"\n' %
                                                 (filename, cfg_file, base_name, val))
                                return False
                    elif name == 'linelength':
                        global _line_length
                        try:
                            _line_length = int(val)
                        except ValueError:
                            sys.stderr.write('Line length must be numeric.')
                    else:
                        sys.stderr.write(
                            'Invalid configuration option (%s) in file %s\n' %
                            (name, cfg_file))
        except IOError:
            sys.stderr.write(
                "Skipping config file '%s': Can't open for reading\n" % cfg_file)
            keep_looking = False

    for filter in reversed(cfg_filters):
        _AddFilters(filter)

    return True
Loads the configuration files and processes the config overrides. Args: filename: The name of the file being processed by the linter. Returns: False if the current |filename| should not be processed further.
juraj-google-style
def list_jobs(config, *, status=JobStatus.Active, filter_by_type=None, filter_by_worker=None): celery_app = create_app(config) if (filter_by_worker is not None): inspect = celery_app.control.inspect(destination=(filter_by_worker if isinstance(filter_by_worker, list) else [filter_by_worker])) else: inspect = celery_app.control.inspect() if (status == JobStatus.Active): job_map = inspect.active() elif (status == JobStatus.Registered): job_map = inspect.registered() elif (status == JobStatus.Reserved): job_map = inspect.reserved() elif (status == JobStatus.Scheduled): job_map = inspect.scheduled() else: job_map = None if (job_map is None): return [] result = [] for (worker_name, jobs) in job_map.items(): for job in jobs: try: job_stats = JobStats.from_celery(worker_name, job, celery_app) if ((filter_by_type is None) or (job_stats.type == filter_by_type)): result.append(job_stats) except JobStatInvalid: pass return result
Return a list of Celery jobs. Args: config (Config): Reference to the configuration object from which the settings are retrieved. status (JobStatus): The status of the jobs that should be returned. filter_by_type (list): Restrict the returned jobs to the types in this list. filter_by_worker (list): Only return jobs that were registered, reserved or are running on the workers given in this list of worker names. Using this option will increase the performance. Returns: list: A list of JobStats.
codesearchnet
def minimum(self, vars_list: List[str]) -> 'TensorFluent': return self._aggregation_op(tf.reduce_min, self, vars_list)
Returns the TensorFluent for the minimum aggregation function. Args: vars_list: The list of variables to be aggregated over. Returns: A TensorFluent wrapping the minimum aggregation function.
codesearchnet
def relabel_variables(self, mapping, inplace=True): if (not inplace): return self.copy().relabel_variables(mapping, inplace=True) try: old_labels = set(mapping) new_labels = set(mapping.values()) except TypeError: raise ValueError('mapping targets must be hashable objects') variables = self.variables for v in new_labels: if ((v in variables) and (v not in old_labels)): raise ValueError('A variable cannot be relabeled "{}" without also relabeling the existing variable of the same name'.format(v)) shared = (old_labels & new_labels) if shared: (old_to_intermediate, intermediate_to_new) = resolve_label_conflict(mapping, old_labels, new_labels) self.relabel_variables(old_to_intermediate, inplace=True) self.relabel_variables(intermediate_to_new, inplace=True) return self for (oldterm, bias) in list(self.items()): newterm = frozenset((mapping.get(v, v) for v in oldterm)) if (newterm != oldterm): self[newterm] = bias del self[oldterm] return self
Relabel variables of a binary polynomial as specified by mapping. Args: mapping (dict): Dict mapping current variable labels to new ones. If an incomplete mapping is provided, unmapped variables retain their current labels. inplace (bool, optional, default=True): If True, the binary polynomial is updated in-place; otherwise, a new binary polynomial is returned. Returns: :class:`.BinaryPolynomial`: A binary polynomial with the variables relabeled. If `inplace` is set to True, returns itself.
codesearchnet
async def _verify_examples(self, client: GRPCClient, examples: List[Example], origin: Origin): count_of_verified = 0 verify_status_failed = False default_examples = [] for example in examples: if example.tag.default_example: default_examples.append(example) if example.status not in Config.ERROR_STATUSES: count_of_verified += 1 continue if example.status == STATUS_VALIDATION_ERROR: logging.error('Example: %s has validation error', example.filepath) elif example.status == STATUS_PREPARATION_ERROR: logging.error('Example: %s has preparation error', example.filepath) elif example.status == STATUS_ERROR: logging.error('Example: %s has error during setup run builder', example.filepath) elif example.status == STATUS_RUN_TIMEOUT: logging.error('Example: %s failed because of timeout', example.filepath) elif example.status == STATUS_COMPILE_ERROR: err = await client.get_compile_output(example.pipeline_id) logging.error('Example: %s has compilation error: %s', example.filepath, err) elif example.status == STATUS_RUN_ERROR: err = await client.get_run_error(example.pipeline_id) logging.error('Example: %s has execution error: %s', example.filepath, err) verify_status_failed = True logging.info('Number of verified Playground examples: %s / %s', count_of_verified, len(examples)) logging.info('Number of Playground examples with some error: %s / %s', len(examples) - count_of_verified, len(examples)) if origin == Origin.PG_EXAMPLES: if len(default_examples) == 0: logging.error('Default example not found') raise VerifyException('CI step failed due to finding an incorrect number of default examples. Default example not found') if len(default_examples) > 1: logging.error('Many default examples found') logging.error('Examples where the default_example field is true:') for example in default_examples: logging.error(example.filepath) raise VerifyException('CI step failed due to finding an incorrect number of default examples. Many default examples found') if verify_status_failed: raise VerifyException('CI step failed due to errors in the examples')
Verify statuses of beam examples and the number of found default examples. Check example.status for each example. If the status of the example is: - STATUS_VALIDATION_ERROR/STATUS_PREPARATION_ERROR /STATUS_ERROR/STATUS_RUN_TIMEOUT: log error - STATUS_COMPILE_ERROR: get logs using GetCompileOutput request and log them with error. - STATUS_RUN_ERROR: get logs using GetRunError request and log them with error. Args: examples: beam examples that should be verified
github-repos
def assert_no_current_path(self, path, **kwargs):
    query = CurrentPathQuery(path, **kwargs)

    @self.document.synchronize
    def assert_no_current_path():
        if query.resolves_for(self):
            raise ExpectationNotMet(query.negative_failure_message)

    assert_no_current_path()
    return True
Asserts that the page doesn't have the given path. Args: path (str | RegexObject): The string or regex that the current "path" should match. **kwargs: Arbitrary keyword arguments for :class:`CurrentPathQuery`. Returns: True Raises: ExpectationNotMet: If the assertion hasn't succeeded during the wait time.
codesearchnet
def attach(self, engine, metric_names=None, output_transform=None, event_name=Events.ITERATION_COMPLETED, closing_event_name=Events.EPOCH_COMPLETED): desc = self.tqdm_kwargs.get('desc', 'Epoch') if (not ((event_name in Events) and (closing_event_name in Events))): raise ValueError('Logging and closing events should be only ignite.engine.Events') if (not self._compare_lt(event_name, closing_event_name)): raise ValueError('Logging event {} should be called before closing event {}'.format(event_name, closing_event_name)) log_handler = _OutputHandler(desc, metric_names, output_transform, event_name=event_name, closing_event_name=closing_event_name) super(ProgressBar, self).attach(engine, log_handler, event_name) engine.add_event_handler(closing_event_name, self._close)
Attaches the progress bar to an engine object. Args: engine (Engine): engine object. metric_names (list, optional): list of the metrics names to log as the bar progresses output_transform (callable, optional): a function to select what you want to print from the engine's output. This function may return either a dictionary with entries in the format of ``{name: value}``, or a single scalar, which will be displayed with the default name `output`. event_name: event's name on which the progress bar advances. Valid events are from :class:`~ignite.engine.Events`. closing_event_name: event's name on which the progress bar is closed. Valid events are from :class:`~ignite.engine.Events`.
codesearchnet
def add_package(self, pkg, action_type="Install"): if isinstance(pkg, Package): if action_type not in ("Install", "Cache", "Install Cached"): raise ValueError package = self.add_object_to_path( pkg, "package_configuration/packages") action = package.find("action") if not action: action = ElementTree.SubElement(package, "action") action.text = action_type else: raise ValueError("Please pass a Package object to parameter: " "pkg.")
Add a Package object to the policy with action=install. Args: pkg: A Package object to add. action_type (str, optional): One of "Install", "Cache", or "Install Cached". Defaults to "Install".
juraj-google-style
def produce_csv_output(filehandle: TextIO,
                       fields: Sequence[str],
                       values: Iterable[str]) -> None:
    output_csv(filehandle, fields)
    for row in values:
        output_csv(filehandle, row)
Produce CSV output, without using ``csv.writer``, so the log can be used for lots of things. - ... eh? What was I talking about? - POOR; DEPRECATED. Args: filehandle: file to write to fields: field names values: values
juraj-google-style
def default_onnx_opset(self) -> int: return DEFAULT_ONNX_OPSET
Which onnx opset to use when exporting the model Returns: Integer ONNX Opset version
github-repos
def create_with_secret(self, name, secret, encryption):
    try:
        encryption = encryption or DEFAULT_ENCRYPTION
        enc = ENCRYPTION_MAP[encryption]
    except KeyError:
        raise TypeError('encryption must be one of "cleartext", "md5"'
                        ' or "sha512"')
    cmd = 'username %s secret %s %s' % (name, enc, secret)
    return self.configure(cmd)
Creates a new user on the local node Args: name (str): The name of the user to create secret (str): The secret (password) to assign to this user encryption (str): Specifies how the secret is encoded. Valid values are "cleartext", "md5", "sha512". The default is "cleartext" Returns: True if the operation was successful otherwise False
juraj-google-style
def clone(self, spec=None, **overrides): settings = dict(self.get_param_values(), **overrides) if spec is None: spec = (self.name, overrides.get('label', self.label)) if 'label' in overrides and isinstance(spec, basestring) : spec = (spec, overrides['label']) elif 'label' in overrides and isinstance(spec, tuple) : if overrides['label'] != spec[1]: self.param.warning( 'Using label as supplied by keyword ({!r}), ignoring ' 'tuple value {!r}'.format(overrides['label'], spec[1])) spec = (spec[0], overrides['label']) return self.__class__(spec, **{k:v for k,v in settings.items() if k not in ['name', 'label']})
Clones the Dimension with new parameters Derive a new Dimension that inherits existing parameters except for the supplied, explicit overrides Args: spec (tuple, optional): Dimension tuple specification **overrides: Dimension parameter overrides Returns: Cloned Dimension object
juraj-google-style
def _is_propertyable(names, attrs, annotations, attr): return ((attr in annotations) and (not attr.startswith('_')) and (not attr.isupper()) and ('__{}'.format(attr) not in names) and (not isinstance(getattr(attrs, attr, None), types.MethodType)))
Determine if an attribute can be replaced with a property. Args: names: The complete list of all attribute names for the class. attrs: The attribute dict returned by __prepare__. annotations: A mapping of all defined annotations for the class. attr: The attribute to test. Returns: True if the attribute can be replaced with a property; else False.
codesearchnet
def _in_multi_worker_mode(self):
    strategy = self._distribution_strategy
    if not strategy and distribute_lib.has_strategy():
        strategy = distribute_lib.get_strategy()
    return strategy and strategy.extended._in_multi_worker_mode()
Method to infer if this `Model` is working in multi-worker settings. Multi-worker training refers to the setup where the training is distributed across multiple workers, as opposed to the case where only a local process performs the training. This function is used to infer for example whether or not a distribute coordinator should be run, and thus TensorFlow servers should be started for communication with other servers in the cluster, or whether or not saving/restoring checkpoints is relevant for preemption fault tolerance. Experimental. Signature and implementation are subject to change. Returns: Whether this model indicates it's working in multi-worker settings.
github-repos
def check_the_end_flag(self, state_arr):
    if self.__check_goal_flag(state_arr) is True or self.__check_crash_flag(state_arr):
        return True
    else:
        return False
Check the end flag. If this return value is `True`, the learning ends. As a rule, the learning can not be stopped. This method should be overridden for concrete use cases. Args: state_arr: `np.ndarray` of state in `self.t`. Returns: bool
juraj-google-style
def add(self, resource, provider_uri_or_id, timeout=-1):
    uri = self._provider_client.build_uri(provider_uri_or_id) + "/device-managers"
    return self._client.create(resource=resource, uri=uri, timeout=timeout)
Adds a Device Manager under the specified provider. Args: resource (dict): Object to add. provider_uri_or_id: ID or URI of provider. timeout: Timeout in seconds. Wait for task completion by default. The timeout does not abort the operation in OneView, just stop waiting for its completion. Returns: dict: Added SAN Manager.
juraj-google-style
def AddPerformanceOptions(self, argument_group): argument_group.add_argument( '--buffer_size', '--buffer-size', '--bs', dest='buffer_size', action='store', default=0, help=( 'The buffer size for the output (defaults to 196MiB).')) argument_group.add_argument( '--queue_size', '--queue-size', dest='queue_size', action='store', default=0, help=( 'The maximum number of queued items per worker ' '(defaults to {0:d})').format(self._DEFAULT_QUEUE_SIZE))
Adds the performance options to the argument group. Args: argument_group (argparse._ArgumentGroup): argparse argument group.
juraj-google-style
def set_status(self, status: Status, increment_try_count: bool=True, filename: str=None): url = self.url_record.url assert not self._try_count_incremented, (url, status) if increment_try_count: self._try_count_incremented = True _logger.debug(__('Marking URL {0} status {1}.', url, status)) url_result = URLResult() url_result.filename = filename self.app_session.factory['URLTable'].check_in( url, status, increment_try_count=increment_try_count, url_result=url_result, ) self._processed = True
Mark the item with the given status. Args: status: a value from :class:`Status`. increment_try_count: if True, increment the ``try_count`` value
juraj-google-style
def html_for_cgi_argument(argument, form):
    value = form[argument].value if argument in form else None
    return KEY_VALUE_TEMPLATE.format(argument, value)
Returns an HTML snippet for a CGI argument. Args: argument: A string representing an CGI argument name in a form. form: A CGI FieldStorage object. Returns: String HTML representing the CGI value and variable.
juraj-google-style
def get(self, params=None): return self._call('get', url=self.endpoint, params=params)
Send a GET request and return the JSON decoded result. Args: params (dict, optional): Mapping of parameters to send in request. Returns: mixed: JSON decoded response data.
juraj-google-style
def from_params(cls, params): key_fn = lambda x: id(x[1].owner) streams = [] for _, group in groupby(sorted(params.items(), key=key_fn), key_fn): group = list(group) inst = [p.owner for _, p in group][0] if not isinstance(inst, param.Parameterized): continue names = [p.name for _, p in group] rename = {p.name: n for n, p in group} streams.append(cls(inst, names, rename=rename)) return streams
Returns Params streams given a dictionary of parameters Args: params (dict): Dictionary of parameters Returns: List of Params streams
juraj-google-style
def get_tensors(object_): if torch.is_tensor(object_): return [object_] elif isinstance(object_, (str, float, int)): return [] tensors = set() if isinstance(object_, collections.abc.Mapping): for value in object_.values(): tensors.update(get_tensors(value)) elif isinstance(object_, collections.abc.Iterable): for value in object_: tensors.update(get_tensors(value)) else: members = [ value for key, value in inspect.getmembers(object_) if not isinstance(value, (collections.abc.Callable, type(None))) ] tensors.update(get_tensors(members)) return tensors
Get all tensors associated with ``object_`` Args: object_ (any): Any object to look for tensors. Returns: (list of torch.tensor): List of tensors that are associated with ``object_``.
juraj-google-style
def _collect_grades_data(self, enterprise_enrollment, course_details): if (self.grades_api is None): self.grades_api = GradesApiClient(self.user) course_id = enterprise_enrollment.course_id username = enterprise_enrollment.enterprise_customer_user.user.username try: grades_data = self.grades_api.get_course_grade(course_id, username) except HttpNotFoundError as error: if hasattr(error, 'content'): response_content = json.loads(error.content) if (response_content.get('error_code', '') == 'user_not_enrolled'): LOGGER.info('User [%s] not enrolled in course [%s], enterprise enrollment [%d]', username, course_id, enterprise_enrollment.pk) return (None, None, None) LOGGER.error('No grades data found for [%d]: [%s], [%s]', enterprise_enrollment.pk, course_id, username) return (None, None, None) course_end_date = course_details.get('end') if (course_end_date is not None): course_end_date = parse_datetime(course_end_date) now = timezone.now() is_passing = grades_data.get('passed') if ((course_end_date is not None) and (course_end_date < now)): completed_date = course_end_date grade = (self.grade_passing if is_passing else self.grade_failing) elif is_passing: completed_date = now grade = self.grade_passing else: completed_date = None grade = self.grade_incomplete return (completed_date, grade, is_passing)
Collect the learner completion data from the Grades API. Used for self-paced courses. Args: enterprise_enrollment (EnterpriseCourseEnrollment): the enterprise enrollment record for which we need to collect completion/grade data course_details (dict): the course details for the course in the enterprise enrollment record. Returns: completed_date: Date the course was completed, this is None if course has not been completed. grade: Current grade in the course. is_passing: Boolean indicating if the grade is a passing grade or not.
codesearchnet
def _input_optional(inp):
    if 'default' in inp.keys():
        return True
    typ = inp.get('type')
    if isinstance(typ, six.string_types):
        return typ.endswith('?')
    elif isinstance(typ, dict):
        return False
    elif isinstance(typ, list):
        return bool(u'null' in typ)
    else:
        raise ValueError('Invalid input "{}"'.format(inp.get('id')))
Returns True if a step input parameter is optional. Args: inp (dict): a dictionary representation of an input. Raises: ValueError: The inp provided is not valid.
juraj-google-style
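Illustrative inputs for _input_optional above; the dictionaries are assumed, not from the source:
_input_optional({'id': 'a', 'type': 'string?'})             # -> True, optional via trailing '?'
_input_optional({'id': 'b', 'type': 'string'})              # -> False
_input_optional({'id': 'c', 'type': ['null', 'File']})      # -> True, 'null' in the type union
_input_optional({'id': 'd', 'type': 'int', 'default': 3})   # -> True, a default makes it optional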
def _split_bytecode(bytecode: list[opcodes.Opcode], processed_blocks: set[Block], python_version) -> list[Block]: targets = {op.target for op in bytecode if op.target} blocks = [] code = [] prev_block: Block = None i = 0 while i < len(bytecode): op = bytecode[i] if python_version >= (3, 12) and isinstance(op, opcodes.SEND): if code: prev_block = Block(code) blocks.append(prev_block) code = [] new_blocks, i = _preprocess_async_for_and_yield(i, bytecode, prev_block, processed_blocks) blocks.extend(new_blocks) prev_block = blocks[-1] continue code.append(op) if op.no_next() or op.does_jump() or op.pops_block() or (op.next is None) or (op.next in targets and (not isinstance(op.next, opcodes.GET_ANEXT) or python_version < (3, 12))): prev_block = Block(code) blocks.append(prev_block) code = [] i += 1 return blocks
Given a sequence of bytecodes, return basic blocks. This will split the code at "basic block boundaries". These occur at every instruction that is jumped to, and after every instruction that jumps somewhere else (or returns / aborts). Args: bytecode: A list of instances of opcodes.Opcode. (E.g. returned from opcodes.dis()) processed_blocks: A set of Block instances that have already been processed. python_version: The target Python version, as a (major, minor) tuple. Returns: A list of Block instances.
github-repos
def _tf_predict(model_dir, input_csvlines): with tf.Graph().as_default(), tf.Session() as sess: input_alias_map, output_alias_map = _tf_load_model(sess, model_dir) csv_tensor_name = list(input_alias_map.values())[0] results = sess.run(fetches=output_alias_map, feed_dict={csv_tensor_name: input_csvlines}) if len(input_csvlines) == 1: for k, v in six.iteritems(results): if not isinstance(v, (list, np.ndarray)): results[k] = [v] for k, v in six.iteritems(results): if any(isinstance(x, bytes) for x in v): results[k] = [x.decode('utf-8') for x in v] return results
Prediction with a tf savedmodel. Args: model_dir: directory that contains a saved model input_csvlines: list of csv strings Returns: Dict in the form tensor_name:prediction_list. Note that the value is always a list, even if there was only 1 row in input_csvlines.
juraj-google-style
def moving_average_variables(scope=None): return ops.get_collection(ops.GraphKeys.MOVING_AVERAGE_VARIABLES, scope)
Returns all variables that maintain their moving averages. If an `ExponentialMovingAverage` object is created and the `apply()` method is called on a list of variables, these variables will be added to the `GraphKeys.MOVING_AVERAGE_VARIABLES` collection. This convenience function returns the contents of that collection. Args: scope: (Optional.) A string. If supplied, the resulting list is filtered to include only items whose `name` attribute matches `scope` using `re.match`. Items without a `name` attribute are never returned if a scope is supplied. The choice of `re.match` means that a `scope` without special tokens filters by prefix. Returns: A list of Variable objects.
github-repos
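A small graph-mode sketch, assuming the tf.compat.v1 API; the variable and decay value are placeholders.

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()
w = tf.Variable(tf.zeros([3]), name='w')
ema = tf.train.ExponentialMovingAverage(decay=0.99)
update_op = ema.apply([w])  # adds w to GraphKeys.MOVING_AVERAGE_VARIABLES

# Returns [w]; an optional scope argument filters results by name prefix.
averaged = tf.moving_average_variables()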
def protocol(alias_name, default=None, allow_none=False): warnings.warn('Will be removed in v1.0', DeprecationWarning, stacklevel=2) try: return _split_docker_link(alias_name)[0] except KeyError as err: if default or allow_none: return default else: raise err
Get the protocol from the docker link alias or return the default. Args: alias_name: The docker link alias default: The default value if the link isn't available allow_none: If the return value can be `None` (i.e. optional) Examples: Assuming a Docker link was created with ``docker --link postgres:db`` and the resulting environment variable is ``DB_PORT=tcp://172.17.0.82:5432``. >>> envitro.docker.protocol('DB') tcp
juraj-google-style
def compress_dir(path, compression='gz'): for (parent, subdirs, files) in os.walk(path): for f in files: compress_file(os.path.join(parent, f), compression=compression)
Recursively compresses all files in a directory. Note that this compresses all files singly, i.e., it does not create a tar archive. For that, just use Python tarfile class. Args: path (str): Path to parent directory. compression (str): A compression mode. Valid options are "gz" or "bz2". Defaults to gz.
codesearchnet
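A hypothetical call; the directory path is a placeholder and compress_file is assumed to come from the same module.

# Gzip every file under ./results individually (no tar archive is created),
# e.g. ./results/run1/log.txt becomes ./results/run1/log.txt.gz.
compress_dir('./results', compression='gz')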
def WriteTaskStart(self): self._RaiseIfNotWritable() if (self._storage_type != definitions.STORAGE_TYPE_TASK): raise IOError('Unsupported storage type.') task_start = self._task.CreateTaskStart() self._storage_file.WriteTaskStart(task_start)
Writes task start information. Raises: IOError: if the storage type is not supported or when the storage writer is closed. OSError: if the storage type is not supported or when the storage writer is closed.
codesearchnet
def extract_version(exepath, version_arg, word_index=-1, version_rank=3): if isinstance(version_arg, basestring): version_arg = [version_arg] args = [exepath] + version_arg stdout, stderr, returncode = _run_command(args) if returncode: raise RezBindError("failed to execute %s: %s\n(error code %d)" % (exepath, stderr, returncode)) stdout = stdout.strip().split('\n')[0].strip() log("extracting version from output: '%s'" % stdout) try: strver = stdout.split()[word_index] toks = strver.replace('.', ' ').replace('-', ' ').split() strver = '.'.join(toks[:version_rank]) version = Version(strver) except Exception as e: raise RezBindError("failed to parse version from output '%s': %s" % (stdout, str(e))) log("extracted version: '%s'" % str(version)) return version
Run an executable and get the program version. Args: exepath: Filepath to executable. version_arg: Arg to pass to program, eg "-V". Can also be a list. word_index: Expect the Nth word of output to be the version. version_rank: Cap the version to this many tokens. Returns: `Version` object.
juraj-google-style
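An illustrative call, assuming a Unix-like system where the interpreter prints something like 'Python 3.10.12' to stdout; the exact output and path will vary.

# 'Python 3.10.12' -> last word '3.10.12' -> capped at two tokens -> Version('3.10')
version = extract_version('/usr/bin/python3', '--version',
                          word_index=-1, version_rank=2)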
def init_args(cls): try: argspec = getargspec(cls) except TypeError: argspec = getargspec(cls.__init__) args = argspec.args if args[0] == 'self': args.remove('self') return args
Return the __init__ args (minus 'self') for @cls Args: cls: class, instance or callable Returns: list of str, the arguments minus 'self'
juraj-google-style
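A small sketch of init_args on a made-up class; getargspec is deprecated on newer Python versions, so this mirrors the legacy behaviour.

class Widget(object):
    def __init__(self, name, size=3):
        self.name = name
        self.size = size

# Whether getargspec accepts the class directly or the fallback inspects
# Widget.__init__, 'self' is stripped from the result.
assert init_args(Widget) == ['name', 'size']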
def element_at(self, index): if self.closed(): raise ValueError('Attempt to call element_at() on a closed Queryable.') if (index < 0): raise OutOfRangeError('Attempt to use negative index.') try: return self._iterable[index] except IndexError: raise OutOfRangeError('Index out of range.') except TypeError: pass for (i, item) in enumerate(self): if (i == index): return item raise OutOfRangeError('element_at(index) out of range.')
Return the element at ordinal index. Note: This method uses immediate execution. Args: index: The index of the element to be returned. Returns: The element at ordinal index in the source sequence. Raises: ValueError: If the Queryable is closed(). ValueError: If index is out of range.
codesearchnet
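A hedged usage sketch, assuming an asq-style Queryable that exposes this element_at method; the data and import path are illustrative.

from asq.initiators import query  # assumption: asq-style initiator

q = query([10, 20, 30])
assert q.element_at(1) == 20
# q.element_at(5) would raise OutOfRangeError, and calling element_at on a
# closed Queryable raises ValueError.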
def HasDefinition(self, name): return name in self.consts or name in self.roles or name in self.states or (name in self.qualifiers) or (name in self.messages) or (name in self.events) or (name in self.transitions)
Whether this module has a named object |name|. Args: name: The string name of the object to look for. Returns: True if this module has an object with name |name|, False otherwise.
github-repos
def __init__(self, token_provider): if token_provider is None: raise ValueError("token_provider is required") if not isinstance(token_provider, TokenProvider): raise ValueError("token_provider must be instance of TokenProvider") self.__token = token_provider.get_token()
The Neurio API client. Args: token_provider (TokenProvider): object providing authentication services
juraj-google-style
def _as_indexed_slices_list(inputs, optimize=True): if not isinstance(inputs, (list, tuple)): raise TypeError(f'Expected a list or tuple, not {type(inputs)}.') outputs = [_as_indexed_slices(i, optimize=optimize) for i in inputs] with_int32_index = [o.indices for o in outputs if o.indices.dtype == dtypes.int32] if not with_int32_index or len(with_int32_index) == len(outputs): return outputs casted_outputs = [] for o in outputs: if o.indices.dtype == dtypes.int32: casted_outputs.append(indexed_slices.IndexedSlices(o.values, cast(o.indices, dtypes.int64), o.dense_shape)) else: casted_outputs.append(o) return casted_outputs
Convert all elements of 'inputs' to IndexedSlices. Additionally, homogenize the types of all the indices to either int32 or int64. Args: inputs: List containing either Tensor or IndexedSlices objects. optimize: if true, attempt to optimize the conversion of each input. Returns: A list of IndexedSlices objects. Raises: TypeError: If 'inputs' is not a list or a tuple.
github-repos
def user_warning(channel, user, warnings, max_warnings): username = user.name if isinstance(user, discord.Member): if user.nick is not None: username = user.nick warning_count_text = "warnings" if warnings != 1 else "warning" warning_text = "{} {}".format(warnings, warning_count_text) result_text = "at {} you will be banned".format(max_warnings) if warnings >= max_warnings: result_text = "you are being banned because you have more than the maximum warnings" gui = ui_embed.UI( channel, "Warning {}".format(username), "You now have {} {}, {}".format(warning_text, username, result_text), modulename=modulename ) return gui
Creates an embed UI containing a user warning message Args: channel (discord.Channel): The Discord channel to bind the embed to user (discord.User): The user to warn warnings (int): The user's current number of warnings max_warnings (int): The warning count at which the user is banned Returns: ui (ui_embed.UI): The embed UI object
juraj-google-style
def click_slot(self, slot, right=False): if isinstance(slot, int): slot = self.window.slots[slot] button = (constants.INV_BUTTON_RIGHT if right else constants.INV_BUTTON_LEFT) return self.send_click(windows.SingleClick(slot, button))
Left-click or right-click the slot. Args: slot (Slot): The clicked slot. Can be ``Slot`` instance or integer. Set to ``inventory.cursor_slot`` for clicking outside the window.
codesearchnet
def read_init() -> Dict[str, List[str]]: with open(os.path.join(PATH_TO_TRANSFORMERS, '__init__.py'), 'r', encoding='utf-8', newline='\n') as f: lines = f.readlines() line_index = 0 while not lines[line_index].startswith('if TYPE_CHECKING'): line_index += 1 backend_specific_objects = {} while line_index < len(lines): backend = find_backend(lines[line_index]) if backend is not None: while not lines[line_index].startswith(' else:'): line_index += 1 line_index += 1 objects = [] while len(lines[line_index]) <= 1 or lines[line_index].startswith(' ' * 8): line = lines[line_index] single_line_import_search = _re_single_line_import.search(line) if single_line_import_search is not None: objects.extend(single_line_import_search.groups()[0].split(', ')) elif line.startswith(' ' * 12): objects.append(line[12:-2]) line_index += 1 backend_specific_objects[backend] = objects else: line_index += 1 return backend_specific_objects
Read the init and extract backend-specific objects. Returns: Dict[str, List[str]]: A dictionary mapping backend name to the list of object names requiring that backend.
github-repos
def set(self, time): self._time = time self._pb.sec = int(self._time) self._pb.nsec = int(((self._time - self._pb.sec) * (10 ** 9)))
Sets time in seconds since Epoch Args: time (:obj:`float`): time in seconds since Epoch (see time.time()) Returns: None
codesearchnet
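A worked example of the seconds/nanoseconds split performed by set(); plain floats stand in for the protobuf-backed fields.

t = 1234.567890123          # seconds since Epoch, e.g. from time.time()
sec = int(t)                # 1234
nsec = int((t - sec) * 10 ** 9)
# nsec is roughly 567890123; float truncation may make it off by one.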
def volume( self ): return np.dot( self.matrix[0], np.cross( self.matrix[1], self.matrix[2] ) )
The cell volume. Args: None Returns: (float): The cell volume.
juraj-google-style
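The volume is the scalar triple product a · (b × c); a quick standalone check with numpy on an illustrative orthorhombic cell matrix.

import numpy as np

matrix = np.array([[2.0, 0.0, 0.0],
                   [0.0, 3.0, 0.0],
                   [0.0, 0.0, 4.0]])
volume = np.dot(matrix[0], np.cross(matrix[1], matrix[2]))
assert volume == 24.0  # 2 * 3 * 4 for an orthorhombic cell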
def _is_png(contents, name=None): with ops.name_scope(name, 'is_png'): substr = string_ops.substr(contents, 0, 3) return math_ops.equal(substr, b'\x89PN', name=name)
Convenience function to check if the 'contents' encodes a PNG image. Args: contents: 0-D `string`. The encoded image bytes. name: A name for the operation (optional) Returns: A scalar boolean tensor indicating if 'contents' may be a PNG image. is_png is susceptible to false positives.
github-repos
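A quick sketch, assuming eager TensorFlow; the byte string is just the PNG magic prefix, not a full image.

import tensorflow as tf

contents = tf.constant(b'\x89PNG\r\n\x1a\n' + b'\x00' * 8)
maybe_png = _is_png(contents)  # scalar bool tensor; True for this prefix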
def join(self, basepath, *paths): return os.path.join(basepath, *paths)
Join two or more pathname components for the filesystem Args: basepath: string path of the first component of the path paths: path components to be added Returns: full path after combining all the passed components
github-repos
def get_config(self): raise NotImplementedError(f'{self} does not implement get_config()')
Returns the config of the regularizer. An regularizer config is a Python dictionary (serializable) containing all configuration parameters of the regularizer. The same regularizer can be reinstantiated later (without any saved state) from this configuration. This method is optional if you are just training and executing models, exporting to and from SavedModels, or using weight checkpoints. This method is required for Keras `model_to_estimator`, saving and loading models to HDF5 formats, Keras model cloning, some visualization utilities, and exporting models to and from JSON. Returns: Python dictionary.
github-repos
def ExpandSuperClasses(self, t): superclasses = set() self._CollectSuperclasses(t, superclasses) return superclasses
Generate a list of all (known) superclasses for a type. Arguments: t: A type name. E.g. "int". Returns: A set of types. This set includes t as well as all its superclasses. For example, this will return "bool", "int" and "object" for "bool".
github-repos
def truncate_repetitions(text: str, min_len: int=30) -> str: text_lower = text.lower() text_length = len(text_lower) if text_length < 2 * min_len: return text max_repetition_length = None for repetition_length in range(min_len, int(text_length / 2)): same = True for i in range(0, repetition_length): if text_lower[text_length - repetition_length - i - 1] != text_lower[text_length - i - 1]: same = False break if same: max_repetition_length = repetition_length if max_repetition_length is None: return text lcs = text_lower[-max_repetition_length:] substituted_text = text substituted_text_lower = text_lower while substituted_text_lower.endswith(lcs): substituted_text = substituted_text[:-max_repetition_length] substituted_text_lower = substituted_text_lower[:-max_repetition_length] repeating_tail = text_lower[len(substituted_text_lower):] substituted_text_lower_out = substituted_text_lower while True: sentence_end = find_next_punctuation(text_lower, len(substituted_text_lower_out)) sentence_start = find_next_punctuation(text_lower[::-1], len(substituted_text_lower_out)) if sentence_end and sentence_start: sentence = text_lower[sentence_start:sentence_end] substituted_text_lower_out = text_lower[:sentence_end + 1] if sentence in repeating_tail: break else: break text_out = text[:len(substituted_text_lower_out)] return text_out
Attempt to truncate repeating segments in the input string. This function looks for the longest repeating substring at the end of the input string and truncates it to appear only once. To be considered for removal, repetitions need to be continuous. Args: text (`str`): The input raw prediction to be truncated. min_len (int): The minimum length of the repeating segment. Returns: `str`: The input string with repeated segments truncated.
github-repos
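An illustrative call; the repeated sentence is made up, and the function relies on find_next_punctuation from the same module.

text = ("The results are shown in Table 1. " * 4).strip()
# Trailing copies of the repeated sentence are truncated so the repetition
# appears only once (per the docstring); unique text is left untouched.
shortened = truncate_repetitions(text, min_len=10)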
def __init__(self, time: Timestamp, duration: Union[Duration, timedelta], operation: ops.Operation) -> None: self.time = time self.duration = Duration.create(duration) self.operation = operation
Initializes the scheduled operation. Args: time: When the operation starts. duration: How long the operation lasts. operation: The operation.
juraj-google-style
def _ParseKey(self, parser_mediator, registry_key): matching_plugin = None normalized_key_path = self._NormalizeKeyPath(registry_key.path) if self._path_filter.CheckPath(normalized_key_path): matching_plugin = self._plugin_per_key_path[normalized_key_path] else: for plugin in self._plugins_without_key_paths: if self._CanProcessKeyWithPlugin(registry_key, plugin): matching_plugin = plugin break if not matching_plugin: matching_plugin = self._default_plugin if matching_plugin: self._ParseKeyWithPlugin(parser_mediator, registry_key, matching_plugin)
Parses the Registry key with a specific plugin. Args: parser_mediator (ParserMediator): parser mediator. registry_key (dfwinreg.WinRegistryKey): Windows Registry key.
juraj-google-style
def _compute_valid(self):
    if (self._dimension != 2):
        raise NotImplementedError('Validity check only implemented in R^2')
    poly_sign = None
    if (self._degree == 1):
        first_deriv = (self._nodes[:, 1:] - self._nodes[:, :-1])
        poly_sign = _SIGN(np.linalg.det(first_deriv))
    elif (self._degree == 2):
        bernstein = _surface_helpers.quadratic_jacobian_polynomial(self._nodes)
        poly_sign = _surface_helpers.polynomial_sign(bernstein, 2)
    elif (self._degree == 3):
        bernstein = _surface_helpers.cubic_jacobian_polynomial(self._nodes)
        poly_sign = _surface_helpers.polynomial_sign(bernstein, 4)
    else:
        raise _helpers.UnsupportedDegree(self._degree, supported=(1, 2, 3))
    return (poly_sign == 1)
r"""Determines if the current surface is "valid". Does this by checking if the Jacobian of the map from the reference triangle is everywhere positive. Returns: bool: Flag indicating if the current surface is valid. Raises: NotImplementedError: If the surface is in a dimension other than :math:`\mathbf{R}^2`. .UnsupportedDegree: If the degree is not 1, 2 or 3.
codesearchnet
def _parse_config(self, requires_cfg=True): if len(self.config_paths) > 0: try: self._find_config() except BisonError: if not requires_cfg: return raise try: with open(self.config_file, 'r') as f: parsed = self._fmt_to_parser[self.config_format](f) except Exception as e: raise BisonError( 'Failed to parse config file: {}'.format(self.config_file) ) from e self._full_config = None self._config = parsed
Parse the configuration file, if one is configured, and add it to the `Bison` state. Args: requires_cfg (bool): Specify whether or not parsing should fail if a config file is not found. (default: True)
juraj-google-style
def save_data_files(vr, bs, prefix=None, directory=None):
    filename = ('{}_band.dat'.format(prefix) if prefix else 'band.dat')
    directory = (directory if directory else '.')
    filename = os.path.join(directory, filename)
    if bs.is_metal():
        zero = vr.efermi
    else:
        zero = bs.get_vbm()['energy']
    with open(filename, 'w') as f:
        # Two-column header: k-point distance and eigenvalue relative to zero.
        header = '#k-distance eigenvalue[eV]\n'
        f.write(header)
        for band in bs.bands[Spin.up]:
            for (d, e) in zip(bs.distance, band):
                f.write('{:.8f} {:.8f}\n'.format(d, (e - zero)))
            f.write('\n')
        if bs.is_spin_polarized:
            for band in bs.bands[Spin.down]:
                for (d, e) in zip(bs.distance, band):
                    f.write('{:.8f} {:.8f}\n'.format(d, (e - zero)))
                f.write('\n')
    return filename
Write the band structure data files to disk. Args: vr (`Vasprun`): Pymatgen `Vasprun` object. bs (`BandStructureSymmLine`): Calculated band structure. prefix (`str`, optional): Prefix for data file. directory (`str`, optional): Directory in which to save the data. Returns: The filename of the written data file.
codesearchnet
def _merge_tensors(t1, t2, name, validate): if t1 is None: return (t2, False) elif t2 is None: return (t1, False) elif t1 is t2: return (t1, True) else: err_msg = 'RowPartition._merge_precomputed_encodings: partitions have incompatible %s' % name if not t1.shape.is_compatible_with(t2.shape): raise ValueError(err_msg) if validate: checks = [check_ops.assert_equal(t1, t2, message=err_msg)] return (control_flow_ops.with_dependencies(checks, t1), True) else: return (t1, False)
Merge two optional Tensors with equal values into a single Tensor. Args: t1: tf.Tensor or None t2: tf.Tensor or None name: A name for the tensors (for error messages) validate: If true, then check that `t1` is compatible with `t2` (if both are non-None). Returns: A pair `(merged_value, validated)`: * `merged_value` is `t1` if it is not None; or `t2` otherwise. * `validated` is true if we validated that t1 and t2 are equal (either by adding a check, or because t1 is t2).
github-repos
def _get_combined_properties(self, dev): return (dev.job if dev.job is not None else self.job, dev.replica if dev.replica is not None else self.replica, dev.task if dev.task is not None else self.task, dev.device_type if dev.device_type is not None else self.device_type, dev.device_index if dev.device_index is not None else self.device_index)
Combine the current DeviceSpec with another DeviceSpec. The combination of DeviceSpecs will give priority to dev. Args: dev: a `DeviceSpec` Returns: A tuple of (job, replica, task, device_type, device_index) which represents the combination of self and dev.
github-repos
def lookup(self, iterable, index=0, gather=False, edit_distance=0, max_edit_distance=0, match_threshold=0.0, matched_length=0): if self.is_terminal: if index == len(iterable) or \ (gather and index < len(iterable) and iterable[index] == ' '): confidence = float(len(self.key) - edit_distance) / float(max(len(self.key), index)) if confidence > match_threshold: yield { 'key': self.key, 'match': iterable[:index], 'data': self.data, 'confidence': confidence * self.weight } if index < len(iterable) and iterable[index] in self.children: for result in self.children[iterable[index]]\ .lookup(iterable, index + 1, gather=gather, edit_distance=edit_distance, max_edit_distance=max_edit_distance, matched_length=matched_length + 1): yield result potential_confidence = float(index - edit_distance + (max_edit_distance - edit_distance)) / \ (float(index) + (max_edit_distance - edit_distance)) if index + max_edit_distance - edit_distance > 0 else 0.0 if edit_distance < max_edit_distance and potential_confidence > match_threshold: for child in list(self.children): if index >= len(iterable) or child != iterable[index]: for result in self.children[child]\ .lookup(iterable, index + 1, gather=gather, edit_distance=edit_distance + 1, max_edit_distance=max_edit_distance, matched_length=matched_length): yield result for result in self.children[child]\ .lookup(iterable, index + 2, gather=gather, edit_distance=edit_distance + 1, max_edit_distance=max_edit_distance, matched_length=matched_length): yield result for result in self.children[child]\ .lookup(iterable, index, gather=gather, edit_distance=edit_distance + 1, max_edit_distance=max_edit_distance, matched_length=matched_length): yield result
Lookup the iterable in the trie, yielding matches and, optionally, fuzzy matches within a maximum edit distance. Args: iterable(list?): key used to find what is requested; this could be a generator. index(int): index of what is requested gather(bool): whether to gather or not edit_distance(int): the edit distance accumulated so far in the search max_edit_distance(int): the maximum edit distance allowed match_threshold(float): minimum confidence a match must reach to be yielded matched_length(int): number of characters matched so far yields: object: yields the results of the search
juraj-google-style