Columns: code (string, 20–4.93k chars) · docstring (string, 33–1.27k chars) · source (string, 3 classes)
def determine_final_config(config_module):
    config = Config(DEFAULT_LIBRARY_RC_ADDITIONS, DEFAULT_LIBRARY_RC_REPLACEMENTS,
                    DEFAULT_TEST_RC_ADDITIONS, DEFAULT_TEST_RC_REPLACEMENTS)
    for field in config._fields:
        if hasattr(config_module, field):
            config = config._replace(**{field: getattr(config_module, field)})
    return config
Determines the final additions and replacements. Combines the config module with the defaults. Args: config_module: The loaded local configuration module. Returns: Config: the final configuration.
codesearchnet
def read(self, filename, binary_mode=False, size=None, offset=None):
    mode = "rb" if binary_mode else "r"
    with io.open(filename, mode) as f:
        if offset is not None:
            f.seek(offset)
        if size is not None:
            return f.read(size)
        else:
            return f.read()
Reads contents of a file to a string. Args: filename: string, a path binary_mode: bool, read as binary if True, otherwise text size: int, number of bytes or characters to read, otherwise read all the contents of the file from the offset offset: int, offset into file to read from, otherwise read from the very beginning Returns: Subset of the contents of the file as a string or bytes.
juraj-google-style
def event(self, name, **kwargs):
    group_obj = Event(name, **kwargs)
    return self._group(group_obj)
Add Event data to Batch object. Args: name (str): The name for this Group. date_added (str, kwargs): The date timestamp the Indicator was created. event_date (str, kwargs): The event datetime expression for this Group. status (str, kwargs): The status for this Group. xid (str, kwargs): The external id for this Group. Returns: obj: An instance of Event.
juraj-google-style
def emit_flow_end(self, name: str, timestamp: int, pid: int, tid: int, flow_id: int) -> None:
    event = self._create_event('t', 'DataFlow', name, pid, tid, timestamp)
    event['id'] = flow_id
    self._events.append(event)
Adds a flow end event to the trace. When matched with a flow start event (with the same 'flow_id') this will cause the trace viewer to draw an arrow between the start and end events. Args: name: The event name as a string. timestamp: The timestamp of this event as a long integer. pid: Identifier of the process generating this event as an integer. tid: Identifier of the thread generating this event as an integer. flow_id: Identifier of the flow as an integer.
github-repos
def get_rollout_from_id(self, rollout_id):
    layer = self.rollout_id_map.get(rollout_id)
    if layer:
        return layer
    self.logger.error('Rollout with ID "%s" is not in datafile.' % rollout_id)
    return None
Get rollout for the provided ID. Args: rollout_id: ID of the rollout to be fetched. Returns: Rollout corresponding to the provided ID.
codesearchnet
def write_index(fn, index):
    with open(fn, "wb") as o_file:
        o_file.write(_CHECK_STRING)
        o_file.write(zlib.compress(bytes(
            index.to_csv(None, index=False, encoding="utf-8"),
            encoding="utf-8",
        )))
Writes the index to file. Args: fn (str): the name of the file that will contain the index. index (pandas.DataFrame): the index.
juraj-google-style
def createURL(self, word, mode='phonefy'):
    try:
        return self.modes[mode]['url'].format(placeholder=urllib.pathname2url(word))
    except:
        if mode == 'base':
            if word[0] == '/':
                return (self.baseURL + word[1:], word)
            else:
                return self.baseURL + word
        else:
            try:
                return self.url[mode].replace('<' + mode + '>', urllib.pathname2url(word))
            except:
                pass
    return None
Method to create the URL replacing the word in the appropriate URL. Args: ----- word: Word to be searched. mode: Mode to be executed. Return: ------- The URL to be queried.
codesearchnet
def AddDatastore(self, urn):
    if urn not in self._datastores:
        self._datastores.add(urn)
        return True
    return False
Adds a datastore URN as a source. Args: urn: an RDF URN value of the datastore. Returns: True if the datastore is not an already existing source.
codesearchnet
def substitute(self, var_map, cont=False, tag=None): return self.apply(substitute, var_map=var_map, cont=cont, tag=tag)
Substitute sub-expressions both on the lhs and rhs Args: var_map (dict): Dictionary with entries of the form ``{expr: substitution}``
codesearchnet
def remove_pad(x, pad_remover, mode):
    x = expert_utils.flatten_all_but_last(x)
    if mode != ModeKeys.PREDICT:
        x = pad_remover.remove(x)
    x = tf.expand_dims(x, axis=0)
    return x
Remove padding by concatenating all dimension into one. Args: x (tf.Tensor): input of shape [batch_size, length, depth] pad_remover (obj): a PadRemover object mode (ModeKeys): infer, train or eval. If inference, the padding remover is not applied Returns: tf.Tensor of shape [1,length_nonpad,depth] where length_nonpad <= batch_size*length
juraj-google-style
def process_usufy(self, data):
    mode = "usufy"
    info = []
    try:
        verifier = self.modes.get(mode, {}).get("extra_fields", {})
        for field in verifier.keys():
            regexp = verifier[field]
            values = re.findall(regexp, data)
            for val in values:
                aux = {}
                aux["type"] = field
                aux["value"] = val
                aux["attributes"] = []
                if aux not in info:
                    info.append(aux)
    except AttributeError as e:
        for field in self.fieldsRegExp[mode].keys():
            try:
                regexp = self.fieldsRegExp[mode][field]["start"] + "([^\)]+)" + self.fieldsRegExp[mode][field]["end"]
                tmp = re.findall(regexp, data)
                values = []
                for t in tmp:
                    if self.fieldsRegExp[mode][field]["end"] in t:
                        values.append(t.split(self.fieldsRegExp[mode][field]["end"])[0])
                    else:
                        values.append(t)
            except:
                regexp = self.fieldsRegExp[mode][field]
                values = re.findall(regexp, data)
            for val in values:
                aux = {}
                aux["type"] = field
                aux["value"] = val
                aux["attributes"] = []
                if aux not in info:
                    info.append(aux)
    return info
Method to process and extract the entities of a usufy Args: ----- data: The information from which the info will be extracted. Return: ------- A list of the entities found.
juraj-google-style
def train_on_batch(self, data: List[Iterable], labels: Iterable[list]) -> None:
    X, Y = self._transform_batch(data, labels)
    self.model_.train_on_batch(X, Y)
Trains model on a single batch Args: data: a batch of word sequences labels: a batch of correct tag sequences Returns: the trained model
codesearchnet
def ContainsIgnoreCase(self, value):
    self._awql = self._CreateSingleValueCondition(value, 'CONTAINS_IGNORE_CASE')
    return self._query_builder
Sets the type of the WHERE clause as "contains ignore case". Args: value: The value to be used in the WHERE condition. Returns: The query builder that this WHERE builder links to.
codesearchnet
def add_option(self, section, name, value):
    if self._is_live():
        raise RuntimeError('Submitted units cannot update their options')
    option = {'section': section, 'name': name, 'value': value}
    self._data['options'].append(option)
    return True
Add an option to a section of the unit file Args: section (str): The name of the section, If it doesn't exist it will be created name (str): The name of the option to add value (str): The value of the option Returns: True: The item was added
codesearchnet
def get_direct_band_gap_dict(self):
    if self.is_metal():
        raise ValueError('get_direct_band_gap_dict should only be used with non-metals')
    direct_gap_dict = {}
    for spin, v in self.bands.items():
        above = v[np.all(v > self.efermi, axis=1)]
        min_above = np.min(above, axis=0)
        below = v[np.all(v < self.efermi, axis=1)]
        max_below = np.max(below, axis=0)
        diff = min_above - max_below
        kpoint_index = np.argmin(diff)
        band_indices = [np.argmax(below[:, kpoint_index]),
                        np.argmin(above[:, kpoint_index]) + len(below)]
        direct_gap_dict[spin] = {'value': diff[kpoint_index],
                                 'kpoint_index': kpoint_index,
                                 'band_indices': band_indices}
    return direct_gap_dict
Returns a dictionary of information about the direct band gap Returns: a dictionary of the band gaps indexed by spin along with their band indices and k-point index
codesearchnet
def _line_is_numpy_parameter_type(line_info):
    line_stripped = line_info.remaining.strip()
    if ':' in line_stripped:
        previous_indent = line_info.previous.indentation
        current_indent = line_info.indentation
        if ':' in line_info.previous.line and current_indent > previous_indent:
            return False
        else:
            return True
    return False
Returns whether the line contains a numpy style parameter type definition. We look for a line of the form: x : type And we have to exclude false positives on argument descriptions containing a colon by checking the indentation of the line above. Args: line_info: Information about the current line. Returns: True if the line is a numpy parameter type definition, False otherwise.
github-repos
def participant_from_submission_path(submission_path):
    basename = os.path.basename(submission_path)
    file_ext = None
    for e in ALLOWED_EXTENSIONS:
        if basename.endswith(e):
            file_ext = e
            break
    if not file_ext:
        raise ValueError('Invalid submission path: ' + submission_path)
    basename = basename[:-len(file_ext)]
    if basename.isdigit():
        return {'team_id': int(basename)}
    if basename.startswith('baseline_'):
        return {'baseline_id': basename[len('baseline_'):]}
    raise ValueError('Invalid submission path: ' + submission_path)
Parses type of participant based on submission filename. Args: submission_path: path to the submission in Google Cloud Storage Returns: dict with one element. Element key corresponds to type of participant (team, baseline), element value is ID of the participant. Raises: ValueError: if participant can't be determined based on submission path.
codesearchnet
def properties(cls, with_bases=True):
    if with_bases:
        return accumulate_from_superclasses(cls, "__properties__")
    else:
        return set(cls.__properties__)
Collect the names of properties on this class. This method *optionally* traverses the class hierarchy and includes properties defined on any parent classes. Args: with_bases (bool, optional) : Whether to include properties defined on parent classes in the results. (default: True) Returns: set[str] : property names
juraj-google-style
def get_special_tokens_mask(self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False) -> List[int]:
    if already_has_special_tokens:
        return super().get_special_tokens_mask(token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True)
    if token_ids_1 is not None:
        return [1] + [0] * len(token_ids_0) + [1] + [0] * len(token_ids_1)
    return [1] + [0] * len(token_ids_0)
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. already_has_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not the token list is already formatted with special tokens for the model. Returns: `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
github-repos
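The entry above defines the standard special-tokens mask used by tokenizers. A minimal sketch of what the returned mask looks like for a single sequence, using hypothetical token ids and only the list arithmetic from the method body (no real tokenizer is instantiated):

```python
# Hypothetical ids for a three-token sequence; the leading 1 marks the single
# special token the tokenizer prepends when no second sequence is given.
token_ids_0 = [101, 2023, 2003]
mask = [1] + [0] * len(token_ids_0)
print(mask)  # [1, 0, 0, 0]
```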
def hinge(y_true, y_pred):
    y_pred = ops.convert_to_tensor(y_pred)
    y_true = ops.cast(y_true, dtype=y_pred.dtype)
    y_true = ops.convert_to_tensor(y_true)
    y_true = convert_binary_labels_to_hinge(y_true)
    return ops.mean(ops.maximum(1.0 - y_true * y_pred, 0.0), axis=-1)
Computes the hinge loss between `y_true` & `y_pred`. Formula: ```python loss = mean(maximum(1 - y_true * y_pred, 0), axis=-1) ``` Args: y_true: The ground truth values. `y_true` values are expected to be -1 or 1. If binary (0 or 1) labels are provided they will be converted to -1 or 1 with shape = `[batch_size, d0, .. dN]`. y_pred: The predicted values with shape = `[batch_size, d0, .. dN]`. Returns: Hinge loss values with shape = `[batch_size, d0, .. dN-1]`. Example: >>> y_true = np.random.choice([-1, 1], size=(2, 3)) >>> y_pred = np.random.random(size=(2, 3)) >>> loss = keras.losses.hinge(y_true, y_pred)
github-repos
def decode(self, targets, encoder_outputs, attention_bias):
    with tf.name_scope('decode'):
        decoder_inputs = self.embedding_softmax_layer(targets)
        with tf.name_scope('shift_targets'):
            decoder_inputs = tf.pad(decoder_inputs, [[0, 0], [1, 0], [0, 0]])[:, :-1, :]
        with tf.name_scope('add_pos_encoding'):
            length = tf.shape(decoder_inputs)[1]
            decoder_inputs += model_utils.get_position_encoding(length, self.params.hidden_size)
        if self.train:
            mlperf_log.transformer_print(key=mlperf_log.MODEL_HP_LAYER_POSTPROCESS_DROPOUT,
                                         value=self.params.layer_postprocess_dropout)
            decoder_inputs = tf.nn.dropout(decoder_inputs, 1 - self.params.layer_postprocess_dropout)
        decoder_self_attention_bias = model_utils.get_decoder_self_attention_bias(length)
        outputs = self.decoder_stack(decoder_inputs, encoder_outputs, decoder_self_attention_bias, attention_bias)
        logits = self.embedding_softmax_layer.linear(outputs)
        return logits
Generate logits for each value in the target sequence. Args: targets: target values for the output sequence. int tensor with shape [batch_size, target_length] encoder_outputs: continuous representation of input sequence. float tensor with shape [batch_size, input_length, hidden_size] attention_bias: float tensor with shape [batch_size, 1, 1, input_length] Returns: float32 tensor with shape [batch_size, target_length, vocab_size]
codesearchnet
def _allocate_subnets(self, conf):
    allocated_subnets = []
    try:
        for net_spec in conf.get('nets', {}).itervalues():
            if net_spec['type'] != 'nat':
                continue
            gateway = net_spec.get('gw')
            if gateway:
                allocated_subnet = self._subnet_store.acquire(self.paths.uuid(), gateway)
            else:
                allocated_subnet = self._subnet_store.acquire(self.paths.uuid())
            net_spec['gw'] = str(allocated_subnet.iter_hosts().next())
            allocated_subnets.append(allocated_subnet)
    except:
        self._subnet_store.release(allocated_subnets)
        raise
    return (allocated_subnets, conf)
Allocate all the subnets needed by the given configuration spec Args: conf (dict): Configuration spec where to get the nets definitions from Returns: tuple(list, dict): allocated subnets and modified conf
codesearchnet
def pair_wise_dice_loss(inputs: Tensor, labels: Tensor) -> Tensor:
    inputs = inputs.sigmoid().flatten(1)
    numerator = 2 * torch.matmul(inputs, labels.T)
    denominator = inputs.sum(-1)[:, None] + labels.sum(-1)[None, :]
    loss = 1 - (numerator + 1) / (denominator + 1)
    return loss
A pair wise version of the dice loss, see `dice_loss` for usage. Args: inputs (`torch.Tensor`): A tensor representing a mask labels (`torch.Tensor`): A tensor with the same shape as inputs. Stores the binary classification labels for each element in inputs (0 for the negative class and 1 for the positive class). Returns: `torch.Tensor`: The computed loss between each pairs.
github-repos
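As a quick shape check for the pairwise dice loss above (a sketch assuming `torch` and the function are importable): two predicted masks scored against three ground-truth masks, each flattened to four pixels, yield a 2×3 cost matrix.

```python
import torch

inputs = torch.randn(2, 4)                    # raw logits for 2 predicted masks
labels = torch.randint(0, 2, (3, 4)).float()  # 3 binary ground-truth masks
cost = pair_wise_dice_loss(inputs, labels)
print(cost.shape)  # torch.Size([2, 3])
```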
def unpack_rpc_payload(resp_format, payload):
    code = _create_argcode(resp_format, payload)
    return struct.unpack(code, payload)
Unpack an RPC payload according to resp_format. Args: resp_format (str): a struct format code (without the <) for the parameter format for this RPC. This format code may include the final character V, which means that it expects a variable length bytearray. payload (bytes): The binary payload that should be unpacked. Returns: list: A list of the unpacked payload items.
codesearchnet
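The helper `_create_argcode` that builds the final format string is internal to that codebase, so the following is only an illustration of the underlying `struct` idea (fixed-format part only, without the variable-length `V` handling):

```python
import struct

# One uint16 followed by one uint8, little-endian.
payload = struct.pack("<HB", 0x1234, 7)
print(struct.unpack("<HB", payload))  # (4660, 7)
```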
def compile_keywords(keywords):
    mdt = []
    cz_keywords = []
    en_keywords = []
    for keyword in keywords:
        keyword = keyword_to_info(keyword.encode("utf-8"))
        if not keyword:
            continue
        cz_keywords.append({
            "uid": keyword["uid"],
            "zahlavi": keyword["zahlavi"],
            "zdroj": "czenas",
        })
        if keyword.get("mdt"):
            mdt.append({
                "mdt": keyword["mdt"],
                "mrf": keyword["mrf"],
            })
        angl_ekvivalent = keyword.get("angl_ekvivalent")
        if angl_ekvivalent:
            en_keywords.append({
                "zahlavi": angl_ekvivalent,
                "zdroj": keyword.get("zdroj_angl_ekvivalentu") or "eczenas",
            })
    return mdt, cz_keywords, en_keywords
Translate `keywords` to full keyword records as they are used in Aleph. Returns tuple with three lists, each of which is later used in different part of the MRC/MARC record. Args: keywords (list): List of keyword strings. Returns: tuple: (mdt_list, cz_keyword_list, en_keyword_list)
juraj-google-style
def PushItem(self, item, block=True):
    try:
        self._queue.put(item, block=block)
    except Queue.Full as exception:
        raise errors.QueueFull(exception)
Pushes an item onto the queue. Args: item (object): item to add. block (Optional[bool]): True to block the process when the queue is full. Raises: QueueFull: if the item could not be pushed the queue because it's full.
codesearchnet
class FlaxBaseModelOutputWithPoolingAndNoAttention(ModelOutput):
    last_hidden_state: Optional[jnp.ndarray] = None
    pooler_output: Optional[jnp.ndarray] = None
    hidden_states: Optional[Tuple[jnp.ndarray]] = None
Base class for model's outputs that also contains a pooling of the last hidden states. Args: last_hidden_state (`jnp.ndarray` of shape `(batch_size, num_channels, height, width)`): Sequence of hidden-states at the output of the last layer of the model. pooler_output (`jnp.ndarray` of shape `(batch_size, hidden_size)`): Last layer hidden-state after a pooling operation on the spatial dimensions. hidden_states (`tuple(jnp.ndarray)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): Tuple of `jnp.ndarray` (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape `(batch_size, num_channels, height, width)`. Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
github-repos
def download(branch=None, buildMod=False):
    gradlew = "./gradlew"
    if os.name == 'nt':
        gradlew = "gradlew.bat"
    if branch is None:
        branch = malmo_version
    subprocess.check_call(["git", "clone", "-b", branch, "https:
    os.chdir(malmo_install_dir)
    os.chdir("Minecraft")
    try:
        pathlib.Path("src/main/resources/version.properties").write_text("malmomod.version={}\n".format(malmo_version))
        if buildMod:
            subprocess.check_call([gradlew, "setupDecompWorkspace", "build", "testClasses",
                                   "-x", "test", "--stacktrace", "-Pversion={}".format(malmo_version)])
        minecraft_dir = os.getcwd()
    finally:
        os.chdir("../..")
    if "MALMO_XSD_PATH" not in os.environ:
        print("Please make sure you set the MALMO_XSD_PATH environment variable to \"{}/Schemas\"!"
              .format(str(pathlib.Path(malmo_install_dir).absolute())))
    return minecraft_dir
Download Malmo from github and optionaly build the Minecraft Mod. Args: branch: optional branch to clone. Default is release version. buildMod: don't build the Mod unless build arg is given as True. Returns: The path for the Malmo Minecraft mod.
juraj-google-style
def search_globs(path, patterns):
    for pattern in (p for p in patterns if p):
        if pattern.startswith('/'):
            regex = fnmatch.translate(pattern[1:])
            regex = regex.replace('\\Z', '')
            temp_path = path[1:] if path.startswith('/') else path
            m = re.search(regex, temp_path)
            if m and m.start() == 0:
                return True
        else:
            regex = fnmatch.translate(pattern)
            regex = regex.replace('\\Z', '')
            if re.search(regex, path):
                return True
    return False
Test whether the given *path* contains any patterns in *patterns* Args: path (str): A file path to test for matches. patterns (list[str]): A list of glob string patterns to test against. If *path* matches any of those patters, it will return True. Returns: bool: **True** if the ``path`` matches any pattern in *patterns*.
juraj-google-style
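A small sketch of how `search_globs` behaves, assuming the function and its imports (`fnmatch`, `re`) are available: unanchored patterns may match anywhere in the path, while patterns starting with `/` must match from the beginning.

```python
print(search_globs('src/main.py', ['*.py']))            # True
print(search_globs('/build/out.o', ['/build/*']))       # True (anchored match)
print(search_globs('/src/build/out.o', ['/build/*']))   # False (not at start)
```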
def save(self, vleaf, fpath, cleanup=False, format=None):
    graph = self.create_graphviz_digraph(vleaf, format=format)
    graph.render(fpath, cleanup=cleanup)
Save the graph to a given file path. Args: vleaf (`nnabla.Variable`): End variable. All variables and functions which can be traversed from this variable are shown in the result. fpath (`str`): The file path used to save. cleanup (`bool`): Clean up the source file after rendering. Default is False. format (str): Force overwrite ``format`` (``'pdf'``, ``'png'``, ...) configuration.
codesearchnet
def set_consistent(self, consistent_config):
    self.topology._add_job_control_plane()
    self.oport.operator.consistent(consistent_config)
    return self._make_placeable()
Indicates that the stream is the start of a consistent region. Args: consistent_config(consistent.ConsistentRegionConfig): the configuration of the consistent region. Returns: Stream: Returns this stream. .. versionadded:: 1.11
juraj-google-style
def __init__(self, channel):
    self.ListVoices = channel.unary_unary(
        "/google.cloud.texttospeech.v1.TextToSpeech/ListVoices",
        request_serializer=google_dot_cloud_dot_texttospeech__v1_dot_proto_dot_cloud__tts__pb2.ListVoicesRequest.SerializeToString,
        response_deserializer=google_dot_cloud_dot_texttospeech__v1_dot_proto_dot_cloud__tts__pb2.ListVoicesResponse.FromString,
    )
    self.SynthesizeSpeech = channel.unary_unary(
        "/google.cloud.texttospeech.v1.TextToSpeech/SynthesizeSpeech",
        request_serializer=google_dot_cloud_dot_texttospeech__v1_dot_proto_dot_cloud__tts__pb2.SynthesizeSpeechRequest.SerializeToString,
        response_deserializer=google_dot_cloud_dot_texttospeech__v1_dot_proto_dot_cloud__tts__pb2.SynthesizeSpeechResponse.FromString,
    )
Constructor. Args: channel: A grpc.Channel.
juraj-google-style
def _get_course_content_from_ecommerce(course_id, site_code=None):
    api = get_ecommerce_client(site_code=site_code)
    try:
        api_response = api.courses(course_id).get()
    except Exception:
        logger.exception(
            'An error occurred while retrieving data for course run [%s] from the Catalog API.',
            course_id,
            exc_info=True
        )
        return {}
    return {
        'title': api_response.get('name'),
        'verification_deadline': api_response.get('verification_deadline')
    }
Get course information using the Ecommerce course api. In case of error returns empty response. Arguments: course_id (str): course key of the course site_code (str): site code Returns: course information from Ecommerce
juraj-google-style
def _inspect_summary_cache(self, cache, replica_id, step_num, output_stream, tensor_trace_order):

    def _inspect_tensor(tensor):
        if self._parameters.trace_mode == tensor_tracer_flags.TRACE_MODE_NAN_INF:
            return cond.cond(math_ops.greater(tensor, 0.0),
                             lambda: 'has NaNs/Infs!',
                             lambda: 'has no NaNs or Infs.')
        else:
            return tensor

    if not tensor_trace_order.traced_tensors:
        logging.warn('Inspect mode has no tensors in the cache to check.')
        return control_flow_ops.no_op
    if self._parameters.trace_mode == tensor_tracer_flags.TRACE_MODE_NAN_INF:
        step_has_nan_or_inf = math_ops.greater(math_ops.reduce_sum(cache), 0.0)
    else:
        step_has_nan_or_inf = math_ops.reduce_any(
            gen_math_ops.logical_or(gen_math_ops.is_nan(cache), gen_math_ops.is_inf(cache)))
    step_error_message = cond.cond(step_has_nan_or_inf,
                                   lambda: 'NaNs or Infs in the step!',
                                   lambda: 'No numerical issues have been found for the step.')
    if self._parameters.collect_summary_per_core:
        stats = ['\n\n', 'core:', replica_id, ',', 'step:', step_num, '-->', step_error_message,
                 'Printing tensors for mode:%s...' % self._parameters.trace_mode]
    else:
        stats = ['\n\n', 'step:', step_num, '-->', step_error_message,
                 'Printing tensors for mode:%s...' % self._parameters.trace_mode]
    for tensor_name, cache_idx in sorted(tensor_trace_order.tensorname_to_cache_idx.items(),
                                         key=lambda item: item[1]):
        if self._parameters.collect_summary_per_core:
            stats.extend(['\n', 'core:', replica_id, ',', 'step:', step_num, ',',
                          tensor_name, '-->', _inspect_tensor(cache[cache_idx, 0])])
        else:
            stats.extend(['\n', 'step:', step_num, ',', tensor_name, '-->',
                          _inspect_tensor(cache[cache_idx, 0])])
    return logging_ops.print_v2(*stats, summarize=-1, output_stream=output_stream)
Generates a print operation to print trace inspection. Args: cache: Tensor storing the trace results for the step. replica_id: Tensor storing the replica id of the running core. step_num: Step number. output_stream: Where to print the outputs, e.g., file path, or sys.stderr. tensor_trace_order: TensorTraceOrder object holding tensorname to id map. Returns: The Op to flush the cache to file.
github-repos
def create_sketch(self, name, description):
    resource_url = '{0:s}/sketches/'.format(self.api_base_url)
    form_data = {'name': name, 'description': description}
    response = self.session.post(resource_url, json=form_data)
    response_dict = response.json()
    sketch_id = response_dict['objects'][0]['id']
    return sketch_id
Create a new sketch with the specified name and description. Args: name (str): Title of sketch description (str): Description of sketch Returns: int: ID of created sketch
codesearchnet
def from_mass_fractions(cls, mass_fractions, formula=None):
    mass_fractions = process_wildcard(mass_fractions)
    atomic_fractions = convert_mass_to_atomic_fractions(mass_fractions)
    if not formula:
        formula = generate_name(atomic_fractions)
    return cls(cls._key, mass_fractions, atomic_fractions, formula)
Creates a composition from a mass fraction :class:`dict`. Args: mass_fractions (dict): mass fraction :class:`dict`. The keys are atomic numbers and the values weight fractions. Wildcard are accepted, e.g. ``{5: '?', 25: 0.4}`` where boron will get a mass fraction of 0.6. formula (str): optional chemical formula for the composition. If ``None``, a formula will be generated for the composition.
juraj-google-style
def ensure_proc_terminate(proc):
    if isinstance(proc, list):
        for p in proc:
            ensure_proc_terminate(p)
        return

    def stop_proc_by_weak_ref(ref):
        proc = ref()
        if proc is None:
            return
        if not proc.is_alive():
            return
        proc.terminate()
        proc.join()

    assert isinstance(proc, mp.Process)
    atexit.register(stop_proc_by_weak_ref, weakref.ref(proc))
Make sure processes terminate when main process exit. Args: proc (multiprocessing.Process or list)
codesearchnet
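Typical usage of `ensure_proc_terminate`, as a sketch assuming the function and the imports it relies on (`mp`, `atexit`, `weakref`) are in scope: register cleanup for worker processes so they are terminated when the main process exits.

```python
import multiprocessing as mp
import time

if __name__ == '__main__':
    workers = [mp.Process(target=time.sleep, args=(60,)) for _ in range(2)]
    for w in workers:
        w.start()
    ensure_proc_terminate(workers)  # accepts a single Process or a list
```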
def exclude_from_weight_decay(self, var_list=None, var_names=None):
    if hasattr(self, '_built') and self._built:
        raise ValueError('`exclude_from_weight_decay()` can only be configured before the optimizer is built.')
    if var_list:
        self._exclude_from_weight_decay = set(self._var_key(variable) for variable in var_list)
    else:
        self._exclude_from_weight_decay = set()
    if var_names and len(var_names) > 0:
        self._exclude_from_weight_decay_pattern = re.compile('|'.join(set(var_names)))
    else:
        self._exclude_from_weight_decay_pattern = None
    self._exclude_from_weight_decay_cache = dict()
Exclude variables from weight decay. This method must be called before the optimizer's `build` method is called. You can set specific variables to exclude out, or set a list of strings as the anchor words, if any of which appear in a variable's name, then the variable is excluded. Args: var_list: A list of `Variable`s to exclude from weight decay. var_names: A list of strings. If any string in `var_names` appear in the model variable's name, then this model variable is excluded from weight decay. For example, `var_names=['bias']` excludes all bias variables from weight decay.
github-repos
def send_message_event(self, room_id, event_type, content, txn_id=None, timestamp=None):
    if not txn_id:
        txn_id = self._make_txn_id()
    path = "/rooms/%s/send/%s/%s" % (
        quote(room_id),
        quote(event_type),
        quote(str(txn_id)),
    )
    params = {}
    if timestamp:
        params["ts"] = timestamp
    return self._send("PUT", path, content, query_params=params)
Perform PUT /rooms/$room_id/send/$event_type Args: room_id (str): The room ID to send the message event in. event_type (str): The event type to send. content (dict): The JSON content to send. txn_id (int): Optional. The transaction ID to use. timestamp (int): Set origin_server_ts (For application services only)
juraj-google-style
def __init__(self, learning_rate, use_locking=False, name='GradientDescent'):
    super(GradientDescentOptimizer, self).__init__(use_locking, name)
    self._learning_rate = learning_rate
    self._learning_rate_tensor = None
Construct a new gradient descent optimizer. Args: learning_rate: A Tensor or a floating point value. The learning rate to use. use_locking: If True use locks for update operations. name: Optional name prefix for the operations created when applying gradients. Defaults to "GradientDescent". @compatibility(eager) When eager execution is enabled, `learning_rate` can be a callable that takes no arguments and returns the actual value to use. This can be useful for changing these values across different invocations of optimizer functions. @end_compatibility
github-repos
def __iadd__(self, values):
    self._check_type(values, '+=')
    self.extend(values)
    return self
Add all values to the end of self. Args: values (Iterable): Values to append Raises: ValueError: If any values are already present
juraj-google-style
def __init__(self, resolver_context, file_object=None):
    super(FileObjectIO, self).__init__(resolver_context)
    self._file_object = file_object
    self._file_object_set_in_init = bool(file_object)
    self._size = None
Initializes a file-like object. Args: resolver_context (Context): resolver context. file_object (Optional[FileIO]): file-like object.
juraj-google-style
def _DictAsString(result, verbose=False):
    class_attrs = inspectutils.GetClassAttrsDict(result)
    result_visible = {key: value for key, value in result.items()
                      if completion.MemberVisible(result, key, value, class_attrs=class_attrs, verbose=verbose)}
    if not result_visible:
        return '{}'
    longest_key = max(len(str(key)) for key in result_visible.keys())
    format_string = f'{{key:{longest_key + 1}s}} {{value}}'
    lines = []
    for key, value in result.items():
        if completion.MemberVisible(result, key, value, class_attrs=class_attrs, verbose=verbose):
            line = format_string.format(key=f'{key}:', value=_OneLineResult(value))
            lines.append(line)
    return '\n'.join(lines)
Returns a dict as a string. Args: result: The dict to convert to a string verbose: Whether to include 'hidden' members, those keys starting with _. Returns: A string representing the dict
github-repos
def latest(self, **kwargs):
    path = self._get_id_path('latest')
    response = self._GET(path, kwargs)
    self._set_attrs_to_values(response)
    return response
Get the most newly created TV show. This is a live response and will continuously change. Args: language: (optional) ISO 639 code. Returns: A dict representation of the JSON returned from the API.
juraj-google-style
def delete_device(self, auth_body, device_id):
    content = {'auth': auth_body}
    return self._send('DELETE', '/devices/%s' % device_id, content=content)
Deletes the given device, and invalidates any access token associated with it. NOTE: This endpoint uses the User-Interactive Authentication API. Args: auth_body (dict): Authentication params. device_id (str): The device ID of the device to delete.
codesearchnet
def SelectFieldPrompt(field_name, context_str, *options):
    option_format_str = '[ {} ] "{}"'
    option_dict = {}
    print(context_str)
    print('Please select one of the following options for field "{}"'.format(field_name))
    for cnt, option in enumerate(options):
        option_dict['{}'.format(cnt + 1)] = option
        if not callable(option):
            print(option_format_str.format(cnt + 1, u(str(option))))
        else:
            print(option_format_str.format(cnt + 1, option.__name__))
    choice = None
    while choice not in option_dict:
        choice = input('option> ').strip()
    new_value = option_dict[choice]
    if callable(new_value):
        return new_value()
    else:
        return new_value
Prompts user to pick from provided options. It is possible to provide a function as an option although it is not yet tested. This could allow a user to be prompted to provide their own value rather than the listed options. Args: field_name (string): Name of the field. context_str (string): Printed to give the user context. options: Variable arguments, should be vobject Components in a list. As retrieved from a vCard.contents dictionary. Returns: One of the options passed in. Ideally always a list.
codesearchnet
def construct(cls, name, range=None):
    other = Requirement(None)
    other.name_ = name
    other.range_ = VersionRange() if range is None else range
    return other
Create a requirement directly from an object name and VersionRange. Args: name: Object name string. range: VersionRange object. If None, an unversioned requirement is created.
juraj-google-style
def destringize(self, string):
    m = segment_destr_pattern.match(string)
    self.genome_id = int(m.group(1))
    self.chr_id = int(m.group(2))
    self.direction = m.group(3)
    self.left = int(m.group(4))
    self.right = int(m.group(5))
Get RNF values for this segment from its textual representation and save them into this object. Args: string (str): Textual representation of a segment.
juraj-google-style
def delete(self, invoice_id, **kwargs):
    url = "{}/{}".format(self.base_url, invoice_id)
    return self.delete_url(url, {}, **kwargs)
Delete an invoice You can delete an invoice which is in the draft state. Args: invoice_id : Id of the invoice to delete Returns: The response is always an empty array like this - []
juraj-google-style
def row_limits(self): return self._row_splits[1:]
Returns the limit indices for rows in this row partition. These indices specify where the values for each row end. `partition.row_limits()` is equal to `partition.row_splits()[1:]`. Returns: A 1-D integer Tensor with shape `[self.nrows]`. The returned tensor is nonnegative, and is sorted in ascending order. `self.row_limits()[-1] == self.nvals()`.
github-repos
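A plain-Python sketch of the indexing described above (no TensorFlow call): for row_splits `[0, 3, 5, 8]` the partition has three rows of lengths 3, 2 and 3.

```python
row_splits = [0, 3, 5, 8]
row_limits = row_splits[1:]   # [3, 5, 8] -- where each row ends
row_starts = row_splits[:-1]  # [0, 3, 5] -- where each row begins
print(row_limits, row_starts)
```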
def get_create_agent(agent_kwargs):

    def create_agent(sess, environment, summary_writer=None):
        return BatchDQNAgent(
            env_batch_size=environment.batch_size,
            sess=sess,
            num_actions=environment.action_space.n,
            summary_writer=summary_writer,
            tf_device="/gpu:*",
            **agent_kwargs)

    return create_agent
Factory for dopamine agent initialization. Args: agent_kwargs: dict of BatchDQNAgent parameters Returns: Function(sess, environment, summary_writer) -> BatchDQNAgent instance.
juraj-google-style
def isset(alias_name):
    warnings.warn('Will be removed in v1.0', DeprecationWarning, stacklevel=2)
    raw_value = read(alias_name, allow_none=True)
    if raw_value:
        if re.compile(r'.+:
            return True
        else:
            warnings.warn('"{0}_PORT={1}" does not look like a docker link.'.format(alias_name, raw_value), stacklevel=2)
            return False
    return False
Return a boolean if the docker link is set or not and is a valid looking docker link value. Args: alias_name: The link alias name
juraj-google-style
def get_namespace(self, name_seq):
    namespaces = self.namespaces
    result = []
    for name in name_seq:
        namespaces = namespaces.get(name)
        if not namespaces:
            break
        result.append(name)
    return result
Returns the prefix of names from name_seq that are known namespaces. Args: name_seq: ['names', 'of', 'possible', 'namespace', 'to', 'find'] Returns: ['names', 'that', 'are', 'namespaces', 'possibly', 'empty', 'list']
codesearchnet
def import_module(name):
    parts = name.split('.')
    path = None
    module_name = ''
    fhandle = None
    for index, part in enumerate(parts):
        module_name = part if index == 0 else '%s.%s' % (module_name, part)
        path = [path] if path is not None else path
        try:
            fhandle, path, descr = imp.find_module(part, path)
            if module_name in sys.modules:
                mod = sys.modules[module_name]
            else:
                mod = imp.load_module(module_name, fhandle, path, descr)
        finally:
            if fhandle:
                fhandle.close()
    return mod
Imports a module into the current runtime environment This function emulates the Python import system that allows for importing full path modules. It will break down the module and import each part (or skip if it is already loaded in cache). Args: name (str): The name of the module to import. This should be the full path of the module Returns: The module that was imported
juraj-google-style
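A usage sketch for `import_module` above, assuming a Python version that still ships the deprecated `imp` module it relies on:

```python
decoder = import_module('json.decoder')  # dotted path, loaded part by part
print(decoder.JSONDecoder)
```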
def get_lacp_mode(self, name):
    members = self.get_members(name)
    if not members:
        return DEFAULT_LACP_MODE
    for member in self.get_members(name):
        match = re.search('channel-group\\s\\d+\\smode\\s(?P<value>.+)',
                          self.get_block('^interface %s' % member))
        return match.group('value')
Returns the LACP mode for the specified Port-Channel interface Args: name(str): The Port-Channel interface name to return the LACP mode for from the configuration Returns: The configured LACP mode for the interface. Valid mode values are 'on', 'passive', 'active'
codesearchnet
def zenith_luminance(self, value=9999.0):
    if value is not None:
        try:
            value = float(value)
        except ValueError:
            raise ValueError('value {} need to be of type float for field `zenith_luminance`'.format(value))
        if value < 0.0:
            raise ValueError('value need to be greater or equal 0.0 for field `zenith_luminance`')
    self._zenith_luminance = value
Corresponds to IDD Field `zenith_luminance` will be missing if >= 9999 Args: value (float): value for IDD Field `zenith_luminance` Unit: Cd/m2 value >= 0.0 Missing value: 9999.0 if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
codesearchnet
def reset_folder(self, folder):
    warnings.warn('This is a destructive action that cannot be undone.')
    self.post('reset', data={}, params={'folder': folder})
Erase the database index from a given folder and restart Syncthing. Args: folder (str): Folder ID. Returns: None
juraj-google-style
def get(self, attr: FetchAttribute) -> MaybeBytes:
    attr_name = attr.value.decode('ascii')
    method = getattr(self, '_get_' + attr_name.replace('.', '_'))
    return method(attr)
Return the bytes representation of the given message attribute. Args: attr: The fetch attribute. Raises: :class:`NotFetchable`
juraj-google-style
def UpdateNumberOfEventSources(self, number_of_consumed_sources, number_of_produced_sources):
    consumed_sources_delta = 0
    if number_of_consumed_sources is not None:
        if number_of_consumed_sources < self.number_of_consumed_sources:
            raise ValueError('Number of consumed sources smaller than previous update.')
        consumed_sources_delta = (
            number_of_consumed_sources - self.number_of_consumed_sources)
        self.number_of_consumed_sources = number_of_consumed_sources
        self.number_of_consumed_sources_delta = consumed_sources_delta
    produced_sources_delta = 0
    if number_of_produced_sources is not None:
        if number_of_produced_sources < self.number_of_produced_sources:
            raise ValueError('Number of produced sources smaller than previous update.')
        produced_sources_delta = (
            number_of_produced_sources - self.number_of_produced_sources)
        self.number_of_produced_sources = number_of_produced_sources
        self.number_of_produced_sources_delta = produced_sources_delta
    return consumed_sources_delta > 0 or produced_sources_delta > 0
Updates the number of event sources. Args: number_of_consumed_sources (int): total number of event sources consumed by the process. number_of_produced_sources (int): total number of event sources produced by the process. Returns: bool: True if either number of event sources has increased. Raises: ValueError: if the consumed or produced number of event sources is smaller than the value of the previous update.
juraj-google-style
def index_one(self, instance, force=False):
    if not self.is_indexed(instance) and not force:
        doc = self._as_document(instance)
        self._index_document(doc, force=force)
        logger.debug('{} indexed as\n {}'.format(instance.__class__, pformat(doc)))
        return True
    logger.debug('{} already indexed.'.format(instance.__class__))
    return False
Indexes exactly one object of the Ambry system. Args: instance (any): instance to index. force (boolean): if True replace document in the index. Returns: boolean: True if document added to index, False if document already exists in the index.
juraj-google-style
def detect_suicidal_func(func):
    if func.is_constructor:
        return False
    if func.visibility != 'public':
        return False
    calls = [c.name for c in func.internal_calls]
    if not ('suicide(address)' in calls or 'selfdestruct(address)' in calls):
        return False
    if func.is_protected():
        return False
    return True
Detect if the function is suicidal Detect the public functions calling suicide/selfdestruct without protection Returns: (bool): True if the function is suicidal
codesearchnet
def env(): return _env
Returns the object holds the test environment information. Tests should modify this in the main process if needed, and it will be passed to the worker processes each time a test case is run. Returns: a TestEnvironment object.
github-repos
def get_ax_fig_plt(ax=None, **kwargs):
    import matplotlib.pyplot as plt
    if ax is None:
        fig = plt.figure(**kwargs)
        ax = fig.add_subplot(1, 1, 1)
    else:
        fig = plt.gcf()
    return ax, fig, plt
Helper function used in plot functions supporting an optional Axes argument. If ax is None, we build the `matplotlib` figure and create the Axes else we return the current active figure. Args: kwargs: keyword arguments are passed to plt.figure if ax is not None. Returns: ax: :class:`Axes` object figure: matplotlib figure plt: matplotlib pyplot module.
juraj-google-style
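A typical pattern for plot helpers built on `get_ax_fig_plt` (a sketch assuming matplotlib is installed and the helper is importable): reuse an Axes if the caller passed one, otherwise create a fresh figure.

```python
ax, fig, plt = get_ax_fig_plt(ax=None, figsize=(4, 3))  # kwargs go to plt.figure
ax.plot([0, 1, 2], [0, 1, 4])
fig.savefig("example_plot.png")
```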
def _get_covariance(self, X):
    result = pd.DataFrame(index=range(len(X)))
    column_names = self.get_column_names(X)
    for column_name in column_names:
        column = self.get_column(X, column_name)
        distrib = self.distribs[column_name]
        cdf = distrib.cumulative_distribution(column)
        if distrib.constant_value is not None:
            cdf = np.ones(column.shape) - EPSILON
        result = self.set_column(result, column_name, stats.norm.ppf(cdf))
    result = result[(result != np.inf).all(axis=1)]
    return pd.DataFrame(data=result).cov().values
Compute covariance matrix with transformed data. Args: X: `numpy.ndarray` or `pandas.DataFrame`. Returns: np.ndarray
codesearchnet
def set_reprompt_text(self, text):
    self.response.reprompt.outputSpeech.type = 'PlainText'
    self.response.reprompt.outputSpeech.text = text
Set response reprompt output speech as plain text type. Args: text: str. Response speech used when type is 'PlainText'. Cannot exceed 8,000 characters.
codesearchnet
def _ensure_package_loaded(path, component):
    logger = logging.getLogger(__name__)
    packages = component.find_products('support_package')
    if len(packages) == 0:
        return None
    elif len(packages) > 1:
        raise ExternalError("Component had multiple products declared as 'support_package", products=packages)
    if len(path) > 2 and ':' in path[2:]:
        path, _, _ = path.rpartition(':')
    package_base = packages[0]
    relative_path = os.path.normpath(os.path.relpath(path, start=package_base))
    if relative_path.startswith('..'):
        raise ExternalError('Component had python product output of support_package',
                            package=package_base, product=path, relative_path=relative_path)
    if not relative_path.endswith('.py'):
        raise ExternalError('Python product did not end with .py', path=path)
    relative_path = relative_path[:-3]
    if os.pathsep in relative_path:
        raise ExternalError('Python support wheels with multiple subpackages not yet supported',
                            relative_path=relative_path)
    support_distro = component.support_distribution
    if support_distro not in sys.modules:
        logger.debug('Creating dynamic support wheel package: %s', support_distro)
        file, path, desc = imp.find_module(os.path.basename(package_base), [os.path.dirname(package_base)])
        imp.load_module(support_distro, file, path, desc)
    return '{}.{}'.format(support_distro, relative_path)
Ensure that the given module is loaded as a submodule. Returns: str: The name that the module should be imported as.
codesearchnet
def delete_request(profile, resource):
    url = get_url(profile, resource)
    headers = get_headers(profile)
    return requests.delete(url, headers=headers)
Do a DELETE request to Github's API. Args: profile A profile generated from ``simplygithub.authentication.profile``. Such profiles tell this module (i) the ``repo`` to connect to, and (ii) the ``token`` to connect with. resource The part of a Github API URL that comes after ``.../:repo/git``. For instance, for ``.../:repo/git/commits``, it's ``/commits``. Returns: The response returned by the ``requests`` library when it does the POST request.
codesearchnet
def get_candidates(self, input_ids: torch.LongTensor) -> Tuple[torch.LongTensor, Optional[torch.FloatTensor]]:
    max_new_tokens = int(self.num_assistant_tokens)
    if max_new_tokens == 0:
        return (input_ids, None)
    input_ids = input_ids.to(self.assistant_model.device)
    remove_from_pkv = 0
    assistant_input_ids, remove_from_pkv = self._prepare_assistant_input_ids(input_ids)
    self.prev_assistant_ids = assistant_input_ids
    min_new_tokens = max(min(max_new_tokens, self.main_model_min_length - assistant_input_ids.shape[-1]), 0)
    self._update_past_and_masks(assistant_input_ids, remove_from_pkv)
    generation_args = self._prepare_generation_args(assistant_input_ids, min_new_tokens, max_new_tokens)
    self.assistant_kwargs.pop('attention_mask', None)
    assistant_output = self.assistant_model.generate(**generation_args, **self.assistant_kwargs)
    new_target_ids = self._process_assistant_outputs(input_ids, assistant_output.sequences, assistant_input_ids)
    self.prev_target_ids_len = input_ids.shape[1]
    self.assistant_kwargs['past_key_values'] = assistant_output.past_key_values
    self.prev_assistant_ids = assistant_output.sequences
    if self.prev_target_ids_len >= new_target_ids.shape[1]:
        return (input_ids, None)
    return (new_target_ids, None)
Fetches the candidates to be tried for the current input. Args: input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): Indices of input sequence tokens in the vocabulary. [What are input IDs?](../glossary#input-ids) Return: `torch.LongTensor` of shape `(batch_size, candidate_length)` containing the candidate sequences to be assessed by the model and a `torch.FloatTensor` of shape `(batch_size, candidate_length, vocabulary_size)` containing the logits associated to each candidate.
github-repos
def diff_commonPrefix(self, text1, text2):
    if not text1 or not text2 or text1[0] != text2[0]:
        return 0
    # Binary search over the length of the shared prefix.
    pointermin = 0
    pointermax = min(len(text1), len(text2))
    pointermid = pointermax
    pointerstart = 0
    while pointermin < pointermid:
        if text1[pointerstart:pointermid] == text2[pointerstart:pointermid]:
            pointermin = pointermid
            pointerstart = pointermin
        else:
            pointermax = pointermid
        pointermid = (pointermax - pointermin) // 2 + pointermin
    return pointermid
Determine the common prefix of two strings. Args: text1: First string. text2: Second string. Returns: The number of characters common to the start of each string.
codesearchnet
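A short usage sketch, assuming the surrounding diff-match-patch class is available as `diff_match_patch`: the common prefix is located by a binary search over slice comparisons rather than a character-by-character scan.

```python
dmp = diff_match_patch()
print(dmp.diff_commonPrefix("1234abcdef", "1234xyz"))  # 4
print(dmp.diff_commonPrefix("abc", "xyz"))             # 0
```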
def supervised_to_dict(dataset, text2self):

    def my_fn(inputs, targets):
        if text2self:
            return {"targets": targets}
        else:
            return {"inputs": inputs, "targets": targets}

    return dataset.map(my_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE)
Turns a supervised dataset into a dataset with a feature dictionary. if text2self, then the features dictionary contains a "targets" key. else, the features dictionary contains "inputs" and "targets" keys. Args: dataset: a tf.data.Dataset text2self: a boolean Returns: a tf.data.Dataset
juraj-google-style
def _build(self, inputs, **normalization_build_kwargs):
    if (self._normalization_ctor in {batch_norm.BatchNorm, batch_norm_v2.BatchNormV2}
            and 'is_training' not in normalization_build_kwargs):
        raise ValueError('Boolean is_training flag must be explicitly specified when using batch normalization.')
    self._input_shape = tuple(inputs.get_shape().as_list())
    net = inputs
    final_index = len(self._layers) - 1
    for i, layer in enumerate(self._layers):
        net = layer(net)
        if i != final_index or self._normalize_final:
            if self._normalization_ctor is not None:
                normalizer = self._normalization_ctor(name='batch_norm_{}'.format(i),
                                                      **self._normalization_kwargs)
                net = normalizer(net, **util.remove_unsupported_kwargs(normalizer, normalization_build_kwargs))
            elif normalization_build_kwargs:
                tf.logging.warning('No normalization configured, but extra kwargs provided: {}'.format(normalization_build_kwargs))
        if i != final_index or self._activate_final:
            net = self._activation(net)
    return net
Assembles the `ConvNet2D` and connects it to the graph. Args: inputs: A 4D Tensor of shape `[batch_size, input_height, input_width, input_channels]`. **normalization_build_kwargs: kwargs passed to the normalization module at _build time. Returns: A 4D Tensor of shape `[batch_size, output_height, output_width, output_channels[-1]]`. Raises: ValueError: If `is_training` is not explicitly specified when using batch normalization.
codesearchnet
def create_rag_adapter() -> EmbeddingTypeAdapter[Chunk, Chunk]: return EmbeddingTypeAdapter(input_fn=_extract_chunk_text, output_fn=_add_embedding_fn)
Creates adapter for converting between Chunk and Embedding types. The adapter: - Extracts text from Chunk.content.text for embedding - Creates Embedding objects from model output - Sets Embedding in Chunk.embedding Returns: EmbeddingTypeAdapter configured for RAG pipeline types
github-repos
def random_expr_with_required_var(depth, required_var, optional_list, ops):
    if not depth:
        if required_var:
            return required_var
        return str(optional_list[random.randrange(len(optional_list))])
    max_depth_side = random.randrange(2)
    other_side_depth = random.randrange(depth)
    required_var_side = random.randrange(2)
    left = random_expr_with_required_var(
        (depth - 1) if max_depth_side else other_side_depth,
        required_var if required_var_side else None,
        optional_list, ops)
    right = random_expr_with_required_var(
        (depth - 1) if not max_depth_side else other_side_depth,
        required_var if not required_var_side else None,
        optional_list, ops)
    op = ops[random.randrange(len(ops))]
    return ExprNode(left, right, op)
Generate a random expression tree with a required variable. The required variable appears exactly once in the expression. Args: depth: At least one leaf will be this many levels down from the top. required_var: A char. This char is guaranteed to be placed exactly once at a leaf somewhere in the tree. This is the var to solve for. optional_list: A list of chars. These chars are randomly selected as leaf values. These are constant vars. ops: A list of ExprOp instances. Returns: An ExprNode instance which is the root of the generated expression tree.
codesearchnet
def execute_code_block(elem, doc):
    command = select_executor(elem, doc).split(' ')
    code = elem.text
    if 'plt' in elem.attributes or 'plt' in elem.classes:
        code = save_plot(code, elem)
    command.append(code)
    if 'args' in elem.attributes:
        for arg in elem.attributes['args'].split():
            command.append(arg)
    cwd = elem.attributes['wd'] if 'wd' in elem.attributes else None
    return subprocess.run(command, encoding='utf8', stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT, cwd=cwd).stdout
Executes a code block by passing it to the executor. Args: elem The AST element. doc The document. Returns: The output of the command.
juraj-google-style
def transmit(self, payload, **kwargs):
    kwargs['app_label'] = 'sap_success_factors'
    kwargs['model_name'] = 'SapSuccessFactorsLearnerDataTransmissionAudit'
    kwargs['remote_user_id'] = 'sapsf_user_id'
    super(SapSuccessFactorsLearnerTransmitter, self).transmit(payload, **kwargs)
Send a completion status call to SAP SuccessFactors using the client. Args: payload: The learner completion data payload to send to SAP SuccessFactors
codesearchnet
def decode_payload(cls, request):
    if request.headers.get(cls.PAYLOAD_VERSION_HEADER) != cls.PAYLOAD_VERSION:
        raise DeprecationWarning(
            "Task is generated by an older incompatible version of mapreduce. "
            "Please kill this job manually")
    return cls._decode_payload(request.body)
Decode task payload. HugeTask controls its own payload entirely including urlencoding. It doesn't depend on any particular web framework. Args: request: a webapp Request instance. Returns: A dict of str to str. The same as the params argument to __init__. Raises: DeprecationWarning: When task payload constructed from an older incompatible version of mapreduce.
juraj-google-style
def warning(msg: str, *args, **kwargs) -> None: _DEFAULT_LOGGER.warning(msg, *args, **kwargs)
Logs warning message. Args: msg: Message with possible format string. *args: Values for variables in the format string. **kwargs: Keyword arguments for the logger.
github-repos
def locked_put(self, credentials):
    entity = self._model.get_or_insert(self._key_name)
    setattr(entity, self._property_name, credentials)
    entity.put()
    if self._cache:
        self._cache.set(self._key_name, credentials.to_json())
Write a Credentials to the datastore. Args: credentials: Credentials, the credentials to store.
codesearchnet
def GetArtifactDependencies(rdf_artifact, recursive=False, depth=1):
    deps = set()
    for source in rdf_artifact.sources:
        if source.type in (rdf_artifacts.ArtifactSource.SourceType.ARTIFACT,
                           rdf_artifacts.ArtifactSource.SourceType.ARTIFACT_GROUP):
            if source.attributes.GetItem('names'):
                deps.update(source.attributes.GetItem('names'))
    if depth > 10:
        raise RuntimeError('Max artifact recursion depth reached.')
    deps_set = set(deps)
    if recursive:
        for dep in deps:
            artifact_obj = REGISTRY.GetArtifact(dep)
            new_dep = GetArtifactDependencies(artifact_obj, True, depth=depth + 1)
            if new_dep:
                deps_set.update(new_dep)
    return deps_set
Return a set of artifact dependencies. Args: rdf_artifact: RDF object artifact. recursive: If True recurse into dependencies to find their dependencies. depth: Used for limiting recursion depth. Returns: A set of strings containing the dependent artifact names. Raises: RuntimeError: If maximum recursion depth reached.
codesearchnet
def __init__(self, predicate, if_true, if_false):
    super(TernaryConditional, self).__init__(predicate, if_true, if_false)
    self.predicate = predicate
    self.if_true = if_true
    self.if_false = if_false
    self.validate()
Construct an expression that evaluates a predicate and returns one of two results. Args: predicate: Expression to evaluate, and based on which to choose the returned value if_true: Expression to return if the predicate was true if_false: Expression to return if the predicate was false Returns: new TernaryConditional object
juraj-google-style
class FlaxDataCollatorForLanguageModeling:
    tokenizer: PreTrainedTokenizerBase
    mlm_probability: float = 0.15

    def __post_init__(self):
        if self.tokenizer.mask_token is None:
            raise ValueError('This tokenizer does not have a mask token which is necessary for masked language modeling. You should pass `mlm=False` to train on causal language modeling instead.')

    def __call__(self, examples: list[dict[str, np.ndarray]], pad_to_multiple_of: int) -> dict[str, np.ndarray]:
        batch = self.tokenizer.pad(examples, pad_to_multiple_of=pad_to_multiple_of, return_tensors=TensorType.NUMPY)
        special_tokens_mask = batch.pop('special_tokens_mask', None)
        batch['input_ids'], batch['labels'] = self.mask_tokens(batch['input_ids'], special_tokens_mask=special_tokens_mask)
        return batch

    def mask_tokens(self, inputs: np.ndarray, special_tokens_mask: Optional[np.ndarray]) -> tuple[np.ndarray, np.ndarray]:
        labels = inputs.copy()
        probability_matrix = np.full(labels.shape, self.mlm_probability)
        special_tokens_mask = special_tokens_mask.astype('bool')
        probability_matrix[special_tokens_mask] = 0.0
        masked_indices = np.random.binomial(1, probability_matrix).astype('bool')
        labels[~masked_indices] = -100
        indices_replaced = np.random.binomial(1, np.full(labels.shape, 0.8)).astype('bool') & masked_indices
        inputs[indices_replaced] = self.tokenizer.convert_tokens_to_ids(self.tokenizer.mask_token)
        indices_random = np.random.binomial(1, np.full(labels.shape, 0.5)).astype('bool')
        indices_random &= masked_indices & ~indices_replaced
        random_words = np.random.randint(self.tokenizer.vocab_size, size=labels.shape, dtype='i4')
        inputs[indices_random] = random_words[indices_random]
        return (inputs, labels)
Data collator used for language modeling. Inputs are dynamically padded to the maximum length of a batch if they are not all of the same length. Args: tokenizer (:class:`~transformers.PreTrainedTokenizer` or :class:`~transformers.PreTrainedTokenizerFast`): The tokenizer used for encoding the data. mlm_probability (:obj:`float`, `optional`, defaults to 0.15): The probability with which to (randomly) mask tokens in the input. .. note:: For best performance, this data collator should be used with a dataset having items that are dictionaries or BatchEncoding, with the :obj:`"special_tokens_mask"` key, as returned by a :class:`~transformers.PreTrainedTokenizer` or a :class:`~transformers.PreTrainedTokenizerFast` with the argument :obj:`return_special_tokens_mask=True`.
github-repos
def is_mobile(user_agent):
    if user_agent:
        b = reg_b.search(user_agent)
        v = reg_v.search(user_agent[0:4])
        return b or v
    return False
Checks if the user browser from the given user agent is mobile. Args: user_agent: A given user agent. Returns: True if the browser from the user agent is mobile.
juraj-google-style
def relocate(source, destination, move=False):
    venv = api.VirtualEnvironment(source)
    if not move:
        venv.relocate(destination)
        return None
    venv.move(destination)
    return None
Adjust the virtual environment settings and optional move it. Args: source (str): Path to the existing virtual environment. destination (str): Desired path of the virtual environment. move (bool): Whether or not to actually move the files. Default False.
juraj-google-style
def run(self, fetch_list, feed_dict=None, sess=None):
    if tf.get_default_graph() != self._graph:
        raise ValueError('The current default graph is different from the graph used at construction time of RecurrentRunner.')
    if feed_dict is None:
        all_feeds_dict = {}
    else:
        all_feeds_dict = dict(feed_dict)
    all_feeds_dict.update(self._state_feeds)
    all_fetches_list = list(fetch_list)
    all_fetches_list += self._state_fetches
    sess = sess or tf.get_default_session()
    fetches = sess.run(all_fetches_list, all_feeds_dict)
    states = fetches[len(fetch_list):]
    for i, s in enumerate(states):
        self._state_feeds[self._state_feed_names[i]] = s
    return fetches[:len(fetch_list)]
Runs the graph with the provided feeds and fetches. This function wraps sess.run(), but takes care of state saving and restoring by feeding in states and storing the new state values. Args: fetch_list: A list of requested output tensors. feed_dict: A dictionary of feeds - see Session.run(). Optional. sess: The TensorFlow session to run. Can be None. Returns: The requested tensors as numpy arrays. Raises: ValueError: If the default graph during object construction was different from the current default graph.
codesearchnet
def update(self, resource, id_or_uri=None, timeout=-1): uri = resource.pop('uri', None) if not uri: if not id_or_uri: raise ValueError("URI was not provided") uri = self._client.build_uri(id_or_uri) return self._client.update(resource=resource, uri=uri, timeout=timeout)
Updates the specified alert resource. Args: resource (dict): Object to update. timeout: Timeout in seconds. Wait for task completion by default. The timeout does not abort the operation in OneView; it just stops waiting for its completion. Returns: dict: Updated alert.
juraj-google-style
def _Open(self, path_spec, mode='rb'):
    if not path_spec.HasParent():
        raise errors.PathSpecError(
            'Unsupported path specification without parent.')

    range_offset = getattr(path_spec, 'range_offset', None)
    if range_offset is None:
        raise errors.PathSpecError(
            'Unsupported path specification without range offset.')

    range_size = getattr(path_spec, 'range_size', None)
    if range_size is None:
        raise errors.PathSpecError(
            'Unsupported path specification without range size.')

    self._range_offset = range_offset
    self._range_size = range_size
Opens the file system defined by path specification. Args: path_spec (PathSpec): a path specification. mode (Optional[str]): file access mode. The default is 'rb' which represents read-only binary. Raises: AccessError: if the access to open the file was denied. IOError: if the file system could not be opened. PathSpecError: if the path specification is incorrect. ValueError: if the path specification is invalid.
juraj-google-style
def preprocess_input(x, data_format=None):
    return x
A placeholder method for backward compatibility. The preprocessing logic has been included in the EfficientNetV2 model implementation. Users are no longer required to call this method to normalize the input data. This method does nothing and is only kept as a placeholder to align the API surface between old and new versions of the model. Args: x: A floating point `numpy.array` or a tensor. data_format: Optional data format of the image tensor/array. Defaults to None, in which case the global setting `keras.backend.image_data_format()` is used (unless you changed it, it defaults to "channels_last"). Returns: Unchanged `numpy.array` or tensor.
github-repos
def set_led(self, colorcode):
    data = []
    data.append(10)
    data.append(self.servoid)
    data.append(RAM_WRITE_REQ)
    data.append(LED_CONTROL_RAM)
    data.append(1)
    data.append(colorcode)
    send_data(data)
Set the LED color of the Herkulex. Args: colorcode (int): The code for colors (0x00-OFF 0x02-BLUE 0x03-CYAN 0x04-RED 0x05-ORANGE 0x06-VIOLET 0x07-WHITE).
codesearchnet
def _CheckAttribute(self, attribute, value): if not isinstance(attribute, Attribute): raise AttributeError("Attribute %s must be of type aff4.Attribute()" % attribute) if not isinstance(value, attribute.attribute_type): raise ValueError("Value for attribute %s must be of type %s()" % (attribute, attribute.attribute_type.__name__))
Check that the value is of the expected type. Args: attribute: An instance of Attribute(). value: An instance of RDFValue. Raises: ValueError: when the value is not of the expected type. AttributeError: When the attribute is not of type Attribute().
juraj-google-style
def search(self, query, verbose=0):
    if verbose > 0:
        print("searching " + query)
    query = query.lower()
    qgram = ng(query, self.slb)
    # Collect every document that shares at least one ngram with the query.
    qocument = set()
    for q in qgram:
        if q in self.ngrams.keys():
            for i in self.ngrams[q]:
                qocument.add(i)
    self.qocument = qocument
    # Score each candidate file by the number of ngram hits it accumulates.
    results = {}
    for i in qocument:
        for j in self.D[i].keys():
            if not j in results.keys():
                results[j] = 0
            results[j] = results[j] + self.D[i][j]
    sorted_results = sorted(results.items(), key=operator.itemgetter(1), reverse=True)
    return [self.elements[f[0]] for f in sorted_results]
Searches files satisfying the query. It first decomposes the query into ngrams, then scores each document containing at least one ngram by the number of shared ngrams. The ten documents having the most ngrams in common with the query are selected. Args: query (str): what to search; results_number (int): number of results to return (default: 10)
juraj-google-style
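search() above depends on an n-gram helper ng() (and on self.slb, self.ngrams, self.D and self.elements built elsewhere in the class). A hypothetical sketch of what ng() might look like; the project's actual helper may differ:

def ng(text, n):
    # Character n-grams of `text`, e.g. ng("query", 3) -> {"que", "uer", "ery"}.
    return {text[i:i + n] for i in range(len(text) - n + 1)}

print(sorted(ng("query", 3)))  # ['ery', 'que', 'uer']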
def _open_tracing_interface(self, conn_id, callback):
    try:
        handle = self._find_handle(conn_id)
        services = self._connections[handle]['services']
    except (ValueError, KeyError):
        callback(conn_id, self.id, False, 'Connection closed unexpectedly before we could open the streaming interface')
        return

    self._command_task.async_command(['_enable_tracing', handle, services], self._on_interface_finished, {'connection_id': conn_id, 'callback': callback})
Enable the debug tracing interface for this IOTile device Args: conn_id (int): the unique identifier for the connection callback (callback): Callback to be called when this command finishes callback(conn_id, adapter_id, success, failure_reason)
juraj-google-style
def cut_spectrum(sp, l0, lf):
    if l0 >= lf:
        raise ValueError("l0 must be lower than lf")
    idx0 = np.argmin(np.abs(sp.x - l0))
    idx1 = np.argmin(np.abs(sp.x - lf))
    out = copy.deepcopy(sp)
    out.x = out.x[idx0:idx1]
    out.y = out.y[idx0:idx1]
    return out
Cuts spectrum given a wavelength interval, leaving the original intact. Args: sp: Spectrum instance l0: initial wavelength lf: final wavelength Returns: Spectrum: cut spectrum
juraj-google-style
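A minimal usage sketch for cut_spectrum(); since only the x and y arrays are touched, a plain namespace stands in for the Spectrum class here:

import numpy as np
from types import SimpleNamespace

sp = SimpleNamespace(x=np.linspace(4000.0, 7000.0, 3001), y=np.random.rand(3001))
cut = cut_spectrum(sp, 5000.0, 6000.0)
print(cut.x[0], cut.x[-1])  # 5000.0 5999.0 (the upper bound index is excluded by the slice)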
def get_ieee_rotation(structure, refine_rotation=True): sga = SpacegroupAnalyzer(structure) dataset = sga.get_symmetry_dataset() trans_mat = dataset['transformation_matrix'] conv_latt = Lattice(np.transpose(np.dot(np.transpose( structure.lattice.matrix), np.linalg.inv(trans_mat)))) xtal_sys = sga.get_crystal_system() vecs = conv_latt.matrix lengths = np.array(conv_latt.abc) angles = np.array(conv_latt.angles) rotation = np.zeros((3, 3)) if xtal_sys == "cubic": rotation = [vecs[i] / lengths[i] for i in range(3)] elif xtal_sys == "tetragonal": rotation = np.array([vec / mag for (mag, vec) in sorted(zip(lengths, vecs), key=lambda x: x[0])]) if abs(lengths[2] - lengths[1]) < abs(lengths[1] - lengths[0]): rotation[0], rotation[2] = rotation[2], rotation[0].copy() rotation[1] = get_uvec(np.cross(rotation[2], rotation[0])) elif xtal_sys == "orthorhombic": rotation = [vec / mag for (mag, vec) in sorted(zip(lengths, vecs))] rotation = np.roll(rotation, 2, axis=0) elif xtal_sys in ("trigonal", "hexagonal"): tf_index = np.argmin(abs(angles - 120.)) non_tf_mask = np.logical_not(angles == angles[tf_index]) rotation[2] = get_uvec(vecs[tf_index]) rotation[0] = get_uvec(vecs[non_tf_mask][0]) rotation[1] = get_uvec(np.cross(rotation[2], rotation[0])) elif xtal_sys == "monoclinic": u_index = np.argmax(abs(angles - 90.)) n_umask = np.logical_not(angles == angles[u_index]) rotation[1] = get_uvec(vecs[u_index]) c = [vec / mag for (mag, vec) in sorted(zip(lengths[n_umask], vecs[n_umask]))][0] rotation[2] = np.array(c) rotation[0] = np.cross(rotation[1], rotation[2]) elif xtal_sys == "triclinic": rotation = [vec / mag for (mag, vec) in sorted(zip(lengths, vecs))] rotation[1] = get_uvec(np.cross(rotation[2], rotation[0])) rotation[0] = np.cross(rotation[1], rotation[2]) rotation = SquareTensor(rotation) if refine_rotation: rotation = rotation.refine_rotation() return rotation
Given a structure associated with a tensor, determines the rotation matrix for IEEE conversion according to the 1987 IEEE standards. Args: structure (Structure): a structure associated with the tensor to be converted to the IEEE standard refine_rotation (bool): whether to refine the rotation using SquareTensor.refine_rotation
juraj-google-style
def read(self, input_buffer, kmip_version=enums.KMIPVersion.KMIP_1_0): super(GetAttributesResponsePayload, self).read( input_buffer, kmip_version=kmip_version ) local_buffer = utils.BytearrayStream(input_buffer.read(self.length)) if self.is_tag_next(enums.Tags.UNIQUE_IDENTIFIER, local_buffer): unique_identifier = primitives.TextString( tag=enums.Tags.UNIQUE_IDENTIFIER ) unique_identifier.read(local_buffer, kmip_version=kmip_version) self.unique_identifier = unique_identifier.value else: raise exceptions.InvalidKmipEncoding( "The GetAttributes response payload encoding is missing the " "unique identifier." ) if kmip_version < enums.KMIPVersion.KMIP_2_0: self._attributes = list() while self.is_tag_next(enums.Tags.ATTRIBUTE, local_buffer): attribute = objects.Attribute() attribute.read(local_buffer, kmip_version=kmip_version) self._attributes.append(attribute) else: if self.is_tag_next(enums.Tags.ATTRIBUTES, local_buffer): attributes = objects.Attributes() attributes.read(local_buffer, kmip_version=kmip_version) temp_attr = objects.convert_attributes_to_template_attribute( attributes ) self._attributes = temp_attr.attributes else: raise exceptions.InvalidKmipEncoding( "The GetAttributes response payload encoding is missing " "the attributes structure." ) self.is_oversized(local_buffer)
Read the data encoding the GetAttributes response payload and decode it into its constituent parts. Args: input_buffer (stream): A data stream containing encoded object data, supporting a read method; usually a BytearrayStream object. kmip_version (KMIPVersion): An enumeration defining the KMIP version with which the object will be decoded. Optional, defaults to KMIP 1.0.
juraj-google-style
def get_install_value(self, value_name, wanted_type=None):
    try:
        (item_value, item_type) = self.__reg_query_value(self.__reg_uninstall_handle, value_name)
    except pywintypes.error as exc:
        if exc.winerror == winerror.ERROR_FILE_NOT_FOUND:
            return None
        raise
    if wanted_type and (item_type not in self.__reg_types[wanted_type]):
        item_value = None
    return item_value
For the uninstall section of the registry, return the value for the given name. Args: value_name (str): Registry value name. wanted_type (str): The type of value wanted; if the type does not match, None is returned. Supported values are ``str``, ``int``, ``list``, and ``bytes``. Returns: value: Value requested or None if not found.
codesearchnet
def validate(self, graph):
    if not nx.is_directed_acyclic_graph(graph):
        raise DirectedAcyclicGraphInvalid(graph_name=self._name)
Validate the graph by checking whether it is a directed acyclic graph. Args: graph (DiGraph): Reference to a DiGraph object from NetworkX. Raises: DirectedAcyclicGraphInvalid: If the graph is not a valid dag.
codesearchnet
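For illustration, the NetworkX check that validate() wraps, shown on a small graph; DirectedAcyclicGraphInvalid is the project-specific exception raised above:

import networkx as nx

g = nx.DiGraph()
g.add_edges_from([("a", "b"), ("b", "c")])
print(nx.is_directed_acyclic_graph(g))  # True -> validate() passes

g.add_edge("c", "a")  # introduces a cycle
print(nx.is_directed_acyclic_graph(g))  # False -> validate() raises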
def get_defaults(path):
    defaults = {}
    if os.path.isfile(path):
        with open(path) as f:
            for line in f:
                line = line.strip()
                # Skip lines without a key=value pair and '#' comment lines.
                if '=' not in line or line.startswith('#'):
                    continue
                k, v = line.split('=', 1)
                v = v.strip('"').strip("'")
                defaults[k] = v
        return defaults
    else:
        return {}
Reads file for configuration defaults. Arguments: - path (str) Absolute filepath (usually ~/.licenser) Returns: - (dict) Defaults for name, email, license, .txt extension
juraj-google-style
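A usage sketch for get_defaults() with a throwaway file in the key=value format it expects (comment lines skipped, surrounding quotes stripped); the keys and path are illustrative only and assume the comment-skipping reconstruction above:

with open("/tmp/licenser_defaults", "w") as f:
    f.write('# comment lines are skipped\nname="Ada Lovelace"\nemail="ada@example.com"\nlicense=mit\n')

print(get_defaults("/tmp/licenser_defaults"))
# {'name': 'Ada Lovelace', 'email': 'ada@example.com', 'license': 'mit'}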
def __init__(self, bits: List[int], energy_layers: List[tf.keras.layers.Layer], name: Union[None, str]=None):
    super().__init__(name=name)
    self._bits = energy_utils.check_bits(bits)
    self._energy_layers = energy_layers
Initializes a BitstringEnergy. Args: bits: Unique labels for the bits on which this distribution is supported. energy_layers: Concatenation of these layers yields trainable map from bitstrings to scalars. name: Optional name for the model.
github-repos
def _LiteralEval(value): root = ast.parse(value, mode='eval') if isinstance(root.body, ast.BinOp): raise ValueError(value) for node in ast.walk(root): for field, child in ast.iter_fields(node): if isinstance(child, list): for index, subchild in enumerate(child): if isinstance(subchild, ast.Name): child[index] = _Replacement(subchild) elif isinstance(child, ast.Name): replacement = _Replacement(child) setattr(node, field, replacement) return ast.literal_eval(root)
Parse value as a Python literal, or container of containers and literals. First the AST of the value is updated so that bare-words are turned into strings. Then the resulting AST is evaluated as a literal or container of only containers and literals. This allows for the YAML-like syntax {a: b} to represent the dict {'a': 'b'} Args: value: A string to be parsed as a literal or container of containers and literals. Returns: The Python value representing the value arg. Raises: ValueError: If the value is not an expression with only containers and literals. SyntaxError: If the value string has a syntax error.
github-repos
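_LiteralEval() above calls a companion helper _Replacement() that is not shown. A hypothetical sketch that turns a bare-word ast.Name into a string constant, which is enough to make the docstring's example work (Python Fire's actual helper may differ):

import ast

def _Replacement(node):
    # Replace a bare word (ast.Name) with a string constant carrying the same text.
    return ast.Constant(value=node.id)

print(_LiteralEval("{a: b, nums: [1, 2, 3]}"))  # {'a': 'b', 'nums': [1, 2, 3]}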
def from_grpc_status(status_code, message, **kwargs):
    error_class = exception_class_for_grpc_status(status_code)
    error = error_class(message, **kwargs)
    if error.grpc_status_code is None:
        error.grpc_status_code = status_code
    return error
Create a :class:`GoogleAPICallError` from a :class:`grpc.StatusCode`. Args: status_code (grpc.StatusCode): The gRPC status code. message (str): The exception message. kwargs: Additional arguments passed to the :class:`GoogleAPICallError` constructor. Returns: GoogleAPICallError: An instance of the appropriate subclass of :class:`GoogleAPICallError`.
juraj-google-style
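A usage sketch for from_grpc_status(), assuming it lives in google.api_core.exceptions (which provides exception_class_for_grpc_status) and that grpcio is installed:

import grpc

err = from_grpc_status(grpc.StatusCode.NOT_FOUND, "resource missing")
print(type(err).__name__, err.grpc_status_code)  # NotFound StatusCode.NOT_FOUND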