Dataset columns: code (string, lengths 20 to 4.93k), docstring (string, lengths 33 to 1.27k), source (string, 3 classes).
def process(i: int, sentence: str, sep_indices: typing.Set[int], scale: int) -> str:
    feature = get_feature(
        sentence[i - 3] if i > 2 else INVALID,
        sentence[i - 2] if i > 1 else INVALID,
        sentence[i - 1],
        sentence[i] if i < len(sentence) else INVALID,
        sentence[i + 1] if i + 1 < len(sentence) else INVALID,
        sentence[i + 2] if i + 2 < len(sentence) else INVALID)
    positive = i in sep_indices
    line = '\t'.join(['%d' % scale if positive else '%d' % -scale] + feature)
    return line
Outputs an encoded line of features from the given index. Args: i (int): index sentence (str): A sentence sep_indices (typing.Set[int]): A set of separator indices. scale (int): A weight scale for the entries.
github-repos
def plot_rb_data(xdata, ydatas, yavg, yerr, fit, survival_prob, ax=None, show_plt=True):
    if not HAS_MATPLOTLIB:
        raise ImportError('The function plot_rb_data needs matplotlib. '
                          'Run "pip install matplotlib" before.')
    if ax is None:
        plt.figure()
        ax = plt.gca()
    for ydata in ydatas:
        ax.plot(xdata, ydata, color='gray', linestyle='none', marker='x')
    ax.errorbar(xdata, yavg, yerr=yerr, color='r', linestyle='--', linewidth=3)
    ax.plot(xdata, survival_prob(xdata, *fit), color='blue', linestyle='-', linewidth=2)
    ax.tick_params(labelsize=14)
    ax.set_xlabel('Clifford Length', fontsize=16)
    ax.set_ylabel('Z', fontsize=16)
    ax.grid(True)
    if show_plt:
        plt.show()
Plot randomized benchmarking data. Args: xdata (list): list of subsequence lengths ydatas (list): list of lists of survival probabilities for each sequence yavg (list): mean of the survival probabilities at each sequence length yerr (list): error of the survival fit (list): fit parameters survival_prob (callable): function that computes survival probability ax (Axes or None): plot axis (if passed in) show_plt (bool): display the plot. Raises: ImportError: If matplotlib is not installed.
codesearchnet
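A minimal usage sketch for plot_rb_data; the survival model, fit values, and data below are hypothetical and only illustrate the expected call shape:

    import numpy as np

    def survival_prob(lengths, a, alpha, b):
        # assumed exponential-decay model: a * alpha**m + b
        return a * alpha ** np.asarray(lengths) + b

    xdata = [1, 10, 20, 50, 100]
    ydatas = [[0.99, 0.95, 0.90, 0.80, 0.65],
              [0.98, 0.94, 0.91, 0.78, 0.67]]
    yavg = np.mean(ydatas, axis=0)
    yerr = np.std(ydatas, axis=0)
    plot_rb_data(xdata, ydatas, yavg, yerr, fit=[0.5, 0.98, 0.5], survival_prob=survival_prob)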
def update_all(cls, *criterion, **kwargs):
    try:
        r = cls.query.filter(*criterion).update(kwargs, 'fetch')
        cls.session.commit()
        return r
    except:
        cls.session.rollback()
        raise
Batch method for updating all instances obeying the criterion Args: *criterion: SQLAlchemy query criterion for filtering what instances to update **kwargs: The parameters to be updated Examples: >>> User.update_all(active=True) >>> Customer.update_all(Customer.country=='India', active=True) The second example sets active=True for all customers with country India.
codesearchnet
def push_stack(stack, substack, op_id):
    if substack is not None and not isinstance(substack, Stack):
        raise ValueError('Substack should be type tangent.Stack or None, '
                         'instead found %s' % type(substack))
    if __debug__:
        stack.append((substack, op_id))
    else:
        stack.append(substack)
Proxy of push, where we know we're pushing a stack onto a stack. Used when differentiating call trees,where sub-functions get their own stack. See push() for more. Args: stack: The stack object, which must support appending values. substack: The stack to append. op_id: A unique variable that is also passed into the corresponding pop. Allows optimization passes to track pairs of pushes and pops. Raises: ValueError: If a non-stack value for `substack` is passed.
codesearchnet
def input(self): return self._get_node_attribute_at_index(0, 'input_tensors', 'input')
Retrieves the input tensor(s) of a symbolic operation. Only returns the tensor(s) corresponding to the *first time* the operation was called. Returns: Input tensor or list of input tensors.
github-repos
def _show_status_for_work(self, work): work_count = len(work.work) work_completed = {} work_completed_count = 0 for v in itervalues(work.work): if v['is_completed']: work_completed_count += 1 worker_id = v['claimed_worker_id'] if worker_id not in work_completed: work_completed[worker_id] = { 'completed_count': 0, 'last_update': 0.0, } work_completed[worker_id]['completed_count'] += 1 work_completed[worker_id]['last_update'] = max( work_completed[worker_id]['last_update'], v['claimed_worker_start_time']) print('Completed {0}/{1} work'.format(work_completed_count, work_count)) for k in sorted(iterkeys(work_completed)): last_update_time = time.strftime( '%Y-%m-%d %H:%M:%S', time.localtime(work_completed[k]['last_update'])) print('Worker {0}: completed {1} last claimed work at {2}'.format( k, work_completed[k]['completed_count'], last_update_time))
Shows status for given work pieces. Args: work: instance of either AttackWorkPieces or DefenseWorkPieces
juraj-google-style
def _convert_values_to_tf_tensors(sample: rd.RepresentativeSample) -> Mapping[str, core.Tensor]:
    tensor_mapping = {}
    for name, tensorlike_value in sample.items():
        if isinstance(tensorlike_value, core.Tensor):
            tensor_value = tensorlike_value
        else:
            tensor_value = tensor_conversion.convert_to_tensor_v2_with_dispatch(tensorlike_value)
        tensor_mapping[name] = tensor_value
    return tensor_mapping
Converts TensorLike values of `sample` to Tensors. Creates a copy of `sample`, where each value is converted to Tensors unless it is already a Tensor. The values are not converted in-place (i.e. `sample` is not mutated). Args: sample: A representative sample, which is a map of {name -> tensorlike value}. Returns: Converted map of {name -> tensor}.
github-repos
def from_config(cls, config): return cls(**config)
Creates a regularizer from its config. This method is the reverse of `get_config`, capable of instantiating the same regularizer from the config dictionary. This method is used by saving and loading models to HDF5 formats, Keras model cloning, some visualization utilities, and exporting models to and from JSON. Args: config: A Python dictionary, typically the output of get_config. Returns: A regularizer instance.
github-repos
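A minimal round-trip sketch, assuming the standard Keras get_config/from_config contract (shown here with the built-in L2 regularizer):

    from tensorflow import keras

    reg = keras.regularizers.L2(0.01)
    config = reg.get_config()                      # e.g. {'l2': 0.01}
    restored = keras.regularizers.L2.from_config(config)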
def _get_pdf_filenames_at(source_directory):
    if not os.path.isdir(source_directory):
        raise ValueError("%s is not a directory!" % source_directory)
    return [os.path.join(source_directory, filename)
            for filename in os.listdir(source_directory)
            if filename.endswith(PDF_EXTENSION)]
Find all PDF files in the specified directory. Args: source_directory (str): The source directory. Returns: list(str): Filepaths to all PDF files in the specified directory. Raises: ValueError
juraj-google-style
def get_user_shakes(self):
    endpoint = '/api/shakes'
    data = self._make_request(verb='GET', endpoint=endpoint)
    shakes = [Shake.NewFromJSON(shk) for shk in data['shakes']]
    return shakes
Get a list of Shake objects for the currently authenticated user. Returns: A list of Shake objects.
codesearchnet
def download_sifts_xml(pdb_id, outdir='', force_rerun=False):
    baseURL = 'ftp:'  # NOTE: the remainder of the SIFTS FTP URL was truncated in the source
    filename = '{}.xml.gz'.format(pdb_id.lower())
    outfile = op.join(outdir, filename.split('.')[0] + '.sifts.xml')
    if ssbio.utils.force_rerun(flag=force_rerun, outfile=outfile):
        response = urlopen(baseURL + filename)
        with open(outfile, 'wb') as f:
            f.write(gzip.decompress(response.read()))
    return outfile
Download the SIFTS file for a PDB ID. Args: pdb_id (str): PDB ID outdir (str): Output directory, current working directory if not specified. force_rerun (bool): If the file should be downloaded again even if it exists Returns: str: Path to downloaded file
juraj-google-style
def Open(self, path_spec):
    self._file_system = resolver.Resolver.OpenFileSystem(path_spec)
    if self._file_system is None:
        raise errors.VolumeSystemError('Unable to resolve path specification.')
    type_indicator = self._file_system.type_indicator
    if type_indicator != definitions.TYPE_INDICATOR_TSK_PARTITION:
        raise errors.VolumeSystemError('Unsupported type indicator.')
Opens a volume defined by path specification. Args: path_spec (PathSpec): a path specification. Raises: VolumeSystemError: if the TSK partition virtual file system could not be resolved.
juraj-google-style
def __mul__(self, right: torch.Tensor) -> Rotation:
    if not isinstance(right, torch.Tensor):
        raise TypeError('The other multiplicand must be a Tensor')
    if self._rot_mats is not None:
        rot_mats = self._rot_mats * right[..., None, None]
        return Rotation(rot_mats=rot_mats, quats=None)
    elif self._quats is not None:
        quats = self._quats * right[..., None]
        return Rotation(rot_mats=None, quats=quats, normalize_quats=False)
    else:
        raise ValueError('Both rotations are None')
Pointwise left multiplication of the rotation with a tensor. Can be used to e.g. mask the Rotation. Args: right: The tensor multiplicand Returns: The product
github-repos
def mcast_ip(ip_addr, return_tuple=True):
    regex_mcast_ip = __re.compile("^(((2[2-3][4-9])|(23[0-3]))\.((25[0-5])|(2[0-4][0-9])|(1[0-9][0-9])|([1-9]?[0-9]))\.((25[0-5])|(2[0-4][0-9])|(1[0-9][0-9])|([1-9]?[0-9]))\.((25[0-5])|(2[0-4][0-9])|(1[0-9][0-9])|([1-9]?[0-9])))$")
    if return_tuple:
        while not regex_mcast_ip.match(ip_addr):
            print("Not a good multicast IP.")
            print("Please try again.")
            ip_addr = input("Please enter a multicast IP address in the following format x.x.x.x: ")
        return ip_addr
    elif not return_tuple:
        if not regex_mcast_ip.match(ip_addr):
            return False
        else:
            return True
Function to check if a address is multicast Args: ip_addr: Multicast IP address in the following format 239.1.1.1 return_tuple: Set to True it returns a IP, set to False returns True or False Returns: see return_tuple for return options
juraj-google-style
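A small illustration of the two return modes; note that return_tuple=True loops interactively until a valid multicast address is entered:

    mcast_ip('239.1.1.1', return_tuple=False)    # True
    mcast_ip('192.168.1.1', return_tuple=False)  # False
    addr = mcast_ip('239.1.1.1')                 # returns the validated address string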
def execute(self, triple_map, output, **kwargs): subjects = [] logical_src_iterator = str(triple_map.logicalSource.iterator) json_object = kwargs.get('obj', self.source) if (logical_src_iterator == '.'): results = [None] else: json_path_exp = jsonpath_ng.parse(logical_src_iterator) results = [r.value for r in json_path_exp.find(json_object)][0] for row in results: subject = self.generate_term(term_map=triple_map.subjectMap, **kwargs) for pred_obj_map in triple_map.predicateObjectMap: predicate = pred_obj_map.predicate if (pred_obj_map.template is not None): output.add((subject, predicate, self.generate_term(term_map=pred_obj_map, **kwargs))) if (pred_obj_map.parentTriplesMap is not None): self.__handle_parents__(output, parent_map=pred_obj_map.parentTriplesMap, subject=subject, predicate=predicate, obj=row, **kwargs) if (pred_obj_map.reference is not None): ref_exp = jsonpath_ng.parse(str(pred_obj_map.reference)) found_objects = [r.value for r in ref_exp.find(row)] for obj in found_objects: if rdflib.term._is_valid_uri(obj): rdf_obj = rdflib.URIRef(str(obj)) else: rdf_obj = rdflib.Literal(str(obj)) output.add((subject, predicate, rdf_obj)) if (pred_obj_map.constant is not None): output.add((subject, predicate, pred_obj_map.constant)) subjects.append(subject) return subjects
Method executes mapping between JSON source and output RDF Args: ----- triple_map: SimpleNamespace
codesearchnet
def is_variable_initialized(ref, name=None):
    if ref.dtype._is_ref_dtype:
        return gen_state_ops.is_variable_initialized(ref=ref, name=name)
    return ref.is_initialized(name=name)
Checks whether a tensor has been initialized. Outputs boolean scalar indicating whether the tensor has been initialized. Args: ref: A mutable `Tensor`. Should be from a `Variable` node. May be uninitialized. name: A name for the operation (optional). Returns: A `Tensor` of type `bool`.
github-repos
def install_local(self):
    folder = self._get_local_folder()
    installed = self.installed_dir()
    self._check_module(installed.parent)
    installed.symlink_to(folder.resolve())
Make a symlink in install folder to a local NApp. Raises: FileNotFoundError: If NApp is not found.
codesearchnet
def monitorTUN(self):
    packet = self.checkTUN()
    if packet:
        try:
            ret = self._faraday.send(packet)
            return ret
        except AttributeError as error:
            print('AttributeError')
Monitors the TUN adapter and sends data over serial port. Returns: ret: Number of bytes sent over serial port
codesearchnet
def detect_arbitrary_send(self, contract):
    ret = []
    for f in [f for f in contract.functions if f.contract == contract]:
        nodes = self.arbitrary_send(f)
        if nodes:
            ret.append((f, nodes))
    return ret
Detect arbitrary send Args: contract (Contract) Returns: list((Function), (list (Node)))
juraj-google-style
def write_schema_to_file(cls, schema, file_pointer=stdout, folder=MISSING, context=DEFAULT_DICT):
    schema = cls._get_schema(schema)
    json_schema = cls.generate_json_schema(schema, context=context)
    if folder:
        schema_filename = getattr(schema.Meta, 'json_schema_filename',
                                  '.'.join([schema.__class__.__name__, 'json']))
        json_path = os.path.join(folder, schema_filename)
        file_pointer = open(json_path, 'w')
    json.dump(json_schema, file_pointer, indent=2)
    return json_schema
Given a Marshmallow schema, create a JSON Schema for it. Args: schema (marshmallow.Schema|str): The Marshmallow schema, or the Python path to one, to create the JSON schema for. Keyword Args: file_pointer (file, optional): The pointer to the file to write this schema to. If not provided, the schema will be dumped to ``sys.stdout``. folder (str, optional): The folder in which to save the JSON schema. The name of the schema file can be optionally controlled my the schema's ``Meta.json_schema_filename``. If that attribute is not set, the class's name will be used for the filename. If writing the schema to a specific file is desired, please pass in a ``file_pointer``. context (dict, optional): The Marshmallow context to be pushed to the schema generates the JSONSchema. Returns: dict: The JSON schema in dictionary form.
codesearchnet
def add_arc(self, src, dst, char): for s_idx in [src, dst]: if s_idx >= len(self.states): for i in range(len(self.states), s_idx + 1): self.states.append(DFAState(i)) for arc in self.states[src].arcs: if arc.ilabel == self.isyms.__getitem__(char) or char == EPSILON: self.nfa = True break self.states[src].arcs.append( DFAArc(src, dst, self.isyms.__getitem__(char)))
Adds a new Arc Args: src (int): The source state identifier dst (int): The destination state identifier char (str): The character for the transition Returns: None
juraj-google-style
def get_victim_social_asset(self, main_type, sub_type, unique_id, asset_id, params=None):
    params = params or {}
    return self.victim_social_asset(main_type, sub_type, unique_id, asset_id, params=params)
Args: main_type: sub_type: unique_id: asset_id: params: Return:
juraj-google-style
def insert_top(self, node):
    if not isinstance(node, grammar.STATEMENTS):
        raise ValueError
    self.to_insert_top.append(node)
Insert statements at the top of the function body. Note that multiple calls to `insert_top` will result in the statements being prepended in that order; this is different behavior from `prepend`. Args: node: The statement to prepend. Raises: ValueError: If the given node is not a statement.
juraj-google-style
def get_height_rect( self, x: int, y: int, width: int, height: int, string: str ) -> int: string_ = string.encode("utf-8") return int( lib.get_height_rect( self.console_c, x, y, width, height, string_, len(string_) ) )
Return the height of this text word-wrapped into this rectangle. Args: x (int): The x coordinate from the left. y (int): The y coordinate from the top. width (int): Maximum width to render the text. height (int): Maximum lines to render the text. string (str): A Unicode string. Returns: int: The number of lines of text once word-wrapped.
juraj-google-style
class _ConfusionMatrixConditionCount(Metric): def __init__(self, confusion_matrix_cond, thresholds=None, name=None, dtype=None): super().__init__(name=name, dtype=dtype) self._confusion_matrix_cond = confusion_matrix_cond self.init_thresholds = thresholds self.thresholds = metrics_utils.parse_init_thresholds(thresholds, default_threshold=0.5) self._thresholds_distributed_evenly = metrics_utils.is_evenly_distributed_thresholds(self.thresholds) self.accumulator = self.add_variable(shape=(len(self.thresholds),), initializer=initializers.Zeros(), name='accumulator') def update_state(self, y_true, y_pred, sample_weight=None): return metrics_utils.update_confusion_matrix_variables({self._confusion_matrix_cond: self.accumulator}, y_true, y_pred, thresholds=self.thresholds, thresholds_distributed_evenly=self._thresholds_distributed_evenly, sample_weight=sample_weight) def result(self): if len(self.thresholds) == 1: result = self.accumulator[0] else: result = self.accumulator return backend.convert_to_tensor(result) def get_config(self): config = {'thresholds': self.init_thresholds} base_config = super().get_config() return {**base_config, **config}
Calculates the number of the given confusion matrix condition. Args: confusion_matrix_cond: One of `metrics_utils.ConfusionMatrix` conditions. thresholds: (Optional) Defaults to `0.5`. A float value or a python list / tuple of float threshold values in `[0, 1]`. A threshold is compared with prediction values to determine the truth value of predictions (i.e., above the threshold is `True`, below is `False`). One metric value is generated for each threshold value. name: (Optional) string name of the metric instance. dtype: (Optional) data type of the metric result.
github-repos
def body(self, body):
    self._request.body = body
    self.add_matcher(matcher('BodyMatcher', body))
Defines the body data to match. ``body`` argument can be a ``str``, ``binary`` or a regular expression. Arguments: body (str|binary|regex): body data to match. Returns: self: current Mock instance.
juraj-google-style
def _index_to_ansi_values(self, index):
    if self.__class__.__name__[0] == 'F':
        if index < 8:
            index += ANSI_FG_LO_BASE
        else:
            index += (ANSI_FG_HI_BASE - 8)
    elif index < 8:
        index += ANSI_BG_LO_BASE
    else:
        index += (ANSI_BG_HI_BASE - 8)
    return [str(index)]
Converts an palette index to the corresponding ANSI color. Arguments: index - an int (from 0-15) Returns: index as str in a list for compatibility with values.
codesearchnet
def _calculate_hash(files, root):
    file_hash = hashlib.md5()
    for fname in sorted(files):
        f = os.path.join(root, fname)
        file_hash.update((fname + '\x00').encode())
        with open(f, 'rb') as fd:
            for chunk in iter(lambda: fd.read(4096), ''):
                if not chunk:
                    break
                file_hash.update(chunk)
        file_hash.update('\x00'.encode())
    return file_hash.hexdigest()
Returns a hash of all of the given files at the given root. Args: files (list[str]): file names to include in the hash calculation, relative to ``root``. root (str): base directory to analyze files in. Returns: str: A hash of the hashes of the given files.
codesearchnet
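A usage sketch with hypothetical paths, assuming the listed files exist under the given root:

    digest = _calculate_hash(['setup.py', 'src/app.py'], root='/tmp/project')
    print(digest)  # hex MD5 digest, stable across runs for identical file contents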
def _UpdateProcessingStatus(self, pid, process_status, used_memory): self._RaiseIfNotRegistered(pid) if not process_status: return process = self._processes_per_pid[pid] status_indicator = process_status.get('processing_status', None) self._RaiseIfNotMonitored(pid) display_name = process_status.get('display_name', '') number_of_consumed_event_tags = process_status.get( 'number_of_consumed_event_tags', None) number_of_produced_event_tags = process_status.get( 'number_of_produced_event_tags', None) number_of_consumed_events = process_status.get( 'number_of_consumed_events', None) number_of_produced_events = process_status.get( 'number_of_produced_events', None) number_of_consumed_reports = process_status.get( 'number_of_consumed_reports', None) number_of_produced_reports = process_status.get( 'number_of_produced_reports', None) number_of_consumed_sources = process_status.get( 'number_of_consumed_sources', None) number_of_produced_sources = process_status.get( 'number_of_produced_sources', None) number_of_consumed_warnings = process_status.get( 'number_of_consumed_warnings', None) number_of_produced_warnings = process_status.get( 'number_of_produced_warnings', None) if status_indicator != definitions.STATUS_INDICATOR_IDLE: last_activity_timestamp = process_status.get( 'last_activity_timestamp', 0.0) if last_activity_timestamp: last_activity_timestamp += self._PROCESS_WORKER_TIMEOUT current_timestamp = time.time() if current_timestamp > last_activity_timestamp: logger.error(( 'Process {0:s} (PID: {1:d}) has not reported activity within ' 'the timeout period.').format(process.name, pid)) status_indicator = definitions.STATUS_INDICATOR_NOT_RESPONDING self._processing_status.UpdateWorkerStatus( process.name, status_indicator, pid, used_memory, display_name, number_of_consumed_sources, number_of_produced_sources, number_of_consumed_events, number_of_produced_events, number_of_consumed_event_tags, number_of_produced_event_tags, number_of_consumed_reports, number_of_produced_reports, number_of_consumed_warnings, number_of_produced_warnings)
Updates the processing status. Args: pid (int): process identifier (PID) of the worker process. process_status (dict[str, object]): status values received from the worker process. used_memory (int): size of used memory in bytes. Raises: KeyError: if the process is not registered with the engine.
juraj-google-style
def _ReadRecordAttributeValueOffset( self, file_object, file_offset, number_of_attribute_values): offsets_data_size = number_of_attribute_values * 4 offsets_data = file_object.read(offsets_data_size) context = dtfabric_data_maps.DataTypeMapContext(values={ 'number_of_attribute_values': number_of_attribute_values}) data_type_map = self._GetDataTypeMap( 'keychain_record_attribute_value_offsets') try: attribute_value_offsets = self._ReadStructureFromByteStream( offsets_data, file_offset, data_type_map, context=context) except (ValueError, errors.ParseError) as exception: raise errors.ParseError(( 'Unable to map record attribute value offsets data at offset: ' '0x{0:08x} with error: {1!s}').format(file_offset, exception)) return attribute_value_offsets
Reads the record attribute value offsets. Args: file_object (file): file-like object. file_offset (int): offset of the record attribute values offsets relative to the start of the file. number_of_attribute_values (int): number of attribute values. Returns: keychain_record_attribute_value_offsets: record attribute value offsets. Raises: ParseError: if the record attribute value offsets cannot be read.
juraj-google-style
def update_x(self, x, indices=None): x = _make_np_bool(x) if indices is None: if len(self._x) != len(x): raise QiskitError("During updating whole x, you can not change " "the number of qubits.") self._x = x else: if not isinstance(indices, list) and not isinstance(indices, np.ndarray): indices = [indices] for p, idx in enumerate(indices): self._x[idx] = x[p] return self
Update partial or entire x. Args: x (numpy.ndarray or list): to-be-updated x indices (numpy.ndarray or list or optional): to-be-updated qubit indices Returns: Pauli: self Raises: QiskitError: when updating whole x, the number of qubits must be the same.
juraj-google-style
def json_to_key_value(json_data, key_field, value_field=None, array=False):
    if not isinstance(json_data, list):
        json_data = [json_data]
    key_value_array = []
    for d in json_data:
        if d.get(key_field) is not None and value_field is None:
            key = key_field
            value = d.get(key_field)
        elif d.get(key_field) is not None and d.get(value_field) is not None:
            key = d.get(key_field)
            value = d.get(value_field)
        else:
            continue
        key_value_array.append({'key': key, 'value': value})
    if len(key_value_array) == 1 and not array:
        return key_value_array[0]
    return key_value_array
Convert JSON data to a KeyValue/KeyValueArray. Args: json_data (dictionary|list): Array/List of JSON data. key_field (string): Field name for the key. value_field (string): Field name for the value or use the value of the key field. array (boolean): Always return array even if only on result. Returns: (dictionary|list): A dictionary or list representing a KeyValue or KeyValueArray.
juraj-google-style
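A short sketch of the conversion behavior with hypothetical input data:

    data = [{'name': 'alpha', 'count': 1}, {'name': 'beta', 'count': 2}]
    json_to_key_value(data, key_field='name', value_field='count')
    # -> [{'key': 'alpha', 'value': 1}, {'key': 'beta', 'value': 2}]

    json_to_key_value({'name': 'alpha'}, key_field='name')
    # -> {'key': 'name', 'value': 'alpha'}  (single result and array=False)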
def instantiate(self, cls=None):
    if cls is None:
        cls = self.cls
    if cls is None:
        raise TypeError("cls must be a class")
    return cls.create(*self.args, **self.kwargs)
Return an instantiated Expression as ``cls.create(*self.args, **self.kwargs)`` Args: cls (class): The class of the instantiated expression. If not given, ``self.cls`` will be used.
juraj-google-style
def test_torch_export(self, config=None, inputs_dict=None, tolerance=0.0001): if not self.test_torch_exportable: self.skipTest(reason='test_torch_exportable=False for this model.') def recursively_check(eager_outputs, exported_outputs): is_tested = False if isinstance(eager_outputs, torch.Tensor): torch.testing.assert_close(eager_outputs, exported_outputs, atol=tolerance, rtol=tolerance) return True elif isinstance(eager_outputs, (tuple, list)): for eager_output, exported_output in zip(eager_outputs, exported_outputs): is_tested = is_tested or recursively_check(eager_output, exported_output) return is_tested elif isinstance(eager_outputs, dict): for key in eager_outputs: is_tested = is_tested or recursively_check(eager_outputs[key], exported_outputs[key]) return is_tested return is_tested default_config, default_inputs_dict = self.model_tester.prepare_config_and_inputs_for_common() config = config or default_config inputs_dict = inputs_dict or default_inputs_dict for model_class in self.all_model_classes: if model_class.__name__.endswith('ForPreTraining'): continue with self.subTest(model_class.__name__): model = model_class(config).eval().to(torch_device) exported_model = torch.export.export(model, args=(), kwargs=inputs_dict, strict=getattr(self, 'test_torch_exportable_strictly', True)) with torch.no_grad(): torch.manual_seed(1234) eager_outputs = model(**inputs_dict) torch.manual_seed(1234) exported_outputs = exported_model.module().forward(**inputs_dict) is_tested = recursively_check(eager_outputs, exported_outputs) self.assertTrue(is_tested, msg=f'No outputs were compared for {model_class.__name__}')
Test if model can be exported with torch.export.export() Args: config (PretrainedConfig): Config to use for the model, if None, use default config from model_tester inputs_dict (dict): Inputs to use for the model, if None, use default inputs from model_tester tolerance (float): `atol` for torch.allclose(), defined in signature for test overriding
github-repos
def label_total_duration(self, label_list_ids=None):
    duration = collections.defaultdict(float)
    for label_list in self.label_lists.values():
        if label_list_ids is None or label_list.idx in label_list_ids:
            for label_value, label_duration in label_list.label_total_duration().items():
                duration[label_value] += label_duration
    return duration
Return a dictionary containing the number of seconds, every label-value is occurring in this utterance. Args: label_list_ids (list): If not None, only labels from label-lists with an id contained in this list are considered. Returns: dict: A dictionary containing the number of seconds with the label-value as key.
juraj-google-style
def retry(func): def retried_func(*args, **kwargs): max_tries = 3 tries = 0 while True: try: resp = func(*args, **kwargs) except requests.exceptions.ConnectionError as exc: exc.msg = 'Connection error for session; exiting' raise exc except requests.exceptions.HTTPError as exc: exc.msg = 'HTTP error for session; exiting' raise exc if ((resp.status_code != 200) and (tries < max_tries)): logger.warning('retrying request; current status code: {}'.format(resp.status_code)) tries += 1 time.sleep((tries ** 2)) continue break if (resp.status_code != 200): error_message = resp.json()['error']['message'] logger.error('HTTP Error code: {}: {}'.format(resp.status_code, error_message)) logger.error('Rule payload: {}'.format(kwargs['rule_payload'])) raise requests.exceptions.HTTPError return resp return retried_func
Decorator to handle API retries and exceptions. Defaults to three retries. Args: func (function): function for decoration Returns: decorated function
codesearchnet
def GetNewSessionID(self):
    base = self.runner_args.base_session_id
    if base is None:
        base = self.runner_args.client_id or aff4.ROOT_URN
        base = base.Add('flows')
    return rdfvalue.SessionID(base=base, queue=self.runner_args.queue)
Returns a random session ID for this flow based on the runner args. Returns: A formatted session id URN.
codesearchnet
def get_browser(browser_name, capabilities=None, **options): if browser_name == "chrome": return webdriver.Chrome(desired_capabilities=capabilities, **options) if browser_name == "edge": return webdriver.Edge(capabilities=capabilities, **options) if browser_name in ["ff", "firefox"]: return webdriver.Firefox(capabilities=capabilities, **options) if browser_name in ["ie", "internet_explorer"]: return webdriver.Ie(capabilities=capabilities, **options) if browser_name == "phantomjs": return webdriver.PhantomJS(desired_capabilities=capabilities, **options) if browser_name == "remote": return webdriver.Remote(desired_capabilities=capabilities, **options) if browser_name == "safari": return webdriver.Safari(desired_capabilities=capabilities, **options) raise ValueError("unsupported browser: {}".format(repr(browser_name)))
Returns an instance of the given browser with the given capabilities. Args: browser_name (str): The name of the desired browser. capabilities (Dict[str, str | bool], optional): The desired capabilities of the browser. Defaults to None. options: Arbitrary keyword arguments for the browser-specific subclass of :class:`webdriver.Remote`. Returns: WebDriver: An instance of the desired browser.
juraj-google-style
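A usage sketch, assuming a Selenium version whose webdriver constructors still accept the capabilities-style keywords used above and a local chromedriver on PATH:

    driver = get_browser('chrome')
    driver.get('https://example.com')
    driver.quit()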
def c_overturned(step):
    rbot, rtop = misc.get_rbounds(step)
    cinit, rad = init_c_overturn(step)
    radf = (rtop**3 + rbot**3 - rad**3)**(1 / 3)
    return cinit, radf
Theoretical overturned concentration. This compute the resulting composition profile if fractional crystallization of a SMO is assumed and then a purely radial overturn happens. Args: step (:class:`~stagpy.stagyydata._Step`): a step of a StagyyData instance. Returns: tuple of :class:`numpy.array`: the composition and the radial position at which it is evaluated.
codesearchnet
def read_counts(node):
    cfg.forward(node, cfg.ReachingDefinitions())
    rc = ReadCounts()
    rc.visit(node)
    return rc.n_read
Check how many times a variable definition was used. Args: node: An AST to analyze. Returns: A dictionary from assignment nodes to the number of times the assigned to variable was used.
codesearchnet
def is_alias_command(subcommands, args):
    if not args:
        return False
    for subcommand in subcommands:
        if args[:2] == ['alias', subcommand]:
            return True
    return False
Check if the user is invoking one of the comments in 'subcommands' in the from az alias . Args: subcommands: The list of subcommands to check through. args: The CLI arguments to process. Returns: True if the user is invoking 'az alias {command}'.
juraj-google-style
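A quick illustration of the matching logic with hypothetical CLI arguments:

    is_alias_command(['create', 'remove'], ['alias', 'create', '--name', 'co'])  # True
    is_alias_command(['create', 'remove'], ['alias', 'list'])                    # False
    is_alias_command(['create'], [])                                             # False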
def __init__(self, name, default_name=None, values=None) -> None:
    if not (default_name is None or isinstance(default_name, str)):
        raise TypeError('`default_name` type (%s) is not a string type. '
                        'You likely meant to pass this into the `values` kwarg.' % type(default_name))
    self._name = default_name if name is None else name
    self._default_name = default_name
    self._values = values
Initialize the context manager. Args: name: The name argument that is passed to the op function. default_name: The default name to use if the `name` argument is `None`. values: The list of `Tensor` arguments that are passed to the op function. Raises: TypeError: if `default_name` is passed in but not a string.
github-repos
def load_primitive(name):
    for base_path in get_primitives_paths():
        parts = name.split('.')
        number_of_parts = len(parts)
        for folder_parts in range(number_of_parts):
            folder = os.path.join(base_path, *parts[:folder_parts])
            filename = '.'.join(parts[folder_parts:]) + '.json'
            json_path = os.path.join(folder, filename)
            if os.path.isfile(json_path):
                with open(json_path, 'r') as json_file:
                    LOGGER.debug('Loading primitive %s from %s', name, json_path)
                    return json.load(json_file)
    raise ValueError('Unknown primitive: {}'.format(name))
Locate and load the JSON annotation of the given primitive. All the paths found in PRIMTIVE_PATHS will be scanned to find a JSON file with the given name, and as soon as a JSON with the given name is found it is returned. Args: name (str): name of the primitive to look for. The name should correspond to the primitive, not to the filename, as the `.json` extension will be added dynamically. Returns: dict: The content of the JSON annotation file loaded into a dict. Raises: ValueError: A `ValueError` will be raised if the primitive cannot be found.
codesearchnet
def get_model_proto(iterator) -> model_pb2.ModelProto:
    if isinstance(iterator, iterator_ops.OwnedIterator):
        iterator_resource = iterator._iterator_resource
    elif isinstance(iterator, dataset_ops.NumpyIterator):
        iterator_resource = iterator._iterator._iterator_resource
    else:
        raise ValueError('Only supports `tf.data.Iterator`-typed `iterator`.')
    if not context.executing_eagerly():
        raise ValueError(f'{get_model_proto.__name__} is not supported in graph mode.')
    model_proto_string_tensor = ged_ops.iterator_get_model_proto(iterator_resource)
    model_proto_bytes = model_proto_string_tensor.numpy()
    return model_pb2.ModelProto.FromString(model_proto_bytes)
Gets the analytical model inside of `iterator` as `model_pb2.ModelProto`. Args: iterator: An `iterator_ops.OwnedIterator` or `dataset_ops.NumpyIterator` Returns: The model inside of this iterator as a model proto. Raises: NotFoundError: If this iterator's autotune is not enabled.
github-repos
def set_total_channel_deposit(self, registry_address: PaymentNetworkID, token_address: TokenAddress, partner_address: Address, total_deposit: TokenAmount, retry_timeout: NetworkTimeout=DEFAULT_RETRY_TIMEOUT): chain_state = views.state_from_raiden(self.raiden) token_addresses = views.get_token_identifiers(chain_state, registry_address) channel_state = views.get_channelstate_for(chain_state=chain_state, payment_network_id=registry_address, token_address=token_address, partner_address=partner_address) if (not is_binary_address(token_address)): raise InvalidAddress('Expected binary address format for token in channel deposit') if (not is_binary_address(partner_address)): raise InvalidAddress('Expected binary address format for partner in channel deposit') if (token_address not in token_addresses): raise UnknownTokenAddress('Unknown token address') if (channel_state is None): raise InvalidAddress('No channel with partner_address for the given token') if (self.raiden.config['environment_type'] == Environment.PRODUCTION): per_token_network_deposit_limit = RED_EYES_PER_TOKEN_NETWORK_LIMIT else: per_token_network_deposit_limit = UINT256_MAX token = self.raiden.chain.token(token_address) token_network_registry = self.raiden.chain.token_network_registry(registry_address) token_network_address = token_network_registry.get_token_network(token_address) token_network_proxy = self.raiden.chain.token_network(token_network_address) channel_proxy = self.raiden.chain.payment_channel(canonical_identifier=channel_state.canonical_identifier) if (total_deposit == 0): raise DepositMismatch('Attempted to deposit with total deposit being 0') addendum = (total_deposit - channel_state.our_state.contract_balance) total_network_balance = token.balance_of(registry_address) if ((total_network_balance + addendum) > per_token_network_deposit_limit): raise DepositOverLimit(f'The deposit of {addendum} will exceed the token network limit of {per_token_network_deposit_limit}') balance = token.balance_of(self.raiden.address) functions = token_network_proxy.proxy.contract.functions deposit_limit = functions.channel_participant_deposit_limit().call() if (total_deposit > deposit_limit): raise DepositOverLimit(f'The additional deposit of {addendum} will exceed the channel participant limit of {deposit_limit}') if (not (balance >= addendum)): msg = 'Not enough balance to deposit. {} Available={} Needed={}'.format(pex(token_address), balance, addendum) raise InsufficientFunds(msg) channel_proxy.set_total_deposit(total_deposit=total_deposit, block_identifier=views.state_from_raiden(self.raiden).block_hash) target_address = self.raiden.address waiting.wait_for_participant_newbalance(raiden=self.raiden, payment_network_id=registry_address, token_address=token_address, partner_address=partner_address, target_address=target_address, target_balance=total_deposit, retry_timeout=retry_timeout)
Set the `total_deposit` in the channel with the peer at `partner_address` and the given `token_address` in order to be able to do transfers. Raises: InvalidAddress: If either token_address or partner_address is not 20 bytes long. TransactionThrew: May happen for multiple reasons: - If the token approval fails, e.g. the token may validate if account has enough balance for the allowance. - The deposit failed, e.g. the allowance did not set the token aside for use and the user spent it before deposit was called. - The channel was closed/settled between the allowance call and the deposit call. AddressWithoutCode: The channel was settled during the deposit execution. DepositOverLimit: The total deposit amount is higher than the limit.
codesearchnet
def create_balanced_geojson(input_file, classes, output_file='balanced.geojson', samples_per_class=None): if (not output_file.endswith('.geojson')): output_file += '.geojson' with open(input_file) as f: data = geojson.load(f) sorted_classes = {clss: [] for clss in classes} for feat in data['features']: try: sorted_classes[feat['properties']['class_name']].append(feat) except KeyError: continue if (not samples_per_class): smallest_class = min(sorted_classes, key=(lambda clss: len(sorted_classes[clss]))) samples_per_class = len(sorted_classes[smallest_class]) try: samps = [random.sample(feats, samples_per_class) for feats in sorted_classes.values()] final = [feat for sample in samps for feat in sample] except ValueError: raise Exception('Insufficient features in at least one class. Set samples_per_class to None to use maximum amount of features.') np.random.shuffle(final) data['features'] = final with open(output_file, 'wb') as f: geojson.dump(data, f)
Create a geojson comprised of balanced classes from the class_name property in input_file. Randomly selects polygons from all classes. Args: input_file (str): File name classes (list[str]): Classes in input_file to include in the balanced output file. Must exactly match the 'class_name' property in the features of input_file. output_file (str): Name under which to save the balanced output file. Defualts to balanced.geojson. samples_per_class (int or None): Number of features to select per class in input_file. If None will use the smallest class size. Defaults to None.
codesearchnet
def _ctc_state_trans(label_seq): with ops.name_scope('ctc_state_trans'): label_seq = ops.convert_to_tensor(label_seq, name='label_seq') batch_size = _get_dim(label_seq, 0) num_labels = _get_dim(label_seq, 1) num_label_states = num_labels + 1 num_states = 2 * num_label_states label_states = math_ops.range(num_label_states) blank_states = label_states + num_label_states start_to_label = [[1, 0]] blank_to_label = array_ops_stack.stack([label_states[1:], blank_states[:-1]], 1) label_to_blank = array_ops_stack.stack([blank_states, label_states], 1) indices = array_ops.concat([start_to_label, blank_to_label, label_to_blank], 0) values = array_ops.ones([_get_dim(indices, 0)]) trans = array_ops.scatter_nd(indices, values, shape=[num_states, num_states]) trans += linalg_ops.eye(num_states) batch_idx = array_ops.zeros_like(label_states[2:]) indices = array_ops_stack.stack([batch_idx, label_states[2:], label_states[1:-1]], 1) indices = array_ops.tile(array_ops.expand_dims(indices, 0), [batch_size, 1, 1]) batch_idx = array_ops.expand_dims(math_ops.range(batch_size), 1) * [1, 0, 0] indices += array_ops.expand_dims(batch_idx, 1) repeats = math_ops.equal(label_seq[:, :-1], label_seq[:, 1:]) values = 1.0 - math_ops.cast(repeats, dtypes.float32) batched_shape = [batch_size, num_states, num_states] label_to_label = array_ops.scatter_nd(indices, values, batched_shape) return array_ops.expand_dims(trans, 0) + label_to_label
Computes CTC alignment model transition matrix. Args: label_seq: tensor of shape [batch_size, max_seq_length] Returns: tensor of shape [batch_size, states, states] with a state transition matrix computed for each sequence of the batch.
github-repos
def get_clinvar_id(self, submission_id):
    submission_obj = self.clinvar_submission_collection.find_one({'_id': ObjectId(submission_id)})
    clinvar_subm_id = submission_obj.get('clinvar_subm_id')
    return clinvar_subm_id
Returns the official Clinvar submission ID for a submission object Args: submission_id(str): submission_id(str) : id of the submission Returns: clinvar_subm_id(str): a string with a format: SUB[0-9]. It is obtained from clinvar portal when starting a new submission
juraj-google-style
def _fetch_certs(request, certs_url): response = request(certs_url, method='GET') if response.status != http_client.OK: raise exceptions.TransportError( 'Could not fetch certificates at {}'.format(certs_url)) return json.loads(response.data.decode('utf-8'))
Fetches certificates. Google-style cerificate endpoints return JSON in the format of ``{'key id': 'x509 certificate'}``. Args: request (google.auth.transport.Request): The object used to make HTTP requests. certs_url (str): The certificate endpoint URL. Returns: Mapping[str, str]: A mapping of public key ID to x.509 certificate data.
juraj-google-style
def make_spiral_texture(spirals=6.0, ccw=False, offset=0.0, resolution=1000): dist = np.sqrt(np.linspace(0., 1., resolution)) if ccw: direction = 1. else: direction = -1. angle = dist * spirals * np.pi * 2. * direction spiral_texture = ( (np.cos(angle) * dist / 2.) + 0.5, (np.sin(angle) * dist / 2.) + 0.5 ) return spiral_texture
Makes a texture consisting of a spiral from the origin. Args: spirals (float): the number of rotations to make ccw (bool): make spirals counter-clockwise (default is clockwise) offset (float): if non-zero, spirals start offset by this amount resolution (int): number of midpoints along the spiral Returns: A texture.
juraj-google-style
def load_config(self): logger.debug('loading config file: %s', self.config_file) if os.path.exists(self.config_file): with open(self.config_file) as file_handle: return json.load(file_handle) else: logger.error('configuration file is required for eventify') logger.error('unable to load configuration for service') raise EventifyConfigError( 'Configuration is required! Missing: %s' % self.config_file )
Load configuration for the service Args: config_file: Configuration file path
juraj-google-style
def set_forced_variation(self, experiment_key, user_id, variation_key): experiment = self.get_experiment_from_key(experiment_key) if not experiment: return False experiment_id = experiment.id if variation_key is None: if user_id in self.forced_variation_map: experiment_to_variation_map = self.forced_variation_map.get(user_id) if experiment_id in experiment_to_variation_map: del(self.forced_variation_map[user_id][experiment_id]) self.logger.debug('Variation mapped to experiment "%s" has been removed for user "%s".' % ( experiment_key, user_id )) else: self.logger.debug('Nothing to remove. Variation mapped to experiment "%s" for user "%s" does not exist.' % ( experiment_key, user_id )) else: self.logger.debug('Nothing to remove. User "%s" does not exist in the forced variation map.' % user_id) return True if not validator.is_non_empty_string(variation_key): self.logger.debug('Variation key is invalid.') return False forced_variation = self.get_variation_from_key(experiment_key, variation_key) if not forced_variation: return False variation_id = forced_variation.id if user_id not in self.forced_variation_map: self.forced_variation_map[user_id] = {experiment_id: variation_id} else: self.forced_variation_map[user_id][experiment_id] = variation_id self.logger.debug('Set variation "%s" for experiment "%s" and user "%s" in the forced variation map.' % ( variation_id, experiment_id, user_id )) return True
Sets users to a map of experiments to forced variations. Args: experiment_key: Key for experiment. user_id: The user ID. variation_key: Key for variation. If None, then clear the existing experiment-to-variation mapping. Returns: A boolean value that indicates if the set completed successfully.
juraj-google-style
def diff_prettyHtml(self, diffs):
    html = []
    for (op, data) in diffs:
        text = (data.replace("&", "&amp;").replace("<", "&lt;")
                .replace(">", "&gt;").replace("\n", "&para;<br>"))
        if op == self.DIFF_INSERT:
            html.append("<ins style=\"background:#e6ffe6;\">%s</ins>" % text)
        elif op == self.DIFF_DELETE:
            html.append("<del style=\"background:#ffe6e6;\">%s</del>" % text)
        elif op == self.DIFF_EQUAL:
            html.append("<span>%s</span>" % text)
    return "".join(html)
Convert a diff array into a pretty HTML report. Args: diffs: Array of diff tuples. Returns: HTML representation.
juraj-google-style
def sendto(self, transport, addr):
    msg = bytes(self) + b'\r\n'
    logger.debug('%s:%s < %s', *(addr + (self,)))
    transport.sendto(msg, addr)
Send request to a given address via given transport. Args: transport (asyncio.DatagramTransport): Write transport to send the message on. addr (Tuple[str, int]): IP address and port pair to send the message to.
codesearchnet
def compress(item_list, flag_list): assert len(item_list) == len(flag_list), ( 'lists should correspond. len(item_list)=%r len(flag_list)=%r' % (len(item_list), len(flag_list))) filtered_items = list(util_iter.iter_compress(item_list, flag_list)) return filtered_items
like np.compress but for lists Returns items in item list where the corresponding item in flag list is True Args: item_list (list): list of items to mask flag_list (list): list of booleans used as a mask Returns: list : filtered_items - masked items
juraj-google-style
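A minimal example of the masking behavior:

    compress(['a', 'b', 'c', 'd'], [True, False, True, False])  # ['a', 'c']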
def parse_value(self, text: str) -> Optional[bool]:
    if text == "true":
        return True
    if text == "false":
        return False
Parse boolean value. Args: text: String representation of the value.
juraj-google-style
class AveragePooling1D(keras_layers.AveragePooling1D, base.Layer):

    def __init__(self, pool_size, strides, padding='valid', data_format='channels_last', name=None, **kwargs):
        if strides is None:
            raise ValueError('Argument `strides` must not be None.')
        super(AveragePooling1D, self).__init__(pool_size=pool_size, strides=strides, padding=padding,
                                               data_format=data_format, name=name, **kwargs)
Average Pooling layer for 1D inputs. Args: pool_size: An integer or tuple/list of a single integer, representing the size of the pooling window. strides: An integer or tuple/list of a single integer, specifying the strides of the pooling operation. padding: A string. The padding method, either 'valid' or 'same'. Case-insensitive. data_format: A string, one of `channels_last` (default) or `channels_first`. The ordering of the dimensions in the inputs. `channels_last` corresponds to inputs with shape `(batch, length, channels)` while `channels_first` corresponds to inputs with shape `(batch, channels, length)`. name: A string, the name of the layer.
github-repos
def reply_all(self, reply_comment):
    payload = '{ "Comment": "' + reply_comment + '"}'
    endpoint = 'https:'  # NOTE: the rest of the Outlook API endpoint URL was truncated in the source
    self._make_api_call('post', endpoint, data=payload)
Replies to everyone on the email, including those on the CC line. With great power, comes great responsibility. Args: reply_comment: The string comment to send to everyone on the email.
juraj-google-style
def AddLabels(self, labels):
    for label in labels:
        if not self._VALID_LABEL_REGEX.match(label):
            raise ValueError(
                'Unsupported label: "{0:s}". A label must only consist of '
                'alphanumeric characters or underscores.'.format(label))
    for label in labels:
        if label not in self.labels:
            self.labels.append(label)
Adds labels to the event tag. Args: labels (list[str]): labels. Raises: ValueError: if a label is malformed.
codesearchnet
def __init__(self, xid=None, reason=None, desc=None):
    super().__init__(xid)
    self.reason = reason
    self.desc = desc
Assign parameters to object attributes. Args: xid (int): Header's xid. reason (~pyof.v0x04.asynchronous.port_status.PortReason): Addition, deletion or modification. desc (~pyof.v0x04.common.port.Port): Port description.
juraj-google-style
def _restore_volume(self, fade): self.device.mute = self.mute if (self.volume == 100): fixed_vol = self.device.renderingControl.GetOutputFixed([('InstanceID', 0)])['CurrentFixed'] else: fixed_vol = False if (not fixed_vol): self.device.bass = self.bass self.device.treble = self.treble self.device.loudness = self.loudness if fade: self.device.volume = 0 self.device.ramp_to_volume(self.volume) else: self.device.volume = self.volume
Reinstate volume. Args: fade (bool): Whether volume should be faded up on restore.
codesearchnet
def run_server(self, blocking=True): self._server_lock.acquire() try: if self._stop_requested: raise ValueError('Server has already stopped') if self._server_started: raise ValueError('Server has already started running') no_max_message_sizes = [('grpc.max_receive_message_length', -1), ('grpc.max_send_message_length', -1)] self.server = grpc.server(futures.ThreadPoolExecutor(max_workers=10), options=no_max_message_sizes) debug_service_pb2_grpc.add_EventListenerServicer_to_server(self, self.server) self.server.add_insecure_port('[::]:%d' % self._server_port) self.server.start() self._server_started = True finally: self._server_lock.release() if blocking: while not self._stop_requested: time.sleep(1.0)
Start running the server. Args: blocking: If `True`, block until `stop_server()` is invoked. Raises: ValueError: If server stop has already been requested, or if the server has already started running.
github-repos
def reminders_add(self, *, text: str, time: str, **kwargs) -> SlackResponse:
    self._validate_xoxp_token()
    kwargs.update({"text": text, "time": time})
    return self.api_call("reminders.add", json=kwargs)
Creates a reminder. Args: text (str): The content of the reminder. e.g. 'eat a banana' time (str): When this reminder should happen: the Unix timestamp (up to five years from now e.g. '1602288000'), the number of seconds until the reminder (if within 24 hours), or a natural language description (Ex. 'in 15 minutes' or 'every Thursday')
juraj-google-style
def snakecase(string):
    string = re.sub(r"[\-\.\s]", '_', str(string))
    if not string:
        return string
    return lowercase(string[0]) + re.sub(
        r"[A-Z]", lambda matched: '_' + lowercase(matched.group(0)), string[1:])
Convert string into snake case. Join punctuation with underscore Args: string: String to convert. Returns: string: Snake cased string.
juraj-google-style
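Example conversions, assuming a lowercase() helper that lowercases a single character as used above:

    snakecase('FooBar')       # 'foo_bar'
    snakecase('foo-bar baz')  # 'foo_bar_baz'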
def _ParseUSNChangeJournal(self, parser_mediator, usn_change_journal): if not usn_change_journal: return usn_record_map = self._GetDataTypeMap('usn_record_v2') usn_record_data = usn_change_journal.read_usn_record() while usn_record_data: current_offset = usn_change_journal.get_offset() try: usn_record = self._ReadStructureFromByteStream( usn_record_data, current_offset, usn_record_map) except (ValueError, errors.ParseError) as exception: raise errors.ParseError(( 'Unable to parse USN record at offset: 0x{0:08x} with error: ' '{1!s}').format(current_offset, exception)) name_offset = usn_record.name_offset - 60 utf16_stream = usn_record.name[name_offset:usn_record.name_size] try: name_string = utf16_stream.decode('utf-16-le') except (UnicodeDecodeError, UnicodeEncodeError) as exception: name_string = utf16_stream.decode('utf-16-le', errors='replace') parser_mediator.ProduceExtractionWarning(( 'unable to decode USN record name string with error: ' '{0:s}. Characters that cannot be decoded will be replaced ' 'with "?" or "\\ufffd".').format(exception)) event_data = NTFSUSNChangeEventData() event_data.file_attribute_flags = usn_record.file_attribute_flags event_data.file_reference = usn_record.file_reference event_data.filename = name_string event_data.offset = current_offset event_data.parent_file_reference = usn_record.parent_file_reference event_data.update_reason_flags = usn_record.update_reason_flags event_data.update_sequence_number = usn_record.update_sequence_number event_data.update_source_flags = usn_record.update_source_flags if not usn_record.update_date_time: date_time = dfdatetime_semantic_time.SemanticTime('Not set') else: date_time = dfdatetime_filetime.Filetime( timestamp=usn_record.update_date_time) event = time_events.DateTimeValuesEvent( date_time, definitions.TIME_DESCRIPTION_ENTRY_MODIFICATION) parser_mediator.ProduceEventWithEventData(event, event_data) usn_record_data = usn_change_journal.read_usn_record()
Parses an USN change journal. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. usn_change_journal (pyfsntsfs.usn_change_journal): USN change journal. Raises: ParseError: if an USN change journal record cannot be parsed.
juraj-google-style
def wait_for_healthy(self, timeout_s=1200, interval=30):
    timeout = time.time() + timeout_s
    while self.health() != 'HEALTHY':
        logging.warning('Waiting for TPU "%s" with state "%s" and health "%s" to become healthy',
                        self.name(), self.state(), self.health())
        if time.time() + interval > timeout:
            raise RuntimeError('Timed out waiting for TPU "%s" to become healthy' % self.name())
        time.sleep(interval)
    logging.warning('TPU "%s" is healthy.', self.name())
Wait for TPU to become healthy or raise error if timeout reached. Args: timeout_s (int): The timeout in seconds for waiting TPU to become healthy. interval (int): The interval in seconds to poll the TPU for health. Raises: RuntimeError: If the TPU doesn't become healthy by the timeout.
github-repos
def managed_sans(self):
    if not self.__managed_sans:
        self.__managed_sans = ManagedSANs(self.__connection)
    return self.__managed_sans
Gets the Managed SANs API client. Returns: ManagedSANs:
codesearchnet
def deprecated(replacement=None, message=None): def wrap(old): def wrapped(*args, **kwargs): msg = ('%s is deprecated' % old.__name__) if (replacement is not None): if isinstance(replacement, property): r = replacement.fget elif isinstance(replacement, (classmethod, staticmethod)): r = replacement.__func__ else: r = replacement msg += ('; use %s in %s instead.' % (r.__name__, r.__module__)) if (message is not None): msg += ('\n' + message) warnings.simplefilter('default') warnings.warn(msg, DeprecationWarning, stacklevel=2) return old(*args, **kwargs) return wrapped return wrap
Decorator to mark classes or functions as deprecated, with a possible replacement. Args: replacement (callable): A replacement class or method. message (str): A warning message to be displayed. Returns: Original function, but with a warning to use the updated class.
codesearchnet
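A usage sketch of the decorator with a hypothetical replacement function:

    def new_method():
        return 42

    @deprecated(replacement=new_method, message='new_method is faster.')
    def old_method():
        return 42

    old_method()  # emits a DeprecationWarning pointing at new_method, then returns 42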
def from_environment_variables(cls): ip = os.environ.get('ONEVIEWSDK_IP', '') image_streamer_ip = os.environ.get('ONEVIEWSDK_IMAGE_STREAMER_IP', '') api_version = int(os.environ.get('ONEVIEWSDK_API_VERSION', OneViewClient.DEFAULT_API_VERSION)) ssl_certificate = os.environ.get('ONEVIEWSDK_SSL_CERTIFICATE', '') username = os.environ.get('ONEVIEWSDK_USERNAME', '') auth_login_domain = os.environ.get('ONEVIEWSDK_AUTH_LOGIN_DOMAIN', '') password = os.environ.get('ONEVIEWSDK_PASSWORD', '') proxy = os.environ.get('ONEVIEWSDK_PROXY', '') sessionID = os.environ.get('ONEVIEWSDK_SESSIONID', '') timeout = os.environ.get('ONEVIEWSDK_CONNECTION_TIMEOUT') config = dict(ip=ip, image_streamer_ip=image_streamer_ip, api_version=api_version, ssl_certificate=ssl_certificate, credentials=dict(userName=username, authLoginDomain=auth_login_domain, password=password, sessionID=sessionID), proxy=proxy, timeout=timeout) return cls(config)
Construct OneViewClient using environment variables. Allowed variables: ONEVIEWSDK_IP (required), ONEVIEWSDK_USERNAME (required), ONEVIEWSDK_PASSWORD (required), ONEVIEWSDK_AUTH_LOGIN_DOMAIN, ONEVIEWSDK_API_VERSION, ONEVIEWSDK_IMAGE_STREAMER_IP, ONEVIEWSDK_SESSIONID, ONEVIEWSDK_SSL_CERTIFICATE, ONEVIEWSDK_CONNECTION_TIMEOUT and ONEVIEWSDK_PROXY. Returns: OneViewClient:
codesearchnet
def from_event(cls, ion_event):
    if ion_event.value is not None:
        args, kwargs = cls._to_constructor_args(ion_event.value)
    else:
        args, kwargs = (), {}
    value = cls(*args, **kwargs)
    value.ion_event = ion_event
    value.ion_type = ion_event.ion_type
    value.ion_annotations = ion_event.annotations
    return value
Constructs the given native extension from the properties of an event. Args: ion_event (IonEvent): The event to construct the native value from.
juraj-google-style
def Register(self, name, constructor):
    precondition.AssertType(name, Text)
    if name in self._constructors:
        message = "Duplicated constructors %r and %r for name '%s'"
        message %= (constructor, self._constructors[name], name)
        raise ValueError(message)
    self._constructors[name] = constructor
Registers a new constructor in the factory. Args: name: A name associated with given constructor. constructor: A constructor function that creates instances. Raises: ValueError: If there already is a constructor associated with given name.
codesearchnet
def num_rewards(self):
    if not self.is_reward_range_finite:
        tf.logging.error('Infinite reward range, `num_rewards returning None`')
        return None
    if not self.is_processed_rewards_discrete:
        tf.logging.error('Processed rewards are not discrete, `num_rewards` returning None')
        return None
    min_reward, max_reward = self.reward_range
    return (max_reward - min_reward) + 1
Returns the number of distinct rewards. Returns: Returns None if the reward range is infinite or the processed rewards aren't discrete, otherwise returns the number of distinct rewards.
codesearchnet
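A standalone illustration of the arithmetic the property relies on: for a finite, discrete reward range the count of distinct rewards is simply (max - min) + 1.

def count_distinct_rewards(reward_range):
    """Number of distinct integer rewards in an inclusive range."""
    min_reward, max_reward = reward_range
    return (max_reward - min_reward) + 1


assert count_distinct_rewards((-1, 1)) == 3   # rewards -1, 0, 1
assert count_distinct_rewards((0, 0)) == 1    # a single reward value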
def rename(script, label='blank', layer_num=None): filter_xml = ''.join([ ' <filter name="Rename Current Mesh">\n', ' <Param name="newName" ', 'value="{}" '.format(label), 'description="New Label" ', 'type="RichString" ', '/>\n', ' </filter>\n']) if isinstance(script, mlx.FilterScript): if (layer_num is None) or (layer_num == script.current_layer()): util.write_filter(script, filter_xml) script.layer_stack[script.current_layer()] = label else: cur_layer = script.current_layer() change(script, layer_num) util.write_filter(script, filter_xml) change(script, cur_layer) script.layer_stack[layer_num] = label else: util.write_filter(script, filter_xml) return None
Rename a layer's label. Can be useful for outputting mlp files, as the output file names use the labels. Args: script: the mlx.FilterScript object or script filename to write the filter to. label (str): new label for the mesh layer layer_num (int): layer number to rename. Default is the current layer. Not supported on the file-based API. Layer stack: Renames a layer MeshLab versions: 2016.12 1.3.4BETA
juraj-google-style
def set_trunk_groups(self, vid, value=None, default=False, disable=False): if default: return self.configure_vlan(vid, 'default trunk group') if disable: return self.configure_vlan(vid, 'no trunk group') current_value = self.get(vid)['trunk_groups'] failure = False value = make_iterable(value) for name in set(value).difference(current_value): if (not self.add_trunk_group(vid, name)): failure = True for name in set(current_value).difference(value): if (not self.remove_trunk_group(vid, name)): failure = True return (not failure)
Configures the list of trunk groups supported on a VLAN. This method handles configuring the vlan trunk group value to default if the default flag is set to True. If the default flag is set to False, then this method will calculate the set of trunk group names to be added and to be removed. EosVersion: 4.13.7M Args: vid (str): The VLAN ID to configure value (str): The list of trunk groups that should be configured for this vlan id. default (bool): Configures the trunk group value to default if this value is true disable (bool): Negates the trunk group value if set to true Returns: True if the operation was successful otherwise False
codesearchnet
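The interesting part of the method is how it reconciles the currently configured trunk groups with the desired ones via set differences; a standalone sketch of that step, detached from any switch API:

def plan_trunk_group_changes(current, desired):
    """Return (to_add, to_remove) sets for reconciling trunk groups."""
    current = set(current)
    desired = set(desired)
    return desired - current, current - desired


to_add, to_remove = plan_trunk_group_changes(['tg1', 'tg2'], ['tg2', 'tg3'])
assert to_add == {'tg3'} and to_remove == {'tg1'}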
async def addFeedData(self, name, items, seqn=None): async with await self.snap() as snap: snap.strict = False return await snap.addFeedData(name, items, seqn=seqn)
Add data using a feed/parser function. Args: name (str): The name of the feed record format. items (list): A list of items to ingest. seqn ((str,int)): An (iden, offs) tuple for this feed chunk. Returns: (int): The next expected offset (or None) if seqn is None.
juraj-google-style
def jobs(self): return list(self._cluster_spec.keys())
Returns a list of job names in this cluster. Returns: A list of strings, corresponding to the names of jobs in this cluster.
github-repos
def partial_run_setup(self, fetches, feeds=None): def _feed_fn(feed): for tensor_type, _, _, feed_fn in _REGISTERED_EXPANSIONS: if isinstance(feed, tensor_type): return feed_fn(feed) raise TypeError(f'Feed argument {feed} has invalid type "{type(feed).__name__}"') if self._closed: raise RuntimeError('Attempted to use a closed Session.') if self.graph.version == 0: raise RuntimeError('The Session graph is empty. Add operations to the graph before calling run().') if feeds is None: feeds = [] feed_list = [] is_list_feed = isinstance(feeds, (list, tuple)) if not is_list_feed: feeds = [feeds] for feed in feeds: for subfeed in _feed_fn(feed): try: subfeed_t = self.graph.as_graph_element(subfeed, allow_tensor=True, allow_operation=False) feed_list.append(subfeed_t._as_tf_output()) except Exception as e: e.message = f'Cannot interpret argument `feed` key as Tensor: {e.message}' e.args = (e.message,) raise e fetch_handler = _FetchHandler(self._graph, fetches, {}) def _setup_fn(session, feed_list, fetch_list, target_list): self._extend_graph() return tf_session.TF_SessionPRunSetup_wrapper(session, feed_list, fetch_list, target_list) final_fetches = [t._as_tf_output() for t in fetch_handler.fetches()] final_targets = [op._c_op for op in fetch_handler.targets()] return self._do_call(_setup_fn, self._session, feed_list, final_fetches, final_targets)
Sets up a graph with feeds and fetches for partial run. NOTE: This function is deprecated and we do not expect adding new functionality to it. Please do not have your code depending on this function. This is EXPERIMENTAL and subject to change. Note that contrary to `run`, `feeds` only specifies the graph elements. The tensors will be supplied by the subsequent `partial_run` calls. Args: fetches: A single graph element, or a list of graph elements. feeds: A single graph element, or a list of graph elements. Returns: A handle for partial run. Raises: RuntimeError: If this `Session` is in an invalid state (e.g. has been closed). TypeError: If `fetches` or `feed_dict` keys are of an inappropriate type. tf.errors.OpError: Or one of its subclasses if a TensorFlow error happens.
github-repos
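The TF 1.x documentation describes the intended call pattern roughly as follows; this is a sketch against the compat.v1 API and is only meaningful with eager execution disabled.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

a = tf.placeholder(tf.float32, shape=[])
b = tf.placeholder(tf.float32, shape=[])
c = tf.placeholder(tf.float32, shape=[])
r1 = tf.add(a, b)
r2 = tf.multiply(r1, c)

with tf.Session() as sess:
    # Declare all feeds and fetches up front, then feed them piecemeal.
    handle = sess.partial_run_setup([r1, r2], [a, b, c])
    res1 = sess.partial_run(handle, r1, feed_dict={a: 1, b: 2})
    res2 = sess.partial_run(handle, r2, feed_dict={c: res1})  # 3.0 * 3.0 = 9.0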
def resolve(self, file_path, follow_symlinks=True, allow_fd=False): if isinstance(file_path, int): if (allow_fd and (sys.version_info >= (3, 3))): return self.get_open_file(file_path).get_object() raise TypeError('path should be string, bytes or os.PathLike (if supported), not int') if follow_symlinks: file_path = make_string_path(file_path) return self.get_object_from_normpath(self.resolve_path(file_path)) return self.lresolve(file_path)
Search for the specified filesystem object, resolving all links. Args: file_path: Specifies the target FakeFile object to retrieve. follow_symlinks: If `False`, the link itself is resolved, otherwise the object linked to. allow_fd: If `True`, `file_path` may be an open file descriptor Returns: The FakeFile object corresponding to `file_path`. Raises: IOError: if the object is not found.
codesearchnet
def shape(self): if self._dense_shape is None: return tensor_shape.TensorShape(None) return tensor_util.constant_value_as_shape(self._dense_shape)
Gets the `tf.TensorShape` representing the shape of the dense tensor. Returns: A `tf.TensorShape` object.
github-repos
def mtf_transformer_paper_tr(size): n = (2 ** size) hparams = mtf_transformer_base() hparams.label_smoothing = 0.1 hparams.batch_size = 128 hparams.d_model = 1024 hparams.d_ff = int((4096 * n)) hparams.num_heads = int((8 * n)) hparams.shared_embedding_and_softmax_weights = False hparams.learning_rate_decay_steps = 51400 return hparams
Config for translation experiments. Train these on translate_enfr_wmt32k_packed for 154000 steps (3 epochs). The size parameter is an integer that controls the number of heads and the size of the feedforward hidden layers. Increasing size by 1 doubles each of these. Args: size: an integer Returns: a hparams object
codesearchnet
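A quick standalone check of the scaling rule described in the docstring: each increment of `size` doubles both the head count and the feed-forward width.

def paper_tr_dimensions(size):
    n = 2 ** size
    return {'d_ff': int(4096 * n), 'num_heads': int(8 * n)}


assert paper_tr_dimensions(0) == {'d_ff': 4096, 'num_heads': 8}
assert paper_tr_dimensions(1) == {'d_ff': 8192, 'num_heads': 16}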
def _starts_with_drive_letter(self, file_path): colon = self._matching_string(file_path, ':') return (self.is_windows_fs and len(file_path) >= 2 and file_path[:1].isalpha() and file_path[1:2] == colon)
Return True if file_path starts with a drive letter. Args: file_path: the full path to be examined. Returns: `True` if drive letter support is enabled in the filesystem and the path starts with a drive letter.
juraj-google-style
def create_db(file_pth): conn = sqlite3.connect(file_pth) c = conn.cursor() c.execute('DROP TABLE IF EXISTS library_spectra_source') c.execute('CREATE TABLE library_spectra_source (\n id integer PRIMARY KEY,\n name text NOT NULL,\n created_at date,\n parsing_software text\n )') c.execute('DROP TABLE IF EXISTS metab_compound') c.execute('CREATE TABLE metab_compound (\n inchikey_id text PRIMARY KEY,\n name text,\n pubchem_id text,\n chemspider_id text,\n other_names text,\n exact_mass real,\n molecular_formula text,\n molecular_weight real,\n compound_class text,\n smiles text,\n created_at date,\n updated_at date\n\n )') c.execute('DROP TABLE IF EXISTS library_spectra_meta') c.execute('CREATE TABLE library_spectra_meta (\n id integer PRIMARY KEY,\n name text,\n collision_energy text,\n ms_level real,\n accession text NOT NULL,\n resolution text,\n polarity integer,\n fragmentation_type text,\n precursor_mz real,\n precursor_type text,\n instrument_type text,\n instrument text,\n copyright text,\n column text,\n mass_accuracy real,\n mass_error real,\n origin text,\n splash text,\n retention_index real, \n retention_time real,\n library_spectra_source_id integer NOT NULL,\n inchikey_id text NOT NULL,\n FOREIGN KEY(library_spectra_source_id) REFERENCES library_spectra_source(id),\n FOREIGN KEY(inchikey_id) REFERENCES metab_compound(inchikey_id)\n )') c.execute('DROP TABLE IF EXISTS library_spectra') c.execute('CREATE TABLE library_spectra (\n id integer PRIMARY KEY,\n mz real NOT NULL,\n i real NOT NULL,\n other text,\n library_spectra_meta_id integer NOT NULL,\n FOREIGN KEY (library_spectra_meta_id) REFERENCES library_spectra_meta(id)\n )') c.execute('DROP TABLE IF EXISTS library_spectra_annotation') c.execute('CREATE TABLE library_spectra_annotation (\n id integer PRIMARY KEY,\n mz real,\n tentative_formula text,\n mass_error real,\n library_spectra_meta_id integer NOT NULL,\n FOREIGN KEY (library_spectra_meta_id) REFERENCES library_spectra_meta(id)\n )')
Create an empty SQLite database for library spectra. Example: >>> from msp2db.db import create_db >>> db_pth = 'library.db' >>> create_db(file_pth=db_pth) Args: file_pth (str): File path for SQLite database
codesearchnet
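After running `create_db`, the resulting schema can be inspected with the standard library alone; a sketch, assuming the example path from the docstring:

import sqlite3

conn = sqlite3.connect('library.db')  # path created by create_db above
cursor = conn.cursor()
cursor.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
tables = [row[0] for row in cursor.fetchall()]
# Expect the five tables created above: 'library_spectra',
# 'library_spectra_annotation', 'library_spectra_meta',
# 'library_spectra_source' and 'metab_compound'.
print(tables)
conn.close()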
def __init__(self, ctx, name, member_map, ast): super().__init__(ctx, name, member_map, ast) self.real_module = ctx.convert.constant_to_value(ast, subst=datatypes.AliasingDict(), node=ctx.root_node)
Initialize the overlay. Args: ctx: Instance of context.Context. name: A string containing the name of the underlying module. member_map: Dict of str to abstract.BaseValues that provide type information not available in the underlying module. ast: A pytd.TypeDeclUnit containing the AST for the underlying module. Used to access type information for members of the module that are not explicitly provided by the overlay.
github-repos
def get_signed_url(self, file_id): if (not is_valid_uuid(file_id)): raise StorageArgumentException('Invalid UUID for file_id: {0}'.format(file_id)) return self._authenticated_request.to_endpoint('file/{}/content/secure_link/'.format(file_id)).return_body().get()['signed_url']
Get a signed unauthenticated URL. It can be used to download the file content without the need for a token. The signed URL expires after 5 seconds. Args: file_id (str): The UUID of the file to get the link for. Returns: The signed url as a string Raises: StorageArgumentException: Invalid arguments StorageForbiddenException: Server response code 403 StorageNotFoundException: Server response code 404 StorageException: other 400-600 error codes
codesearchnet
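The `is_valid_uuid` guard used above is not shown in this snippet; a plausible standalone implementation with the standard library, offered only as an assumption about its behaviour:

import uuid


def is_valid_uuid(value):
    """Return True if `value` parses as a UUID string."""
    try:
        uuid.UUID(str(value))
        return True
    except (ValueError, AttributeError, TypeError):
        return False


assert is_valid_uuid('6ba7b810-9dad-11d1-80b4-00c04fd430c8')
assert not is_valid_uuid('not-a-uuid')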
def create_filter(condition: Callable[[ProcessorPart], bool]) -> PartProcessor: async def filter_with_condition(part: ProcessorPart) -> AsyncIterable[ProcessorPart]: if condition(part): yield part return _PartProcessorWrapper(filter_with_condition)
Creates a processor that filters parts based on `condition`. Args: condition: a part is returned by this processor iff `condition(part)=True` Returns: a processor filtering the input stream
github-repos
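Independent of the processor framework, the core idea is an async generator that passes through only the parts satisfying a predicate; a self-contained sketch using plain asyncio:

import asyncio


async def filter_stream(parts, condition):
    """Yield only the items for which condition(item) is True."""
    for part in parts:
        if condition(part):
            yield part


async def main():
    kept = [p async for p in filter_stream([1, 2, 3, 4], lambda p: p % 2 == 0)]
    assert kept == [2, 4]


asyncio.run(main())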
def _container_start_handler_factory(ion_type, before_yield=(lambda c, ctx: None)): assert ion_type.is_container @coroutine def container_start_handler(c, ctx): before_yield(c, ctx) (yield) (yield ctx.event_transition(IonEvent, IonEventType.CONTAINER_START, ion_type, value=None)) return container_start_handler
Generates handlers for tokens that begin with container start characters. Args: ion_type (IonType): The type of this container. before_yield (Optional[callable]): Called at initialization. Accepts the first character's ordinal and the current context; performs any necessary initialization actions.
codesearchnet
def __init__(self, windowfn, trigger=None, accumulation_mode=None, timestamp_combiner=None, allowed_lateness=0): if isinstance(windowfn, Windowing): windowing = windowfn windowfn = windowing.windowfn trigger = trigger or windowing.triggerfn accumulation_mode = accumulation_mode or windowing.accumulation_mode timestamp_combiner = timestamp_combiner or windowing.timestamp_combiner self.windowing = Windowing(windowfn, trigger, accumulation_mode, timestamp_combiner, allowed_lateness) super().__init__(self.WindowIntoFn(self.windowing))
Initializes a WindowInto transform. Args: windowfn (Windowing, WindowFn): Function to be used for windowing. trigger: (optional) Trigger used for windowing, or None for default. accumulation_mode: (optional) Accumulation mode used for windowing, required for non-trivial triggers. timestamp_combiner: (optional) Timestamp combiner used for windowing, or None for default. allowed_lateness: (optional) Allowed lateness of data, defaults to 0.
github-repos
def _ScanFileSystem(self, scan_node, base_path_specs): if not scan_node or not scan_node.path_spec: raise errors.ScannerError('Invalid or missing file system scan node.') base_path_specs.append(scan_node.path_spec)
Scans a file system scan node for file systems. Args: scan_node (SourceScanNode): file system scan node. base_path_specs (list[PathSpec]): file system base path specifications. Raises: ScannerError: if the scan node is invalid.
juraj-google-style
def list_to_tuple(structure): def sequence_fn(instance, args): if isinstance(instance, list): return tuple(args) return nest_util.sequence_like(instance, args) return nest_util.pack_sequence_as(nest_util.Modality.CORE, structure, flatten(structure), False, sequence_fn=sequence_fn)
Replace all lists with tuples. The fork of nest that tf.data uses treats lists as atoms, while tf.nest treats them as structures to recurse into. Keras has chosen to adopt the latter convention, and must therefore deeply replace all lists with tuples before passing structures to Dataset.from_generator. Args: structure: A nested structure to be remapped. Returns: structure mapped to replace all lists with tuples.
github-repos
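A dependency-free sketch of the same deep list-to-tuple replacement, handling only lists, tuples and dicts; the real helper defers to TensorFlow's nest utilities for arbitrary structures.

def deep_list_to_tuple(structure):
    if isinstance(structure, (list, tuple)):
        # Lists become tuples; tuples are rebuilt so nested lists are converted too.
        return tuple(deep_list_to_tuple(item) for item in structure)
    if isinstance(structure, dict):
        return {key: deep_list_to_tuple(value) for key, value in structure.items()}
    return structure


assert deep_list_to_tuple({'a': [1, [2, 3]], 'b': (4, [5])}) == {'a': (1, (2, 3)), 'b': (4, (5,))}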
def select_one(self, selector): result = list(self.select(selector)) if len(result) > 1: raise ValueError("Found more than one model matching %s: %r" % (selector, result)) if len(result) == 0: return None return result[0]
Query this document for objects that match the given selector. Raises an error if more than one object is found. Returns single matching object, or None if nothing is found Args: selector (JSON-like query dictionary) : you can query by type or by name, e.g. ``{"type": HoverTool}``, ``{"name": "mycircle"}`` Returns: Model or None
juraj-google-style
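The "exactly one or nothing" lookup pattern is easy to reuse outside Bokeh; a generic sketch over any iterable of candidates:

def select_one_matching(items, predicate):
    """Return the single matching item, None if absent, raise if ambiguous."""
    matches = [item for item in items if predicate(item)]
    if len(matches) > 1:
        raise ValueError("Found more than one item matching the predicate: %r" % matches)
    return matches[0] if matches else None


assert select_one_matching([1, 2, 3], lambda x: x == 2) == 2
assert select_one_matching([1, 2, 3], lambda x: x > 10) is None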
def activate(self, user): org_user = self.organization.add_user(user, **self.activation_kwargs()) self.invitee = user self.save() return org_user
Updates the `invitee` value and saves the instance. Provided as a way of extending the behavior. Args: user: the newly created user Returns: the linking organization user
juraj-google-style
def get_value_for_datastore(self, model_instance): value = super(JsonProperty, self).get_value_for_datastore(model_instance) if not value: return None json_value = value if not isinstance(value, dict): json_value = value.to_json() if not json_value: return None return datastore_types.Text(json.dumps( json_value, sort_keys=True, cls=JsonEncoder))
Gets value for datastore. Args: model_instance: instance of the model class. Returns: datastore-compatible value.
juraj-google-style
def tokenize(self, data, *args, **kwargs): self.lexer.input(data) tokens = list() while True: token = self.lexer.token() if not token: break tokens.append(token) return tokens
Invoke the lexer on an input string and return the list of tokens. This is relatively inefficient and should only be used for testing/debugging as it slurps up all tokens into one list. Args: data: The input to be tokenized. Returns: A list of LexTokens
juraj-google-style
def transform_to_mods_mono(marc_xml, uuid, url): marc_xml = _read_content_or_path(marc_xml) transformed = xslt_transformation(marc_xml, _absolute_template_path('MARC21slim2MODS3-4-NDK.xsl')) return _apply_postprocessing(marc_xml=marc_xml, xml=transformed, func=mods_postprocessor.postprocess_monograph, uuid=uuid, url=url)
Convert `marc_xml` to MODS data format. Args: marc_xml (str): Filename or XML string. A filename must not contain ``\\n``. uuid (str): UUID string giving the package ID. url (str): URL of the publication (public or not). Returns: list: Collection of transformed xml strings.
codesearchnet
def clear_list(self, **kwargs): path = self._get_id_path('clear') kwargs.update({'session_id': self.session_id}) payload = {} response = self._POST(path, kwargs, payload) self._set_attrs_to_values(response) return response
Clears all of the items within a list. This is an irreversible action and should be treated with caution. A valid session id is required. Args: confirm: True (do it) | False (don't do it) Returns: A dict representation of the JSON returned from the API.
codesearchnet
def __init__(self, min_length=None, max_length=None, empty=True): super(StringTypeChecker, self).__init__( iter_type=str, min_length=min_length, max_length=max_length, empty=empty )
Initialization method. Args: min_length (int): minimum length of the string (included). max_length (int): maximum length of the string (included). empty (bool): whether empty string is allowed.
juraj-google-style
def write(self, ostream, kmip_version=enums.KMIPVersion.KMIP_1_0): binary = "{0:b}".format(abs(self.value)) binary = ("0" * (64 - (len(binary) % 64))) + binary if self.value < 0: binary = binary.replace('1', 'i') binary = binary.replace('0', '1') binary = binary.replace('i', '0') pivot = binary.rfind('0') binary = binary[0:pivot] + '1' + ('0' * len(binary[pivot + 1:])) hexadecimal = b'' for i in range(0, len(binary), 8): byte = binary[i:i + 8] byte = int(byte, 2) hexadecimal += struct.pack('!B', byte) self.length = len(hexadecimal) super(BigInteger, self).write(ostream, kmip_version=kmip_version) ostream.write(hexadecimal)
Write the encoding of the BigInteger to the output stream. Args: ostream (Stream): A buffer to contain the encoded bytes of a BigInteger object. Usually a BytearrayStream object. Required. kmip_version (KMIPVersion): An enumeration defining the KMIP version with which the object will be encoded. Optional, defaults to KMIP 1.0.
juraj-google-style
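The manual bit manipulation above produces a big-endian two's-complement encoding padded to a multiple of 64 bits; for values that fit in 64 bits the same bytes can be cross-checked with `int.to_bytes`, as in this sketch.

def twos_complement_64(value):
    """Big-endian two's-complement encoding of a value that fits in 64 bits."""
    return value.to_bytes(8, byteorder='big', signed=True)


assert twos_complement_64(1) == b'\x00\x00\x00\x00\x00\x00\x00\x01'
assert twos_complement_64(-1) == b'\xff\xff\xff\xff\xff\xff\xff\xff'
assert twos_complement_64(-2) == b'\xff\xff\xff\xff\xff\xff\xff\xfe'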
def listen_forever(self, timeout_ms=30000, exception_handler=None, bad_sync_timeout=5): _bad_sync_timeout = bad_sync_timeout self.should_listen = True while self.should_listen: try: self._sync(timeout_ms) _bad_sync_timeout = bad_sync_timeout except MatrixRequestError as e: logger.warning("A MatrixRequestError occurred during sync.") if e.code >= 500: logger.warning("Problem occurred server-side. Waiting %i seconds", _bad_sync_timeout) sleep(_bad_sync_timeout) _bad_sync_timeout = min(_bad_sync_timeout * 2, self.bad_sync_timeout_limit) elif exception_handler is not None: exception_handler(e) else: raise except Exception as e: logger.exception("Exception thrown during sync") if exception_handler is not None: exception_handler(e) else: raise
Keep listening for events forever. Args: timeout_ms (int): How long to poll the Home Server for before retrying. exception_handler (func(exception)): Optional exception handler function which can be used to handle exceptions in the caller thread. bad_sync_timeout (int): Base time to wait after an error before retrying. Will be increased according to exponential backoff.
juraj-google-style
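The retry strategy above is a standard capped exponential backoff; a framework-free sketch of the same schedule:

def backoff_schedule(base=5, limit=3600, attempts=6):
    """Yield successive wait times, doubling up to a ceiling."""
    wait = base
    for _ in range(attempts):
        yield wait
        wait = min(wait * 2, limit)


assert list(backoff_schedule(base=5, limit=60, attempts=5)) == [5, 10, 20, 40, 60]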
def write_updates_to_csv(self, updates): with open(self._csv_file_name, 'w') as csvfile: csvwriter = self.csv_writer(csvfile) csvwriter.writerow(CSV_COLUMN_HEADERS) for update in updates: row = [update.name, update.current_version, update.new_version, update.prelease] csvwriter.writerow(row)
Given a list of updates, write the updates out to the provided CSV file. Args: updates (list): List of Update objects.
codesearchnet
def convert_shape(params, w_name, scope_name, inputs, layers, weights, names): print('Converting shape ...') def target_layer(x): import tensorflow as tf return tf.shape(x) lambda_layer = keras.layers.Lambda(target_layer) layers[scope_name] = lambda_layer(layers[inputs[0]])
Convert shape operation. Args: params: dictionary with layer parameters w_name: name prefix in state_dict scope_name: pytorch scope name inputs: pytorch node inputs layers: dictionary with keras tensors weights: pytorch state_dict names: use short names for keras layers
codesearchnet