def restore_walker(self, dumped_state): selector_string = dumped_state.get(u'selector') if selector_string is None: raise ArgumentError("Invalid stream walker state in restore_walker, missing 'selector' key", state=dumped_state) selector = DataStreamSelector.FromString(selector_string) walker = self.create_walker(selector) walker.restore(dumped_state) return walker
Restore a stream walker that was previously serialized. Since stream walkers need to be tracked in an internal list for notification purposes, we need to be careful with how we restore them to make sure they remain part of the right list. Args: dumped_state (dict): The dumped state of a stream walker from a previous call to StreamWalker.dump() Returns: StreamWalker: The correctly restored StreamWalker subclass.
juraj-google-style
def _get(self, feed_item): result = store.get(self._entity, feed_item.get(FieldMap.CREATIVE_ASSET_ID, None)) if not result: result = {'id': feed_item.get(FieldMap.CREATIVE_ASSET_ID, None), 'assetIdentifier': {'name': feed_item.get(FieldMap.CREATIVE_ASSET_NAME, None), 'type': feed_item.get(FieldMap.CREATIVE_TYPE, None)}} store.set(self._entity, [feed_item.get(FieldMap.CREATIVE_ASSET_ID, None)], result) return result
Retrieves an item from DCM or the local cache. Args: feed_item: The feed item representing the creative asset from the Bulkdozer feed. Returns: Instance of the DCM object either from the API or from the local cache.
github-repos
def default_output_fn(prediction, accept): return _worker.Response(response=_encoders.encode(prediction, accept), mimetype=accept)
Function responsible for serializing the prediction for the response. Args: prediction (obj): prediction returned by predict_fn. accept (str): accept content-type expected by the client. Returns: (worker.Response): a Flask response object whose args are: response: the serialized data to return; accept: the content-type that the data was transformed to.
codesearchnet
def nvals(self): return self._row_splits[-1]
Returns the number of values partitioned by this `RowPartition`. If the sequence partitioned by this `RowPartition` is a tensor, then `nvals` is the size of that tensor's outermost dimension -- i.e., `nvals == values.shape[0]`. Returns: scalar integer Tensor
github-repos
def api(self, name, namespace='pyeapi.api'): module = load_module('{}.{}'.format(namespace, name)) if hasattr(module, 'initialize'): module.initialize(self) if hasattr(module, 'instance'): return module.instance(self) return module
Loads the specified api module. This method is the API autoload mechanism that loads the API module specified by the name argument. The module is loaded and inspected first for an initialize() function and then for an instance() function. In both cases, the node object is passed to the module. Args: name (str): The name of the module to load. The name should be the name of the python file to import. namespace (str): The namespace to use to load the module. The default value is 'pyeapi.api'. Returns: The API module loaded with the node instance.
codesearchnet
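A minimal usage sketch for api() above, assuming a pyeapi node created with pyeapi.connect_to() (the connection profile name 'veos01' is hypothetical):

import pyeapi

# Connect to a switch defined in ~/.eapi.conf (hypothetical profile).
node = pyeapi.connect_to('veos01')

# Load the 'vlans' module from the default 'pyeapi.api' namespace; api()
# runs the module's initialize()/instance() hooks with this node.
vlans = node.api('vlans')
print(vlans.getall())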
def threshold(self) -> float: return self._tracker.get()
Returns the current quantile-based threshold value. Returns: float: The dynamically calculated threshold value based on the quantile tracker.
github-repos
def _format_origin_stack(origin_stack, call_traceback_proto): string_to_id = {} string_to_id[None] = 0 for frame in origin_stack: file_path, lineno, func_name, line_text = frame call_traceback_proto.origin_stack.traces.add(file_id=_string_to_id(file_path, string_to_id), lineno=lineno, function_id=_string_to_id(func_name, string_to_id), line_id=_string_to_id(line_text, string_to_id)) id_to_string = call_traceback_proto.origin_id_to_string for key, value in string_to_id.items(): id_to_string[value] = key if key is not None else ''
Format a traceback stack for a `CallTraceback` proto. Args: origin_stack: The stack list as returned by `traceback.extract_stack()`. call_traceback_proto: A `CallTraceback` proto whose fields are to be populated.
github-repos
def update_script_from_item(self, item): (script, path_to_script, script_item) = item.get_script() dictator = list(script_item.to_dict().values())[0] for instrument in list(script.instruments.keys()): script.instruments[instrument]['settings'] = dictator[instrument]['settings'] del dictator[instrument] for sub_script_name in list(script.scripts.keys()): sub_script_item = script_item.get_subscript(sub_script_name) self.update_script_from_item(sub_script_item) del dictator[sub_script_name] script.update(dictator) script.data_path = self.gui_settings['data_folder']
Updates the script based on the information provided in item. Args: item (B26QTreeItem): item that contains the new settings of the script
codesearchnet
def gfortran_search_path(library_dirs): cmd = ('gfortran', '-print-search-dirs') process = subprocess.Popen(cmd, stdout=subprocess.PIPE) return_code = process.wait() if (return_code != 0): return library_dirs cmd_output = process.stdout.read().decode('utf-8') search_lines = cmd_output.strip().split('\n') library_lines = [line[len(FORTRAN_LIBRARY_PREFIX):] for line in search_lines if line.startswith(FORTRAN_LIBRARY_PREFIX)] if (len(library_lines) != 1): msg = GFORTRAN_MISSING_LIBS.format(cmd_output) print(msg, file=sys.stderr) return library_dirs library_line = library_lines[0] accepted = set(library_dirs) for part in library_line.split(os.pathsep): full_path = os.path.abspath(part.strip()) if os.path.isdir(full_path): accepted.add(full_path) else: msg = GFORTRAN_BAD_PATH.format(full_path) print(msg, file=sys.stderr) return sorted(accepted)
Get the library directory paths for ``gfortran``. Looks for ``libraries: =`` in the output of ``gfortran -print-search-dirs`` and then parses the paths. If this fails for any reason, this method will print an error and return ``library_dirs``. Args: library_dirs (List[str]): Existing library directories. Returns: List[str]: The library directories for ``gfortran``.
codesearchnet
def get(cls, **kwargs): fields = {} for field in cls.url_fields: value = kwargs.pop(field, None) if (value is None): cls._handle_wrong_field(field, ATTR_TYPE_URL) fields[field] = value model = cls(**fields) model._populate(**kwargs) return model
Retrieve an object by making a GET request to Transifex. Each value in `kwargs` that corresponds to a field defined in `self.url_fields` will be used in the URL path of the request, so that a particular entry of this model is identified and retrieved. Raises: AttributeError: if not all values for parameters in `url_fields` are passed as kwargs txlib.http.exceptions.NotFoundError: if the object with these attributes is not found on the remote server txlib.http.exceptions.ServerError subclass: depending on the particular server response Example: # Note: also catch exceptions >>> obj = MyModel.get(attr1=value1, attr2=value2)
codesearchnet
def prompt_for_password(url, user=None, default_user=None): if user is None: default_user = default_user or getpass.getuser() while user is None: user = compat.console_input( "Enter username for {} [{}]: ".format(url, default_user) ) if user.strip() == "" and default_user: user = default_user if user: pw = getpass.getpass( "Enter password for {}@{} (Ctrl+C to abort): ".format(user, url) ) if pw or pw == "": return (user, pw) return None
Prompt for username and password. If a user name is passed, only prompt for a password. Args: url (str): hostname user (str, optional): Pass a valid name to skip prompting for a user name default_user (str, optional): Pass a valid name that is used as default when prompting for a user name Raises: KeyboardInterrupt if user hits Ctrl-C Returns: (username, password) or None
juraj-google-style
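A short interactive sketch of prompt_for_password(); the host name is illustrative:

creds = prompt_for_password("ftp.example.com", default_user="anonymous")
if creds is not None:
    user, password = creds
    print("Authenticating as {}".format(user))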
async def undo_check_in(self): res = (await self.connection('POST', 'tournaments/{}/participants/{}/undo_check_in'.format(self._tournament_id, self._id))) self._refresh_from_json(res)
Undo the check in for this participant |methcoro| Warning: |unstable| Raises: APIException
codesearchnet
def get_json_type(obj): if hasattr(obj, 'get_config'): serialized = serialization.serialize_keras_object(obj) serialized['__passive_serialization__'] = True return serialized if type(obj).__module__ == np.__name__: if isinstance(obj, np.ndarray): return obj.tolist() else: return obj.item() if callable(obj): return obj.__name__ if type(obj).__name__ == type.__name__: return obj.__name__ if tf.available and isinstance(obj, tf.compat.v1.Dimension): return obj.value if tf.available and isinstance(obj, tf.TensorShape): return obj.as_list() if tf.available and isinstance(obj, tf.DType): return obj.name if isinstance(obj, collections.abc.Mapping): return dict(obj) if obj is Ellipsis: return {'class_name': '__ellipsis__'} if tf.available and isinstance(obj, tf.TypeSpec): from tensorflow.python.framework import type_spec_registry try: type_spec_name = type_spec_registry.get_name(type(obj)) return {'class_name': 'TypeSpec', 'type_spec': type_spec_name, 'serialized': obj._serialize()} except ValueError: raise ValueError(f'Unable to serialize {obj} to JSON, because the TypeSpec class {type(obj)} has not been registered.') if tf.available and isinstance(obj, tf.__internal__.CompositeTensor): spec = tf.type_spec_from_value(obj) tensors = [] for tensor in tf.nest.flatten(obj, expand_composites=True): tensors.append((tensor.dtype.name, tensor.numpy().tolist())) return {'class_name': 'CompositeTensor', 'spec': get_json_type(spec), 'tensors': tensors} if isinstance(obj, enum.Enum): return obj.value if isinstance(obj, bytes): return {'class_name': '__bytes__', 'value': obj.decode('utf-8')} raise TypeError(f'Unable to serialize {obj} to JSON. Unrecognized type {type(obj)}.')
Serializes any object to a JSON-serializable structure. Args: obj: the object to serialize Returns: JSON-serializable structure representing `obj`. Raises: TypeError: if `obj` cannot be serialized.
github-repos
def IsRaised(self): class IsRaisedContext(_EmptySubject): def __init__(self, actual, get_actual_message): super(IsRaisedContext, self).__init__(actual) self._get_actual_message = get_actual_message def __enter__(self): return self @asserts_truth def __exit__(self, exc_type, exc, exc_tb): if exc: if issubclass(exc_type, type(self._actual)): if hasattr(self._actual, 'message'): AssertThat(exc).HasMessage(self._get_actual_message()) AssertThat(exc).HasArgsThat().ContainsExactlyElementsIn(self._actual.args).InOrder() else: self._FailWithSubject('should have been raised, but caught <{0!r}>'.format(exc)) else: self._Resolve() self._FailWithSubject('should have been raised, but was not') return True return IsRaisedContext(self._actual, self._GetActualMessage)
Asserts that an exception matching this subject is raised. The raised exception must be the same type as (or a subclass of) this subject's. The raised exception's "message" and "args" attributes must match this subject's exactly. As this is a fairly strict match, _ExceptionClassSubject.IsRaised() may be easier to use. Returns: A context within which an expected exception may be raised.
github-repos
def date_clean(date, dashboard_style=False): if dashboard_style: dt = str(date) out = ((((dt[4:6] + '/') + dt[6:]) + '/') + dt[:4]) else: dt = str(date) out = ((((dt[:4] + '-') + dt[4:6]) + '-') + dt[6:]) return out
Clean the numerical date value in order to present it. Args: date: numerical date (20160205) dashboard_style (bool): if True, format as "02/05/2016" instead of ISO style Returns: Stringified version of the input date ("2016-02-05")
codesearchnet
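Worked examples of date_clean(), following directly from the slicing in the code:

date_clean(20160205)                        # '2016-02-05'
date_clean(20160205, dashboard_style=True)  # '02/05/2016'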
def _finished_callback(self, batch_fut, todo): self._running.remove(batch_fut) err = batch_fut.get_exception() if err is not None: tb = batch_fut.get_traceback() for (fut, _) in todo: if not fut.done(): fut.set_exception(err, tb)
Passes exception along. Args: batch_fut: the batch future returned by running todo_tasklet. todo: (fut, option) pair. fut is the future returned by each add() call. If the batch fut was successful, it has already called fut.set_result() on other individual futs. This method only handles when the batch fut encountered an exception.
juraj-google-style
def __eq__(self, other): if type(self) is type(other) and \ self.kernel == other.kernel and \ self.discriminator == other.discriminator: return True return False
Two Acquires are the same if they are of the same type and have the same kernel and discriminator. Args: other (Acquire): Other Acquire Returns: bool: are self and other equal.
juraj-google-style
class InstructBlipVideoForConditionalGenerationModelOutput(ModelOutput): loss: Optional[Tuple[torch.FloatTensor]] = None logits: Optional[Tuple[torch.FloatTensor]] = None vision_outputs: Optional[torch.FloatTensor] = None qformer_outputs: Optional[Tuple[torch.FloatTensor]] = None language_model_outputs: Optional[Tuple[torch.FloatTensor]] = None def to_tuple(self) -> Tuple[Any]: return tuple((self[k] if k not in ['vision_outputs', 'qformer_outputs', 'language_model_outputs'] else getattr(self, k).to_tuple() for k in self.keys()))
Class defining the outputs of [`InstructBlipVideoForConditionalGeneration`]. Args: loss (`torch.FloatTensor`, *optional*, returned when `labels` is provided, `torch.FloatTensor` of shape `(1,)`): Language modeling loss from the language model. logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`): Prediction scores of the language modeling head of the language model. vision_outputs (`BaseModelOutputWithPooling`): Outputs of the vision encoder. qformer_outputs (`BaseModelOutputWithPoolingAndCrossAttentions`): Outputs of the Q-Former (Querying Transformer). language_model_outputs (`CausalLMOutputWithPast` or `Seq2SeqLMOutput`): Outputs of the language model.
github-repos
def convert_to_rgb(self, image: ImageInput) -> ImageInput: return convert_to_rgb(image)
Converts an image to RGB format. Only converts if the image is of type PIL.Image.Image, otherwise returns the image as is. Args: image (ImageInput): The image to convert. Returns: ImageInput: The converted image.
github-repos
def write_gff_file(self, outfile, force_rerun=False): if ssbio.utils.force_rerun(outfile=outfile, flag=force_rerun): with open(outfile, 'w') as out_handle: GFF.write([self], out_handle) self.feature_path = outfile
Write a GFF file for the protein features, ``features`` will now load directly from this file. Args: outfile (str): Path to new GFF file to be written to force_rerun (bool): If an existing file should be overwritten
codesearchnet
def get_servo_torque(self): data = [] data.append(9) data.append(self.servoid) data.append(RAM_READ_REQ) data.append(PWM_RAM) data.append(BYTE2) send_data(data) rxdata = [] try: rxdata = SERPORT.read(13) if (ord(rxdata[10]) <= 127): return (((ord(rxdata[10]) & 3) << 8) | (ord(rxdata[9]) & 255)) else: return ((((ord(rxdata[10]) - 255) * 255) + (ord(rxdata[9]) & 255)) - 255) except HerkulexError: raise HerkulexError('could not communicate with motors')
Gets the current torque of Herkulex Gives the current load on the servo shaft. It is actually the PWM value to the motors. Args: none Returns: int: the torque on servo shaft. range from -1023 to 1023 Raises: SerialException: Error occurred while opening serial port
codesearchnet
def __init__(self, model): self._model = model if self.ALLOWED_SETTINGS: self.update_settings({setting: self.ALLOWED_SETTINGS[setting][0] for setting in self.ALLOWED_SETTINGS})
Create a new instance of this visualization. `BaseVisualization` is an interface and should only be instantiated via a subclass. Args: model (:obj:`.models.model.BaseModel`): NN model to be visualized.
juraj-google-style
def __init__(self, location): super(MarkLocation, self).__init__(location) self.location = location self.validate()
Create a new MarkLocation at the specified Location. Args: location: Location object, must not be at a property field in the query Returns: new MarkLocation object
juraj-google-style
def get_mail_keys(message, complete=True): if complete: log.debug("Get all headers") all_headers_keys = {i.lower() for i in message.keys()} all_parts = ADDRESSES_HEADERS | OTHERS_PARTS | all_headers_keys else: log.debug("Get only mains headers") all_parts = ADDRESSES_HEADERS | OTHERS_PARTS log.debug("All parts to get: {}".format(", ".join(all_parts))) return all_parts
Given an email.message.Message, return a set with all email parts to get Args: message (email.message.Message): email message object complete (bool): if True returns all email headers Returns: set with all email parts
juraj-google-style
def fermi_energy_from_outcar( filename='OUTCAR' ): outcar = open(filename, "r").read() fermi_energy = re.search(r"E-fermi\s*:\s*([-.\d]*)", outcar) fermi_energy = float(fermi_energy.group(1)) return fermi_energy
Finds and returns the Fermi energy. Args: filename: the name of the OUTCAR file to be read Returns: (float): The Fermi energy as found in the OUTCAR
juraj-google-style
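The regex can be checked in isolation; the line below imitates the format VASP writes to OUTCAR:

import re

line = " E-fermi :   5.9322     XC(G=0): -9.2942     alpha+bet : -6.3342"
match = re.search(r"E-fermi\s*:\s*([-.\d]*)", line)
print(float(match.group(1)))  # 5.9322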
def is_namedtuple(instance, strict=False): return _pywrap_utils.IsNamedtuple(instance, strict)
Returns True iff `instance` is a `namedtuple`. Args: instance: An instance of a Python object. strict: If True, `instance` is considered to be a `namedtuple` only if it is a "plain" namedtuple. For instance, a class inheriting from a `namedtuple` will be considered to be a `namedtuple` iff `strict=False`. Returns: True if `instance` is a `namedtuple`.
github-repos
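A sketch of the strict/non-strict distinction described in the docstring:

import collections

Point = collections.namedtuple('Point', ['x', 'y'])

class LabeledPoint(Point):  # subclass of a namedtuple
    pass

is_namedtuple(Point(1, 2))                      # True
is_namedtuple(LabeledPoint(1, 2), strict=True)  # False: not a "plain" namedtuple
is_namedtuple(LabeledPoint(1, 2))               # True with the default strict=False
is_namedtuple((1, 2))                           # False: ordinary tuple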
def DeleteAddress(self, script_hash): coin_keys_toremove = [] coins_to_remove = [] for key, coinref in self._coins.items(): if coinref.Output.ScriptHash.ToBytes() == script_hash.ToBytes(): coin_keys_toremove.append(key) coins_to_remove.append(coinref) for k in coin_keys_toremove: del self._coins[k] ok = False if script_hash.ToBytes() in self._contracts.keys(): ok = True del self._contracts[script_hash.ToBytes()] elif script_hash in self._watch_only: ok = True self._watch_only.remove(script_hash) return ok, coins_to_remove
Deletes an address from the wallet (includes watch-only addresses). Args: script_hash (UInt160): a bytearray (len 20) representing the public key. Returns: tuple: bool: True if address removed, False otherwise. list: a list of any ``neo.Wallet.Coin`` objects to be removed from the wallet.
juraj-google-style
def load_hgnc_bulk(self, gene_objs): LOG.info("Loading gene bulk with length %s", len(gene_objs)) try: result = self.hgnc_collection.insert_many(gene_objs) except (DuplicateKeyError, BulkWriteError) as err: raise IntegrityError(err) return result
Load a bulk of hgnc gene objects Raises IntegrityError if there are any write concerns Args: gene_objs(iterable(scout.models.hgnc_gene)) Returns: result (pymongo.results.InsertManyResult)
juraj-google-style
def forward(self, index: int, output: torch.Tensor, multi_stage_features: List[torch.Tensor], multi_stage_positional_embeddings: List[torch.Tensor], attention_mask: Optional[torch.Tensor]=None, query_embeddings: Optional[torch.Tensor]=None, output_attentions: Optional[bool]=False): level_index = index % self.num_feature_levels attention_mask[torch.where(attention_mask.sum(-1) == attention_mask.shape[-1])] = False output, cross_attn_weights = self.cross_attn(output, multi_stage_features[level_index], memory_mask=attention_mask, memory_key_padding_mask=None, pos=multi_stage_positional_embeddings[level_index], query_pos=query_embeddings) output, self_attn_weights = self.self_attn(output, output_mask=None, output_key_padding_mask=None, query_pos=query_embeddings) output = self.ffn(output) outputs = (output,) if output_attentions: outputs += (self_attn_weights, cross_attn_weights) return outputs
Args: index (`int`): index of the layer in the Transformer decoder. output (`torch.FloatTensor`): the object queries of shape `(N, batch, hidden_dim)` multi_stage_features (`List[torch.Tensor]`): the multi-scale features from the pixel decoder. multi_stage_positional_embeddings (`List[torch.Tensor]`): positional embeddings for the multi_stage_features attention_mask (`torch.FloatTensor`): attention mask for the masked cross attention layer query_embeddings (`torch.FloatTensor`, *optional*): position embeddings that are added to the queries and keys in the self-attention layer. output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
github-repos
def _ProcessDirectory(self, mediator, file_entry): self.processing_status = definitions.STATUS_INDICATOR_COLLECTING if self._processing_profiler: self._processing_profiler.StartTiming('collecting') for sub_file_entry in file_entry.sub_file_entries: if self._abort: break try: if (not sub_file_entry.IsAllocated()): continue except dfvfs_errors.BackEndError as exception: warning_message = 'unable to process directory entry: {0:s} with error: {1!s}'.format(sub_file_entry.name, exception) mediator.ProduceExtractionWarning(warning_message, path_spec=file_entry.path_spec) continue if (sub_file_entry.type_indicator == dfvfs_definitions.TYPE_INDICATOR_TSK): if (file_entry.IsRoot() and (sub_file_entry.name == '$OrphanFiles')): continue event_source = event_sources.FileEntryEventSource(path_spec=sub_file_entry.path_spec) stat_object = sub_file_entry.GetStat() if stat_object: event_source.file_entry_type = stat_object.type mediator.ProduceEventSource(event_source) self.last_activity_timestamp = time.time() if self._processing_profiler: self._processing_profiler.StopTiming('collecting') self.processing_status = definitions.STATUS_INDICATOR_RUNNING
Processes a directory file entry. Args: mediator (ParserMediator): mediates the interactions between parsers and other components, such as storage and abort signals. file_entry (dfvfs.FileEntry): file entry of the directory.
codesearchnet
def on_get(self, req, resp, handler=None, **kwargs): self.handle((handler or self.retrieve), req, resp, **kwargs)
Respond on GET HTTP request assuming resource retrieval flow. This request handler assumes that GET requests are associated with single resource instance retrieval. Thus default flow for such requests is: * Retrieve single resource instance or prepare its representation by calling the retrieve method handler. Args: req (falcon.Request): request object instance. resp (falcon.Response): response object instance to be modified handler (method): retrieve method handler to be called. Defaults to ``self.retrieve``. **kwargs: additional keyword arguments retrieved from url template.
codesearchnet
def _updated_config(self): from tensorflow.python.keras import __version__ as keras_version config = self.get_config() model_config = {'class_name': self.__class__.__name__, 'config': config, 'keras_version': keras_version, 'backend': backend.backend()} return model_config
Util shared between different serialization methods. Returns: Model config with Keras version information added.
github-repos
def __subject_map__(self, map_iri): subject_map = SimpleNamespace() subject_map_bnode = self.rml.value( subject=map_iri, predicate=NS_MGR.rr.subjectMap.rdflib) if subject_map_bnode is None: return subject_map.class_ = self.rml.value( subject=subject_map_bnode, predicate=getattr(NS_MGR.rr, "class").rdflib) subject_map.template = self.rml.value( subject=subject_map_bnode, predicate=NS_MGR.rr.template.rdflib) subject_map.termType = self.rml.value( subject=subject_map_bnode, predicate=NS_MGR.rr.termType.rdflib) subject_map.deduplicate = self.rml.value( subject=subject_map_bnode, predicate=NS_MGR.kds.deduplicate.rdflib) subject_map.reference = self.rml.value( subject=subject_map_bnode, predicate=NS_MGR.rr.reference.rdflib) return subject_map
Creates a SimpleNamespace for the TripleMap's subjectMap and populates properties from the RML RDF graph Args: ----- map_iri: rdflib.URIRef,TripleMap IRI Returns: -------- SimpleNamespace
juraj-google-style
def _define_loop(graph, logdir, train_steps, eval_steps): loop = tools.Loop( logdir, graph.step, graph.should_log, graph.do_report, graph.force_reset) loop.add_phase( 'train', graph.done, graph.score, graph.summary, train_steps, report_every=train_steps, log_every=train_steps, checkpoint_every=None, feed={graph.is_training: True}) loop.add_phase( 'eval', graph.done, graph.score, graph.summary, eval_steps, report_every=eval_steps, log_every=eval_steps, checkpoint_every=10 * eval_steps, feed={graph.is_training: False}) return loop
Create and configure a training loop with training and evaluation phases. Args: graph: Object providing graph elements via attributes. logdir: Log directory for storing checkpoints and summaries. train_steps: Number of training steps per epoch. eval_steps: Number of evaluation steps per epoch. Returns: Loop object.
juraj-google-style
def mac_hex_to_ascii(mac_hex, inc_dots): v = mac_hex[2:] ret = '' for i in range(0, len(v), 4): ret += v[i:i+4] if inc_dots and ((i+4) < len(v)): ret += '.' return ret
Format a hex MAC string to ASCII Args: mac_hex: Value from SNMP inc_dots: 1 to format as aabb.ccdd.eeff, 0 to format aabbccddeeff Returns: String representation of the mac_hex
juraj-google-style
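Worked example, assuming an SNMP value carrying a leading '0x' prefix (the function strips the first two characters):

mac_hex_to_ascii('0xaabbccddeeff', 1)  # 'aabb.ccdd.eeff'
mac_hex_to_ascii('0xaabbccddeeff', 0)  # 'aabbccddeeff'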
def update_missing_keys(self, model, missing_keys: List[str], prefix: str) -> List[str]: return missing_keys
Override this method if you want to adjust the `missing_keys`. Args: missing_keys (`List[str]`, *optional*): The list of missing keys in the checkpoint compared to the state dict of the model
github-repos
def merge(self, workdir, gswfk_file, dfpt_files, gkk_files, out_gkk, binascii=0): raise NotImplementedError('This method should be tested') gswfk_file = os.path.abspath(gswfk_file) dfpt_files = [os.path.abspath(s) for s in list_strings(dfpt_files)] gkk_files = [os.path.abspath(s) for s in list_strings(gkk_files)] print(('Will merge %d 1WF files, %d GKK file in output %s' % (len(dfpt_files), len(gkk_files), out_gkk))) if self.verbose: for (i, f) in enumerate(dfpt_files): print((' [%d] 1WF %s' % (i, f))) for (i, f) in enumerate(gkk_files): print((' [%d] GKK %s' % (i, f))) (self.stdin_fname, self.stdout_fname, self.stderr_fname) = map(os.path.join, (3 * [workdir]), ['mrggkk.stdin', 'mrggkk.stdout', 'mrggkk.stderr']) inp = StringIO() inp.write((out_gkk + '\n')) inp.write((str(binascii) + '\n')) inp.write((gswfk_file + '\n')) dims = ' '.join([str(d) for d in dims]) inp.write((dims + '\n')) for fname in dfpt_files: inp.write((fname + '\n')) for fname in gkk_files: inp.write((fname + '\n')) self.stdin_data = [s for s in inp.getvalue()] with open(self.stdin_fname, 'w') as fh: fh.writelines(self.stdin_data) fh.flush() os.fsync(fh.fileno()) self.execute(workdir) return out_gkk
Merge GKK files, return the absolute path of the new database. Args: gswfk_file: Ground-state WFK filename dfpt_files: List of 1WFK files to merge. gkk_files: List of GKK files to merge. out_gkk: Name of the output GKK file binascii: Integer flag. 0 --> binary output, 1 --> ascii formatted output
codesearchnet
def replaceWith(self, el): self.childs = el.childs self.params = el.params self.endtag = el.endtag self.openertag = el.openertag self._tagname = el.getTagName() self._element = el.tagToString() self._istag = el.isTag() self._isendtag = el.isEndTag() self._iscomment = el.isComment() self._isnonpairtag = el.isNonPairTag()
Replace value in this element with values from `el`. This is useful when you don't want to change all references to the object. Args: el (obj): :class:`HTMLElement` instance.
codesearchnet
def post_state(self, name, state): self.post_command(OPERATIONS.CMD_UPDATE_STATE, {'name': name, 'new_status': state})
Asynchronously try to update the state for a service. If the update fails, nothing is reported because we don't wait for a response from the server. This function will return immediately and not block. Args: name (string): The name of the service state (int): The new state of the service
codesearchnet
def diff_packages(pkg1, pkg2=None): if pkg2 is None: it = iter_packages(pkg1.name) pkgs = [x for x in it if x.version < pkg1.version] if not pkgs: raise RezError("No package to diff with - %s is the earliest " "package version" % pkg1.qualified_name) pkgs = sorted(pkgs, key=lambda x: x.version) pkg2 = pkgs[-1] def _check_pkg(pkg): if not (pkg.vcs and pkg.revision): raise RezError("Cannot diff package %s: it is a legacy format " "package that does not contain enough information" % pkg.qualified_name) _check_pkg(pkg1) _check_pkg(pkg2) path = mkdtemp(prefix="rez-pkg-diff") paths = [] for pkg in (pkg1, pkg2): print("Exporting %s..." % pkg.qualified_name) path_ = os.path.join(path, pkg.qualified_name) vcs_cls = plugin_manager.get_plugin_class("release_vcs", pkg.vcs) vcs_cls.export(revision=pkg.revision, path=path_) paths.append(path_) difftool = config.difftool print("Opening diff viewer %s..." % difftool) proc = Popen([difftool] + paths) proc.wait()
Invoke a diff editor to show the difference between the source of two packages. Args: pkg1 (`Package`): Package to diff. pkg2 (`Package`): Package to diff against. If None, the next most recent package version is used.
juraj-google-style
def find_log_dir(log_dir=None): if log_dir: dirs = [log_dir] elif FLAGS['log_dir'].value: dirs = [FLAGS['log_dir'].value] else: dirs = ['/tmp/', './'] for d in dirs: if (os.path.isdir(d) and os.access(d, os.W_OK)): return d _absl_logger.fatal("Can't find a writable directory for logs, tried %s", dirs)
Returns the most suitable directory to put log files into. Args: log_dir: str|None, if specified, the logfile(s) will be created in that directory. Otherwise if the --log_dir command-line flag is provided, the logfile will be created in that directory. Otherwise the logfile will be created in a standard location.
codesearchnet
def append(self, species, coords, coords_are_cartesian=False, validate_proximity=False, properties=None): return self.insert(len(self), species, coords, coords_are_cartesian=coords_are_cartesian, validate_proximity=validate_proximity, properties=properties)
Append a site to the structure. Args: species: Species of inserted site coords (3x1 array): Coordinates of inserted site coords_are_cartesian (bool): Whether coordinates are cartesian. Defaults to False. validate_proximity (bool): Whether to check if inserted site is too close to an existing site. Defaults to False. properties (dict): Properties of the site. Returns: New structure with inserted site.
codesearchnet
def kill_redis(self, check_alive=True): self._kill_process_type(ray_constants.PROCESS_TYPE_REDIS_SERVER, check_alive=check_alive)
Kill the Redis servers. Args: check_alive (bool): Raise an exception if any of the processes were already dead.
codesearchnet
def _get_definitions(source): max_len = 0 descs = collections.OrderedDict() lines = (s.strip() for s in source.splitlines()) non_empty_lines = (s for s in lines if s) for line in non_empty_lines: if line: arg, desc = re.split(r'\s\s+', line.strip()) arg_len = len(arg) if arg_len > max_len: max_len = arg_len descs[arg] = desc return descs, max_len
Extract a dictionary of arguments and definitions. Args: source: The source for a section of a usage string that contains definitions. Returns: A two-tuple containing a dictionary of all arguments and definitions as well as the length of the longest argument.
juraj-google-style
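A sketch of the parsing behaviour of _get_definitions(): each argument is split from its definition on a run of two or more spaces:

usage = """
-h, --help     Show this help message.
-v, --verbose  Print extra output.
"""
descs, max_len = _get_definitions(usage)
# descs == OrderedDict([('-h, --help', 'Show this help message.'),
#                       ('-v, --verbose', 'Print extra output.')])
# max_len == 13, the length of '-v, --verbose'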
def ChangeScaleFactor(self, newfactor): if ((float(newfactor) > 0) and (float(newfactor) < self._MAX_ZOOM)): self._zoomfactor = newfactor
Changes the zoom of the graph manually. 1.0 is the original canvas size. Args: newfactor (float): new zoom factor, between 0.0 and 5.0 (e.g. 0.7)
codesearchnet
def field(self, field_name, boost=1, extractor=None): if ('/' in field_name): raise ValueError('Field {} contains illegal character `/`'.format(field_name)) self._fields[field_name] = Field(field_name, boost, extractor)
Adds a field to the list of document fields that will be indexed. Every document being indexed should have this field. None values for this field in indexed documents will not cause errors but will limit the chance of that document being retrieved by searches. All fields should be added before adding documents to the index. Adding fields after a document has been indexed will have no effect on already indexed documents. Fields can be boosted at build time. This allows terms within that field to have more importance on search results. Use a field boost to specify that matches within one field are more important than matches in other fields. Args: field_name (str): Name of the field to be added, must not include a forward slash '/'. boost (int): Optional boost factor to apply to field. extractor (callable): Optional function to extract a field from the document. Raises: ValueError: If the field name contains a `/`.
codesearchnet
def strip_cdata(text): if (not is_cdata(text)): return text xml = '<e>{0}</e>'.format(text) node = etree.fromstring(xml) return node.text
Removes all CDATA blocks from `text` if it contains them. Note: If the text contains escaped XML characters outside of a CDATA block, they will be unescaped. Args: text: A string containing one or more CDATA blocks. Returns: An XML unescaped string with CDATA block qualifiers removed.
codesearchnet
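Example of the wrap-and-parse trick used by strip_cdata(); etree here is assumed to be lxml.etree or a compatible parser, as implied by the fromstring call:

strip_cdata('<![CDATA[<b>bold</b>]]>')  # returns '<b>bold</b>'
strip_cdata('no cdata here')            # returned unchanged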
def from_class(cls, target_class): module_name = target_class.__module__ class_name = target_class.__name__ return cls(module_name, '__init__', class_name)
Create a FunctionDescriptor from a class. Args: cls: Current class which is required argument for classmethod. target_class: the python class used to create the function descriptor. Returns: The FunctionDescriptor instance created according to the class.
codesearchnet
def transmute_sites( self, old_site_label, new_site_label, n_sites_to_change ): selected_sites = self.select_sites( old_site_label ) for site in random.sample( selected_sites, n_sites_to_change ): site.label = new_site_label self.site_labels = set( [ site.label for site in self.sites ] )
Selects a random subset of sites with a specific label and gives them a different label. Args: old_site_label (String or List(String)): Site label(s) of the sites to be modified. new_site_label (String): Site label to be applied to the modified sites. n_sites_to_change (Int): Number of sites to modify. Returns: None
juraj-google-style
def open_remote_url(urls, **kwargs): if isinstance(urls, str): urls = [urls] for url in urls: try: web_file = requests.get(url, stream=True, **kwargs) if 'html' in web_file.headers['content-type']: raise ValueError("HTML source file retrieved.") return web_file except Exception as ex: logger.error('Fail to open remote url - {}'.format(ex)) continue
Open the url and check that it stores a file. Args: urls (str or list): URL or list of URLs to try
juraj-google-style
def load(cls, campaign_dir): if not Path(campaign_dir).is_absolute(): raise ValueError("Path is not absolute") if not Path(campaign_dir).exists(): raise ValueError("Directory does not exist") filename = "%s.json" % os.path.split(campaign_dir)[1] filepath = os.path.join(campaign_dir, filename) try: tinydb = TinyDB(filepath) assert set( tinydb.table('config').all()[0].keys()) == set(['script', 'params', 'commit']) except: os.remove(filepath) raise ValueError("Specified campaign directory seems corrupt") return cls(tinydb, campaign_dir)
Initialize from an existing database. It is assumed that the database json file has the same name as its containing folder. Args: campaign_dir (str): The path to the campaign directory.
juraj-google-style
def extract_output_file_path(args): if args and args[-1].endswith('>'): raise SyntaxError('Redirect file path is empty') elif args and args[-1].startswith('>'): try: _parse_interval(args[-1]) if len(args) > 1 and args[-2].startswith('-'): output_file_path = None else: output_file_path = args[-1][1:] args = args[:-1] except ValueError: output_file_path = args[-1][1:] args = args[:-1] elif len(args) > 1 and args[-2] == '>': output_file_path = args[-1] args = args[:-2] elif args and args[-1].count('>') == 1: gt_index = args[-1].index('>') if gt_index > 0 and args[-1][gt_index - 1] == '=': output_file_path = None else: output_file_path = args[-1][gt_index + 1:] args[-1] = args[-1][:gt_index] elif len(args) > 1 and args[-2].endswith('>'): output_file_path = args[-1] args = args[:-1] args[-1] = args[-1][:-1] else: output_file_path = None return (args, output_file_path)
Extract output file path from command arguments. Args: args: (list of str) command arguments. Returns: (list of str) Command arguments with the output file path part stripped. (str or None) Output file path (if any). Raises: SyntaxError: If there is no file path after the last ">" character.
github-repos
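Illustrative calls to extract_output_file_path(); the tensor names and file name are hypothetical:

extract_output_file_path(['pt', 'hidden/Softmax', '>', 'dump.txt'])
# -> (['pt', 'hidden/Softmax'], 'dump.txt')
extract_output_file_path(['pt', 'hidden/Softmax>dump.txt'])
# -> (['pt', 'hidden/Softmax'], 'dump.txt')
extract_output_file_path(['pt', 'hidden/Softmax'])
# -> (['pt', 'hidden/Softmax'], None)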
def _send_script(self, client, uuid, chunk, key, chunk_status): conn_id = self._validate_connection('send_script', uuid, key) if (conn_id is None): return conn_data = self._connections[uuid] conn_data['last_touch'] = monotonic() slug = self._build_device_slug(uuid) (index, count) = chunk_status if (index == 0): conn_data['script'] = bytes() conn_data['script'] += chunk if (index != (count - 1)): return conn_data['last_progress'] = None try: resp = (yield self._manager.send_script(conn_id, conn_data['script'], (lambda x, y: self._notify_progress_async(uuid, client, x, y)))) (yield None) conn_data['script'] = bytes() except Exception as exc: self._logger.exception('Error in manager send_script') resp = {'success': False, 'reason': ('Internal error: %s' % str(exc))} payload = {'client': client, 'type': 'response', 'operation': 'send_script', 'success': resp['success']} if (resp['success'] is False): payload['failure_reason'] = resp['reason'] self._publish_response(slug, payload)
Send a script to the connected device. Args: client (string): The client that sent the rpc request uuid (int): The id of the device we're opening the interface on chunk (bytes): The binary script to send to the device key (string): The key to authenticate the caller chunk_status (tuple): the chunk index and count of chunks of this script so that we know to either accumulate it or send it on to the device immediately.
codesearchnet
def _check_create_file_writer_args(inside_function, **kwargs): for arg_name, arg in kwargs.items(): if not isinstance(arg, ops.EagerTensor) and tensor_util.is_tf_type(arg): if inside_function: raise ValueError(f"Invalid graph Tensor argument '{arg_name}={arg}' to create_file_writer() inside an @tf.function. The create call will be lifted into the outer eager execution context, so it cannot consume graph tensors defined inside the function body.") else: raise ValueError(f"Invalid graph Tensor argument '{arg_name}={arg}' to eagerly executed create_file_writer().")
Helper to check the validity of arguments to a create_file_writer() call. Args: inside_function: whether the create_file_writer() call is in a tf.function **kwargs: the arguments to check, as kwargs to give them names. Raises: ValueError: if the arguments are graph tensors.
github-repos
def authenticate(self, request, email=None, password=None, username=None): email = (email or username) try: email_instance = models.EmailAddress.objects.get(is_verified=True, email=email) except models.EmailAddress.DoesNotExist: return None user = email_instance.user if user.check_password(password): return user return None
Attempt to authenticate a set of credentials. Args: request: The request associated with the authentication attempt. email: The user's email address. password: The user's password. username: An alias for the ``email`` field. This is provided for compatibility with Django's built-in authentication views. Returns: The user associated with the provided credentials if they are valid. Returns ``None`` otherwise.
codesearchnet
async def _location_auth_protect(self, location): netloc_sans_port = self.host.split(':')[0] netloc_sans_port = netloc_sans_port.replace(re.match(_WWX_MATCH, netloc_sans_port)[0], '') base_domain = '.'.join(netloc_sans_port.split('.')[(- 2):]) (l_scheme, l_netloc, _, _, _, _) = urlparse(location) location_sans_port = l_netloc.split(':')[0] location_sans_port = location_sans_port.replace(re.match(_WWX_MATCH, location_sans_port)[0], '') location_domain = '.'.join(location_sans_port.split('.')[(- 2):]) if (base_domain == location_domain): if (l_scheme < self.scheme): return False else: return True
Checks to see if the new location is 1. The same top level domain 2. As or more secure than the current connection type Returns: True (bool): If the current top level domain is the same and the connection type is equally or more secure. False otherwise.
codesearchnet
def search(self, query_string): query = self.create_query() parser = QueryParser(query_string, query) parser.parse() return self.query(query)
Performs a search against the index using lunr query syntax. Results will be returned sorted by their score, the most relevant results will be returned first. For more programmatic querying use `lunr.Index.query`. Args: query_string (str): A string to parse into a Query. Returns: dict: Results of executing the query.
juraj-google-style
def __init__(self, closure, type_spec): self._closure = closure self._type_spec = type_spec self._values = None self._has_fetched_to_local = False self._has_fetched_to_local_lock = threading.Lock() self._fetched_tensors = None self._error = None self._status_available_event = threading.Event() self._status = remote_value.RemoteValueStatus.NOT_READY
Initializes a `RemoteValueImpl`. Args: closure: The closure from which the `RemoteValue` is created. type_spec: The type spec for this `RemoteValue` which is used to trace functions that take this `RemoteValue` as input.
github-repos
def strip_html_tags(text, allowed_tags=None): if text is None: return if allowed_tags is None: allowed_tags = ALLOWED_TAGS return bleach.clean(text, tags=allowed_tags, attributes=['id', 'class', 'style', 'href', 'title'], strip=True)
Strip all tags from a string except those tags provided in `allowed_tags` parameter. Args: text (str): string to strip html tags from allowed_tags (list): allowed list of html tags Returns: a string without html tags
juraj-google-style
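A quick sketch with bleach, which strip_html_tags() delegates to; disallowed tags are stripped but their inner text is kept:

strip_html_tags('<div id="intro"><b>Hello</b> <i>world</i></div>', allowed_tags=['b'])
# -> '<b>Hello</b> world'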
def _identify_eds_ing(first, second): A = set([first.L, first.R]) A.update(first.D) B = set([second.L, second.R]) B.update(second.D) depend_set = (A & B) (left, right) = sorted(list((A ^ B))) return (left, right, depend_set)
Find nodes connecting adjacent edges. Args: first(Edge): Edge object representing the first edge. second(Edge): Edge object representing the second edge. Returns: tuple[int, int, set[int]]: The first two values represent left and right node indices of the new edge. The third value is the new dependence set.
codesearchnet
def get_mysql_vars(mysql: str, host: str, port: int, user: str) -> Dict[str, str]: cmdargs = [ mysql, "-h", host, "-P", str(port), "-e", "SHOW VARIABLES; SHOW STATUS", "-u", user, "-p" ] log.info("Connecting to MySQL with user: {}", user) log.debug(cmdargs) process = subprocess.Popen(cmdargs, stdout=subprocess.PIPE) out, err = process.communicate() lines = out.decode("utf8").splitlines() mysqlvars = {} for line in lines: var, val = line.split("\t") mysqlvars[var] = val return mysqlvars
Asks MySQL for its variables and status. Args: mysql: ``mysql`` executable filename host: host name port: TCP/IP port number user: username Returns: dictionary of MySQL variables/values
juraj-google-style
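Typical call to get_mysql_vars() (host, port and user are placeholders; the -p flag makes the mysql client prompt for the password):

mysqlvars = get_mysql_vars(mysql='mysql', host='127.0.0.1', port=3306, user='root')
print(mysqlvars.get('max_connections'))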
def bot_intent(self) -> 'IntentAPI': if (not self._bot_intent): self._bot_intent = IntentAPI(self.bot_mxid, self, state_store=self.state_store, log=self.intent_log) return self._bot_intent
Get the intent API for the appservice bot. Returns: The IntentAPI for the appservice bot.
codesearchnet
def radar_xsect(scatterer, h_pol=True): Z = scatterer.get_Z() if h_pol: return ((2 * np.pi) * (((Z[(0, 0)] - Z[(0, 1)]) - Z[(1, 0)]) + Z[(1, 1)])) else: return ((2 * np.pi) * (((Z[(0, 0)] + Z[(0, 1)]) + Z[(1, 0)]) + Z[(1, 1)]))
Radar cross section for the current setup. Args: scatterer: a Scatterer instance. h_pol: If True (default), use horizontal polarization. If False, use vertical polarization. Returns: The radar cross section.
codesearchnet
def status_update(self, crits_id, crits_type, status): obj_type = self._type_translation(crits_type) patch_url = "{0}/{1}/{2}/".format(self.url, obj_type, crits_id) params = { 'api_key': self.api_key, 'username': self.username, } data = { 'action': 'status_update', 'value': status, } r = requests.patch(patch_url, params=params, data=data, verify=self.verify, proxies=self.proxies) if r.status_code == 200: log.debug('Object {} set to {}'.format(crits_id, status)) return True else: log.error('Attempted to set object id {} to ' 'Informational, but did not receive a ' '200'.format(crits_id)) log.error('Error message was: {}'.format(r.text)) return False
Update the status of the TLO. By default, the options are: - New - In Progress - Analyzed - Deprecated Args: crits_id: The object id of the TLO crits_type: The type of TLO. This must be 'Indicator', '' status: The status to change. Returns: True if the status was updated. False otherwise. Raises: CRITsInvalidTypeError
juraj-google-style
def comments_2(self, value=None): if value is not None: try: value = str(value) except ValueError: raise ValueError('value {} need to be of type str ' 'for field `comments_2`'.format(value)) if ',' in value: raise ValueError('value should not contain a comma ' 'for field `comments_2`') self._comments_2 = value
Corresponds to IDD Field `comments_2` Args: value (str): value for IDD Field `comments_2` if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
juraj-google-style
def error_message(): sys.stderr.write('valid commands:\n') for cmd in get_valid_commands(): sys.stderr.write(('\t%s\n' % cmd)) return (- 1)
Writes out error message specifying the valid commands. Returns: Failure code for system exit
codesearchnet
def configure_plugin(self, name, options): url = self._url('/plugins/{0}/set', name) data = options if isinstance(data, dict): data = ['{0}={1}'.format(k, v) for k, v in six.iteritems(data)] res = self._post_json(url, data=data) self._raise_for_status(res) return True
Configure a plugin. Args: name (string): The name of the plugin. The ``:latest`` tag is optional, and is the default if omitted. options (dict): A key-value mapping of options Returns: ``True`` if successful
juraj-google-style
def concept_distance(c1, c2): cause_purview = tuple(set(c1.cause.purview + c2.cause.purview)) effect_purview = tuple(set(c1.effect.purview + c2.effect.purview)) return (repertoire_distance(c1.expand_cause_repertoire(cause_purview), c2.expand_cause_repertoire(cause_purview)) + repertoire_distance(c1.expand_effect_repertoire(effect_purview), c2.expand_effect_repertoire(effect_purview)))
Return the distance between two concepts in concept space. Args: c1 (Concept): The first concept. c2 (Concept): The second concept. Returns: float: The distance between the two concepts in concept space.
juraj-google-style
def send(self, response): self._connection.connection.set('{}:{}'.format(SIGNAL_REDIS_PREFIX, response.uid), pickle.dumps(response))
Send a response back to the client that issued a request. Args: response (Response): Reference to the response object that should be sent.
juraj-google-style
def insert(self, iterable, index=0, data=None, weight=1.0): if index == len(iterable): self.is_terminal = True self.key = iterable self.weight = weight if data: self.data.add(data) else: if iterable[index] not in self.children: self.children[iterable[index]] = TrieNode() self.children[iterable[index]].insert(iterable, index + 1, data, weight)
Insert new node into tree Args: iterable(hashable): key used to find in the future. data(object): data associated with the key index(int): an index used for insertion. weight(float): the weight given to the added item.
juraj-google-style
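A usage sketch for TrieNode.insert(), assuming a TrieNode constructor that initializes children as a dict, data as a set, and is_terminal as False (implied by the method body):

root = TrieNode()
root.insert('hello', data='greeting')
root.insert('help', data='assistance')

# 'hello' and 'help' share the path h-e-l before diverging.
shared = root.children['h'].children['e'].children['l']
print(sorted(shared.children.keys()))  # ['l', 'p']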
def isset(alias_name): warnings.warn('Will be removed in v1.0', DeprecationWarning, stacklevel=2) raw_value = read(alias_name, allow_none=True) if raw_value: if re.match(r'.+://.+:\d+', raw_value): return True else: warnings.warn('"{0}_PORT={1}" does not look like a docker link.'.format(alias_name, raw_value), stacklevel=2) return False return False
Return True if the docker link is set and looks like a valid docker link value, False otherwise. Args: alias_name: The link alias name
codesearchnet
def decompress(self, value: LocalizedValue) -> List[str]: result = [] for lang_code, _ in settings.LANGUAGES: if value: result.append(value.get(lang_code)) else: result.append(None) return result
Decompresses the specified value so it can be spread over the internal widgets. Arguments: value: The :see:LocalizedValue to display in this widget. Returns: All values to display in the inner widgets.
juraj-google-style
def _wait_for_any_job(provider, job_ids, poll_interval): if not job_ids: return while True: tasks = provider.lookup_job_tasks({'*'}, job_ids=job_ids) running_jobs = set() failed_jobs = set() for t in tasks: status = t.get_field('task-status') job_id = t.get_field('job-id') if status in ['FAILURE', 'CANCELED']: failed_jobs.add(job_id) if status == 'RUNNING': running_jobs.add(job_id) remaining_jobs = running_jobs.difference(failed_jobs) if failed_jobs or len(remaining_jobs) != len(job_ids): return remaining_jobs SLEEP_FUNCTION(poll_interval)
Waits until any of the listed jobs is not running. In particular, if any of the jobs sees one of its tasks fail, we count the whole job as failing (but do not terminate the remaining tasks ourselves). Args: provider: job service provider job_ids: a list of job IDs (string) to wait for poll_interval: integer seconds to wait between iterations Returns: A set of the jobIDs with still at least one running task.
juraj-google-style
def goto(directory, create=False): current = os.getcwd() directory = os.path.abspath(directory) if os.path.isdir(directory) or (create and mkdir(directory)): logger.info("goto -> %s", directory) os.chdir(directory) try: yield True finally: logger.info("goto <- %s", directory) os.chdir(current) else: logger.info( "goto(%s) - directory does not exist, or cannot be " "created.", directory, ) yield False
Context object for changing directory. Args: directory (str): Directory to go to. create (bool): Create directory if it doesn't exist. Usage:: >>> with goto(directory) as ok: ... if not ok: ... print('Error') ... else: ... print('All OK')
juraj-google-style
def remove_child(self, child): if ((child in self.children.values()) and hasattr(child, 'identifier')): for k in self.children.keys(): if hasattr(self.children[k], 'identifier'): if (self.children[k].identifier == child.identifier): if (k in self._render_children_list): self._render_children_list.remove(k) self.children.pop(k) break
Removes a child instance from the Tag's children. Args: child (Tag): The child to be removed.
codesearchnet
def add_migrations(self, migrations): if self.__closed: raise MigrationSessionError("Can't change applied session") self._to_apply.extend(migrations)
Add migrations to be applied. Args: migrations: a list of migrations to add of the form [(app, migration_name), ...] Raises: MigrationSessionError if called on a closed MigrationSession
juraj-google-style
def booleans_processing(config, **kwargs): final_booleans = {} if 'output_attentions' in kwargs: final_booleans['output_attentions'] = kwargs['output_attentions'] if kwargs['output_attentions'] is not None else config.output_attentions final_booleans['output_hidden_states'] = kwargs['output_hidden_states'] if kwargs['output_hidden_states'] is not None else config.output_hidden_states final_booleans['return_dict'] = kwargs['return_dict'] if kwargs['return_dict'] is not None else config.return_dict if 'use_cache' in kwargs: final_booleans['use_cache'] = kwargs['use_cache'] if kwargs['use_cache'] is not None else getattr(config, 'use_cache', None) return final_booleans
Process the input booleans of each model. Args: config ([`PretrainedConfig`]): The config of the running model. **kwargs: The boolean parameters Returns: A dictionary with the proper values for each boolean
github-repos
def __init__(self, isbn): super(Isbn, self).__init__() self._isbn = isbn if len(isbn) in (9, 12): self.isbn = _isbn_cleanse(isbn, False) else: self.isbn = _isbn_cleanse(isbn)
Initialise a new ``Isbn`` object. Args: isbn (str): ISBN string
juraj-google-style
def AddUserAccount(self, user_account, session_identifier=CURRENT_SESSION): if (session_identifier not in self._user_accounts): self._user_accounts[session_identifier] = {} user_accounts = self._user_accounts[session_identifier] if (user_account.identifier in user_accounts): raise KeyError('User account: {0:s} already exists.'.format(user_account.identifier)) user_accounts[user_account.identifier] = user_account
Adds an user account. Args: user_account (UserAccountArtifact): user account artifact. session_identifier (Optional[str])): session identifier, where CURRENT_SESSION represents the active session. Raises: KeyError: if the user account already exists.
codesearchnet
def get_file_type(variant_source): file_type = 'unknown' valid_vcf_suffixes = ('.vcf', '.vcf.gz') if variant_source: logger.debug("Check file type with file: {0}".format(variant_source)) if variant_source.endswith('.db'): file_type = 'gemini' logger.debug("File {0} is a gemini database".format(variant_source)) elif variant_source.endswith(valid_vcf_suffixes): file_type = 'vcf' logger.debug("File {0} is a vcf".format(variant_source)) else: logger.debug("File is in a unknown format") return file_type
Check what kind of file variant source is Args: variant_source (str): Path to variant source Returns: file_type (str): 'vcf', 'gemini' or 'unknown'
juraj-google-style
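Examples matching the suffix checks in get_file_type():

get_file_type('sample.vcf')     # 'vcf'
get_file_type('sample.vcf.gz')  # 'vcf'
get_file_type('variants.db')    # 'gemini'
get_file_type('notes.txt')      # 'unknown'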
def add_to_writer(self, writer: PdfFileWriter, start_recto: bool=True) -> None: if self.is_html: pdf = get_pdf_from_html(html=self.html, header_html=self.header_html, footer_html=self.footer_html, wkhtmltopdf_filename=self.wkhtmltopdf_filename, wkhtmltopdf_options=self.wkhtmltopdf_options) append_memory_pdf_to_writer(pdf, writer, start_recto=start_recto) elif self.is_filename: if (start_recto and ((writer.getNumPages() % 2) != 0)): writer.addBlankPage() writer.appendPagesFromReader(PdfFileReader(open(self.filename, 'rb'))) else: raise AssertionError("PdfPlan: shouldn't get here!")
Add the PDF described by this class to a PDF writer. Args: writer: a :class:`PyPDF2.PdfFileWriter` start_recto: start a new right-hand page?
codesearchnet
def depth_soil_density(self, value=None): if (value is not None): try: value = float(value) except ValueError: raise ValueError('value {} need to be of type float for field `depth_soil_density`'.format(value)) self._depth_soil_density = value
Corresponds to IDD Field `depth_soil_density` Args: value (float): value for IDD Field `depth_soil_density` Unit: kg/m3 if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
codesearchnet
def submit_evaluation(self, variant_obj, user_obj, institute_obj, case_obj, link, criteria): variant_specific = variant_obj['_id'] variant_id = variant_obj['variant_id'] user_id = user_obj['_id'] user_name = user_obj.get('name', user_obj['_id']) institute_id = institute_obj['_id'] case_id = case_obj['_id'] evaluation_terms = [evluation_info['term'] for evluation_info in criteria] classification = get_acmg(evaluation_terms) evaluation_obj = build_evaluation( variant_specific=variant_specific, variant_id=variant_id, user_id=user_id, user_name=user_name, institute_id=institute_id, case_id=case_id, classification=classification, criteria=criteria ) self._load_evaluation(evaluation_obj) self.update_acmg(institute_obj, case_obj, user_obj, link, variant_obj, classification) return classification
Submit an evaluation to the database Get all the relevant information, build a evaluation_obj Args: variant_obj(dict) user_obj(dict) institute_obj(dict) case_obj(dict) link(str): variant url criteria(list(dict)): [ { 'term': str, 'comment': str, 'links': list(str) }, . . ]
juraj-google-style
def WriteSerialized(cls, attribute_container): json_dict = cls.WriteSerializedDict(attribute_container) return json.dumps(json_dict)
Writes an attribute container to serialized form. Args: attribute_container (AttributeContainer): attribute container. Returns: str: A JSON string containing the serialized form.
juraj-google-style
def _send_http_request(self, xml_request): headers = {'Host': self._host, 'Content-Type': 'text/xml', 'Recipient': self._storage} try: self._connection.request('POST', self._selector_url, xml_request, headers) response = self._connection.getresponse() except (httplib.CannotSendRequest, httplib.BadStatusLine): Debug.warn('\nRestarting socket, resending message!') self._open_connection() self._connection.request('POST', self._selector_url, xml_request, headers) response = self._connection.getresponse() data = response.read() return data
Send a request via HTTP protocol. Args: xml_request -- A fully formed xml request string for the CPS. Returns: The raw xml response string.
codesearchnet
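The retry-once-on-stale-socket pattern above, restated as a self-contained sketch using Python 3's `http.client` (the original targets Python 2's `httplib`).

import http.client

def post_with_retry(make_connection, conn, url, body, headers):
    try:
        conn.request('POST', url, body, headers)
        response = conn.getresponse()
    except (http.client.CannotSendRequest, http.client.BadStatusLine):
        conn = make_connection()  # reopen the socket and resend once
        conn.request('POST', url, body, headers)
        response = conn.getresponse()
    return conn, response.read()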
def label_count(self, label_list_ids=None): count = collections.defaultdict(int) for utterance in self.utterances.values(): for (label_value, utt_count) in utterance.label_count(label_list_ids=label_list_ids).items(): count[label_value] += utt_count return count
Return a dictionary containing the number of times each label-value
        occurs in this corpus.

        Args:
            label_list_ids (list): If not None, only labels from label-lists
                with an id contained in this list are considered.

        Returns:
            dict: A dictionary with the label-value as key and the number of
                occurrences as value.
codesearchnet
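The aggregation pattern in isolation, with plain dicts standing in for per-utterance label counts.

import collections

per_utterance = [{'speech': 2, 'music': 1}, {'speech': 3}]
count = collections.defaultdict(int)
for utt_counts in per_utterance:
    for label_value, utt_count in utt_counts.items():
        count[label_value] += utt_count
print(dict(count))  # {'speech': 5, 'music': 1}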
def signature_to_callable(self, sig): base_cls = self.ctx.convert.function_type ret = sig.annotations.get('return', self.ctx.convert.unsolvable) if not sig.kwonly_params and (self._detailed or sig.mandatory_param_count() == sig.maximum_param_count()): args = [sig.annotations.get(name, self.ctx.convert.unsolvable) for name in sig.param_names] params = {abstract_utils.ARGS: self.ctx.convert.merge_values(args), abstract_utils.RET: ret} params.update(enumerate(args)) return abstract.CallableClass(base_cls, params, self.ctx) else: params = {abstract_utils.ARGS: self.ctx.convert.unsolvable, abstract_utils.RET: ret} return abstract.ParameterizedClass(base_cls, params, self.ctx)
Converts a function.Signature object into a callable object. Args: sig: The signature to convert. Returns: An abstract.CallableClass representing the signature, or an abstract.ParameterizedClass if the signature has a variable number of arguments.
github-repos
def sspro_summary(self): summary = {} records = ssbio.protein.sequence.utils.fasta.load_fasta_file(self.out_sspro) for r in records: seq_summary = {} seq_summary['percent_H-sspro'] = (r.seq.count('H') / float(len(r))) seq_summary['percent_E-sspro'] = (r.seq.count('E') / float(len(r))) seq_summary['percent_C-sspro'] = (r.seq.count('C') / float(len(r))) summary[r.id] = seq_summary return summary
Parse the SSpro output file and return a summary of secondary structure composition. The output file is just a FASTA formatted file, so you can get residue level information by parsing it like a normal sequence file. Returns: dict: Percentage of: H: helix E: strand C: the rest
codesearchnet
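The per-record composition calculation, shown standalone for a single secondary-structure string.

seq = 'HHHHEEECCC'
summary = {'percent_%s-sspro' % s: seq.count(s) / float(len(seq)) for s in 'HEC'}
print(summary)
# {'percent_H-sspro': 0.4, 'percent_E-sspro': 0.3, 'percent_C-sspro': 0.3}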
def get_feature_variable_string(self, feature_key, variable_key, user_id, attributes=None): variable_type = entities.Variable.Type.STRING return self._get_feature_variable_for_type(feature_key, variable_key, variable_type, user_id, attributes)
Returns value for a certain string variable attached to a feature. Args: feature_key: Key of the feature whose variable's value is being accessed. variable_key: Key of the variable whose value is to be accessed. user_id: ID for user. attributes: Dict representing user attributes. Returns: String value of the variable. None if: - Feature key is invalid. - Variable key is invalid. - Mismatch with type of variable.
codesearchnet
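A hypothetical call; the client instance, feature key, variable key, and attributes are made-up examples.

button_text = optimizely_client.get_feature_variable_string(
    'checkout_redesign', 'cta_label', user_id='user-123',
    attributes={'plan': 'premium'})
if button_text is None:
    button_text = 'Buy now'  # fall back when the key is invalid or the type mismatches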
def claim(self, file_readers): (prefix_to_reader, unclaimed_readers) = self._find_strelka_files(file_readers) prefix_by_patients = self._split_prefix_by_patient(prefix_to_reader) self._validate_vcf_readers(prefix_by_patients) vcf_readers = self._create_vcf_readers(prefix_to_reader) return (unclaimed_readers, vcf_readers)
Recognizes and claims Strelka VCFs from the set of all input VCFs.

        Each defined caller has a chance to evaluate and claim all the incoming
        files as something that it can process.

        Args:
            file_readers: the collection of currently unclaimed files

        Returns:
            A tuple of unclaimed readers and StrelkaVcfReaders.
juraj-google-style
def op_signature_def(op, key): return build_signature_def(outputs={key: utils.build_tensor_info_from_op(op)})
Creates a signature def with the output pointing to an op. Note that op isn't strictly enforced to be an Op object, and may be a Tensor. It is recommended to use the build_signature_def() function for Tensors. Args: op: An Op (or possibly Tensor). key: Key to graph element in the SignatureDef outputs. Returns: A SignatureDef with a single output pointing to the op.
github-repos
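A sketch with TF1-style APIs, assuming `op_signature_def` above is in scope.

import tensorflow.compat.v1 as tf

with tf.Graph().as_default():
    init_op = tf.global_variables_initializer()
    sig = op_signature_def(init_op, key='init')
    # sig.outputs['init'] is a TensorInfo that names the op for later lookup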
def add_object_to_path(self, obj, location): location = self._handle_location(location) location.append(obj.as_list_data()) results = [item for item in location.getchildren() if (item.findtext('id') == obj.id)][0] return results
Add an object of type JSSContainerObject to location.

        This method determines the correct list representation of an object
        and adds it to "location". For example, add a Computer to a
        ComputerGroup. The ComputerGroup will then have a child
        Computers/Computer tag with subelements "name" and "id".

        Args:
            obj: A JSSContainerObject subclass.
            location: Element or a string path argument to find()

        Returns:
            Element for the object just added.
codesearchnet
def report( vulnerabilities, fileobj, print_sanitised, ): TZ_AGNOSTIC_FORMAT = "%Y-%m-%dT%H:%M:%SZ" time_string = datetime.utcnow().strftime(TZ_AGNOSTIC_FORMAT) machine_output = { 'generated_at': time_string, 'vulnerabilities': [ vuln.as_dict() for vuln in vulnerabilities if print_sanitised or not isinstance(vuln, SanitisedVulnerability) ] } result = json.dumps( machine_output, indent=4 ) with fileobj: fileobj.write(result)
Prints issues in JSON format.

    Args:
        vulnerabilities: list of vulnerabilities to report
        fileobj: The output file object, which may be sys.stdout
        print_sanitised: whether sanitised vulnerabilities are included
            in the report
juraj-google-style
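A minimal invocation sketch; `found_vulns` is an assumed list of objects exposing `as_dict()`. Note that the function closes `fileobj` itself, so pass a freshly opened handle.

report(found_vulns, open('report.json', 'w'), print_sanitised=False)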
def consume(self, callback, queue): self.consumers[queue] = callback if self._client_ready.called: return self.client.consume(callback, queue)
Register a new consumer. This consumer will be configured for every protocol this factory produces so it will be reconfigured on network failures. If a connection is already active, the consumer will be added to it. Args: callback (callable): The callback to invoke when a message arrives. queue (str): The name of the queue to consume from.
juraj-google-style
def from_json(cls, data): assert 'name' in data, 'Required keyword "name" is missing!' assert 'data_type' in data, 'Required keyword "data_type" is missing!' if cls._type_enumeration is None: cls._type_enumeration = _DataTypeEnumeration(import_modules=False) if data['data_type'] == 'GenericType': assert 'base_unit' in data, \ 'Keyword "base_unit" is missing and is required for GenericType.' return cls._type_enumeration._GENERICTYPE(data['name'], data['base_unit']) elif data['data_type'] in cls._type_enumeration._TYPES: clss = cls._type_enumeration._TYPES[data['data_type']] if data['data_type'] == data['name'].title().replace(' ', ''): return clss() else: instance = clss() instance._name = data['name'] return instance else: raise ValueError( 'Data Type {} could not be recognized'.format(data['data_type']))
Create a data type from a dictionary. Args: data: Data as a dictionary. { "name": data type name of the data type as a string "data_type": the class name of the data type as a string "base_unit": the base unit of the data type }
juraj-google-style
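A hypothetical round trip following the docstring's schema; the owning class name `DataTypeBase` is an assumption.

temp_type = DataTypeBase.from_json({'name': 'Temperature', 'data_type': 'Temperature'})
score_type = DataTypeBase.from_json(
    {'name': 'Score', 'data_type': 'GenericType', 'base_unit': 'pts'})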
def get_vocabulary(self, include_special_tokens=True): if self.lookup_table.size() == 0: vocab, indices = ([], []) else: keys, values = self.lookup_table.export() vocab, indices = (values, keys) if self.invert else (keys, values) vocab, indices = (self._tensor_vocab_to_numpy(vocab), indices.numpy()) lookup = collections.defaultdict(lambda: self.oov_token, zip(indices, vocab)) vocab = [lookup[x] for x in range(self.vocabulary_size())] if self.mask_token is not None and self.output_mode == 'int': vocab[0] = self.mask_token if not include_special_tokens: vocab = vocab[self._token_start_index():] if self.vocabulary_dtype == 'string': return [i.decode('utf-8') if isinstance(i, bytes) else i for i in vocab] else: return vocab
Returns the current vocabulary of the layer. Args: include_special_tokens: If `True`, the returned vocabulary will include mask and OOV tokens, and a term's index in the vocabulary will equal the term's index when calling the layer. If `False`, the returned vocabulary will not include any mask or OOV tokens.
github-repos
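A runnable sketch with `keras.layers.StringLookup`, one of the lookup layers exposing this method.

import tensorflow as tf

layer = tf.keras.layers.StringLookup(vocabulary=['apple', 'banana'])
print(layer.get_vocabulary())                              # ['[UNK]', 'apple', 'banana']
print(layer.get_vocabulary(include_special_tokens=False))  # ['apple', 'banana']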
def occurs_in_type(v, type2): pruned_type2 = prune(type2) if (pruned_type2 == v): return True elif isinstance(pruned_type2, TypeOperator): return occurs_in(v, pruned_type2.types) return False
Checks whether a type variable occurs in a type expression. Note: Must be called with v pre-pruned Args: v: The TypeVariable to be tested for type2: The type in which to search Returns: True if v occurs in type2, otherwise False
codesearchnet
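A self-contained sketch; the stand-ins below are minimal versions of the helpers the function relies on (a real unifier's `prune` would also apply substitutions), and `occurs_in_type` itself is assumed in scope.

class TypeVariable:
    pass

class TypeOperator:
    def __init__(self, name, types):
        self.name, self.types = name, types

def prune(t):
    return t  # no substitutions in this sketch

def occurs_in(v, types):
    return any(occurs_in_type(v, t) for t in types)

v = TypeVariable()
print(occurs_in_type(v, TypeOperator('list', [v])))  # True
print(occurs_in_type(v, TypeOperator('int', [])))    # False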
def _GenerateSshKey(self, key_type, key_dest):
    with tempfile.NamedTemporaryFile(prefix=key_type, delete=True) as temp:
        temp_key = temp.name
    command = ['ssh-keygen', '-t', key_type, '-f', temp_key, '-N', '', '-q']
    try:
        self.logger.info('Generating SSH key %s.', key_dest)
        subprocess.check_call(command)
    except subprocess.CalledProcessError:
        self.logger.warning('Could not create SSH key %s.', key_dest)
        return
    shutil.move(temp_key, key_dest)
    shutil.move('%s.pub' % temp_key, '%s.pub' % key_dest)
    file_utils.SetPermissions(key_dest, mode=0o600)  # private key: owner read/write only
    file_utils.SetPermissions('%s.pub' % key_dest, mode=0o644)  # public key: world-readable
Generate a new SSH key. Args: key_type: string, the type of the SSH key. key_dest: string, a file location to store the SSH key.
codesearchnet
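The generate-under-a-temp-name-then-move pattern, sketched standalone; it assumes `ssh-keygen` is on the PATH, and the destination paths are hypothetical.

import shutil
import subprocess
import tempfile

with tempfile.NamedTemporaryFile(prefix='ed25519', delete=True) as temp:
    temp_key = temp.name  # reserve a unique name; the file is removed when the block exits
subprocess.check_call(['ssh-keygen', '-t', 'ed25519', '-f', temp_key, '-N', '', '-q'])
shutil.move(temp_key, '/tmp/demo_key')
shutil.move(temp_key + '.pub', '/tmp/demo_key.pub')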
def assert_positive_definite(self, name='assert_positive_definite'): with self._name_scope(name): return self._assert_positive_definite()
Returns an `Op` that asserts this operator is positive definite. Here, positive definite means that the quadratic form `x^H A x` has positive real part for all nonzero `x`. Note that we do not require the operator to be self-adjoint to be positive definite. Args: name: A name to give this `Op`. Returns: An `Assert` `Op`, that, when run, will raise an `InvalidArgumentError` if the operator is not positive definite.
github-repos
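A sketch with `tf.linalg.LinearOperatorDiag`, which overrides the check to assert that the diagonal has positive real part.

import tensorflow as tf

operator = tf.linalg.LinearOperatorDiag([2.0, 3.0])
operator.assert_positive_definite()  # passes; raises InvalidArgumentError for e.g. [2.0, -3.0]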
def createEditor(self, parent, option, index):
    editor = BigIntSpinbox(parent)
    try:
        editor.setMinimum(self.minimum)
        editor.setMaximum(self.maximum)
        editor.setSingleStep(self.singleStep)
    except TypeError:
        # bounds/step were not configured with numeric values; keep the defaults
        pass
    return editor
Returns the widget used to edit the item specified by index.
        The parent widget and style option are used to control how the
        editor widget appears.

        Args:
            parent (QWidget): parent widget.
            option (QStyleOptionViewItem): controls how editor widget appears.
            index (QModelIndex): model data index.
juraj-google-style
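Hypothetical wiring of the delegate into a view (PyQt-style); the delegate class name and its constructor arguments are assumptions.

delegate = BigIntSpinboxDelegate(minimum=0, maximum=2**63 - 1, singleStep=1)
table_view.setItemDelegateForColumn(0, delegate)  # the spinbox editor opens on edit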