Columns: code (string, 20 to 4.93k characters), docstring (string, 33 to 1.27k characters), source (string, 3 classes).
def process_fixed_issues(self, volumes, existing_issues):
    fixed_issues = []
    for issue_id, issue in list(existing_issues.items()):
        if issue_id not in volumes:
            fixed_issues.append(issue)
    return fixed_issues
Provided a list of volumes and existing issues, returns a list of fixed issues to be deleted Args: volumes (`dict`): A dictionary keyed on the issue id, with the :obj:`Volume` object as the value existing_issues (`dict`): A dictionary keyed on the issue id, with the :obj:`EBSVolumeAuditIssue` object as the value Returns: :obj:`list` of :obj:`EBSVolumeAuditIssue`
codesearchnet
def security(self, domains):
    api_name = 'opendns-security'
    fmt_url_path = u'security/name/{0}.json'
    return self._multi_get(api_name, fmt_url_path, domains)
Calls security end point and adds an 'is_suspicious' key to each response. Args: domains: An enumerable of strings Returns: A dict of {domain: security_result}
codesearchnet
def _ParseMRUListEntryValue( self, parser_mediator, registry_key, entry_index, entry_letter, **kwargs): value_string = '' value = registry_key.GetValueByName('{0:s}'.format(entry_letter)) if value is None: parser_mediator.ProduceExtractionWarning( 'missing MRUList value: {0:s} in key: {1:s}.'.format( entry_letter, registry_key.path)) elif value.DataIsString(): value_string = value.GetDataAsObject() elif value.DataIsBinaryData(): logger.debug(( '[{0:s}] Non-string MRUList entry value: {1:s} parsed as string ' 'in key: {2:s}.').format(self.NAME, entry_letter, registry_key.path)) utf16le_string_map = self._GetDataTypeMap('utf16le_string') try: value_string = self._ReadStructureFromByteStream( value.data, 0, utf16le_string_map) except (ValueError, errors.ParseError) as exception: parser_mediator.ProduceExtractionWarning(( 'unable to parse MRUList entry value: {0:s} with error: ' '{1!s}').format(entry_letter, exception)) value_string = value_string.rstrip('\x00') return value_string
Parses the MRUList entry value. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. registry_key (dfwinreg.WinRegistryKey): Windows Registry key that contains the MRUList value. entry_index (int): MRUList entry index. entry_letter (str): character value representing the entry. Returns: str: MRUList entry value.
juraj-google-style
def __init__(self, ppp_config_dir=None, enhancement_config_file=None):
    self.ppp_config_dir = ppp_config_dir or get_environ_config_dir()
    self.enhancement_config_file = enhancement_config_file
    if self.enhancement_config_file is None:
        config_fn = os.path.join("enhancements", "generic.yaml")
        self.enhancement_config_file = config_search_paths(config_fn, self.ppp_config_dir)
    if not self.enhancement_config_file:
        self.enhancement_tree = None
    else:
        if not isinstance(self.enhancement_config_file, (list, tuple)):
            self.enhancement_config_file = [self.enhancement_config_file]
        self.enhancement_tree = EnhancementDecisionTree(*self.enhancement_config_file)
    self.sensor_enhancement_configs = []
Initialize an Enhancer instance. Args: ppp_config_dir: Points to the base configuration directory enhancement_config_file: The enhancement configuration to apply, False to leave as is.
juraj-google-style
def scalar_pb(tag, data, description=None):
    arr = np.array(data)
    if arr.shape != ():
        raise ValueError('Expected scalar shape for tensor, got shape: %s.' % arr.shape)
    if arr.dtype.kind not in ('b', 'i', 'u', 'f'):
        raise ValueError('Cast %s to float is not supported' % arr.dtype.name)
    tensor_proto = tensor_util.make_tensor_proto(arr.astype(np.float32))
    summary_metadata = metadata.create_summary_metadata(
        display_name=None, description=description)
    summary = summary_pb2.Summary()
    summary.value.add(tag=tag, metadata=summary_metadata, tensor=tensor_proto)
    return summary
Create a scalar summary_pb2.Summary protobuf. Arguments: tag: String tag for the summary. data: A 0-dimensional `np.array` or a compatible python number type. description: Optional long-form description for this summary, as a `str`. Markdown is supported. Defaults to empty. Raises: ValueError: If the type or shape of the data is unsupported. Returns: A `summary_pb2.Summary` protobuf object.
codesearchnet
def connect(self, wire_char, where, label=None):
    if 'top' in where and self.top_connector:
        self.top_connect = self.top_connector[wire_char]
    if 'bot' in where and self.bot_connector:
        self.bot_connect = self.bot_connector[wire_char]
    if label:
        self.top_format = self.top_format[:-1] + (label if label else "")
Connects boxes and elements using wire_char and setting proper connectors. Args: wire_char (char): For example '║' or '│'. where (list["top", "bot"]): Where the connector should be set. label (string): Some connectors have a label (see cu1, for example).
juraj-google-style
def _get_params(self, validator_parameter, name_prefix):
    params_validator = self.request.get(validator_parameter)
    user_params = {}
    for key in self.request.arguments():
        if key.startswith(name_prefix):
            values = self.request.get_all(key)
            adjusted_key = key[len(name_prefix):]
            if len(values) == 1:
                user_params[adjusted_key] = values[0]
            else:
                user_params[adjusted_key] = values
    if params_validator:
        resolved_validator = util.for_name(params_validator)
        resolved_validator(user_params)
    return user_params
Retrieves additional user-supplied params for the job and validates them. Args: validator_parameter: name of the request parameter which supplies validator for this parameter set. name_prefix: common prefix for all parameter names in the request. Raises: Any exception raised by the 'params_validator' request parameter if the params fail to validate. Returns: The user parameters.
juraj-google-style
def CopyFromDict(self, attributes):
    for attribute_name, attribute_value in attributes.items():
        if attribute_name[0] == '_':
            continue
        setattr(self, attribute_name, attribute_value)
Copies the attribute container from a dictionary. Args: attributes (dict[str, object]): attribute values per name.
codesearchnet
def new_scope(self, new_scope={}):
    old_scopes, self.scopes = self.scopes, self.scopes.new_child(new_scope)
    yield
    self.scopes = old_scopes
Add a new innermost scope for the duration of the with block. Args: new_scope (dict-like): The scope to add.
juraj-google-style
def __init__(self, atlas_name, root_dir, reference_gempro, reference_genome_path=None, description=None): Object.__init__(self, id=atlas_name, description=description) self._root_dir = None self.root_dir = root_dir self.strains = DictList() self.df_orthology_matrix = pd.DataFrame() self._orthology_matrix_has_sequences = False self.reference_gempro = reference_gempro if not reference_genome_path and not self.reference_gempro.genome_path: self.reference_gempro.genome_path = self.reference_gempro.write_representative_sequences_file(outname=self.reference_gempro.id) else: self.reference_gempro.genome_path = reference_genome_path self._empty_reference_gempro = None if self.reference_gempro.model: self._empty_reference_gempro = GEMPRO(gem_name='Copied reference GEM-PRO', gem=self.reference_gempro.model.copy()) for x in self._empty_reference_gempro.genes: x.reset_protein() else: strain_genes = [x.id for x in self.reference_gempro.genes] if len(strain_genes) == 0: raise ValueError('GEM-PRO has no genes, unable to run multi-strain analysis') self._empty_reference_gempro = GEMPRO(gem_name='Copied reference GEM-PRO', genes_list=strain_genes)
Prepare a GEM-PRO model for ATLAS analysis Args: atlas_name (str): Name of your ATLAS project root_dir (str): Path to where the folder named after ``atlas_name`` will be created. reference_gempro (GEMPRO): GEM-PRO model to use as the reference genome reference_genome_path (str): Path to reference genome FASTA file description (str): Optional string to describe your project
juraj-google-style
def reset_internal_states(self, record=None):
    self._record = None
    self._count = 0
    self._record = record
Resets the internal state of the recorder. Args: record: records.TestResultRecord, the test record for a test.
github-repos
def expected_value(self):
    alpha = self.__success + self.__default_alpha
    beta = self.__failure + self.__default_beta
    try:
        expected_value = alpha / (alpha + beta)
    except ZeroDivisionError:
        expected_value = 0.0
    return expected_value
Compute expected value. Returns: Expected value.
codesearchnet
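The computation above is the posterior mean of a Beta distribution. A standalone sketch of the same arithmetic, with hypothetical counts and a uniform Beta(1, 1) prior rather than the class attributes used above:

# Standalone sketch of the Beta-posterior expected value; counts and priors are illustrative.
def beta_expected_value(successes, failures, default_alpha=1.0, default_beta=1.0):
    alpha = successes + default_alpha
    beta = failures + default_beta
    try:
        return alpha / (alpha + beta)
    except ZeroDivisionError:
        return 0.0

print(beta_expected_value(8, 2))  # -> 0.75 with a uniform Beta(1, 1) prior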
def delta_stoichiometry(reactants, products):
    totals = Counter()
    for r in reactants:
        totals.update((r * -1.0).stoichiometry)
    for p in products:
        totals.update(p.stoichiometry)
    to_return = {}
    for c in totals:
        if totals[c] != 0:
            to_return[c] = totals[c]
    return to_return
Calculate the change in stoichiometry for reactants --> products. Args: reactants (list(vasppy.Calculation)): A list of vasppy.Calculation objects. The initial state. products (list(vasppy.Calculation)): A list of vasppy.Calculation objects. The final state. Returns: (dict): The change in stoichiometry, keyed by species.
juraj-google-style
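A hedged illustration of the Counter arithmetic above. FakeCalc is a stand-in that exposes only the pieces of vasppy.Calculation that delta_stoichiometry touches (a stoichiometry Counter and scalar multiplication); the formulas are made up.

from collections import Counter

# Stand-in for vasppy.Calculation: just enough surface for the example.
class FakeCalc:
    def __init__(self, stoichiometry):
        self.stoichiometry = Counter(stoichiometry)
    def __mul__(self, factor):
        return FakeCalc({k: v * factor for k, v in self.stoichiometry.items()})

reactants = [FakeCalc({'Li': 2, 'O': 1})]   # hypothetical Li2O
products = [FakeCalc({'Li': 2, 'O': 2})]    # hypothetical Li2O2
totals = Counter()
for r in reactants:
    totals.update((r * -1.0).stoichiometry)
for p in products:
    totals.update(p.stoichiometry)
print({k: v for k, v in totals.items() if v != 0})  # {'O': 1.0}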
def __init__(self, key, secret):
    self.__key = key
    self.__secret = secret
    if self.__key is None or self.__secret is None:
        raise ValueError("Key and secret must be set.")
Handles token authentication for Neurio Client. Args: key (string): your Neurio API key secret (string): your Neurio API secret
juraj-google-style
def peek(quantity, min_type=EventType.firstevent, max_type=EventType.lastevent):
    return _peep(quantity, lib.SDL_PEEKEVENT, min_type, max_type)
Return events at the front of the event queue, within the specified minimum and maximum type, and do not remove them from the queue. Args: quantity (int): The maximum number of events to return. min_type (int): The minimum value for the event type of the returned events. max_type (int): The maximum value for the event type of the returned events. Returns: List[Event]: Events from the front of the event queue. Raises: SDLError: If there was an error retrieving the events.
codesearchnet
def get_pending_users_queryset(self, search_keyword, customer_uuid):
    queryset = PendingEnterpriseCustomerUser.objects.filter(
        enterprise_customer__uuid=customer_uuid
    )
    if search_keyword is not None:
        queryset = queryset.filter(user_email__icontains=search_keyword)
    return queryset
Get the list of PendingEnterpriseCustomerUsers we want to render. Args: search_keyword (str): The keyword to search for in pending users' email addresses. customer_uuid (str): A unique identifier to filter down to only pending users linked to a particular EnterpriseCustomer.
juraj-google-style
def _compose_custom_getters(getter_a, getter_b):
    if not getter_a:
        return getter_b
    if not getter_b:
        return getter_a

    def getter_fn(getter, *args, **kwargs):
        return getter_b(functools.partial(getter_a, getter), *args, **kwargs)

    return getter_fn
Compose two custom getters. Example use: tf.get_variable_scope().set_custom_getter( compose_custom_getters(tf.get_variable_scope().custom_getter, new_getter)) This composes getters in the same way as creating a new variable scope with the new_getter, but it does not actually create a new variable scope. Args: getter_a: a custom getter - generally from the existing variable scope. getter_b: a custom getter Returns: a custom getter
codesearchnet
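The composition trick is independent of TensorFlow. A minimal sketch with plain functions standing in for custom getters; the getters and the base function are made up for illustration:

import functools

# Toy "custom getters": each wraps an inner getter and post-processes its result.
def double_getter(getter, *args, **kwargs):
    return getter(*args, **kwargs) * 2

def add_one_getter(getter, *args, **kwargs):
    return getter(*args, **kwargs) + 1

def compose_custom_getters(getter_a, getter_b):
    if not getter_a:
        return getter_b
    if not getter_b:
        return getter_a
    def getter_fn(getter, *args, **kwargs):
        # getter_b sees getter_a (already bound to the base getter) as its inner getter.
        return getter_b(functools.partial(getter_a, getter), *args, **kwargs)
    return getter_fn

base = lambda x: x  # stand-in for the variable-creating base getter
composed = compose_custom_getters(double_getter, add_one_getter)
print(composed(base, 3))  # add_one(double(base(3))) -> 7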
def download(url: str, filename: str, skip_cert_verify: bool = True) -> None:
    log.info("Downloading from {} to {}", url, filename)
    ctx = ssl.create_default_context()
    if skip_cert_verify:
        log.debug("Skipping SSL certificate check for " + url)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    with urllib.request.urlopen(url, context=ctx) as u, open(filename, 'wb') as f:
        f.write(u.read())
Downloads a URL to a file. Args: url: URL to download from filename: file to save to skip_cert_verify: skip SSL certificate check?
juraj-google-style
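A hedged usage note: the URL and destination below are placeholders. Certificate verification should normally stay on, so it is worth passing skip_cert_verify=False explicitly despite the permissive default.

# Hypothetical call to the function above; keep TLS verification enabled.
download('https://example.com/archive.tar.gz', '/tmp/archive.tar.gz',
         skip_cert_verify=False)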
def is_valid(container, path):
    # Initialise so the comparison below does not hit an unbound name when the
    # temporary hash file is missing.
    tmp_hash = None
    try:
        tmp_hash_path = container.filename + ".hash"
        with open(tmp_hash_path, 'r') as tmp_file:
            tmp_hash = tmp_file.readline()
    except IOError:
        LOG.info("No .hash-file in the tmp-directory.")

    container_hash_path = local.path(path) / "gentoo.tar.bz2.hash"
    if container_hash_path.exists():
        with open(container_hash_path, 'r') as hash_file:
            container_hash = hash_file.readline()
        return container_hash == tmp_hash
    return False
Checks if a container exists and is unpacked. Args: container: The container to check; must provide a ``filename`` attribute. path: The location where the container is expected. Returns: True if the container is valid, False if the container needs to be unpacked or if the path does not exist yet.
juraj-google-style
def generate_meta_features(path, base_learner_id): with functions.DBContextManager(path) as session: base_learner = session.query(models.BaseLearner).filter_by(id=base_learner_id).first() if not base_learner: raise exceptions.UserError('Base learner {} ' 'does not exist'.format(base_learner_id)) base_learner.job_id = get_current_job().id base_learner.job_status = 'started' session.add(base_learner) session.commit() try: est = base_learner.return_estimator() extraction = session.query(models.Extraction).first() X, y = extraction.return_train_dataset() return_splits_iterable = functions.import_object_from_string_code( extraction.meta_feature_generation['source'], 'return_splits_iterable' ) meta_features_list = [] trues_list = [] for train_index, test_index in return_splits_iterable(X, y): X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] est = est.fit(X_train, y_train) meta_features_list.append( getattr(est, base_learner.base_learner_origin. meta_feature_generator)(X_test) ) trues_list.append(y_test) meta_features = np.concatenate(meta_features_list, axis=0) y_true = np.concatenate(trues_list) for key in base_learner.base_learner_origin.metric_generators: metric_generator = functions.import_object_from_string_code( base_learner.base_learner_origin.metric_generators[key], 'metric_generator' ) base_learner.individual_score[key] = metric_generator(y_true, meta_features) meta_features_path = base_learner.meta_features_path(path) if not os.path.exists(os.path.dirname(meta_features_path)): os.makedirs(os.path.dirname(meta_features_path)) np.save(meta_features_path, meta_features, allow_pickle=False) base_learner.job_status = 'finished' base_learner.meta_features_exists = True session.add(base_learner) session.commit() except: session.rollback() base_learner.job_status = 'errored' base_learner.description['error_type'] = repr(sys.exc_info()[0]) base_learner.description['error_value'] = repr(sys.exc_info()[1]) base_learner.description['error_traceback'] = \ traceback.format_exception(*sys.exc_info()) session.add(base_learner) session.commit() raise
Generates meta-features for specified base learner After generation of meta-features, the file is saved into the meta-features folder Args: path (str): Path to Xcessiv notebook base_learner_id (str): Base learner ID
juraj-google-style
def use(plugin):
    log.debug('register new plugin: {}'.format(plugin))
    if inspect.isfunction(plugin):
        return plugin(Engine)
    if plugin and hasattr(plugin, 'register'):
        return plugin.register(Engine)
    raise ValueError('invalid plugin: must be a function or implement register() method')
Register plugin in grappa. `plugin` argument can be a function or a object that implement `register` method, which should accept one argument: `grappa.Engine` instance. Arguments: plugin (function|module): grappa plugin object to register. Raises: ValueError: if `plugin` is not a valid interface. Example:: import grappa class MyOperator(grappa.Operator): pass def my_plugin(engine): engine.register(MyOperator) grappa.use(my_plugin)
codesearchnet
def rvs(self, size=1):
    return np.random.multivariate_normal(self.mean, self.cov, size)
Convenience method to sample from this distribution. Args: size (int or tuple): Shape of return value. Each element is drawn independently from this distribution.
codesearchnet
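The same draw can be reproduced directly with NumPy; the mean and covariance below are illustrative assumptions, not values taken from the class above.

import numpy as np

mean = np.array([0.0, 1.0])          # assumed 2-D mean
cov = np.array([[1.0, 0.2],
                [0.2, 2.0]])         # assumed positive-definite covariance
samples = np.random.multivariate_normal(mean, cov, size=5)
print(samples.shape)  # (5, 2): five independent draws from the distribution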
def _base_query(self, session):
    return session.query(ORMTargetMarker) \
        .filter(ORMTargetMarker.name == self.name) \
        .filter(ORMTargetMarker.params == self.params)
Base query for a target. Args: session: database session to query in
juraj-google-style
def visualize_conv_weights(filters, name):
    with tf.name_scope('visualize_w_' + name):
        filters = tf.transpose(filters, (3, 2, 0, 1))
        filters = tf.unstack(filters)
        filters = tf.concat(filters, 1)
        filters = tf.unstack(filters)
        filters = tf.concat(filters, 1)
        filters = tf.expand_dims(filters, 0)
        filters = tf.expand_dims(filters, -1)
    tf.summary.image('visualize_w_' + name, filters)
Visualize use weights in convolution filters. Args: filters: tensor containing the weights [H,W,Cin,Cout] name: label for tensorboard Returns: image of all weight
codesearchnet
def merged(self, timeslots: 'TimeslotCollection') -> 'TimeslotCollection':
    slots = [Timeslot(slot.interval, slot.channel) for slot in self.timeslots]
    slots.extend([Timeslot(slot.interval, slot.channel) for slot in timeslots.timeslots])
    return TimeslotCollection(*slots)
Return a new TimeslotCollection merged with a specified `timeslots` Args: timeslots: TimeslotCollection to be merged
juraj-google-style
def find_files(base_dir, extensions, exclude_dirs=list()):
    result = []
    for root, dir_names, file_names in os.walk(base_dir):
        for filename in file_names:
            candidate = os.path.join(root, filename)
            if should_include_file_in_search(candidate, extensions, exclude_dirs):
                result.append(candidate)
    return result
Find all files matching the given extensions. Args: base_dir (str): Path of base directory to search in. extensions (list): A list of file extensions to search for. exclude_dirs (list): A list of directories to exclude from search. Returns: list of paths that match the search
juraj-google-style
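A self-contained sketch of the same walk-and-filter idea. The extension and exclude-directory logic that find_files delegates to should_include_file_in_search is inlined here; how that helper actually behaves is only an assumption.

import os

def find_files_sketch(base_dir, extensions, exclude_dirs=()):
    result = []
    for root, _dirs, file_names in os.walk(base_dir):
        # Assumed exclusion rule: skip any directory whose path contains an excluded name.
        if any(ex in root for ex in exclude_dirs):
            continue
        for filename in file_names:
            if os.path.splitext(filename)[1] in extensions:
                result.append(os.path.join(root, filename))
    return result

# Hypothetical call: collect C sources under ./src, skipping build output.
print(find_files_sketch('./src', {'.c', '.h'}, exclude_dirs=('build',)))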
def delete_record(self, record):
    self.children.remove(record.resource)
    record.delete()
Remove a DNSRecord Args: record (:obj:`DNSRecord`): :obj:`DNSRecord` to remove Returns: `None`
codesearchnet
def mme_add(store, user_obj, case_obj, add_gender, add_features, add_disorders, genes_only, mme_base_url, mme_accepts, mme_token): if ((not mme_base_url) or (not mme_accepts) or (not mme_token)): return 'Please check that Matchmaker connection parameters are valid' url = ''.join([mme_base_url, '/patient/add']) features = [] disorders = [] g_features = [] contact_info = {'name': user_obj['name'], 'href': ''.join(['mailto:', user_obj['email']]), 'institution': 'Scout software user, Science For Life Laboratory, Stockholm, Sweden'} if add_features: features = hpo_terms(case_obj) if add_disorders: disorders = omim_terms(case_obj) server_responses = [] submitted_info = {'contact': contact_info, 'sex': add_gender, 'features': features, 'disorders': disorders, 'genes_only': genes_only, 'patient_id': []} for individual in case_obj.get('individuals'): if (not (individual['phenotype'] in [2, 'affected'])): continue patient = {'contact': contact_info, 'id': '.'.join([case_obj['_id'], individual.get('individual_id')]), 'label': '.'.join([case_obj['display_name'], individual.get('display_name')]), 'features': features, 'disorders': disorders} if add_gender: if (individual['sex'] == '1'): patient['sex'] = 'MALE' else: patient['sex'] = 'FEMALE' if case_obj.get('suspects'): g_features = genomic_features(store, case_obj, individual.get('display_name'), genes_only) patient['genomicFeatures'] = g_features resp = matchmaker_request(url=url, token=mme_token, method='POST', content_type=mme_accepts, accept='application/json', data={'patient': patient}) server_responses.append({'patient': patient, 'message': resp.get('message'), 'status_code': resp.get('status_code')}) submitted_info['server_responses'] = server_responses return submitted_info
Add a patient to MatchMaker server Args: store(adapter.MongoAdapter) user_obj(dict) a scout user object (to be added as matchmaker contact) case_obj(dict) a scout case object add_gender(bool) if True case gender will be included in matchmaker add_features(bool) if True HPO features will be included in matchmaker add_disorders(bool) if True OMIM diagnoses will be included in matchmaker genes_only(bool) if True only genes and not variants will be shared mme_base_url(str) base url of the MME server mme_accepts(str) request content accepted by MME server mme_token(str) auth token of the MME server Returns: submitted_info(dict) info submitted to MatchMaker and its responses
codesearchnet
def _generate_entry(self, vm):
    return (
        '{name} '
        'ansible_host={ip} '
        'ansible_ssh_private_key_file={key}'.format(
            name=vm.name(),
            ip=vm.ip(),
            key=self.prefix.paths.ssh_id_rsa(),
        )
    )
Generate host entry for the given VM Args: vm (lago.plugins.vm.VMPlugin): The VM for which the entry should be created for. Returns: str: An entry for vm
juraj-google-style
def _UpdateAndMigrateUnmerged(self, not_merged_stops, zone_map, merge_map, schedule): for (stop, migrated_stop) in not_merged_stops: if (stop.zone_id in zone_map): migrated_stop.zone_id = zone_map[stop.zone_id] else: migrated_stop.zone_id = self.feed_merger.GenerateId(stop.zone_id) zone_map[stop.zone_id] = migrated_stop.zone_id if stop.parent_station: parent_original = schedule.GetStop(stop.parent_station) migrated_stop.parent_station = merge_map[parent_original].stop_id self.feed_merger.merged_schedule.AddStopObject(migrated_stop)
Correct references in migrated unmerged stops and add to merged_schedule. For stops migrated from one of the input feeds to the output feed update the parent_station and zone_id references to point to objects in the output feed. Then add the migrated stop to the new schedule. Args: not_merged_stops: list of stops from one input feed that have not been merged zone_map: map from zone_id in the input feed to zone_id in the output feed merge_map: map from Stop objects in the input feed to Stop objects in the output feed schedule: the input Schedule object
codesearchnet
def _FormatDateTime(self, event):
    try:
        datetime_object = datetime.datetime(
            1970, 1, 1, 0, 0, 0, 0, tzinfo=pytz.UTC)
        datetime_object += datetime.timedelta(microseconds=event.timestamp)
        # astimezone() returns a new object; assign it so the conversion takes effect.
        datetime_object = datetime_object.astimezone(self._output_mediator.timezone)
        return datetime_object.replace(tzinfo=None)
    except (OverflowError, ValueError) as exception:
        self._ReportEventError(event, (
            'unable to copy timestamp: {0!s} to a human readable date and time '
            'with error: {1!s}. Defaulting to: "ERROR"').format(
                event.timestamp, exception))
        return 'ERROR'
Formats the date to a datetime object without timezone information. Note: timezone information must be removed due to lack of support by xlsxwriter and Excel. Args: event (EventObject): event. Returns: datetime.datetime|str: date and time value or a string containing "ERROR" on OverflowError.
juraj-google-style
def cuts_connections(self, a, b):
    n = max(self.indices) + 1
    return self.cut_matrix(n)[np.ix_(a, b)].any()
Check if this cut severs any connections from ``a`` to ``b``. Args: a (tuple[int]): A set of nodes. b (tuple[int]): A set of nodes.
juraj-google-style
def __init__(self, address, ap, data):
    super(WriteRequest, self).__init__(address=address, ap=ap, data=data)
Initializes the base class. Args: self (WriteRequest): the ``WriteRequest`` instance address (int): the register index ap (bool): ``True`` if this request is to an Access Port Access Register, otherwise ``False`` for a Debug Port Access Register Returns: ``None``
juraj-google-style
def fetch_committed_offsets(self, partitions):
    if not partitions:
        return {}

    while True:
        self.ensure_coordinator_ready()
        future = self._send_offset_fetch_request(partitions)
        self._client.poll(future=future)

        if future.succeeded():
            return future.value

        if not future.retriable():
            raise future.exception

        time.sleep(self.config['retry_backoff_ms'] / 1000)
Fetch the current committed offsets for specified partitions Arguments: partitions (list of TopicPartition): partitions to fetch Returns: dict: {TopicPartition: OffsetAndMetadata}
juraj-google-style
def weeks(value: Union[int, float]) -> Duration:
    return float(value * 60 * 60 * 24 * 7)
Converts input value from number of weeks to a `Duration` in seconds. ```python >>> a = tp.event_set( ... # Dates are converted to unix timestamps ... timestamps=["2020-01-01", "2020-01-07", "2020-01-31"], ... features={"f1": [1, 5, -5]} ... ) >>> a.moving_sum(window_length=tp.duration.weeks(2)) indexes: ... timestamps: ['2020-01-01T00:00:00' '2020-01-07T00:00:00' '2020-01-31T00:00:00'] 'f1': [ 1 6 -5] ... ``` Args: value: Number of weeks. Returns: Equivalent number of seconds.
github-repos
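A quick arithmetic check of the conversion factor (60 s x 60 min x 24 h x 7 days), independent of the Temporian API:

assert 60 * 60 * 24 * 7 == 604800                 # seconds in one week
assert float(2 * 60 * 60 * 24 * 7) == 1209600.0   # equivalent of weeks(2)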
def delete(self, paths):
    results = s3io.S3IO(options=self._options).delete_paths(paths)
    exceptions = {
        path: error for path, error in results.items() if error is not None
    }
    if exceptions:
        raise BeamIOError('Delete operation failed', exceptions)
Deletes files or directories at the provided paths. Directories will be deleted recursively. Args: paths: list of paths that give the file objects to be deleted
github-repos
def GetSshkeyMap(self, since=None):
    return SshkeyUpdateGetter().GetUpdates(self, self.conf['sshkey_url'], since)
Return the sshkey map from this source. Args: since: Get data only changed since this timestamp (inclusive) or None for all data. Returns: instance of sshkey.SshkeyMap
github-repos
def TSKVolumeGetBytesPerSector(tsk_volume):
    if hasattr(tsk_volume, 'info') and tsk_volume.info is not None:
        block_size = getattr(tsk_volume.info, 'block_size', 512)
    else:
        block_size = 512
    return block_size
Retrieves the number of bytes per sector from a TSK volume object. Args: tsk_volume (pytsk3.Volume_Info): TSK volume information. Returns: int: number of bytes per sector or 512 by default.
juraj-google-style
def remove_comp_items(self, context_word, comp_items):
    if context_word not in self._comp_dict:
        raise KeyError('Context word "%s" has not been registered' % context_word)
    for item in comp_items:
        self._comp_dict[context_word].remove(item)
Remove a list of completion items from a completion context. Args: context_word: A single completion word as a string. The removal will also apply to all other context words of the same context. comp_items: Completion items to remove. Raises: KeyError: if the context word has not been registered.
github-repos
def _make_pr_entry(self, step, wall_time, data_array, thresholds): true_positives = [int(v) for v in data_array[metadata.TRUE_POSITIVES_INDEX]] false_positives = [ int(v) for v in data_array[metadata.FALSE_POSITIVES_INDEX]] tp_index = metadata.TRUE_POSITIVES_INDEX fp_index = metadata.FALSE_POSITIVES_INDEX positives = data_array[[tp_index, fp_index], :].astype(int).sum(axis=0) end_index_inclusive = len(positives) - 1 while end_index_inclusive > 0 and positives[end_index_inclusive] == 0: end_index_inclusive -= 1 end_index = end_index_inclusive + 1 return { 'wall_time': wall_time, 'step': step, 'precision': data_array[metadata.PRECISION_INDEX, :end_index].tolist(), 'recall': data_array[metadata.RECALL_INDEX, :end_index].tolist(), 'true_positives': true_positives[:end_index], 'false_positives': false_positives[:end_index], 'true_negatives': [int(v) for v in data_array[metadata.TRUE_NEGATIVES_INDEX][:end_index]], 'false_negatives': [int(v) for v in data_array[metadata.FALSE_NEGATIVES_INDEX][:end_index]], 'thresholds': thresholds[:end_index], }
Creates an entry for PR curve data. Each entry corresponds to 1 step. Args: step: The step. wall_time: The wall time. data_array: A numpy array of PR curve data stored in the summary format. thresholds: An array of floating point thresholds. Returns: A PR curve entry.
juraj-google-style
def __init__(self, graph, title="GraphViewer", handler=None, padding=PADDING): title = self._make_unique_title(title) idaapi.GraphViewer.__init__(self, title) self._graph = graph if handler is None: handler = self.DEFAULT_HANDLER if not isinstance(handler, BasicNodeHandler): raise TypeError("Node handler must inherit from `BasicNodeHandler`.") self._default_handler = handler self._padding = padding
Initialize the graph viewer. To avoid bizarre IDA errors (crashing when creating 2 graphs with the same title,) a counter is appended to the title (similar to "Hex View-1".) Args: graph: A NetworkX graph to display. title: The graph title. handler: The default node handler to use when accessing node data.
juraj-google-style
def swd_write(self, output, value, nbits):
    pDir = binpacker.pack(output, nbits)
    pIn = binpacker.pack(value, nbits)
    bitpos = self._dll.JLINK_SWD_StoreRaw(pDir, pIn, nbits)
    if bitpos < 0:
        raise errors.JLinkException(bitpos)
    return bitpos
Writes bytes over SWD (Serial Wire Debug). Args: self (JLink): the ``JLink`` instance output (int): the output buffer offset to write to value (int): the value to write to the output buffer nbits (int): the number of bits needed to represent the ``output`` and ``value`` Returns: The bit position of the response in the input buffer.
juraj-google-style
def PrivateKeyFromWIF(wif):
    # Use != rather than `is not`: identity comparisons against int literals are unreliable.
    if wif is None or len(wif) != 52:
        raise ValueError('Please provide a wif with a length of 52 bytes '
                         '(LEN: {0:d})'.format(len(wif)))
    data = base58.b58decode(wif)
    length = len(data)
    if length != 38 or data[0] != 0x80 or data[33] != 0x01:
        raise ValueError("Invalid format!")
    checksum = Crypto.Hash256(data[0:34])[0:4]
    if checksum != data[34:]:
        raise ValueError("Invalid WIF Checksum!")
    return data[1:33]
Get the private key from a WIF key Args: wif (str): The wif key Returns: bytes: The private key
juraj-google-style
def indent_xml(elem, level=0, more_sibs=False):
    i = "\n"
    pad = " "
    if level:
        i += (level - 1) * pad
    num_kids = len(elem)
    if num_kids:
        if not elem.text or not elem.text.strip():
            elem.text = i + pad
            if level:
                elem.text += pad
        count = 0
        for kid in elem:
            if kid.tag == "data":
                kid.text = "*DATA*"
            indent_xml(kid, level + 1, count < num_kids - 1)
            count += 1
        if not elem.tail or not elem.tail.strip():
            elem.tail = i
            if more_sibs:
                elem.tail += pad
    else:
        if level and (not elem.tail or not elem.tail.strip()):
            elem.tail = i
            if more_sibs:
                elem.tail += pad
Indent an xml element object to prepare for pretty printing. To avoid changing the contents of the original Element, it is recommended that a copy is made to send to this function. Args: elem: Element to indent. level: Int indent level (default is 0) more_sibs: Bool, whether to anticipate further siblings.
juraj-google-style
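A hedged usage example with the standard library; the input XML is made up, and the element is deep-copied first, as the docstring recommends.

import copy
import xml.etree.ElementTree as ET

root = ET.fromstring('<report><item id="1"/><data>0xDEADBEEF</data></report>')
pretty = copy.deepcopy(root)        # keep the original element untouched
indent_xml(pretty)                  # the function defined above
print(ET.tostring(pretty, encoding='unicode'))
# Note: <data> content is replaced with the "*DATA*" placeholder by design.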
def create_from_binary(cls, binary_view): nw_obj = cls() offset = 0 previous_dr_offset = 0 header_size = cls._INFO.size while (binary_view[offset] != 0): header = cls._INFO.unpack(binary_view[offset:(offset + header_size)])[0] length_len = (header & 15) length_offset = ((header & 240) >> 4) temp_len = ((offset + header_size) + length_len) dr_length = int.from_bytes(binary_view[(offset + header_size):temp_len], 'little', signed=False) if length_offset: dr_offset = (int.from_bytes(binary_view[temp_len:(temp_len + length_offset)], 'little', signed=True) + previous_dr_offset) previous_dr_offset = dr_offset else: dr_offset = None offset += ((header_size + length_len) + length_offset) nw_obj.data_runs.append((dr_length, dr_offset)) _MOD_LOGGER.debug('DataRuns object created successfully') return nw_obj
Creates a new object DataRuns from a binary stream. The binary stream can be represented by a byte string, bytearray or a memoryview of the bytearray. Args: binary_view (memoryview of bytearray) - A binary stream with the information of the attribute Returns: DataRuns: New object using the binary stream as source
codesearchnet
def convert_old_keys_to_new_keys(state_dict_keys: Optional[dict]=None): output_dict = {} if state_dict_keys is not None: old_text = '\n'.join(state_dict_keys) new_text = old_text for pattern, replacement in ORIGINAL_TO_CONVERTED_KEY_MAPPING.items(): if replacement is None: new_text = re.sub(pattern, '', new_text) continue new_text = re.sub(pattern, replacement, new_text) output_dict = dict(zip(old_text.split('\n'), new_text.split('\n'))) return output_dict
Converts old keys to new keys using the mapping and dynamically removes the 'ijepa.' prefix if necessary. Args: state_dict_keys (dict): The keys from the state_dict to convert. Returns: dict: A mapping from old keys to new keys.
github-repos
def terminate_session(self, token):
    url = self.rest_url + "/session/%s" % token
    response = self._delete(url)
    if not response.ok:
        return None
    return True
Terminates the session token, effectively logging out the user from all crowd-enabled services. Args: token: The session token. Returns: True: If session terminated None: If session termination failed
juraj-google-style
def write_file(self, filename='HEADER'):
    with open(filename, 'w') as f:
        f.write(str(self) + '\n')
Writes Header into filename on disk. Args: filename: Filename and path for file to be written to disk
codesearchnet
def update_plot_limits(ax, white_space): if hasattr(ax, 'zz_dataLim'): bounds = ax.xy_dataLim.bounds ax.set_xlim((bounds[0] - white_space), ((bounds[0] + bounds[2]) + white_space)) ax.set_ylim((bounds[1] - white_space), ((bounds[1] + bounds[3]) + white_space)) bounds = ax.zz_dataLim.bounds ax.set_zlim((bounds[0] - white_space), ((bounds[0] + bounds[2]) + white_space)) else: bounds = ax.dataLim.bounds assert (not any(map(np.isinf, bounds))), 'Cannot set bounds if dataLim has infinite elements' ax.set_xlim((bounds[0] - white_space), ((bounds[0] + bounds[2]) + white_space)) ax.set_ylim((bounds[1] - white_space), ((bounds[1] + bounds[3]) + white_space))
Sets the limit options of a matplotlib plot. Args: ax: matplotlib axes white_space(float): whitespace added to surround the tight limit of the data Note: This relies on ax.dataLim (in 2d) and ax.[xy, zz]_dataLim being set in 3d
codesearchnet
def clusters_sites_obj(clusters):
    result = {}
    all_clusters = get_all_clusters_sites()
    clusters_sites = {c: s for (c, s) in all_clusters.items() if c in clusters}
    for cluster, site in clusters_sites.items():
        result.update({cluster: get_site_obj(site)})
    return result
Get all the corresponding sites of the passed clusters. Args: clusters(list): list of string uid of sites (e.g 'rennes') Return: dict corresponding to the mapping cluster uid to python-grid5000 site
juraj-google-style
def _SkipField(tokenizer):
    if tokenizer.TryConsume('['):
        tokenizer.ConsumeIdentifier()
        while tokenizer.TryConsume('.'):
            tokenizer.ConsumeIdentifier()
        tokenizer.Consume(']')
    else:
        tokenizer.ConsumeIdentifier()
    _SkipFieldContents(tokenizer)
    if not tokenizer.TryConsume(','):
        tokenizer.TryConsume(';')
Skips over a complete field (name and value/message). Args: tokenizer: A tokenizer to parse the field name and values.
juraj-google-style
def remove(self):
    if not self._is_item:
        raise TypeError("Should be called on an item, not the ListNode itself.")
    self.container.node_stack.remove(self)
Removes an item from ListNode. Raises: TypeError: If it's called on a container ListNode (instead of a ListNode item). Note: Parent object should be explicitly saved.
codesearchnet
def autogen_argparse_block(extra_args=[]): grouped_args = [] for argtup in __REGISTERED_ARGS__: argstr_list, type_, default, help_ = argtup argstr_set = set(argstr_list) found = False for index, (keyset, vals) in enumerate(grouped_args): if len(keyset.intersection(argstr_set)) > 0: keyset.update(argstr_set) vals.append(argtup) found = True break if not found: new_keyset = argstr_set new_vals = [argtup] grouped_args.append((new_keyset, new_vals)) multi_groups = [] for keyset, vals in grouped_args: if len(vals) > 1: multi_groups.append(vals) if len(multi_groups) > 0: import utool as ut print('Following arg was specified multiple times') print(ut.repr4(multi_groups, newlines=2))
SHOULD TURN ANY REGISTERED ARGS INTO A NEW PARSING CONFIG FILE FOR BETTER --help COMMANDS import utool as ut __REGISTERED_ARGS__ = ut.util_arg.__REGISTERED_ARGS__ Args: extra_args (list): (default = []) CommandLine: python -m utool.util_arg --test-autogen_argparse_block Example: >>> # DISABLE_DOCTEST >>> import utool as ut >>> extra_args = [] >>> result = ut.autogen_argparse_block(extra_args) >>> print(result)
juraj-google-style
def remove_keywords_from_dict(self, keyword_dict):
    for clean_name, keywords in keyword_dict.items():
        if not isinstance(keywords, list):
            raise AttributeError('Value of key {} should be a list'.format(clean_name))
        for keyword in keywords:
            self.remove_keyword(keyword)
To remove keywords from a dictionary Args: keyword_dict (dict): A dictionary with `str` key and (list `str`) as value Examples: >>> keyword_dict = { "java": ["java_2e", "java programing"], "product management": ["PM", "product manager"] } >>> keyword_processor.remove_keywords_from_dict(keyword_dict) Raises: AttributeError: If value for a key in `keyword_dict` is not a list.
codesearchnet
def add_presence_listener(self, callback):
    listener_uid = uuid4()
    self.presence_listeners[listener_uid] = callback
    return listener_uid
Add a presence listener that will send a callback when the client receives a presence update. Args: callback (func(roomchunk)): Callback called when a presence update arrives. Returns: uuid.UUID: Unique id of the listener, can be used to identify the listener.
juraj-google-style
def _load_audio_list(self, path): result = {} for entry in textfile.read_separated_lines_generator(path, separator='\t', max_columns=4): for i in range(len(entry)): if (entry[i] == '\\N'): entry[i] = None if (len(entry) < 4): entry.extend(([None] * (4 - len(entry)))) if ((not self.include_empty_licence) and (entry[2] is None)): continue if ((self.include_licenses is not None) and (entry[2] not in self.include_licenses)): continue result[entry[0]] = entry[1:] return result
Load and filter the audio list. Args: path (str): Path to the audio list file. Returns: dict: Dictionary of filtered sentences (id : username, license, attribution-url)
codesearchnet
def create_object(self, obj_type, payload, return_fields=None): self._validate_obj_type_or_die(obj_type) query_params = self._build_query_params(return_fields=return_fields) url = self._construct_url(obj_type, query_params) opts = self._get_request_options(data=payload) self._log_request('post', url, opts) if(self.session.cookies): self.session.auth = None r = self.session.post(url, **opts) self._validate_authorized(r) if r.status_code != requests.codes.CREATED: response = utils.safe_json_load(r.content) already_assigned = 'is assigned to another network view' if response and already_assigned in response.get('text'): exception = ib_ex.InfobloxMemberAlreadyAssigned else: exception = ib_ex.InfobloxCannotCreateObject raise exception( response=response, obj_type=obj_type, content=r.content, args=payload, code=r.status_code) return self._parse_reply(r)
Create an Infoblox object of type 'obj_type' Args: obj_type (str): Infoblox object type, e.g. 'network', 'range', etc. payload (dict): Payload with data to send return_fields (list): List of fields to be returned Returns: The object reference of the newly create object Raises: InfobloxException
juraj-google-style
def __init__(self, in_features: int, out_features: int, kernel_size: int = 3, padding: int = 1):
    super().__init__()
    self.layers = [
        nn.Conv2d(in_features, out_features, kernel_size=kernel_size, padding=padding, bias=False),
        nn.GroupNorm(32, out_features),
        nn.ReLU(inplace=True),
    ]
    for i, layer in enumerate(self.layers):
        self.add_module(str(i), layer)
A basic module that executes conv - norm - in sequence used in MaskFormer. Args: in_features (`int`): The number of input features (channels). out_features (`int`): The number of outputs features (channels).
github-repos
def _GenerateStatsTable(self, feed_merger): rows = [] rows.append('<tr><th class="header"/><th class="header">Merged</th>' '<th class="header">Copied from old feed</th>' '<th class="header">Copied from new feed</th></tr>') for merger in feed_merger.GetMergerList(): stats = merger.GetMergeStats() if stats is None: continue merged, not_merged_a, not_merged_b = stats rows.append('<tr><th class="header">%s</th>' '<td class="header">%d</td>' '<td class="header">%d</td>' '<td class="header">%d</td></tr>' % (merger.DATASET_NAME, merged, not_merged_a, not_merged_b)) return '<table>%s</table>' % '\n'.join(rows)
Generate an HTML table of merge statistics. Args: feed_merger: The FeedMerger instance. Returns: The generated HTML as a string.
juraj-google-style
def verify_state(global_state_db, blockstore, bind_component, scheduler_type): state_view_factory = StateViewFactory(global_state_db) (start_block, prev_state_root) = search_for_present_state_root(blockstore, state_view_factory) if (start_block is None): LOGGER.info("Skipping state verification: chain head's state root is present") return LOGGER.info('Recomputing missing state from block %s with %s scheduler', start_block, scheduler_type) component_thread_pool = InstrumentedThreadPoolExecutor(max_workers=10, name='Component') component_dispatcher = Dispatcher() component_service = Interconnect(bind_component, component_dispatcher, secured=False, heartbeat=False, max_incoming_connections=20, monitor=True, max_future_callback_workers=10) context_manager = ContextManager(global_state_db) transaction_executor = TransactionExecutor(service=component_service, context_manager=context_manager, settings_view_factory=SettingsViewFactory(state_view_factory), scheduler_type=scheduler_type, invalid_observers=[]) component_service.set_check_connections(transaction_executor.check_connections) component_dispatcher.add_handler(validator_pb2.Message.TP_RECEIPT_ADD_DATA_REQUEST, tp_state_handlers.TpReceiptAddDataHandler(context_manager), component_thread_pool) component_dispatcher.add_handler(validator_pb2.Message.TP_EVENT_ADD_REQUEST, tp_state_handlers.TpEventAddHandler(context_manager), component_thread_pool) component_dispatcher.add_handler(validator_pb2.Message.TP_STATE_DELETE_REQUEST, tp_state_handlers.TpStateDeleteHandler(context_manager), component_thread_pool) component_dispatcher.add_handler(validator_pb2.Message.TP_STATE_GET_REQUEST, tp_state_handlers.TpStateGetHandler(context_manager), component_thread_pool) component_dispatcher.add_handler(validator_pb2.Message.TP_STATE_SET_REQUEST, tp_state_handlers.TpStateSetHandler(context_manager), component_thread_pool) component_dispatcher.add_handler(validator_pb2.Message.TP_REGISTER_REQUEST, processor_handlers.ProcessorRegisterHandler(transaction_executor.processor_manager), component_thread_pool) component_dispatcher.add_handler(validator_pb2.Message.TP_UNREGISTER_REQUEST, processor_handlers.ProcessorUnRegisterHandler(transaction_executor.processor_manager), component_thread_pool) component_dispatcher.start() component_service.start() process_blocks(initial_state_root=prev_state_root, blocks=blockstore.get_block_iter(start_block=start_block, reverse=False), transaction_executor=transaction_executor, context_manager=context_manager, state_view_factory=state_view_factory) component_dispatcher.stop() component_service.stop() component_thread_pool.shutdown(wait=True) transaction_executor.stop() context_manager.stop()
Verify the state root hash of all blocks is in state and if not, reconstruct the missing state. Assumes that there are no "holes" in state, ie starting from genesis, state is present for all blocks up to some point and then not at all. If persist is False, this recomputes state in memory for all blocks in the blockstore and verifies the state root hashes. Raises: InvalidChainError: The chain in the blockstore is not valid. ExecutionError: An unrecoverable error was encountered during batch execution.
codesearchnet
def build_query_string(self, data): query = [] keys_to_be_removed = [] for key, value in data.items(): if key not in ['version', 'restApi', 'resourcePath']: if not key == 'method': if key == 'points': value = ','.join(str(val) for val in value) keys_to_be_removed.append(key) query.append('{0}={1}'.format(key, value)) keys_to_be_removed.append(key) keys_to_be_removed.append(key) querystring = '&'.join(query) data['query'] = '{0}?{1}'.format(data['method'], querystring) for k in list(set(keys_to_be_removed)): del data[k] return data
This method occurs after dumping the data into the class. Args: data (dict): dictionary of all the query values Returns: data (dict): ordered dict of all the values
juraj-google-style
def read(self, input_stream, kmip_version=enums.KMIPVersion.KMIP_1_0): super(KeyWrappingData, self).read(input_stream, kmip_version=kmip_version) local_stream = BytearrayStream(input_stream.read(self.length)) if self.is_tag_next(enums.Tags.WRAPPING_METHOD, local_stream): self._wrapping_method = primitives.Enumeration(enum=enums.WrappingMethod, tag=enums.Tags.WRAPPING_METHOD) self._wrapping_method.read(local_stream, kmip_version=kmip_version) else: raise ValueError('Invalid struct missing the wrapping method attribute.') if self.is_tag_next(enums.Tags.ENCRYPTION_KEY_INFORMATION, local_stream): self._encryption_key_information = EncryptionKeyInformation() self._encryption_key_information.read(local_stream, kmip_version=kmip_version) if self.is_tag_next(enums.Tags.MAC_SIGNATURE_KEY_INFORMATION, local_stream): self._mac_signature_key_information = MACSignatureKeyInformation() self._mac_signature_key_information.read(local_stream, kmip_version=kmip_version) if self.is_tag_next(enums.Tags.MAC_SIGNATURE, local_stream): self._mac_signature = primitives.ByteString(tag=enums.Tags.MAC_SIGNATURE) self._mac_signature.read(local_stream, kmip_version=kmip_version) if self.is_tag_next(enums.Tags.IV_COUNTER_NONCE, local_stream): self._iv_counter_nonce = primitives.ByteString(tag=enums.Tags.IV_COUNTER_NONCE) self._iv_counter_nonce.read(local_stream, kmip_version=kmip_version) if self.is_tag_next(enums.Tags.ENCODING_OPTION, local_stream): self._encoding_option = primitives.Enumeration(enum=enums.EncodingOption, tag=enums.Tags.ENCODING_OPTION) self._encoding_option.read(local_stream, kmip_version=kmip_version) self.is_oversized(local_stream)
Read the data encoding the KeyWrappingData struct and decode it into its constituent parts. Args: input_stream (stream): A data stream containing encoded object data, supporting a read method; usually a BytearrayStream object. kmip_version (KMIPVersion): An enumeration defining the KMIP version with which the object will be decoded. Optional, defaults to KMIP 1.0.
codesearchnet
def qualNormGaussian(data, qualitative): genes, cells = data.shape clusters = qualitative.shape[1] output = np.zeros((genes, clusters)) missing_indices = [] qual_indices = [] for i in range(genes): if qualitative[i,:].max() == -1 and qualitative[i,:].min() == -1: missing_indices.append(i) continue qual_indices.append(i) threshold = (qualitative[i,:].max() - qualitative[i,:].min())/2.0 kmeans = KMeans(n_clusters = 2).fit(data[i,:].reshape((1, cells))) assignments = kmeans.labels_ means = kmeans.cluster_centers_ high_mean = means.max() low_mean = means.min() for k in range(clusters): if qualitative[i,k]>threshold: output[i,k] = high_mean else: output[i,k] = low_mean if missing_indices: M_init = output[qual_indices, :] kmeans = KMeans(n_clusters = 2, init = M_init, max_iter = 1).fit(data[qual_indices, :]) assignments = kmeans.labels_ for ind in missing_indices: for k in range(clusters): output[ind, k] = np.mean(data[ind, assignments==k]) return output
Generates starting points using binarized data. If qualitative data is missing for a given gene, all of its entries should be -1 in the qualitative matrix. Args: data (array): 2d array of genes x cells qualitative (array): 2d array of numerical data - genes x clusters Returns: Array of starting positions for state estimation or clustering, with shape genes x clusters
juraj-google-style
def _compile_aggregation_expression(self, expr: Expression, scope: Dict[(str, TensorFluent)], batch_size: Optional[int]=None, noise: Optional[List[tf.Tensor]]=None) -> TensorFluent: etype = expr.etype args = expr.args typed_var_list = args[:(- 1)] vars_list = [var for (_, (var, _)) in typed_var_list] expr = args[(- 1)] x = self._compile_expression(expr, scope) etype2aggr = {'sum': x.sum, 'prod': x.prod, 'avg': x.avg, 'maximum': x.maximum, 'minimum': x.minimum, 'exists': x.exists, 'forall': x.forall} if (etype[1] not in etype2aggr): raise ValueError('Invalid aggregation expression {}.'.format(expr)) aggr = etype2aggr[etype[1]] fluent = aggr(vars_list=vars_list) return fluent
Compile an aggregation expression `expr` into a TensorFluent in the given `scope` with optional batch size. Args: expr (:obj:`rddl2tf.expr.Expression`): A RDDL aggregation expression. scope (Dict[str, :obj:`rddl2tf.fluent.TensorFluent`]): A fluent scope. batch_size (Optional[size]): The batch size. Returns: :obj:`rddl2tf.fluent.TensorFluent`: The compiled expression as a TensorFluent.
codesearchnet
def __sub__(self, other: Union[None, int, str, 'KeyPath']) -> 'KeyPath': if other is None: return self if isinstance(other, str): other = KeyPath.parse(other) elif isinstance(other, int): other = KeyPath(other) if not isinstance(other, KeyPath): raise TypeError(f'Cannot subtract KeyPath({self}) by {other!r}.') max_len = max(len(self), len(other)) for pos in range(max_len): if pos >= len(self): raise ValueError(f'KeyPath subtraction failed: left path {self!r} is an ancestor of right path {other!r}.') if pos >= len(other): return KeyPath(self.keys[pos:]) if self.keys[pos] != other.keys[pos]: raise ValueError(f'KeyPath subtraction failed: left path {self!r} and right path {other!r} are in different subtree.') return KeyPath()
Finds the relative path of this path to the other. Example:: path1 = pg.KeyPath.parse('a.b.c.d') path2 = pg.KeyPath.parse('a.b') assert path1 - path2 == 'c.d' Args: other: Object to subtract, which can be None, int (as a depth-1 KeyPath), string (parsed as a KeyPath) or a KeyPath object. Returns: Relative path of this path to the other. Raises: ValueError: This path is an ancestor node of the other path, or these two paths are in different branch.
github-repos
def delete(self, messageId):
    check_type(messageId, basestring, may_be_none=False)
    self._session.delete(API_ENDPOINT + '/' + messageId)
Delete a message. Args: messageId(basestring): The ID of the message to be deleted. Raises: TypeError: If the parameter types are incorrect. ApiError: If the Webex Teams cloud returns an error.
juraj-google-style
def _should_recover(self, exception):
    exception = _maybe_wrap_exception(exception)
    if isinstance(exception, _RETRYABLE_STREAM_ERRORS):
        _LOGGER.info('Observed recoverable stream error %s', exception)
        return True
    _LOGGER.info('Observed non-recoverable stream error %s', exception)
    return False
Determine if an error on the RPC stream should be recovered. If the exception is one of the retryable exceptions, this will signal to the consumer thread that it should "recover" from the failure. This will cause the stream to exit when it returns :data:`False`. Returns: bool: Indicates if the caller should recover or shut down. Will be :data:`True` if the ``exception`` is "acceptable", i.e. in a list of retryable / idempotent exceptions.
codesearchnet
def _flush_range(self, buffer, start, end):
    with self._size_lock:
        if not self._size_synched:
            self._size_synched = True
            try:
                self._size = self.raw._size
            except (ObjectNotFoundError, UnsupportedOperation):
                self._size = 0

    while start > self._size:
        sleep(self._FLUSH_WAIT)

    self._raw_flush(buffer, start, end)
Flush a buffer to a range of the file. Meant to be used asynchronously, used to provides parallel flushing of file parts when applicable. Args: buffer (memoryview): Buffer content. start (int): Start of buffer position to flush. end (int): End of buffer position to flush.
juraj-google-style
def get_rect(self):
    if self.handle:
        left, top, right, bottom = win32gui.GetWindowRect(self.handle)
        return RECT(left, top, right, bottom)
    else:
        desktop = win32gui.GetDesktopWindow()
        left, top, right, bottom = win32gui.GetWindowRect(desktop)
        return RECT(left, top, right, bottom)
Get rectangle of app or desktop resolution Returns: RECT(left, top, right, bottom)
codesearchnet
def glob(*args):
    # Compare with == rather than `is`: identity checks against int literals are fragile.
    if len(args) == 1 and isinstance(args[0], list):
        args = args[0]
    matches = []
    for pattern in args:
        for item in glob2.glob(pattern):
            if not os.path.isdir(item):
                matches.append(item)
    return matches
Returns list of paths matching one or more wildcard patterns. Args: *args: One or more wildcard pattern strings, or a single list of patterns. Returns: A flat list of matching file paths; directories are excluded.
codesearchnet
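A hedged usage sketch (the patterns are made up); the function accepts either several pattern arguments or a single list, and relies on the glob2 package for '**' recursion.

# Hypothetical patterns; both call styles return the same flat list of files.
py_files = glob('src/**/*.py', 'tools/*.py')
same_files = glob(['src/**/*.py', 'tools/*.py'])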
def forward(self, outputs, targets): outputs_without_aux = {k: v for k, v in outputs.items() if 'auxiliary_outputs' not in k} indices = self.matcher(outputs_without_aux, targets) num_boxes = sum((len(t['class_labels']) for t in targets)) num_boxes = torch.as_tensor([num_boxes], dtype=torch.float, device=next(iter(outputs.values())).device) num_boxes = torch.clamp(num_boxes, min=1).item() losses = {} for loss in self.losses: l_dict = self.get_loss(loss, outputs, targets, indices, num_boxes) l_dict = {k: l_dict[k] * self.weight_dict[k] for k in l_dict if k in self.weight_dict} losses.update(l_dict) if 'auxiliary_outputs' in outputs: for i, auxiliary_outputs in enumerate(outputs['auxiliary_outputs']): indices = self.matcher(auxiliary_outputs, targets) for loss in self.losses: if loss == 'masks': continue l_dict = self.get_loss(loss, auxiliary_outputs, targets, indices, num_boxes) l_dict = {k: l_dict[k] * self.weight_dict[k] for k in l_dict if k in self.weight_dict} l_dict = {k + f'_aux_{i}': v for k, v in l_dict.items()} losses.update(l_dict) if 'dn_auxiliary_outputs' in outputs: if 'denoising_meta_values' not in outputs: raise ValueError("The output must have the 'denoising_meta_values` key. Please, ensure that 'outputs' includes a 'denoising_meta_values' entry.") indices = self.get_cdn_matched_indices(outputs['denoising_meta_values'], targets) num_boxes = num_boxes * outputs['denoising_meta_values']['dn_num_group'] for i, auxiliary_outputs in enumerate(outputs['dn_auxiliary_outputs']): for loss in self.losses: if loss == 'masks': continue kwargs = {} l_dict = self.get_loss(loss, auxiliary_outputs, targets, indices, num_boxes, **kwargs) l_dict = {k: l_dict[k] * self.weight_dict[k] for k in l_dict if k in self.weight_dict} l_dict = {k + f'_dn_{i}': v for k, v in l_dict.items()} losses.update(l_dict) return losses
This performs the loss computation. Args: outputs (`dict`, *optional*): Dictionary of tensors, see the output specification of the model for the format. targets (`List[dict]`, *optional*): List of dicts, such that `len(targets) == batch_size`. The expected keys in each dict depends on the losses applied, see each loss' doc.
github-repos
def render(self, width: int, height: int) -> List[str]: if width == 0 or height == 0: return [''] * height out_chars = [[' '] * width for _ in range(height)] mid_x = int((width - 1) * self.horizontal_alignment) mid_y = (height - 1) if self.left: out_chars[mid_y][:mid_x + 1] = self.left * (mid_x + 1) if self.right: out_chars[mid_y][mid_x:] = self.right * (width - mid_x) if self.top: for y in range(mid_y + 1): out_chars[y][mid_x] = self.top if self.bottom: for y in range(mid_y, height): out_chars[y][mid_x] = self.bottom mid = self.content or self.center if self.content or self.center: content_lines = mid.split('\n') y = mid_y - (len(content_lines) - 1) for dy, content_line in enumerate(content_lines): s = int((len(content_line) - 1) * self.horizontal_alignment) x = mid_x - s for dx, c in enumerate(content_line): out_chars[y + dy][x + dx] = c return [''.join(line) for line in out_chars]
Returns a list of text lines representing the block's contents. Args: width: The width of the output text. Must be at least as large as the block's minimum width. height: The height of the output text. Must be at least as large as the block's minimum height. Returns: Text pre-split into lines.
juraj-google-style
def _parse_source_interface(self, config):
    match = re.search(r'vxlan source-interface ([^\s]+)', config)
    value = match.group(1) if match else self.DEFAULT_SRC_INTF
    return dict(source_interface=value)
Parses the conf block and returns the vxlan source-interface value Parses the provided configuration block and returns the value of vxlan source-interface. If the value is not configured, this method will return DEFAULT_SRC_INTF instead. Args: config (str): The Vxlan config block to scan Return: dict: A dict object intended to be merged into the resource dict
juraj-google-style
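The regular expression can be exercised on its own; the config block and the empty DEFAULT_SRC_INTF default below are assumptions for illustration.

import re

DEFAULT_SRC_INTF = ''  # assumed fallback; the real class constant may differ
config = """
interface Vxlan1
   vxlan source-interface Loopback0
   vxlan udp-port 4789
"""
match = re.search(r'vxlan source-interface ([^\s]+)', config)
value = match.group(1) if match else DEFAULT_SRC_INTF
print({'source_interface': value})  # {'source_interface': 'Loopback0'}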
def message_tc(self, message, max_length=255):
    if os.access(self.default_args.tc_out_path, os.W_OK):
        message_file = '{}/message.tc'.format(self.default_args.tc_out_path)
    else:
        message_file = 'message.tc'

    message = '{}\n'.format(message)
    if max_length - len(message) > 0:
        with open(message_file, 'a') as mh:
            mh.write(message)
    elif max_length > 0:
        with open(message_file, 'a') as mh:
            mh.write(message[:max_length])
    max_length -= len(message)
Write data to message_tc file in TcEX specified directory. This method is used to set and exit message in the ThreatConnect Platform. ThreatConnect only supports files of max_message_length. Any data exceeding this limit will be truncated by this method. Args: message (string): The message to add to message_tc file
juraj-google-style
def run_task_tests(self, task, torch_dtype='float32'): if task not in self.pipeline_model_mapping: self.skipTest(f'{self.__class__.__name__}::test_pipeline_{task.replace('-', '_')}_{torch_dtype} is skipped: `{task}` is not in `self.pipeline_model_mapping` for `{self.__class__.__name__}`.') model_architectures = self.pipeline_model_mapping[task] if not isinstance(model_architectures, tuple): model_architectures = (model_architectures,) at_least_one_model_is_tested = False for model_architecture in model_architectures: model_arch_name = model_architecture.__name__ model_type = model_architecture.config_class.model_type for _prefix in ['Flax', 'TF']: if model_arch_name.startswith(_prefix): model_arch_name = model_arch_name[len(_prefix):] break if model_arch_name not in tiny_model_summary: continue tokenizer_names = tiny_model_summary[model_arch_name]['tokenizer_classes'] image_processor_names = [] feature_extractor_names = [] processor_classes = tiny_model_summary[model_arch_name]['processor_classes'] for cls_name in processor_classes: if 'ImageProcessor' in cls_name: image_processor_names.append(cls_name) elif 'FeatureExtractor' in cls_name: feature_extractor_names.append(cls_name) processor_names = PROCESSOR_MAPPING_NAMES.get(model_type, None) if not isinstance(processor_names, (list, tuple)): processor_names = [processor_names] commit = None if model_arch_name in tiny_model_summary and 'sha' in tiny_model_summary[model_arch_name]: commit = tiny_model_summary[model_arch_name]['sha'] repo_name = f'tiny-random-{model_arch_name}' if TRANSFORMERS_TINY_MODEL_PATH != 'hf-internal-testing': repo_name = model_arch_name self.run_model_pipeline_tests(task, repo_name, model_architecture, tokenizer_names=tokenizer_names, image_processor_names=image_processor_names, feature_extractor_names=feature_extractor_names, processor_names=processor_names, commit=commit, torch_dtype=torch_dtype) at_least_one_model_is_tested = True if task in task_to_pipeline_and_spec_mapping: pipeline, hub_spec = task_to_pipeline_and_spec_mapping[task] compare_pipeline_args_to_hub_spec(pipeline, hub_spec) if not at_least_one_model_is_tested: self.skipTest(f'{self.__class__.__name__}::test_pipeline_{task.replace('-', '_')}_{torch_dtype} is skipped: Could not find any model architecture in the tiny models JSON file for `{task}`.')
Run pipeline tests for a specific `task` Args: task (`str`): A task name. This should be a key in the mapping `pipeline_test_mapping`. torch_dtype (`str`, `optional`, defaults to `'float32'`): The torch dtype to use for the model. Can be used for FP16/other precision inference.
github-repos
def parse_source(info): if ('extractor_key' in info): source = info['extractor_key'] lower_source = source.lower() for key in SOURCE_TO_NAME: lower_key = key.lower() if (lower_source == lower_key): source = SOURCE_TO_NAME[lower_key] if (source != 'Generic'): return source if (('url' in info) and (info['url'] is not None)): p = urlparse(info['url']) if (p and p.netloc): return p.netloc return 'Unknown'
Parses the source info from an info dict generated by youtube-dl Args: info (dict): The info dict to parse Returns: source (str): The source of this song
codesearchnet
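A rough illustration of the dispatch order (extractor key first, then URL host, then a catch-all) with a toy SOURCE_TO_NAME table; the mapping entries below are assumptions for the example, not the project's real table.

from urllib.parse import urlparse

SOURCE_TO_NAME = {'soundcloud': 'SoundCloud'}  # hypothetical entries

def parse_source(info):
    if 'extractor_key' in info:
        source = info['extractor_key']
        for key, name in SOURCE_TO_NAME.items():
            if source.lower() == key.lower():
                source = name
        if source != 'Generic':
            return source
    if info.get('url'):
        parsed = urlparse(info['url'])
        if parsed and parsed.netloc:
            return parsed.netloc
    return 'Unknown'

print(parse_source({'extractor_key': 'Soundcloud'}))                                   # SoundCloud
print(parse_source({'extractor_key': 'Generic', 'url': 'https://example.com/a.mp3'}))  # example.com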
def AddEvent(self, event): self._RaiseIfNotWritable() self._storage_file.AddEvent(event) self.number_of_events += 1 self._UpdateCounters(event)
Adds an event. Args: event (EventObject): an event. Raises: IOError: when the storage writer is closed. OSError: when the storage writer is closed.
juraj-google-style
def _create_per_replica(value_list, strategy): always_wrap = _always_wrap(strategy) per_replicas = distribute_utils.regroup(value_list, always_wrap=always_wrap) return per_replicas
Creates a PerReplica. For strategies other than OneDeviceStrategy, it creates a PerReplica whose type spec is set to the element spec of the dataset. This helps avoid retracing for partial batches. Retracing is problematic for multi-client setups when different clients retrace at different times, since retracing changes the collective keys in the tf.function and causes mismatches among clients. For single-client strategies, this simply calls distribute_utils.regroup().

Args:
    value_list: a list of values, one for each replica.
    strategy: the `tf.distribute.Strategy`.

Returns:
    a structure of PerReplica.
github-repos
def update(self, **kwargs):
    kwargs = self._preprocess_params(kwargs)
    kwargs = self.preprocess_kwargs_before_update(kwargs)
    for key, value in kwargs.items():
        cls = type(self)
        if not hasattr(cls, key) or isinstance(getattr(cls, key), property):
            continue
        if key not in self._no_overwrite_:
            setattr(self, key, value)
        if isinstance(getattr(self, key), OrderingList):
            getattr(self, key).reorder()
        elif isinstance(getattr(cls, key), AssociationProxyInstance):
            target_name = getattr(cls, key).target_collection
            target_rel = getattr(self, target_name)
            if isinstance(target_rel, OrderingList):
                target_rel.reorder()
    try:
        self.session.commit()
        return self
    except Exception as e:
        self.session.rollback()
        raise e
Updates an instance. Args: **kwargs : Arbitrary keyword arguments. Column names are keywords and their new values are the values. Examples: >>> customer.update(email="newemail@x.com", name="new")
juraj-google-style
def __init__(self, short_name, long_name, preregistered, int_id=None, max_messages=5): self.short_name = short_name self.long_name = long_name self.preregistered = preregistered self.last_heartbeat = monotonic() self.num_heartbeats = 0 self.id = int_id self._state = UNKNOWN self.messages = deque(maxlen=max_messages) self.headline = None self._last_message_id = 0
Constructor. Args: short_name (string): A unique short name for the service long_name (string): A user friendly name for the service preregistered (bool): Whether this is an expected preregistered service int_id (int): An internal numeric id for this service max_messages (int): The maximum number of messages to keep
juraj-google-style
def wmo(self, value=None): if (value is not None): try: value = str(value) except ValueError: raise ValueError('value {} need to be of type str for field `wmo`'.format(value)) if (',' in value): raise ValueError('value should not contain a comma for field `wmo`') self._wmo = value
Corresponds to IDD Field `wmo`, usually a 6 digit field. Used as alpha in EnergyPlus.

Args:
    value (str): value for IDD Field `wmo`. If `value` is None it will not be checked against the specification and is assumed to be a missing value.

Raises:
    ValueError: if `value` is not a valid value
codesearchnet
def IsEquivalent(self, other): if self.name and other.name: return self.name == other.name if self.name: self_family, self_version_tuple = self._FAMILY_AND_VERSION_PER_NAME.get( self.name, self._DEFAULT_FAMILY_AND_VERSION) return ( self_family == other.family and self_version_tuple == other.version_tuple) if self.family and self.version: if other.name: other_family, other_version_tuple = ( self._FAMILY_AND_VERSION_PER_NAME.get( other.name, self._DEFAULT_FAMILY_AND_VERSION)) else: other_family = other.family other_version_tuple = other.version_tuple return ( self.family == other_family and self.version_tuple == other_version_tuple) if self.family: if other.name: other_family, _ = self._FAMILY_AND_VERSION_PER_NAME.get( other.name, self._DEFAULT_FAMILY_AND_VERSION) else: other_family = other.family return self.family == other_family return False
Determines if 2 operating system artifacts are equivalent. This function compares the operating systems based on, in order of precedence: * name derived from product * family and version * family

Args:
    other (OperatingSystemArtifact): operating system artifact attribute container to compare with.

Returns:
    bool: True if the operating systems are considered equivalent, False if the most specific criteria do not match, or if no criteria are available.
juraj-google-style
def write_tree_newick(self, filename, hide_rooted_prefix=False): if not isinstance(filename, str): raise TypeError("filename must be a str") treestr = self.newick() if hide_rooted_prefix: if treestr.startswith('[&R]'): treestr = treestr[4:].strip() else: warn("Specified hide_rooted_prefix, but tree was not rooted") if filename.lower().endswith('.gz'): f = gopen(expanduser(filename),'wb',9); f.write(treestr.encode()); f.close() else: f = open(expanduser(filename),'w'); f.write(treestr); f.close()
Write this ``Tree`` to a Newick file Args: ``filename`` (``str``): Path to desired output file (plain-text or gzipped)
juraj-google-style
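A minimal stand-in for the plain-versus-gzip write dispatch, assuming the tree's Newick string is already in hand; treeswift's own newick() and rooted-prefix handling are not reproduced here.

import gzip
from os.path import expanduser

def write_newick(treestr, filename):
    # Pick gzip or plain-text output from the file extension, as above.
    if filename.lower().endswith('.gz'):
        with gzip.open(expanduser(filename), 'wb', 9) as handle:
            handle.write(treestr.encode())
    else:
        with open(expanduser(filename), 'w') as handle:
            handle.write(treestr)

write_newick('((A,B),C);', 'example_tree.nwk.gz')  # hypothetical output path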
def decode(self, encoded): encoded = super().decode(encoded) if (encoded.numel() > 1): raise ValueError('``decode`` decodes one label at a time, use ``batch_decode`` instead.') return self.itos[encoded.squeeze().item()]
Decodes ``encoded`` label. Args: encoded (torch.Tensor): Encoded label. Returns: object: Label decoded from ``encoded``.
codesearchnet
def rotated_printing(self, action):
    if action == 'rotate':
        action = '1'
    elif action == 'cancel':
        action = '0'
    else:
        raise RuntimeError('Invalid action.')
    self.send(chr(27) + chr(105) + chr(76) + action)
Calling this function applies the desired action to the printing orientation of the printer.

Args:
    action: The desired printing orientation. 'rotate' enables rotated printing. 'cancel' disables rotated printing.

Returns:
    None

Raises:
    RuntimeError: Invalid action.
juraj-google-style
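A sketch of the escape sequence the method emits, assuming a Brother-style ESC/P "ESC i L" rotated-printing command; self.send() is replaced with a return value so the bytes can be inspected.

def rotated_printing_bytes(action):
    flag = {'rotate': '1', 'cancel': '0'}.get(action)
    if flag is None:
        raise RuntimeError('Invalid action.')
    # ESC (27), 'i' (105), 'L' (76), then the on/off flag character.
    return (chr(27) + chr(105) + chr(76) + flag).encode('latin-1')

print(rotated_printing_bytes('rotate'))  # b'\x1biL1'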
def add_electrode(self, electrode, label=None): if (not label): label = 'Electrode {}'.format((len(self._electrodes) + 1)) self._electrodes[label] = electrode
Add an electrode to the plot. Args: electrode: An electrode. All electrodes satisfying the AbstractElectrode interface should work. label: A label for the electrode. If None, defaults to a counting system, i.e. 'Electrode 1', 'Electrode 2', ...
codesearchnet
def _process_counter_example(self, mma, w_string):
    diff = len(w_string)
    same = 0
    membership_answer = self._membership_query(w_string)
    while True:
        i = (same + diff) 
        access_string = self._run_in_hypothesis(mma, w_string, i)
        if membership_answer != self._membership_query(access_string + w_string[i:]):
            diff = i
        else:
            same = i
        if diff - same == 1:
            break
    exp = w_string[diff:]
    self.observation_table.em_vector.append(exp)
    for row in self.observation_table.sm_vector + self.observation_table.smi_vector:
        self._fill_table_entry(row, exp)
    return 0
Process a counterexample in the Rivest-Schapire way.

Args:
    mma (DFA): The hypothesis automaton
    w_string (str): The examined string to be consumed

Returns:
    int: 0 once the distinguishing experiment has been added to the observation table
juraj-google-style
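A stripped-down sketch of the Rivest-Schapire breakpoint search used above: given an oracle that reports whether splitting the counterexample at position i still agrees with the target, binary-search for the index where agreement flips; the observation-table and hypothesis-run plumbing is stubbed out.

def find_breakpoint(w_string, agrees_at):
    same, diff = 0, len(w_string)
    while diff - same > 1:
        mid = (same + diff) 
        if agrees_at(mid):
            same = mid
        else:
            diff = mid
    return diff  # the suffix w_string[diff:] becomes the new experiment

# Toy oracle: pretend the hypothesis agrees with the target only for i < 5.
print(find_breakpoint('abcdefgh', lambda i: i < 5))  # 5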
def _ParsePropertiesXMLFile(self, xml_data): xml_root = ElementTree.fromstring(xml_data) properties = {} for xml_element in xml_root.iter(): if not xml_element.text: continue _, _, name = xml_element.tag.partition('}') if name == 'lpstr': continue property_name = self._PROPERTY_NAMES.get(name, None) if not property_name: property_name = self._FormatPropertyName(name) properties[property_name] = xml_element.text return properties
Parses a properties XML file.

Args:
    xml_data (bytes): data of a properties XML file.

Returns:
    dict[str, object]: properties.

Raises:
    zipfile.BadZipfile: if the properties XML file cannot be read.
juraj-google-style
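A cut-down sketch of the same element walk on a hand-written core-properties snippet; the property-name remapping table is omitted, so the bare tag names are used as keys.

from xml.etree import ElementTree

xml_data = b"""<?xml version="1.0"?>
<cp:coreProperties
    xmlns:cp="http://schemas.openxmlformats.org/package/2006/metadata/core-properties"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:creator>Alice</dc:creator>
  <cp:lastModifiedBy>Bob</cp:lastModifiedBy>
</cp:coreProperties>"""

properties = {}
for element in ElementTree.fromstring(xml_data).iter():
    if not element.text or not element.text.strip():
        continue
    # Strip the '{namespace}' prefix from the tag to get the bare name.
    _, _, name = element.tag.partition('}')
    properties[name] = element.text

print(properties)  # {'creator': 'Alice', 'lastModifiedBy': 'Bob'}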
def audio(self, audio, sample_rate, name=None, subdir=''): from chainerui.report.audio_report import check_available if (not check_available()): return from chainerui.report.audio_report import report as _audio col_name = self.get_col_name(name, 'audio') (out_dir, rel_out_dir) = self.get_subdir(subdir) (filename, _) = _audio(audio, sample_rate, out_dir, col_name) self.audios[col_name] = os.path.join(rel_out_dir, filename) self.count += 1
Summarize audio to listen to in a web browser.

Args:
    audio (:class:`numpy.ndarray` or :class:`cupy.ndarray` or \ :class:`chainer.Variable`): sampled wave array.
    sample_rate (int): sampling rate.
    name (str): name of the audio, set as the column name. When not set, assigned ``'audio'`` + a sequential number.
    subdir (str): sub-directory path of output.
codesearchnet
def energy(self, sample_like, dtype=float):
    energy, = self.energies(sample_like, dtype=dtype)
    return energy
The energy of the given sample. Args: sample_like (samples_like): A raw sample. `sample_like` is an extension of NumPy's array_like structure. See :func:`.as_samples`. dtype (:class:`numpy.dtype`, optional): The data type of the returned energies. Defaults to float. Returns: The energy.
juraj-google-style
def get_labels_encoder(self, data_dir): label_filepath = os.path.join(data_dir, self.vocab_filename) return text_encoder.TokenTextEncoder(label_filepath)
Builds encoder for the given class labels. Args: data_dir: data directory Returns: An encoder for class labels.
codesearchnet
def lookup_subclass(cls, d): try: typeid = d["typeid"] except KeyError: raise FieldError("typeid not present in keys %s" % list(d)) subclass = cls._subcls_lookup.get(typeid, None) if not subclass: raise FieldError("'%s' not a valid typeid" % typeid) else: return subclass
Look up a class based on a serialized dictionary containing a typeid Args: d (dict): Dictionary with key "typeid" Returns: Serializable subclass
juraj-google-style
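A bare-bones sketch of the typeid registry this lookup relies on; the FieldError class and the way _subcls_lookup gets populated are assumptions made for the example, not the library's real registration mechanism.

class FieldError(Exception):
    pass

class Serializable:
    _subcls_lookup = {}

    @classmethod
    def register(cls, typeid):
        # Decorator that records a subclass under its serialized typeid.
        def wrap(subclass):
            cls._subcls_lookup[typeid] = subclass
            return subclass
        return wrap

    @classmethod
    def lookup_subclass(cls, d):
        try:
            typeid = d['typeid']
        except KeyError:
            raise FieldError('typeid not present in keys %s' % list(d))
        subclass = cls._subcls_lookup.get(typeid)
        if not subclass:
            raise FieldError("'%s' not a valid typeid" % typeid)
        return subclass

@Serializable.register('demo:point/1.0')
class Point(Serializable):
    pass

print(Serializable.lookup_subclass({'typeid': 'demo:point/1.0'}))  # <class '__main__.Point'>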
def pose2mat(pose): homo_pose_mat = np.zeros((4, 4), dtype=np.float32) homo_pose_mat[:3, :3] = quat2mat(pose[1]) homo_pose_mat[:3, 3] = np.array(pose[0], dtype=np.float32) homo_pose_mat[3, 3] = 1. return homo_pose_mat
Converts pose to homogeneous matrix. Args: pose: a (pos, orn) tuple where pos is vec3 float cartesian, and orn is vec4 float quaternion. Returns: 4x4 homogeneous matrix
juraj-google-style
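A self-contained sketch of the pose-to-homogeneous-matrix conversion using SciPy for the quaternion step; the original quat2mat helper and its (x, y, z, w) ordering are assumed, not confirmed by the snippet.

import numpy as np
from scipy.spatial.transform import Rotation

def pose_to_mat(pose):
    pos, orn = pose  # orn assumed to be an (x, y, z, w) quaternion
    homo = np.eye(4, dtype=np.float32)
    homo[:3, :3] = Rotation.from_quat(orn).as_matrix()
    homo[:3, 3] = np.asarray(pos, dtype=np.float32)
    return homo

# Identity quaternion: the rotation block stays the identity, the translation is copied.
print(pose_to_mat(([1.0, 2.0, 3.0], [0.0, 0.0, 0.0, 1.0])))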
def get_bytes(obj): try: obj = obj.read(_NUM_SIGNATURE_BYTES) except AttributeError: pass kind = type(obj) if (kind is bytearray): return signature(obj) if (kind is str): return get_signature_bytes(obj) if (kind is bytes): return signature(obj) if (kind is memoryview): return signature(obj).tolist() raise TypeError(('Unsupported type as file input: %s' % kind))
Infers the input type and reads the first 262 bytes, returning a sliced bytearray. Args: obj: path to readable, file, bytes or bytearray. Returns: First 262 bytes of the file content as bytearray type. Raises: TypeError: if obj is not a supported type.
codesearchnet
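A toy re-implementation of the same type dispatch, with the library's signature() helper reduced to a plain 262-byte slice; the real filetype internals are assumed to differ.

_NUM_SIGNATURE_BYTES = 262

def head_bytes(obj):
    # Accept a path, a readable file object, or raw bytes-like input.
    if isinstance(obj, str):
        with open(obj, 'rb') as handle:
            return bytearray(handle.read(_NUM_SIGNATURE_BYTES))
    if hasattr(obj, 'read'):
        return bytearray(obj.read(_NUM_SIGNATURE_BYTES))
    if isinstance(obj, (bytes, bytearray, memoryview)):
        return bytearray(obj[:_NUM_SIGNATURE_BYTES])
    raise TypeError('Unsupported type as file input: %s' % type(obj))

print(head_bytes(b'\x89PNG\r\n\x1a\n' + b'\x00' * 300)[:4])  # bytearray(b'\x89PNG')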
def wait(self, timeout=None, raise_error=True): return self.get(timeout=timeout, raise_error=raise_error)
alias of get Args: timeout (float): timeout seconds raise_error (bool): default true, whether to raise error if element not found Raises: WDAElementNotFoundError
juraj-google-style
def get_country_name_from_m49(cls, m49, use_live=True, exception=None): iso3 = cls.get_iso3_from_m49(m49, use_live=use_live, exception=exception) if iso3 is not None: return cls.get_country_name_from_iso3(iso3, exception=exception) return None
Get country name from M49 code Args: m49 (int): M49 numeric code for which to get country name use_live (bool): Try to get use latest data from web rather than file in package. Defaults to True. exception (Optional[ExceptionUpperBound]): An exception to raise if country not found. Defaults to None. Returns: Optional[str]: Country name
juraj-google-style
def verify(value, msg): return (bool(value) and converts_to_proto(value, msg) and successfuly_encodes(msg) and special_typechecking(value, msg))
C-style validator Keyword arguments: value -- dictionary to validate (required) msg -- the protobuf schema to validate against (required) Returns: True: If valid input False: If invalid input
codesearchnet
def _string_to_int(x, vocab): def _map_to_int(x): table = lookup.index_table_from_tensor( vocab, default_value=len(vocab)) return table.lookup(x) return _map_to_int(x)
Given a vocabulary and a string tensor `x`, maps `x` into an int tensor. Args: x: A `Column` representing a string value. vocab: list of strings. Returns: A `Column` where each string value is mapped to an integer representing its index in the vocab. Out of vocab values are mapped to len(vocab).
juraj-google-style
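A present-day TF 2.x sketch of the same vocabulary-to-index lookup in eager mode; the original snippet targets the older contrib-era lookup API, so the table construction below is an equivalent rather than the library's code.

import tensorflow as tf

vocab = ['red', 'green', 'blue']
init = tf.lookup.KeyValueTensorInitializer(
    keys=tf.constant(vocab),
    values=tf.range(len(vocab), dtype=tf.int64))
# Out-of-vocabulary strings map to len(vocab), mirroring default_value above.
table = tf.lookup.StaticHashTable(init, default_value=len(vocab))

x = tf.constant(['green', 'purple'])
print(table.lookup(x).numpy())  # [1 3]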
def match_rules_context_multi(tree, rules, parent_context={}): all_contexts = [] for template, match_rules in rules.items(): context = parent_context.copy() if match_template(tree, template, context): child_contextss = [] if not match_rules: all_contexts += [context] else: for key, child_rules in match_rules.items(): child_contextss.append(match_rules_context_multi(context[key], child_rules, context)) all_contexts += cross_context(child_contextss) return all_contexts
Recursively matches a Tree structure with rules and returns the matched contexts

Args:
    tree (Tree): Parsed tree structure
    rules (dict): See match_rules
    parent_context (dict): Context of parent call

Returns:
    list: A list of context dicts of matched rules (empty list if no match)
juraj-google-style
def _shuffle_single(fname, extra_fn=None): records = read_records(fname) random.shuffle(records) if extra_fn is not None: records = extra_fn(records) out_fname = fname.replace(UNSHUFFLED_SUFFIX, "") write_records(records, out_fname) tf.gfile.Remove(fname)
Shuffle a single file of records. Args: fname: a string extra_fn: an optional function from list of TFRecords to list of TFRecords to be called after shuffling.
juraj-google-style
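The same shuffle-then-rewrite pattern applied to plain text lines, assuming UNSHUFFLED_SUFFIX marks the temporary file name; the TFRecord read/write helpers are left out.

import os
import random

UNSHUFFLED_SUFFIX = '-unshuffled'  # assumed suffix, matching the rename step

def shuffle_single(fname, extra_fn=None):
    with open(fname) as handle:
        records = handle.readlines()
    random.shuffle(records)
    if extra_fn is not None:
        records = extra_fn(records)
    out_fname = fname.replace(UNSHUFFLED_SUFFIX, '')
    with open(out_fname, 'w') as handle:
        handle.writelines(records)
    os.remove(fname)  # drop the unshuffled temporary, as tf.gfile.Remove does above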