Columns: code — string (lengths 20 to 4.93k) | docstring — string (lengths 33 to 1.27k) | source — string (3 classes: codesearchnet, github-repos, juraj-google-style)
def camelcase(text, acronyms=None):
    words, _case, _sep = case_parse.parse_case(text, acronyms)
    if words:
        words[0] = words[0].lower()
    return ''.join(words)
Return text in camelCase style. Args: text: input string to convert case acronyms: a list of acronyms to detect >>> camelcase("hello world") 'helloWorld' >>> camelcase("HELLO_HTML_WORLD", ["HTML"]) 'helloHTMLWorld'
codesearchnet
def _validate_or_infer_batch_size(self, batch_size, steps, x): if isinstance(x, (data_types.DatasetV1, data_types.DatasetV2, data_utils.Sequence)) or tf_inspect.isgenerator(x): if batch_size is not None: raise ValueError('The `batch_size` argument must not be specified for the given input type. Received input: {}, batch_size: {}'.format(x, batch_size)) return layers = self._flatten_layers(include_self=False, recursive=False) first_layer = next(layers, None) if first_layer: static_batch_size = training_utils.get_static_batch_size(first_layer) if static_batch_size is not None: if self._distribution_strategy and distributed_training_utils.global_batch_size_supported(self._distribution_strategy): num_splits_for_ds = self._distribution_strategy.num_replicas_in_sync else: num_splits_for_ds = 1 if batch_size is not None: if batch_size % num_splits_for_ds != 0: raise ValueError('The `batch_size` argument ({}) must be divisible the by number of replicas ({})'.format(batch_size, num_splits_for_ds)) per_replica_batch_size = batch_size if per_replica_batch_size != static_batch_size: raise ValueError('The `batch_size` argument value {} is incompatible with the specified batch size of your Input Layer: {}'.format(per_replica_batch_size, static_batch_size)) if isinstance(x, (data_types.DatasetV2, iterator_ops.Iterator, iterator_ops.IteratorBase)): ds_batch_size = tensor_shape.Dimension(nest.flatten(dataset_ops.get_legacy_output_shapes(x))[0][0]).value if ds_batch_size is not None: if ds_batch_size % num_splits_for_ds != 0: raise ValueError('The batch output shape of your `Dataset` {} cannot be divisible by number of replicas {}'.format(ds_batch_size, num_splits_for_ds)) ds_per_replica_batch_size = ds_batch_size if ds_per_replica_batch_size != static_batch_size: raise ValueError('The batch output shape of your `Dataset` is {}, which is incompatible with the specified batch size of your Input Layer: {}'.format(ds_per_replica_batch_size, static_batch_size)) if steps is None: batch_size = static_batch_size * num_splits_for_ds if batch_size is None and steps is None: batch_size = 32 return batch_size
Validates that the `batch_size` provided is consistent with InputLayer. It's possible that the user specified a static batch size in their InputLayer. If so, this method checks the provided `batch_size` and `x` arguments are consistent with this static batch size. Also, if `batch_size` is `None`, this method will attempt to infer the batch size from the static batch size of the InputLayer. Lastly, ValueError will be raised if `x` is a tf.data.Dataset and `batch_size` is specified as we expect users to provide batched datasets. Args: batch_size: The batch_size provided as an argument to fit/evaluate/predict. steps: The steps provided as an argument to fit/evaluate/predict. x: The data passed as `x` to fit/evaluate/predict. Returns: The validated batch_size, auto-inferred from the first layer if not provided.
github-repos
def send_fetches(self): futures = [] for (node_id, request) in six.iteritems(self._create_fetch_requests()): if self._client.ready(node_id): log.debug('Sending FetchRequest to node %s', node_id) future = self._client.send(node_id, request) future.add_callback(self._handle_fetch_response, request, time.time()) future.add_errback(log.error, 'Fetch to node %s failed: %s', node_id) futures.append(future) self._fetch_futures.extend(futures) self._clean_done_fetch_futures() return futures
Send FetchRequests for all assigned partitions that do not already have an in-flight fetch or pending fetch data. Returns: List of Futures: each future resolves to a FetchResponse
codesearchnet
def moma(self, wt_fluxes):
    reactions = set(self._adjustment_reactions())
    v = self._v
    obj_expr = 0
    for f_reaction, f_value in iteritems(wt_fluxes):
        if f_reaction in reactions:
            obj_expr += (f_value - v[f_reaction]) ** 2
    self._prob.set_objective(obj_expr)
    self._solve(lp.ObjectiveSense.Minimize)
Minimize the redistribution of fluxes using Euclidean distance. Minimizing the redistribution of fluxes using a quadratic objective function. The distance is minimized by minimizing the sum of (wild type - knockout)^2. Args: wt_fluxes: Dictionary of all the wild type fluxes that will be used to find a close MOMA solution. Fluxes can be experimental or calculated using :meth:`get_fba_flux(objective)`.
codesearchnet
def keypath(self, key):
    return fs.path(self.path, self.escape_key(key))
Get the filesystem path for a key. Arguments: key: Key. Returns: str: Absolute path.
juraj-google-style
def _MergeEntities(self, a, b): distance = transitfeed.ApproximateDistanceBetweenStops(a, b) if distance > self.largest_stop_distance: raise MergeError("Stops are too far apart: %.1fm " "(largest_stop_distance is %.1fm)." % (distance, self.largest_stop_distance)) scheme = {'stop_id': self._MergeIdentical, 'stop_name': self._MergeIdenticalCaseInsensitive, 'zone_id': self._MergeIdentical, 'location_type': self._MergeIdentical} return self._SchemedMerge(scheme, a, b)
Merges two stops. For the stops to be merged, they must have: - the same stop_id - the same stop_name (case insensitive) - the same zone_id - locations less than largest_stop_distance apart The other attributes can have arbitrary changes. The merged attributes are taken from the new stop. Args: a: The first stop. b: The second stop. Returns: The merged stop. Raises: MergeError: The stops could not be merged.
juraj-google-style
def GetCacheValueByObject(self, vfs_object):
    for identifier, cache_value in iter(self._values.items()):
        if not cache_value:
            raise RuntimeError('Missing cache value.')
        if cache_value.vfs_object == vfs_object:
            return identifier, cache_value
    return None, None
Retrieves the cache value for the cached object. Args: vfs_object (object): VFS object that was cached. Returns: tuple[str, ObjectsCacheValue]: identifier and cache value object or (None, None) if not cached. Raises: RuntimeError: if the cache value is missing.
codesearchnet
def get_box(self, box_key = None, sort_by = None): uri = '/'.join([ self.api_uri, self.boxes_suffix ]) if box_key: uri = '/'.join([ uri, box_key ]) if sort_by: if sort_by in ['creationTimestamp', 'lastUpdatedTimestamp']: uri += self.sort_by_postfix + sort_by else: return requests.codes.bad_request, {'success' : 'False', 'error': 'sortBy needs to be \'creationTimestamp\', or \'lastUpdatedTimestamp\''} return self._req('get', uri)
Gets a list of one/all box objects. Performs a single GET. To go deeper individual boxes need to be polled for their contents. This is a directory for what we could ask for. Args: box_key key for the target box (default: None i.e. ALL) sort_by in desc order by 'creationTimestamp' or 'lastUpdatedTimestamp' returns (status code for the GET request, dict of box or a list thereof)
juraj-google-style
def create_tasks(self, wfk_file, scr_input): assert len(self) == 0 wfk_file = self.wfk_file = os.path.abspath(wfk_file) shell_manager = self.manager.to_shell_manager(mpi_procs=1) w = Work(workdir=self.tmpdir.path_join("_qptdm_run"), manager=shell_manager) fake_input = scr_input.deepcopy() fake_task = w.register(fake_input) w.allocate() w.build() fake_task.inlink_file(wfk_file) fake_task.set_vars({"nqptdm": -1}) fake_task.start_and_wait() with NetcdfReader(fake_task.outdir.has_abiext("qptdms.nc")) as reader: qpoints = reader.read_value("reduced_coordinates_of_kpoints") for qpoint in qpoints: qptdm_input = scr_input.deepcopy() qptdm_input.set_vars(nqptdm=1, qptdm=qpoint) new_task = self.register_scr_task(qptdm_input, manager=self.manager) if self.flow.gc is not None: new_task.set_gc(self.flow.gc) self.allocate()
Create the SCR tasks and register them in self. Args: wfk_file: Path to the ABINIT WFK file to use for the computation of the screening. scr_input: Input for the screening calculation.
juraj-google-style
def _ScanNode(self, scan_context, scan_node, auto_recurse=True): if (not scan_context): raise ValueError('Invalid scan context.') if (not scan_node): raise ValueError('Invalid scan node.') scan_path_spec = scan_node.path_spec system_level_file_entry = None if scan_node.IsSystemLevel(): system_level_file_entry = resolver.Resolver.OpenFileEntry(scan_node.path_spec, resolver_context=self._resolver_context) if (system_level_file_entry is None): raise errors.BackEndError('Unable to open file entry.') if system_level_file_entry.IsDirectory(): scan_context.SetSourceType(definitions.SOURCE_TYPE_DIRECTORY) return source_path_spec = self.ScanForStorageMediaImage(scan_node.path_spec) if source_path_spec: scan_node.scanned = True scan_node = scan_context.AddScanNode(source_path_spec, scan_node) if system_level_file_entry.IsDevice(): source_type = definitions.SOURCE_TYPE_STORAGE_MEDIA_DEVICE else: source_type = definitions.SOURCE_TYPE_STORAGE_MEDIA_IMAGE scan_context.SetSourceType(source_type) if (not auto_recurse): return source_path_spec = None while True: if scan_node.IsFileSystem(): break if scan_node.SupportsEncryption(): self._ScanEncryptedVolumeNode(scan_context, scan_node) if scan_context.IsLockedScanNode(scan_node.path_spec): break source_path_spec = self.ScanForVolumeSystem(scan_node.path_spec) if (not source_path_spec): break if (not scan_context.HasScanNode(source_path_spec)): scan_node.scanned = True scan_node = scan_context.AddScanNode(source_path_spec, scan_node) if (system_level_file_entry and system_level_file_entry.IsDevice()): source_type = definitions.SOURCE_TYPE_STORAGE_MEDIA_DEVICE else: source_type = definitions.SOURCE_TYPE_STORAGE_MEDIA_IMAGE scan_context.SetSourceType(source_type) if scan_node.IsVolumeSystemRoot(): self._ScanVolumeSystemRootNode(scan_context, scan_node, auto_recurse=auto_recurse) return if ((not auto_recurse) and scan_context.updated): return if (not scan_context.updated): break if scan_node.IsVolumeSystemRoot(): pass elif scan_context.IsLockedScanNode(scan_node.path_spec): pass elif ((scan_node.type_indicator == definitions.TYPE_INDICATOR_VSHADOW) and auto_recurse and (scan_node.path_spec != scan_path_spec)): pass elif (not scan_node.IsFileSystem()): source_path_spec = self.ScanForFileSystem(scan_node.path_spec) if (not source_path_spec): if (scan_node.path_spec.type_indicator == definitions.TYPE_INDICATOR_RAW): scan_node = scan_context.RemoveScanNode(scan_node.path_spec) scan_context.source_type = definitions.SOURCE_TYPE_FILE else: scan_context.SetSourceType(definitions.SOURCE_TYPE_FILE) elif (not scan_context.HasScanNode(source_path_spec)): scan_node.scanned = True scan_node = scan_context.AddScanNode(source_path_spec, scan_node) if (system_level_file_entry and system_level_file_entry.IsDevice()): source_type = definitions.SOURCE_TYPE_STORAGE_MEDIA_DEVICE else: source_type = definitions.SOURCE_TYPE_STORAGE_MEDIA_IMAGE scan_context.SetSourceType(source_type) if (not scan_node.scanned): scan_node.scanned = True
Scans a node for supported formats. Args: scan_context (SourceScannerContext): source scanner context. scan_node (SourceScanNode): source scan node. auto_recurse (Optional[bool]): True if the scan should automatically recurse as far as possible. Raises: BackEndError: if the source cannot be scanned. ValueError: if the scan context or scan node is invalid.
codesearchnet
def get(self, request): code = request.GET.get("code") if not code: return render(request, 'django_auth_adfs/login_failed.html', { 'error_message': "No authorization code was provided.", }, status=400) redirect_to = request.GET.get("state") user = authenticate(request=request, authorization_code=code) if user is not None: if user.is_active: login(request, user) if redirect_to: redirect_to = base64.urlsafe_b64decode(redirect_to.encode()).decode() else: redirect_to = django_settings.LOGIN_REDIRECT_URL url_is_safe = is_safe_url( url=redirect_to, allowed_hosts=[request.get_host()], require_https=request.is_secure(), ) redirect_to = redirect_to if url_is_safe else '/' return redirect(redirect_to) else: return render(request, 'django_auth_adfs/login_failed.html', { 'error_message': "Your account is disabled.", }, status=403) else: return render(request, 'django_auth_adfs/login_failed.html', { 'error_message': "Login failed.", }, status=401)
Handles the redirect from ADFS to our site. We try to process the passed authorization code and login the user. Args: request (django.http.request.HttpRequest): A Django Request object
juraj-google-style
def get_reserved_vlan_range(self, id_or_uri):
    uri = self._client.build_uri(id_or_uri) + "/reserved-vlan-range"
    return self._client.get(uri)
Gets the reserved vlan ID range for the fabric. Note: This method is only available on HPE Synergy. Args: id_or_uri: ID or URI of fabric. Returns: dict: vlan-pool
juraj-google-style
def performance_curve(self):
    pod = self.contingency_tables['TP'] / (self.contingency_tables['TP'] + self.contingency_tables['FN'])
    far = self.contingency_tables['FP'] / (self.contingency_tables['FP'] + self.contingency_tables['TP'])
    far[(self.contingency_tables['FP'] + self.contingency_tables['TP']) == 0] = np.nan
    return pd.DataFrame({'POD': pod, 'FAR': far, 'Thresholds': self.thresholds},
                        columns=['POD', 'FAR', 'Thresholds'])
Calculate the Probability of Detection and False Alarm Ratio in order to output a performance diagram. Returns: pandas.DataFrame containing POD, FAR, and probability thresholds.
codesearchnet
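As a worked illustration of the POD/FAR arithmetic above, here is a standalone sketch with made-up contingency counts (the real method reads them from self.contingency_tables; the numbers and thresholds below are assumptions for illustration only):

import numpy as np
import pandas as pd

# Hypothetical contingency counts for three probability thresholds.
tables = pd.DataFrame({'TP': [30, 20, 10], 'FN': [10, 20, 30], 'FP': [20, 10, 5]})
thresholds = [0.1, 0.5, 0.9]

pod = tables['TP'] / (tables['TP'] + tables['FN'])   # 0.75, 0.50, 0.25
far = tables['FP'] / (tables['FP'] + tables['TP'])   # 0.40, 0.33..., 0.33...
far[(tables['FP'] + tables['TP']) == 0] = np.nan     # guard against thresholds with no positive forecasts

curve = pd.DataFrame({'POD': pod, 'FAR': far, 'Thresholds': thresholds},
                     columns=['POD', 'FAR', 'Thresholds'])
print(curve)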
def __init__(self, pos=None, chrom=None, separate_chroms=False):
    assert pos is not None, 'Slider:: set pos'
    assert chrom is not None, 'Slider:: set chrom'
    self.pos = pos
    self.chrom = chrom
    self.separate_chroms = separate_chroms
    self.windows = None
    self.info = {}
Constructor Args: pos: position chrom: chromosome
juraj-google-style
async def addNodes(self, nodedefs): for (formname, formvalu), forminfo in nodedefs: props = forminfo.get('props') if props is not None: props.pop('.created', None) node = await self.addNode(formname, formvalu, props=props) if node is not None: tags = forminfo.get('tags') if tags is not None: for tag, asof in tags.items(): await node.addTag(tag, valu=asof) yield node
Add/merge nodes in bulk. The addNodes API is designed for bulk adds which will also set properties and add tags to existing nodes. Nodes are specified as a list of the following tuples: ( (form, valu), {'props':{}, 'tags':{}}) Args: nodedefs (list): A list of nodedef tuples. Returns: (list): A list of xact messages.
juraj-google-style
def chartspan(cls, start, end):
    return cls(Lnk.CHARTSPAN, (int(start), int(end)))
Create a Lnk object for a chart span. Args: start: the initial chart vertex end: the final chart vertex
juraj-google-style
def get_enum_from_canonical_name(self, enum_name):
    return next((e for e in self.enums if e.canonical_name == enum_name), None)
Return an enum from a canonical name Args: enum_name (str): canonical name of the enum Returns: Enum
juraj-google-style
def get_cartesian(self): coords = ['x', 'y', 'z'] eq_sets = self._metadata['eq']['eq_sets'] sym_ops = self._metadata['eq']['sym_ops'] frame = pd.DataFrame(index=[i for v in eq_sets.values() for i in v], columns=['atom', 'x', 'y', 'z'], dtype='f8') frame['atom'] = pd.Series( {i: self.loc[k, 'atom'] for k, v in eq_sets.items() for i in v}) frame.loc[self.index, coords] = self.loc[:, coords] for i in eq_sets: for j in eq_sets[i]: frame.loc[j, coords] = np.dot(sym_ops[i][j], frame.loc[i, coords]) return Cartesian(frame)
Return a :class:`~Cartesian` where all members of a symmetry equivalence class are inserted back in. Args: None Returns: Cartesian: A new cartesian instance.
juraj-google-style
def inc(self, key, count=1):
    if count < 0:
        raise ValueError('Counter must be monotonically increasing.')
    if not _enabled:
        return
    self._counts[key] = self._counts.get(key, 0) + count
    self._total += count
Increment the metric by the specified amount. Args: key: A string to be used as the key. count: The amount to increment by (non-negative integer). Raises: ValueError: if the count is less than 0.
github-repos
def maybe_append_oov_vectors(embeddings, num_oov_buckets):
    num_embeddings = np.shape(embeddings)[0]
    embedding_dim = np.shape(embeddings)[1]
    embeddings.resize(
        [num_embeddings + num_oov_buckets, embedding_dim], refcheck=False)
Adds zero vectors for oov buckets if num_oov_buckets > 0. Since we are assigning zero vectors, adding more that one oov bucket is only meaningful if we perform fine-tuning. Args: embeddings: Embeddings to extend. num_oov_buckets: Number of OOV buckets in the extended embedding.
juraj-google-style
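A small sketch of the in-place resize this helper relies on: ndarray.resize with refcheck=False grows the array and fills the new rows with zeros (the embedding values below are made up):

import numpy as np

embeddings = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])  # 3 tokens, dim 2
num_oov_buckets = 2

num_embeddings, embedding_dim = embeddings.shape
embeddings.resize([num_embeddings + num_oov_buckets, embedding_dim], refcheck=False)
print(embeddings.shape)  # (5, 2); rows 3 and 4 are all zeros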
def make_action(self, fn, schema_parser, meta): validate_input = validate_output = None if ('$input' in meta): with MarkKey('$input'): validate_input = schema_parser.parse(meta['$input']) if ('$output' in meta): with MarkKey('$output'): validate_output = schema_parser.parse(meta['$output']) def action(data): if validate_input: try: data = validate_input(data) except Invalid as ex: return abort(400, 'InvalidData', str(ex)) if isinstance(data, dict): rv = fn(**data) else: rv = fn(data) else: rv = fn() (rv, status, headers) = unpack(rv) if validate_output: try: rv = validate_output(rv) except Invalid as ex: return abort(500, 'ServerError', str(ex)) return (rv, status, headers) return action
Make resource's method an action Validate input, output by schema in meta. If no input schema, call fn without params. If no output schema, will not validate return value. Args: fn: resource's method schema_parser: for parsing schema in meta meta: meta data of the action
codesearchnet
async def evaluate_model(eval_model_path, target_model_path, sgf_dir, seed): lines = await run( 'bazel-bin/cc/eval', '--flagfile={}'.format(os.path.join(FLAGS.flags_dir, 'eval.flags')), '--model={}'.format(eval_model_path), '--model_two={}'.format(target_model_path), '--sgf_dir={}'.format(sgf_dir), '--seed={}'.format(seed)) result = '\n'.join(lines[-7:]) logging.info(result) eval_stats, target_stats = parse_win_stats_table(result, 2) num_games = eval_stats.total_wins + target_stats.total_wins win_rate = eval_stats.total_wins / num_games logging.info('Win rate %s vs %s: %.3f', eval_stats.model_name, target_stats.model_name, win_rate) return win_rate
Evaluate one model against a target. Args: eval_model_path: the path to the model to evaluate. target_model_path: the path to the model to compare to. sgf_dir: directory path to write SGF output to. seed: random seed to use when running eval. Returns: The win-rate of eval_model against target_model in the range [0, 1].
juraj-google-style
def list_groups(name): if six.PY2: name = _to_unicode(name) ugrp = set() try: user = info(name)['groups'] except KeyError: return False for group in user: ugrp.add(group.strip(' *')) return sorted(list(ugrp))
Return a list of groups the named user belongs to Args: name (str): The user name for which to list groups Returns: list: A list of groups to which the user belongs CLI Example: .. code-block:: bash salt '*' user.list_groups foo
codesearchnet
def getConfigPath(configFileName = None): paths = {} applicationPath = "./" if sys.platform == 'win32': applicationPath = os.path.expanduser(os.path.join('~\\', 'OSRFramework')) else: applicationPath = os.path.expanduser(os.path.join('~/', '.config', 'OSRFramework')) paths = { "appPath": applicationPath, "appPathData": os.path.join(applicationPath, "data"), "appPathDefaults": os.path.join(applicationPath, "default"), "appPathPlugins": os.path.join(applicationPath, "plugins"), "appPathWrappers": os.path.join(applicationPath, "plugins", "wrappers"), "appPathPatterns": os.path.join(applicationPath, "plugins", "patterns"), } for path in paths.keys(): if not os.path.exists(paths[path]): os.makedirs(paths[path]) return paths
Auxiliary function to get the configuration paths depending on the system. Args: ----- configFileName: name of the configuration file (currently unused). Returns: -------- A dictionary with the following keys: appPath, appPathData, appPathDefaults, appPathPlugins, appPathWrappers, appPathPatterns.
juraj-google-style
def macro_state(self, micro_state):
    assert len(micro_state) == len(self.micro_indices)
    reindexed = self.reindex()
    return utils.state_of(reindexed.output_indices, micro_state)
Compute the macro-state of this blackbox. This is just the state of the blackbox's output indices. Args: micro_state (tuple[int]): The state of the micro-elements in the blackbox. Returns: tuple[int]: The state of the output indices.
codesearchnet
def get_historical_data(nmr_problems):
    observations = np.tile(np.array([[10, 256, 202, 97]]), (nmr_problems, 1))
    nmr_tanks_ground_truth = np.ones((nmr_problems,)) * 276
    return observations, nmr_tanks_ground_truth
Get the historical tank data. Args: nmr_problems (int): the number of problems Returns: tuple: (observations, nmr_tanks_ground_truth)
juraj-google-style
def delete(self, name, action, seqno):
    return self.configure('no route-map %s %s %s' % (name, action, seqno))
Deletes the routemap from the node Note: This method will attempt to delete the routemap from the nodes operational config. If the routemap does not exist then this method will not perform any changes but still return True Args: name (string): The full name of the routemap. action (string): The action to take for this routemap clause. seqno (integer): The sequence number for the routemap clause. Returns: True if the routemap could be deleted otherwise False (see Node)
codesearchnet
def validate_activation(classifier_activation, weights):
    if weights is None:
        return
    classifier_activation = activations.get(classifier_activation)
    if classifier_activation not in {activations.get('softmax'), activations.get(None)}:
        raise ValueError(f'Only `None` and `softmax` activations are allowed for the `classifier_activation` argument when using pretrained weights, with `include_top=True`; Received: classifier_activation={classifier_activation}')
Validates that the `classifier_activation` is compatible with the weights. Args: classifier_activation: str or callable activation function weights: The pretrained weights to load. Raises: ValueError: if an activation other than `None` or `softmax` is used with pretrained weights.
github-repos
def generate_hyperband_schedule(self, R, eta):
    schedule = []
    s_max = int(math.floor(math.log(R, eta)))
    for s in range(0, s_max + 1):
        n = math.ceil(int((s_max + 1) / (s + 1)) * eta ** s)
        r = R * eta ** (-s)
        bracket = []
        for i in range(0, s + 1):
            n_i = int(math.floor(n * eta ** (-i)))
            r_i = int(r * eta ** i)
            bracket.append((n_i, r_i))
        schedule = [bracket] + schedule
    return schedule
Generate hyperband schedule according to the paper. Args: R: maximum resources per config. eta: proportion of configurations to discard per iteration of successive halving. Returns: hyperband schedule, which is represented as a list of brackets, where each bracket contains a list of (num configurations, num resources to use per configuration). See the paper for more details.
juraj-google-style
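For concreteness, a standalone re-run of the same arithmetic with the assumed example values R=9 and eta=3 (not from the dataset row) produces three brackets:

import math

def hyperband_schedule(R, eta):
    # Same loop as generate_hyperband_schedule, without the surrounding class.
    schedule = []
    s_max = int(math.floor(math.log(R, eta)))
    for s in range(s_max + 1):
        n = math.ceil(int((s_max + 1) / (s + 1)) * eta ** s)
        r = R * eta ** (-s)
        bracket = [(int(math.floor(n * eta ** (-i))), int(r * eta ** i)) for i in range(s + 1)]
        schedule = [bracket] + schedule
    return schedule

print(hyperband_schedule(9, 3))
# expected brackets: [(9, 1), (3, 3), (1, 9)], [(3, 3), (1, 9)], [(3, 9)]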
def _infer_output_coder(self, input_type=None, input_coder=None):
    return None
Returns the output coder to use for output of this transform. The Coder returned here should not be wrapped in a WindowedValueCoder wrapper. Args: input_type: An instance of an allowed built-in type, a custom class, or a typehints.TypeConstraint for the input type, or None if not available. input_coder: Coder object for encoding input to this PTransform, or None if not available. Returns: Coder object for encoding output of this PTransform or None if unknown.
github-repos
def _ConvertInteger(value):
    if isinstance(value, float) and not value.is_integer():
        raise ParseError("Couldn't parse integer: {0}.".format(value))
    if isinstance(value, six.text_type) and value.find(' ') != -1:
        raise ParseError('Couldn\'t parse integer: "{0}".'.format(value))
    return int(value)
Convert an integer. Args: value: A scalar value to convert. Returns: The integer value. Raises: ParseError: If an integer couldn't be consumed.
codesearchnet
def is_on_curve(self, point):
    X, Y = point.X, point.Y
    return (pow(Y, 2, self.P) - pow(X, 3, self.P) - self.a * X - self.b) % self.P == 0
Checks whether a point is on the curve. Args: point (AffinePoint): Point to be checked. Returns: bool: True if point is on the curve, False otherwise.
codesearchnet
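The same congruence can be checked by hand on a toy curve. A self-contained sketch, replacing the AffinePoint/curve objects with plain integers and using the assumed small curve y^2 = x^3 + 7 over F_17:

# Toy short-Weierstrass curve y^2 = x^3 + a*x + b over F_P.
P, a, b = 17, 0, 7

def is_on_curve(X, Y):
    # Mirrors the expression in the method above, with plain ints.
    return (pow(Y, 2, P) - pow(X, 3, P) - a * X - b) % P == 0

print(is_on_curve(1, 5))   # True:  5^2 = 25 = 8 (mod 17) and 1^3 + 7 = 8
print(is_on_curve(2, 5))   # False: 2^3 + 7 = 15 != 8 (mod 17)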
def _DownloadScript(self, url, dest_dir):
    # gs://<bucket>/<object> URLs require an authenticated download.
    if url.startswith('gs://'):
        url = re.sub('^gs://', 'https://storage.googleapis.com/', url)
        return self._DownloadAuthUrl(url, dest_dir)
    header = 'http[s]?://'
    domain = 'storage\\.googleapis\\.com'
    bucket = '(?P<bucket>[a-z0-9][-_.a-z0-9]*[a-z0-9])'
    obj = '(?P<obj>[^\\*\\?]+)'
    # http(s)://<bucket>.storage.googleapis.com/<object>
    gs_regex = re.compile('\\A%s%s\\.%s/%s\\Z' % (header, bucket, domain, obj))
    match = gs_regex.match(url)
    if match:
        return self._DownloadAuthUrl(url, dest_dir)
    # http(s)://storage.googleapis.com/<bucket>/<object>
    gs_regex = re.compile('\\A%s(commondata)?%s/%s/%s\\Z' % (header, domain, bucket, obj))
    match = gs_regex.match(url)
    if match:
        return self._DownloadAuthUrl(url, dest_dir)
    # Anything else is downloaded without authentication.
    return self._DownloadUrl(url, dest_dir)
Download the contents of the URL to the destination. Args: url: string, the URL to download. dest_dir: string, the path to a directory for storing metadata scripts. Returns: string, the path to the file storing the metadata script.
codesearchnet
def reload_napps(self, napps=None): if (napps is None): napps = [] api = self._config.get('kytos', 'api') endpoint = os.path.join(api, 'api', 'kytos', 'core', 'reload', 'all') response = self.make_request(endpoint) for napp in napps: api = self._config.get('kytos', 'api') endpoint = os.path.join(api, 'api', 'kytos', 'core', 'reload', napp[0], napp[1]) response = self.make_request(endpoint) if (response.status_code != 200): raise KytosException('Error reloading the napp: Module not founded or could not be imported') return response.content
Reload a specific NApp or all Napps. Args: napp (list): NApp list to be reload. Raises: requests.HTTPError: When there's a server error.
codesearchnet
def lookup_id(self, group):
    filter = ['(cn={})'.format(group), '(objectclass=posixGroup)']
    results = self.client.search(filter, ['gidNumber'])
    if len(results) < 1:
        raise ldap_tools.exceptions.NoGroupsFound('No Groups Returned by LDAP')
    elif len(results) > 1:
        raise ldap_tools.exceptions.TooManyResults(
            'Multiple groups found. Please narrow your search.')
    else:
        return results[0].gidNumber.value
Lookup GID for the given group. Args: group: Name of group whose ID needs to be looked up Returns: A bytestring representation of the group ID (gid) for the group specified Raises: ldap_tools.exceptions.NoGroupsFound: No Groups were returned by LDAP ldap_tools.exceptions.TooManyResults: More than one group was returned by LDAP
codesearchnet
async def on_message(message): server = message.server author = message.author channel = message.channel content = message.content data = datatools.get_data() if not data["discord"]["servers"][server.id][_data.modulename]["activated"]: return if server is not None and author != channel.server.me: prefix = data["discord"]["servers"][server.id]["prefix"] if content.startswith(prefix): package = content.split(" ") command = package[0][len(prefix):] if command == 'gamedeals': await client.send_typing(channel) posts = api_reddit.get_top10() if posts: for post in posts: embed = ui_embed.success(channel, post) await embed.send() else: embed = ui_embed.no_results(channel) await embed.send()
The on_message event handler for this module Args: message (discord.Message): Input message
juraj-google-style
def _check_obj_properties(self, pub, name='pub'):
    if not hasattr(pub, 'indexes'):
        raise InvalidType("`%s` doesn't have .indexes property!" % name)
    if not pub.indexes:
        raise InvalidType('`%s.indexes` is not set!' % name)
    if not hasattr(pub, 'project_key'):
        raise InvalidType("`%s` doesn't have .project_key property!" % name)
    if not pub.project_key:
        raise InvalidType('`%s.project_key` is not set!' % name)
Make sure, that `pub` has the right interface. Args: pub (obj): Instance which will be checked. name (str): Name of the instance. Used in exception. Default `pub`. Raises: InvalidType: When the `pub` is not instance of `obj_type`.
codesearchnet
def remove_option(self, section, option):
    try:
        section = self.__getitem__(section)
    except KeyError:
        raise NoSectionError(section) from None
    option = self.optionxform(option)
    existed = option in section.options()
    if existed:
        del section[option]
    return existed
Remove an option. Args: section (str): section name option (str): option name Returns: bool: whether the option was actually removed
juraj-google-style
def interruptWrite(self, endpoint, buffer, timeout=100):
    return self.dev.write(endpoint, buffer, timeout)
r"""Perform a interrupt write request to the endpoint specified. Arguments: endpoint: endpoint number. buffer: sequence data buffer to write. This parameter can be any sequence type. timeout: operation timeout in milliseconds. (default: 100) Returns the number of bytes written.
juraj-google-style
def set_record_attn(self, record_attn):
    def _should_record_attn(layer_idx):
        if isinstance(record_attn, bool):
            return record_attn
        return layer_idx in record_attn

    for i, layer in enumerate(self._attn_mods):
        layer.attn.record_attn = _should_record_attn(i)
    if not record_attn:
        self.saved_attn_weights = []
Makes forward prop dump self-attention softmaxes to self.saved_attn_weights. Args: record_attn (`Union[bool,set]`): Either a set of layer indices indicating which layers to store, or a boolean value indicating Whether to dump all.
github-repos
def set_timezone(self, timezone: str):
    data = {"timezoneId": timezone}
    return self._restCall("home/setTimezone", body=json.dumps(data))
sets the timezone for the AP. e.g. "Europe/Berlin" Args: timezone(str): the new timezone
juraj-google-style
def _get_oauth2_client_id_and_secret(settings_instance): secret_json = getattr(settings_instance, 'GOOGLE_OAUTH2_CLIENT_SECRETS_JSON', None) if secret_json is not None: return _load_client_secrets(secret_json) else: client_id = getattr(settings_instance, "GOOGLE_OAUTH2_CLIENT_ID", None) client_secret = getattr(settings_instance, "GOOGLE_OAUTH2_CLIENT_SECRET", None) if client_id is not None and client_secret is not None: return client_id, client_secret else: raise exceptions.ImproperlyConfigured( "Must specify either GOOGLE_OAUTH2_CLIENT_SECRETS_JSON, or " "both GOOGLE_OAUTH2_CLIENT_ID and " "GOOGLE_OAUTH2_CLIENT_SECRET in settings.py")
Initializes client id and client secret based on the settings. Args: settings_instance: An instance of ``django.conf.settings``. Returns: A 2-tuple, the first item is the client id and the second item is the client secret.
juraj-google-style
def main(argv=None): if (argv is None): argv = sys.argv[1:] parser = build_args() args = parser.parse_args(args=argv) (recipe_name, _ext) = os.path.splitext(os.path.basename(args.recipe)) rm = RecipeManager() rm.add_recipe_folder(os.path.dirname(args.recipe), whitelist=[os.path.basename(args.recipe)]) recipe = rm.get_recipe(recipe_name) if (args.archive is not None): print(('Archiving recipe into %s' % args.archive)) recipe.archive(args.archive) return 0 if args.info: print(recipe) return 0 variables = load_variables(args.define, args.config) success = 0 start_time = time.time() if (args.loop is None): try: recipe.run(variables) success += 1 except IOTileException as exc: print(('Error running recipe: %s' % str(exc))) return 1 else: while True: value = input(('Enter value for loop variable %s (return to stop): ' % args.loop)) if (value == ''): break local_vars = dict(**variables) local_vars[args.loop] = value try: recipe.run(local_vars) success += 1 except IOTileException as exc: print(('--> ERROR processing loop variable %s: %s' % (value, str(exc)))) end_time = time.time() total_time = (end_time - start_time) if (success == 0): per_time = 0.0 else: per_time = (total_time / success) print(('Performed %d runs in %.1f seconds (%.1f seconds / run)' % (success, total_time, per_time))) return 0
Main entry point for iotile-ship recipe runner. This is the iotile-ship command line program. Args: argv (list of str): An optional set of command line parameters. If not passed, these are taken from sys.argv.
codesearchnet
def clean_headers(headers):
    clean = {}
    try:
        for k, v in six.iteritems(headers):
            if not isinstance(k, six.binary_type):
                k = str(k)
            if not isinstance(v, six.binary_type):
                v = str(v)
            clean[_helpers._to_bytes(k)] = _helpers._to_bytes(v)
    except UnicodeEncodeError:
        from oauth2client.client import NonAsciiHeaderError
        raise NonAsciiHeaderError(k, ': ', v)
    return clean
Forces header keys and values to be strings, i.e not unicode. The httplib module just concats the header keys and values in a way that may make the message header a unicode string, which, if it then tries to contatenate to a binary request body may result in a unicode decode error. Args: headers: dict, A dictionary of headers. Returns: The same dictionary but with all the keys converted to strings.
juraj-google-style
def __init__(self, bits: List[int], order: int):
    super().__init__(trainable=False)
    bits = check_bits(bits)
    order = check_order(order)
    indices_list = []
    for i in range(1, order + 1):
        combos = itertools.combinations(range(len(bits)), i)
        indices_list.extend(list(combos))
    self.indices = tf.ragged.stack(indices_list)
    self.num_terms = len(indices_list)
Initializes a Parity layer. Args: bits: Unique labels for the bits on which this distribution is supported. order: Maximum size of bit groups to take the parity of.
github-repos
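To see what the constructor stores, a short sketch of just the index bookkeeping (dropping the TensorFlow ragged stack and the check_* helpers, which are not shown in the row; the bit labels are an assumed example):

import itertools

bits = [10, 11, 12]   # three assumed bit labels
order = 2

indices_list = []
for i in range(1, order + 1):
    indices_list.extend(itertools.combinations(range(len(bits)), i))

print(indices_list)       # [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]
print(len(indices_list))  # 6 parity terms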
def base_name_from_image(image):
    m = re.match("^(.+/)?([^:/]+)(:[^:]+)?$", image)
    algo_name = m.group(2) if m else image
    return algo_name
Extract the base name of the image to use as the 'algorithm name' for the job. Args: image (str): Image name. Returns: str: Algorithm name, as extracted from the image name.
juraj-google-style
def when_connected(self):
    if self._client and not self._client.is_closed:
        return defer.succeed(self._client)
    else:
        return self._client_deferred
Retrieve the currently-connected Protocol, or the next one to connect. Returns: defer.Deferred: A Deferred that fires with a connected :class:`FedoraMessagingProtocolV2` instance. This is similar to the whenConnected method from the Twisted endpoints APIs, which sadly isn't available before Twisted 16.1.0, a version that isn't available in EL7.
codesearchnet
def sam2rnf(args):
    rnftools.mishmash.Source.recode_sam_reads(
        sam_fn=args.sam_fn,
        fastq_rnf_fo=args.fq_fo,
        fai_fo=args.fai_fo,
        genome_id=args.genome_id,
        number_of_read_tuples=10**9,
        simulator_name=args.simulator_name,
        allow_unmapped=args.allow_unmapped,
    )
Convert SAM to RNF-based FASTQ with respect to argparse parameters. Args: args (...): Arguments parsed by argparse
juraj-google-style
def infer_annotation(type_comments): assert type_comments args = {} returns = set() for comment in type_comments: arg_types, return_type = parse_type_comment(comment) for i, arg_type in enumerate(arg_types): args.setdefault(i, set()).add(arg_type) returns.add(return_type) combined_args = [] for i in sorted(args): arg_infos = list(args[i]) kind = argument_kind(arg_infos) if kind is None: raise InferError('Ambiguous argument kinds:\n' + '\n'.join(type_comments)) types = [arg.type for arg in arg_infos] combined = combine_types(types) if str(combined) == 'None': combined = UnionType([ClassType('None'), AnyType()]) if kind != ARG_POS and (len(str(combined)) > 120 or isinstance(combined, UnionType)): combined = AnyType() combined_args.append(Argument(combined, kind)) combined_return = combine_types(returns) return combined_args, combined_return
Given some type comments, return a single inferred signature. Args: type_comments: Strings of form '(arg1, ... argN) -> ret' Returns: Tuple of (argument types and kinds, return type).
juraj-google-style
def obtain_all_bond_lengths(sp1, sp2, default_bl=None): if isinstance(sp1, Element): sp1 = sp1.symbol if isinstance(sp2, Element): sp2 = sp2.symbol syms = tuple(sorted([sp1, sp2])) if (syms in bond_lengths): return bond_lengths[syms].copy() elif (default_bl is not None): return {1: default_bl} else: raise ValueError('No bond data for elements {} - {}'.format(*syms))
Obtain bond lengths for all bond orders from bond length database Args: sp1 (Specie): First specie. sp2 (Specie): Second specie. default_bl: If a particular type of bond does not exist, use this bond length as a default value (bond order = 1). If None, a ValueError will be thrown. Return: A dict mapping bond order to bond length in angstrom
codesearchnet
def _apply_user_agent(headers, user_agent):
    if user_agent is not None:
        if 'user-agent' in headers:
            headers['user-agent'] = user_agent + ' ' + headers['user-agent']
        else:
            headers['user-agent'] = user_agent
    return headers
Adds a user-agent to the headers. Args: headers: dict, request headers to add / modify user agent within. user_agent: str, the user agent to add. Returns: dict, the original headers passed in, but modified if the user agent is not None.
codesearchnet
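Behaviour sketch with assumed header values (the helper simply prepends the supplied agent to any existing one; the agent strings below are made up):

def _apply_user_agent(headers, user_agent):
    # Same logic as the row above, reproduced standalone.
    if user_agent is not None:
        if 'user-agent' in headers:
            headers['user-agent'] = user_agent + ' ' + headers['user-agent']
        else:
            headers['user-agent'] = user_agent
    return headers

print(_apply_user_agent({'user-agent': 'httplib2/0.18'}, 'my-app/1.0'))
# {'user-agent': 'my-app/1.0 httplib2/0.18'}
print(_apply_user_agent({}, None))
# {} -- a None agent leaves the headers untouched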
def _GetIdentifierMappings(self, parser_mediator, cache, database): identifier_mappings = cache.GetResults('SruDbIdMapTable', default_value={}) if (not identifier_mappings): esedb_table = database.get_table_by_name('SruDbIdMapTable') if (not esedb_table): parser_mediator.ProduceExtractionWarning('unable to retrieve table: SruDbIdMapTable') else: identifier_mappings = self._ParseIdentifierMappingsTable(parser_mediator, esedb_table) cache.StoreDictInCache('SruDbIdMapTable', identifier_mappings) return identifier_mappings
Retrieves the identifier mappings from SruDbIdMapTable table. In the SRUM database individual tables contain numeric identifiers for the application ("AppId") and user identifier ("UserId"). A more descriptive string of these values can be found in the SruDbIdMapTable. For example the numeric value of 42 mapping to DiagTrack. This method will cache the mappings of a specific SRUM database. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. cache (ESEDBCache): cache, which contains information about the identifiers stored in the SruDbIdMapTable table. database (pyesedb.file): ESE database. Returns: dict[int, str]: mapping of numeric identifiers to their string representation.
codesearchnet
def referrer_uri(self, value):
    if value == self._defaults['referrerUri'] and 'referrerUri' in self._values:
        del self._values['referrerUri']
    else:
        self._values['referrerUri'] = value
The referrer_uri property. Args: value (string): the property value.
juraj-google-style
def can_generate(cls) -> bool:
    if 'GenerationMixin' in str(cls.prepare_inputs_for_generation) and 'GenerationMixin' in str(cls.generate):
        return False
    return True
Returns whether this model can generate sequences with `.generate()`. Returns: `bool`: Whether this model can generate sequences with `.generate()`.
github-repos
def handle(data_type, data, data_id=None, caller=None): if (not data_id): data_id = data_type if (data_id not in _handlers): _handlers[data_id] = dict([(h.handle, h) for h in handlers.instantiate_for_data_type(data_type, data_id=data_id)]) for handler in list(_handlers[data_id].values()): try: data = handler(data, caller=caller) except Exception as inst: vodka.log.error(("Data handler '%s' failed with error" % handler)) vodka.log.error(traceback.format_exc()) return data
execute all data handlers on the specified data according to data type Args: data_type (str): data type handle data (dict or list): data Kwargs: data_id (str): can be used to differentiate between different data sets of the same data type. If not specified will default to the data type caller (object): if specified, holds the object or function that is trying to handle data Returns: dict or list - data after handlers have been executed on it
codesearchnet
def get_context_from_cmdln(args, desc="Run scriptworker"): context = Context() parser = argparse.ArgumentParser(description=desc) parser.add_argument( "config_path", type=str, nargs="?", default="scriptworker.yaml", help="the path to the config file" ) parsed_args = parser.parse_args(args) context.config, credentials = create_config(config_path=parsed_args.config_path) update_logging_config(context) return context, credentials
Create a Context object from args. Args: args (list): the commandline args. Generally sys.argv Returns: tuple: ``scriptworker.context.Context`` with populated config, and credentials frozendict
juraj-google-style
def get_cert_contents(kwargs): paths = {'certificate': kwargs.get('path_to_certificate'), 'private_key': kwargs.get('path_to_private_key'), 'chain': kwargs.get('path_to_chain')} for (key, value) in paths.items(): if (value is not None): continue path = input(('Path to %s (skip): ' % (key,))) if ((path == 'skip') or (not path.strip())): continue paths[key] = path parameters = {'ServerCertificateName': kwargs.get('cert_name')} for (key, path) in paths.items(): if (not path): continue try: contents = path.read() except AttributeError: with open(utils.full_path(path)) as read_file: contents = read_file.read() if (key == 'certificate'): parameters['CertificateBody'] = contents elif (key == 'private_key'): parameters['PrivateKey'] = contents elif (key == 'chain'): parameters['CertificateChain'] = contents return parameters
Builds parameters with server cert file contents. Args: kwargs(dict): The keyword args passed to ensure_server_cert_exists, optionally containing the paths to the cert, key and chain files. Returns: dict: A dictionary containing the appropriate parameters to supply to upload_server_certificate. An empty dictionary if there is a problem.
codesearchnet
def _check_or_build_spatial_positions(pos, index_dims, batch_size): if pos is None: pos = build_linear_positions(index_dims) pos = pos[None].expand((batch_size,) + pos.shape) pos = torch.reshape(pos, [batch_size, np.prod(index_dims), -1]) elif pos.shape[-1] != len(index_dims): raise ValueError('Spatial features have the wrong number of dimensions.') return pos
Checks or builds spatial position features (x, y, ...). Args: pos (`torch.FloatTensor`): None, or an array of position features. If None, position features are built. Otherwise, their size is checked. index_dims (`List[int]`): An iterable giving the spatial/index size of the data to be featurized. batch_size (`int`): The batch size of the data to be featurized. Returns: `torch.FloatTensor` of shape `(batch_size, prod(index_dims))` an array of position features.
github-repos
def SetDocumentType(self, document_type):
    self._document_type = document_type
    logger.debug('Elasticsearch document type: {0:s}'.format(document_type))
Sets the document type. Args: document_type (str): document type.
codesearchnet
def get_template(self, template_id):
    request = self._get_request()
    return request.get(self.TEMPLATE_GET_URL + template_id)
Gets a Template which includes a list of Accounts that can access it Args: template_id (str): The id of the template to retrieve Returns: A Template object
codesearchnet
def escape_yaml(raw_str: str) -> str:
    escape_list = [char for char in raw_str if char in ['!', '{', '[']]
    if len(escape_list) == 0:
        return raw_str
    str_quotes = '"'
    i_str_quotes = "'"
    if str_quotes in raw_str and str_quotes not in raw_str[1:-1]:
        return raw_str
    if str_quotes in raw_str[1:-1]:
        raw_str = i_str_quotes + raw_str + i_str_quotes
    else:
        raw_str = str_quotes + raw_str + str_quotes
    return raw_str
Shell-Escape a yaml input string. Args: raw_str: The unescaped string.
juraj-google-style
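Usage sketch with assumed inputs, showing when quoting kicks in (assuming escape_yaml from the row above is in scope):

print(escape_yaml('plain value'))     # 'plain value'    -- no !, { or [, returned as-is
print(escape_yaml('{key: value}'))    # '"{key: value}"' -- wrapped in double quotes
print(escape_yaml('say "hi" [now]'))  # wrapped in single quotes, since " already appears inside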
def __init__(self, session_identifier=None):
    super(Task, self).__init__()
    self.aborted = False
    self.completion_time = None
    self.file_entry_type = None
    self.has_retry = False
    self.identifier = '{0:s}'.format(uuid.uuid4().hex)
    self.last_processing_time = None
    self.merge_priority = None
    self.path_spec = None
    self.session_identifier = session_identifier
    self.start_time = int(time.time() * definitions.MICROSECONDS_PER_SECOND)
    self.storage_file_size = None
Initializes a task attribute container. Args: session_identifier (Optional[str]): identifier of the session the task is part of.
juraj-google-style
def get_obj_frm_str(obj_str, **kwargs): obj_str = obj_str.format(**kwargs) args = [] kwargs = {} params = [] if "(" in obj_str: call_args = obj_str[obj_str.find("("):] obj_str = obj_str[:obj_str.find("(")] call_args = call_args[1:-1] if call_args: call_args = call_args.split(",") else: call_args = [] call_args = [arg.strip() for arg in call_args] for arg in call_args: if "=" in arg: parts = arg.split("=") kwargs[parts[0]] = parts[1] else: args.append(arg) if "[" in obj_str: params = obj_str[obj_str.find("["):] obj_str = obj_str[:obj_str.find("[")] params = [part.replace("[", "").replace("]", "") for part in params.split("][")] obj = pydoc.locate(obj_str) if params: for part in params: obj = get_attr(obj, part) if args or kwargs: if kwargs: obj = obj.__call__(*args, **kwargs) else: obj = obj.__call__(*args) return obj
Returns a python object from a python object string args: obj_str: python object path expamle "rdfframework.connections.ConnManager[{param1}]" kwargs: * kwargs used to format the 'obj_str'
juraj-google-style
def _get_outer_context_id(self, graph):
    if hasattr(graph, 'outer_graph') and graph.outer_graph:
        return self._get_context_id(graph.outer_graph)
    else:
        return None
Get the ID of the immediate outer context of the input graph. Args: graph: The graph (context) in question. Returns: If an outer context exists, the immediate outer context name as a string. If such as outer context does not exist (i.e., `graph` is itself outermost), `None`.
github-repos
def write(self, records=None, path=None, fields=None, append=False, gzip=None): if (path is None): if (not self.is_attached()): raise ItsdbError('no path given for detached table') else: path = self.path path = _normalize_table_path(path) (dirpath, name) = os.path.split(path) if (fields is None): fields = self.fields if (records is None): records = iter(self) _write_table(dirpath, name, records, fields, append=append, gzip=gzip, encoding=self.encoding) if (self.is_attached() and (path == _normalize_table_path(self.path))): self.path = _table_filename(path) self._sync_with_file()
Write the table to disk. The basic usage has no arguments and writes the table's data to the attached file. The parameters accommodate a variety of use cases, such as using *fields* to refresh a table to a new schema or *records* and *append* to incrementally build a table. Args: records: an iterable of :class:`Record` objects to write; if `None` the table's existing data is used path: the destination file path; if `None` use the path of the file attached to the table fields (:class:`Relation`): table schema to use for writing, otherwise use the current one append: if `True`, append rather than overwrite gzip: compress with gzip if non-empty Examples: >>> table.write() >>> table.write(results, path='new/path/result')
codesearchnet
def OpenClient(client_id=None, token=None): if (not token): try: token = ApprovalFind(client_id, token=token) except access_control.UnauthorizedAccess as e: logging.debug('No authorization found for access to client: %s', e) try: client = aff4.FACTORY.Open(rdfvalue.RDFURN(client_id), mode='r', token=token) return (client, token) except access_control.UnauthorizedAccess: logging.warning('Unable to find a valid reason for client %s. You may need to request approval.', client_id) return (None, None)
Opens the client, getting potential approval tokens. Args: client_id: The client id that should be opened. token: Token to use to open the client Returns: tuple containing (client, token) objects or (None, None) on if no appropriate aproval tokens were found.
codesearchnet
def check_data_type(self):
    metadata_type = self.column_metadata.get('type')
    if self.type != metadata_type and metadata_type not in self.type:
        raise ValueError("Types of transformer don't match")
Check the type of the transformer and column match. Args: column_metadata(dict): Metadata of the column. Raises a ValueError if the types don't match
codesearchnet
def process_data(data, number_to_keep):
    result = dict()
    if number_to_keep != 0:
        data_temp = dict(Counter(data).most_common(number_to_keep))
        data_temp['rest'] = sum(data.values()) - sum(data_temp.values())
        data = data_temp
    labels = data
    values = np.array([data[key] for key in labels], dtype=float)
    pvalues = values / sum(values)
    for position, label in enumerate(labels):
        result[label] = round(pvalues[position], 5)
    return result
Prepare received data for representation. Args: data (dict): values to represent (ex. {'001' : 130}) number_to_keep (int): number of elements to show individually. Returns: dict: processed data to show.
juraj-google-style
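A quick worked example with assumed counts: with number_to_keep=2 the two most common bitstrings are kept, everything else is folded into 'rest', and the values are normalised to probabilities (assuming process_data from the row above and its imports, collections.Counter and numpy as np, are in scope):

counts = {'000': 10, '001': 30, '010': 60}   # hypothetical measurement counts
print(process_data(counts, 2))
# -> {'010': 0.6, '001': 0.3, 'rest': 0.1}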
def cond(pred, true_fn, false_fn):
    return Cond()(pred, true_fn, false_fn)
Conditionally applies `true_fn` or `false_fn`. Args: pred: Boolean scalar type true_fn: Callable returning the output for the `pred == True` case. false_fn: Callable returning the output for the `pred == False` case. Returns: The output of either `true_fn` or `false_fn` depending on pred.
github-repos
def list_members(self, name, type="USER", recurse=True, max_results=1000): results = self.client.service.getListMembership( name, type, recurse, max_results, self.proxy_id, ) return [item["member"] for item in results]
Look up all the members of a list. Args: name (str): The name of the list type (str): The type of results to return. "USER" to get users, "LIST" to get lists. recurse (bool): Presumably, whether to recurse into member lists when retrieving users. max_results (int): Maximum number of results to return. Returns: list of strings: names of the members of the list
juraj-google-style
def sunrise(self, date=None, zenith=None):
    return (segment.sunrise(date, zenith) for segment in self)
Calculate sunrise times for locations. Args: date (datetime.date): Calculate rise or set for given date zenith (str): Calculate sunrise events, or end of twilight Returns: list of list of datetime.datetime: The time for the sunrise for each point in each segment
juraj-google-style
def prepare_all_data(data_dir, block_pct_tokens_thresh=0.1): gs_blocks_dir = os.path.join(data_dir, GOLD_STANDARD_BLOCKS_DIRNAME) gs_blocks_filenames = get_filenames( gs_blocks_dir, full_path=False, match_regex=re.escape(GOLD_STANDARD_BLOCKS_EXT)) gs_blocks_fileroots = ( re.search(r'(.+)' + re.escape(GOLD_STANDARD_BLOCKS_EXT), gs_blocks_filename).group(1) for gs_blocks_filename in gs_blocks_filenames) return [prepare_data(data_dir, fileroot, block_pct_tokens_thresh) for fileroot in gs_blocks_fileroots]
Prepare data for all HTML + gold standard blocks examples in ``data_dir``. Args: data_dir (str) block_pct_tokens_thresh (float): must be in [0.0, 1.0] Returns: List[Tuple[str, List[float, int, List[str]], List[float, int, List[str]]]] See Also: :func:`prepare_data`
juraj-google-style
def create_channel(cls, address="spanner.googleapis.com:443", credentials=None): grpc_gcp_config = grpc_gcp.api_config_from_text_pb( pkg_resources.resource_string(__name__, _SPANNER_GRPC_CONFIG) ) options = [(grpc_gcp.API_CONFIG_CHANNEL_ARG, grpc_gcp_config)] return google.api_core.grpc_helpers.create_channel( address, credentials=credentials, scopes=cls._OAUTH_SCOPES )
Create and return a gRPC channel object. Args: address (str): The host for the channel to use. credentials (~.Credentials): The authorization credentials to attach to requests. These credentials identify this application to the service. If none are specified, the client will attempt to ascertain the credentials from the environment. Returns: grpc.Channel: A gRPC channel object.
juraj-google-style
def update(self, b):
    hv = self.hashfunc(b)
    a, b = self.permutations
    phv = np.bitwise_and((a * hv + b) % _mersenne_prime, np.uint64(_max_hash))
    self.hashvalues = np.minimum(phv, self.hashvalues)
Update this MinHash with a new value. The value will be hashed using the hash function specified by the `hashfunc` argument in the constructor. Args: b: The value to be hashed using the hash function specified. Example: To update with a new string value (using the default SHA1 hash function, which requires bytes as input): .. code-block:: python minhash = Minhash() minhash.update("new value".encode('utf-8')) We can also use a different hash function, for example, `pyfarmhash`: .. code-block:: python import farmhash def _hash_32(b): return farmhash.hash32(b) minhash = MinHash(hashfunc=_hash_32) minhash.update("new value")
codesearchnet
def with_start_after(self, after_namespace):
    namespace_start = _ord_to_namespace(_namespace_to_ord(after_namespace) + 1)
    return NamespaceRange(namespace_start, self.namespace_end, _app=self.app)
Returns a copy of this NamespaceRange with a new namespace_start. Args: after_namespace: A namespace string. Returns: A NamespaceRange object whose namespace_start is the lexicographically next namespace after the given namespace string. Raises: ValueError: if the NamespaceRange includes only a single namespace.
codesearchnet
def condition_indices(df):
    eigvals = eigenvalues(df)
    cond_idx = np.sqrt(eigvals.max() / eigvals)
    return pd.Series(cond_idx, df.columns, name='Condition index')
Returns a pandas Series with condition indices of the df columns. Args: df: pandas DataFrame with columns to run diagnostics on
codesearchnet
def _GetEventLogProviderKey(self, log_source): table_names = ['event_log_providers'] column_names = ['event_log_provider_key'] condition = 'log_source == "{0:s}"'.format(log_source) values_list = list(self._database_file.GetValues(table_names, column_names, condition)) number_of_values = len(values_list) if (number_of_values == 0): return None if (number_of_values == 1): values = values_list[0] return values['event_log_provider_key'] raise RuntimeError('More than one value found in database.')
Retrieves the Event Log provider key. Args: log_source (str): Event Log source. Returns: str: Event Log provider key or None if not available. Raises: RuntimeError: if more than one value is found in the database.
codesearchnet
def _model_loss(model, inputs, targets, output_loss_metrics=None, sample_weights=None, training=False): total_loss = 0 kwargs = {} if model._expects_training_arg: kwargs['training'] = training if len(inputs) == 1 and (not isinstance(inputs, dict)): inputs = inputs[0] if any((isinstance(input_t, (np.ndarray, float, int)) for input_t in nest.flatten(inputs))): inputs = nest.map_structure(tensor_conversion.convert_to_tensor_v2_with_dispatch, inputs) outs = model(inputs, **kwargs) outs = nest.flatten(outs) if targets: targets = training_utils_v1.cast_if_floating_dtype_and_mismatch(targets, outs) if sample_weights: new_sample_weights = [] for val in sample_weights: if val is not None: new_sample_weights.append(training_utils_v1.cast_if_floating_dtype(tensor_conversion.convert_to_tensor_v2_with_dispatch(val))) else: new_sample_weights.append(None) sample_weights = new_sample_weights masks = [getattr(t, '_keras_mask', None) for t in outs] targets = nest.flatten(targets) output_losses = [] with backend.name_scope('loss'): loss_fns = [loss_fn for loss_fn in model.loss_functions if loss_fn is not None] custom_losses = model.losses if not loss_fns and (not custom_losses): if training: raise ValueError('The model cannot be trained because it has no loss to optimize.') else: raise ValueError('The model cannot be evaluated because it has no loss to compute.') for i, loss_fn in enumerate(loss_fns): weights = sample_weights[i] if sample_weights else None mask = masks[i] with backend.name_scope(model.output_names[i] + '_loss'): if mask is not None: mask = math_ops.cast(mask, outs[i].dtype) if weights is None: weights = mask else: weights = math_ops.cast(weights, outs[i].dtype) mask, _, weights = losses_utils.squeeze_or_expand_dimensions(mask, sample_weight=weights) weights *= mask if hasattr(loss_fn, 'reduction'): per_sample_losses = loss_fn.call(targets[i], outs[i]) weighted_losses = losses_utils.compute_weighted_loss(per_sample_losses, sample_weight=weights, reduction=losses_utils.ReductionV2.NONE) loss_reduction = loss_fn.reduction if loss_reduction == losses_utils.ReductionV2.AUTO: loss_reduction = losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE output_loss = losses_utils.reduce_weighted_loss(weighted_losses, reduction=loss_reduction) else: output_loss = loss_fn(targets[i], outs[i], sample_weight=weights) loss_reduction = losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE if len(model.outputs) > 1: output_losses.append(output_loss_metrics[i](output_loss)) if loss_reduction == losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE: output_loss = losses_utils.scale_loss_for_distribution(output_loss) total_loss += model._loss_weights_list[i] * output_loss if custom_losses: total_loss += losses_utils.scale_loss_for_distribution(math_ops.add_n(custom_losses)) return (outs, total_loss, output_losses, masks)
Calculates the loss for a given model. Args: model: The model on which metrics are being calculated. inputs: Either a dictionary of inputs to the model or a list of input arrays. targets: List of target arrays. output_loss_metrics: List of metrics that are used to aggregate output loss values. sample_weights: Optional list of sample weight arrays. training: Whether the model should be run in training mode (True) or inference mode (False). Returns: The model outputs, the total loss, the per-output loss values calculated using the specified loss functions, and the masks for each output. The total loss includes regularization losses and applies masking and sample weighting to the loss value.
github-repos
def set_current_position(self, position): raise NotImplementedError
Updates the last-consumed position to the given position. A source may invoke this method for records that do not start at split points. This may modify the internal state of the ``RangeTracker``. If the record starts at a split point, method ``try_claim()`` **must** be invoked instead of this method. Args: position: starting position of a record being read by a source.
github-repos
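A minimal sketch of how a reader loop might combine try_claim() and set_current_position(), using Beam's OffsetRangeTracker as a concrete tracker; the offsets and record layout here are hypothetical.

from apache_beam.io.range_trackers import OffsetRangeTracker

# Hypothetical byte range [0, 100) being read by a source.
tracker = OffsetRangeTracker(0, 100)

# A record that starts at a split point must be claimed before it is emitted.
if tracker.try_claim(0):
    pass  # emit the record starting at offset 0

# A record that does not start at a split point (e.g. a continuation block at
# offset 17) only updates the last-consumed position.
tracker.set_current_position(17)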
def get_contacts(self, issue): if not issue.resource: return [] account_contacts = issue.resource.account.contacts try: resource_owners = issue.resource.get_owner_emails() if type(resource_owners) is list: for resource_owner in resource_owners: account_contacts.append({'type': 'email', 'value': resource_owner}) except AttributeError: pass return account_contacts
Returns a list of contacts for an issue Args: issue (:obj:`RequiredTagsIssue`): Issue record Returns: `list` of `dict`
juraj-google-style
def _register_json_primitive(object_type, encoder, decoder): global _TYPE_TO_ENCODER global _TYPE_NAME_TO_DECODER if object_type not in _TYPE_TO_ENCODER: _TYPE_TO_ENCODER[object_type] = encoder _TYPE_NAME_TO_DECODER[object_type.__name__] = decoder
Extend what Pipeline can serialize. Args: object_type: type of the object. encoder: a function that takes in an object and returns a dict of json primitives. decoder: inverse function of encoder.
juraj-google-style
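As an illustration, a hedged sketch of an encoder/decoder pair for datetime.date; the function above is module-private, so calling it directly rather than through a public wrapper is an assumption.

import datetime

def _encode_date(obj):
    # Reduce the object to JSON primitives only.
    return {'isoformat': obj.isoformat()}

def _decode_date(d):
    # Inverse of _encode_date.
    return datetime.date.fromisoformat(d['isoformat'])

_register_json_primitive(datetime.date, _encode_date, _decode_date)

Note that decoders are keyed by the type's __name__, so two registered types sharing a class name would collide.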
def get_iterator_id_fn(unused_dummy): return script_ops.numpy_function(generator_state.get_next_id, args, dtypes.int64)
Creates a unique `iterator_id` for each pass over the dataset. The returned `iterator_id` disambiguates between multiple concurrently existing iterators. Args: unused_dummy: Ignored value. Returns: A `tf.int64` tensor whose value uniquely identifies an iterator in `generator_state`.
github-repos
def line(self, x0, y0, x1, y1, char): if x0 > x1: x1, x0 = x0, x1 y1, y0 = y0, y1 dx = x1 - x0 dy = y1 - y0 if dx == 0 and dy == 0: self.point(x0, y0, char) elif abs(dx) >= abs(dy): for x in range(x0, x1 + 1): if dx == 0: y = y0 else: y = y0 + int(round((x - x0) * dy / float((dx)))) self.point(x, y, char) elif y0 < y1: for y in range(y0, y1 + 1): if dy == 0: x = x0 else: x = x0 + int(round((y - y0) * dx / float((dy)))) self.point(x, y, char) else: for y in range(y1, y0 + 1): if dy == 0: x = x0 else: x = x1 + int(round((y - y1) * dx / float((dy)))) self.point(x, y, char)
Create a line on ASCII canvas. Args: x0 (int): x coordinate where the line should start. y0 (int): y coordinate where the line should start. x1 (int): x coordinate where the line should end. y1 (int): y coordinate where the line should end. char (str): character to draw the line with.
juraj-google-style
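To make the interpolation concrete, here is a small self-contained sketch of the same approach on a plain character grid; the grid and helper names are hypothetical stand-ins for the canvas class, which is not shown here.

WIDTH, HEIGHT = 20, 5
grid = [[' '] * WIDTH for _ in range(HEIGHT)]

def point(x, y, char):
    grid[y][x] = char

def draw_line(x0, y0, x1, y1, char):
    if x0 > x1:
        x1, x0 = x0, x1
        y1, y0 = y0, y1
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) >= abs(dy):
        # Shallow line: step along x, interpolate y.
        for x in range(x0, x1 + 1):
            y = y0 if dx == 0 else y0 + int(round((x - x0) * dy / float(dx)))
            point(x, y, char)
    else:
        # Steep line: step along y, interpolate x.
        for y in range(min(y0, y1), max(y0, y1) + 1):
            x = x0 + int(round((y - y0) * dx / float(dy)))
            point(x, y, char)

draw_line(0, 0, 19, 4, '*')
draw_line(0, 4, 19, 4, '-')
print('\n'.join(''.join(row) for row in grid))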
def fill_treeview(self, tree, input_dict): tree.model().removeRows(0, tree.model().rowCount()) def add_element(item, key, value): child_name = QtGui.QStandardItem(key) if isinstance(value, dict): for key_child, value_child in value.items(): add_element(child_name, key_child, value_child) item.appendRow(child_name) else: child_value = QtGui.QStandardItem(str(value)) item.appendRow([child_name, child_value]) for index, (key, value) in enumerate(input_dict.items()): if isinstance(value, dict): item = QtGui.QStandardItem(key) for sub_key, sub_value in value.items(): add_element(item, sub_key, sub_value) tree.model().appendRow(item) elif isinstance(value, str): item = QtGui.QStandardItem(key) item_value = QtGui.QStandardItem(value) item_value.setEditable(True) item_value.setSelectable(True) tree.model().appendRow([item, item_value])
Fills a tree view with nested parameters. Args: tree: QtWidgets.QTreeView to populate. input_dict: dictionary (possibly nested) of parameter names and values to display. Returns: None
juraj-google-style
def read_locations(filename): data = ConfigParser() if filename == '-': data.read_file(sys.stdin) else: data.read(filename) if not data.sections(): logging.debug('Config file is empty') locations = {} for name in data.sections(): if data.has_option(name, 'locator'): latitude, longitude = utils.from_grid_locator(data.get(name, 'locator')) else: latitude = data.getfloat(name, 'latitude') longitude = data.getfloat(name, 'longitude') locations[name] = (latitude, longitude) return locations
Pull locations from a user's config file. Args: filename (str): Config file to parse Returns: dict: List of locations from config file
juraj-google-style
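A short sketch of the config format the function expects, written to a temporary file and read back; the section names and coordinates are made up, and the example assumes read_locations() above is importable.

import tempfile

config_text = (
    '[Home]\n'
    'latitude = 52.015\n'
    'longitude = -0.221\n'
    '\n'
    '[Cambridge]\n'
    'latitude = 52.205\n'
    'longitude = 0.119\n'
)
with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as config_file:
    config_file.write(config_text)

locations = read_locations(config_file.name)
# {'Home': (52.015, -0.221), 'Cambridge': (52.205, 0.119)}

Sections may alternatively provide a single locator option (a Maidenhead grid locator), which the function converts to latitude/longitude.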
def process_file(options, source_text=None, generate_callgraphs=False, preserve_pytype_vm=False): with config.verbosity_from(options): loader = load_pytd.create_loader(options) src = source_text or io.read_source_file(options.input) with io.wrap_pytype_exceptions(PytypeError, filename=options.input): ret = analyze.infer_types(src=src, options=options, loader=loader) pytd_module = ret.ast ast_root_node = astlib.parse(src, options.input, feature_version=options.python_version[1]) module_name = 'module' src_code = source.Code(src, ret.context.vm.opcode_traces, VmTrace, filename=options.input) ix = Indexer(ast=astlib, src=src_code, loader=loader, module_name=module_name, pytd_module=pytd_module) ix.index(ast_root_node) ix.finalize() ix.vm = ret.context.vm if generate_callgraphs: ix.function_map = callgraph.collect_function_map(ix) if not preserve_pytype_vm: ix.vm = None return ix
Process a single file and return cross references. Args: options: pytype options (a config.Options object). source_text: Optional text of the file; will be read from the file pointed to by options.input if not supplied. generate_callgraphs: Whether to collect call graph information. preserve_pytype_vm: Whether to preserve the pytype vm in the indexer. Returns: The Indexer object used for indexing. Raises: PytypeError: if pytype fails.
github-repos
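A hedged sketch of driving process_file() with options built from pytype's config module; the exact Options constructor arguments can differ between pytype versions, so treat them as assumptions.

from pytype import config as pytype_config  # assumed import path

options = pytype_config.Options.create('example.py', python_version=(3, 10))
ix = process_file(
    options,
    source_text='def double(x):\n    return x * 2\n\ny = double(21)\n',
    generate_callgraphs=True)
print(ix.function_map)  # populated because generate_callgraphs=True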
def add(self, data, name=None): if name is None: n = len(self.data) while "Series %d"%n in self.data: n += 1 name = "Series %d"%n self.data[name] = data return name
Appends a new column of data to the data source. Args: data (seq) : new data to add name (str, optional) : column name to use. If not supplied, generate a name of the form "Series ####" Returns: str: the column name used
juraj-google-style
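The method above only touches a dict attribute named data, so a tiny stand-in object is enough to demonstrate the auto-naming; borrowing the method onto a fake class like this is purely illustrative and assumes add is reachable as a plain function here.

class FakeSource:
    def __init__(self):
        self.data = {}

FakeSource.add = add  # borrow the method defined above

src = FakeSource()
print(src.add([1, 2, 3]))                 # 'Series 0'
print(src.add([4, 5, 6]))                 # 'Series 1'
print(src.add([7, 8, 9], name='temps'))   # 'temps'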
def format_usage(doc, width=None): sections = doc.replace('\r', '').split('\n\n') width = width or get_terminal_size().columns or 80 return '\n\n'.join(_wrap_section(s.strip(), width) for s in sections)
Format the docstring for display to the user. Args: doc: The docstring to reformat for display. width: Optional maximum line width; the terminal width (or 80 columns) is used if not given. Returns: The docstring rewrapped, section by section, for display to the user.
juraj-google-style
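A quick sketch of rewrapping a small usage string to a narrow width; the sample docstring is invented.

usage_doc = """Usage: tool [options] FILE

Options:
  -v, --verbose   Print extra information about what the tool is doing while it runs.
"""
print(format_usage(usage_doc, width=50))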
def _ParseChatData(self, data): data_store = {} if ('body' in data): body = data.get('body', '').replace('\n', ' ') if (body.startswith('# ')): body_dict = self._ExtractJQuery(body) (title, _, _) = body.partition('{') body = '{0:s} <{1!s}>'.format(title[2:], self._DictToListOfStrings(body_dict)) else: body = 'No text.' data_store['text'] = body room = data.get('rooms', None) if (not room): room = data.get('room', None) if room: data_store['room'] = room data_store['id'] = data.get('id', None) user = data.get('user', None) if user: try: user_sid = int(user) data_store['sid'] = user_sid except (ValueError, TypeError): data_store['user'] = user return data_store
Parses chat comment data. Args: data (dict[str, object]): chat comment data as returned by SQLite. Returns: dict[str, object]: parsed chat comment data.
codesearchnet
def serialize_file(struct, path, format=None, encoding='utf-8'): try: with open(path, 'wb') as f: return serialize(struct, format, f, encoding) except EnvironmentError as e: raise AnyMarkupError(e, traceback.format_exc())
A convenience wrapper of serialize, which accepts path of file to serialize to. Args: struct: structure (dict or list) with unicode members to serialize; note that list can only be serialized to json path: path of the file to serialize to format: override markup format to serialize structure as (taken from filename by default) encoding: encoding to use when serializing, defaults to utf-8 Returns: number of bytes written Raises: AnyMarkupError if a problem occurs while serializing
codesearchnet
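A short sketch of serializing the same structure to different formats; the file names are arbitrary, and the format is normally inferred from the extension.

data = {'name': 'example', 'ports': [80, 443], 'debug': False}

serialize_file(data, 'config.json')                 # format inferred from ".json"
serialize_file(data, 'config.yaml')                 # format inferred from ".yaml"
serialize_file(data, 'config.txt', format='yaml')   # explicit override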
def predict_task_proba(self, X, t=0, **kwargs): return self.predict_proba(X, **kwargs)[t]
Predicts probabilistic labels for an input X on task t. Args: X: The input for the predict_proba method t: The task index for which to predict probabilities Returns: An [n, K_t] tensor of predictions for task t NOTE: By default, this method calls predict_proba and extracts element t. If it is possible to predict individual tasks in isolation, however, this method may be overridden for efficiency's sake.
codesearchnet
def write_alias_config_hash(alias_config_hash='', empty_hash=False): with open(GLOBAL_ALIAS_HASH_PATH, 'w') as alias_config_hash_file: alias_config_hash_file.write('' if empty_hash else alias_config_hash)
Write the alias config hash to the alias hash file. Args: alias_config_hash: The hash string to write. empty_hash: True if we want to write an empty string into the file. An empty string in the alias hash file means that we have to perform a full load of the command table in the next run.
juraj-google-style
def et2roc(et_fo, roc_fo): stats_dicts = [{'q': q, 'M': 0, 'w': 0, 'm': 0, 'P': 0, 'U': 0, 'u': 0, 'T': 0, 't': 0, 'x': 0} for q in range((rnftools.lavender.MAXIMAL_MAPPING_QUALITY + 1))] for line in et_fo: line = line.strip() if ((line != '') and (line[0] != '#')): (read_tuple_name, tab, info_categories) = line.partition('\t') intervals = info_categories.split(',') for interval in intervals: category = interval[0] (left, colon, right) = interval[2:].partition('-') for q in range(int(left), (int(right) + 1)): stats_dicts[q][category] += 1 roc_fo.write(('# Numbers of reads in the individual categories, in dependence on the mapping quality threshold q.' + os.linesep)) roc_fo.write(('#' + os.linesep)) roc_fo.write(('# Categories:' + os.linesep)) roc_fo.write(('#   M: mapped correctly' + os.linesep)) roc_fo.write(('#   w: mapped to a wrong position' + os.linesep)) roc_fo.write(('#   m: mapped, but should be unmapped' + os.linesep)) roc_fo.write(('#   P: multimapped' + os.linesep)) roc_fo.write(('#   U: unmapped, and should be unmapped' + os.linesep)) roc_fo.write(('#   u: unmapped, but should be mapped' + os.linesep)) roc_fo.write(('#   T: thresholded correctly' + os.linesep)) roc_fo.write(('#   t: thresholded incorrectly' + os.linesep)) roc_fo.write(('#   x: unknown' + os.linesep)) roc_fo.write(('#' + os.linesep)) roc_fo.write(('# q\tM\tw\tm\tP\tU\tu\tT\tt\tx\tall' + os.linesep)) l_numbers = [] for line in stats_dicts: numbers = [line['M'], line['w'], line['m'], line['P'], line['U'], line['u'], line['T'], line['t'], line['x']] if (numbers != l_numbers): roc_fo.write(('\t'.join((([str(line['q'])] + list(map(str, numbers))) + [str(sum(numbers))])) + os.linesep)) l_numbers = numbers
ET to ROC conversion. Args: et_fo (file): File object for the ET file. roc_fo (file): File object for the ROC file. Raises: ValueError
codesearchnet
def emit_obj_delete(self, category: str, name: str, timestamp: int, pid: int, tid: int, object_id: int) -> None: event = self._create_event('D', category, name, pid, tid, timestamp) event['id'] = object_id self._events.append(event)
Adds an object deletion event to the trace. Args: category: The event category as a string. name: The event name as a string. timestamp: The timestamp of this event as a long integer. pid: Identifier of the process generating this event as an integer. tid: Identifier of the thread generating this event as an integer. object_id: Identifier of the object as an integer.
github-repos
def _ListFileEntry(self, file_system, file_entry, parent_full_path, output_writer): full_path = file_system.JoinPath([parent_full_path, file_entry.name]) if ((not self._list_only_files) or file_entry.IsFile()): output_writer.WriteFileEntry(full_path) for sub_file_entry in file_entry.sub_file_entries: self._ListFileEntry(file_system, sub_file_entry, full_path, output_writer)
Lists a file entry. Args: file_system (dfvfs.FileSystem): file system that contains the file entry. file_entry (dfvfs.FileEntry): file entry to list. parent_full_path (str): full path of the parent file entry. output_writer (StdoutWriter): output writer.
codesearchnet
def set_file_logger(filename: str, name: str = 'parsl', level: int = logging.DEBUG, format_string: Optional[str] = None): if format_string is None: format_string = "%(asctime)s.%(msecs)03d %(name)s:%(lineno)d [%(levelname)s] %(message)s" logger = logging.getLogger(name) logger.setLevel(logging.DEBUG) handler = logging.FileHandler(filename) handler.setLevel(level) formatter = logging.Formatter(format_string, datefmt='%Y-%m-%d %H:%M:%S') handler.setFormatter(formatter) logger.addHandler(handler) futures_logger = logging.getLogger("concurrent.futures") futures_logger.addHandler(handler)
Add a file log handler. Args: - filename (string): Name of the file to write logs to - name (string): Logger name - level (logging.LEVEL): Set the logging level. - format_string (string): Set the format string Returns: - None
juraj-google-style
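A minimal sketch of routing parsl's log output (and concurrent.futures messages) to a file; the file name is arbitrary.

import logging

set_file_logger('parsl_run.log', level=logging.INFO)
logging.getLogger('parsl').info('workflow started')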
def interior_angle(p1, p2, o=(0, 0)): v1 = vector(o, p1) v2 = vector(o, p2) len1 = distance(o, p1) len2 = distance(o, p2) try: return acos(dot_product(v1, v2) / (len1 * len2)) except ZeroDivisionError: raise ValueError("p1 or p2 is overlapped with origin")
Returns the interior angle between two vectors (0 <= θ <= π). Args: p1, p2: point (x, y) o: origin Raises: ValueError: p1 or p2 coincides with the origin
juraj-google-style
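Two quick worked checks, assuming the helper functions used above (vector, distance, dot_product) are available in the same module: perpendicular unit vectors give π/2, and shifting the origin shifts the points accordingly.

from math import pi, isclose

assert isclose(interior_angle((1, 0), (0, 1)), pi / 2)
assert isclose(interior_angle((2, 1), (1, 2), o=(1, 1)), pi / 2)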
def register_trainable(name, trainable): from ray.tune.trainable import Trainable from ray.tune.function_runner import wrap_function if isinstance(trainable, type): logger.debug('Detected class for trainable.') elif isinstance(trainable, FunctionType): logger.debug('Detected function for trainable.') trainable = wrap_function(trainable) elif callable(trainable): logger.warning('Detected unknown callable for trainable. Converting to class.') trainable = wrap_function(trainable) if (not issubclass(trainable, Trainable)): raise TypeError('Second argument must be convertable to Trainable', trainable) _global_registry.register(TRAINABLE_CLASS, name, trainable)
Register a trainable function or class. Args: name (str): Name to register. trainable (obj): Function or tune.Trainable class. Functions must take (config, status_reporter) as arguments and will be automatically converted into a class during registration.
codesearchnet
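A hedged sketch of registering a function trainable under a name, following the reporter-style signature described in the docstring; the function name and metric values are invented, and running it with tune is left out.

def my_trainable(config, reporter):
    # Toy training loop that reports a metric derived from the config.
    for step in range(10):
        reporter(timesteps_total=step, mean_accuracy=config['lr'] * step)

register_trainable('my_trainable', my_trainable)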
def floatx(): return _FLOATX
Returns the default float type, as a string. E.g. `'float16'`, `'float32'`, `'float64'`. Returns: String, the current default float type. Example: >>> tf.keras.backend.floatx() 'float32'
github-repos
def require_validated(self, req, partial=False, bulk=False): representations = ([self.require_representation(req)] if (not bulk) else self.require_representation(req)) if (bulk and (not isinstance(representations, list))): raise ValidationError('Request payload should represent a list of resources.').as_bad_request() object_dicts = [] try: for representation in representations: object_dict = self.serializer.from_representation(representation) self.serializer.validate(object_dict, partial) object_dicts.append(object_dict) except DeserializationError as err: raise err.as_bad_request() except ValidationError as err: raise err.as_bad_request() return (object_dicts if bulk else object_dicts[0])
Require fully validated internal object dictionary. Internal object dictionary creation is based on content-decoded representation retrieved from request body. Internal object validation is performed using resource serializer. Args: req (falcon.Request): request object partial (bool): set to True if partially complete representation is accepted (e.g. for patching instead of full update). Missing fields in representation will be skipped. bulk (bool): set to True if request payload represents multiple resources instead of a single one. Returns: dict: dictionary of fields and values representing internal object. Each value is a result of ``field.from_representation`` call.
codesearchnet