Columns: code (string, lengths 20-4.93k), docstring (string, lengths 33-1.27k), source (string, 3 classes)
def flatlist_dropdup(list_of_lists): return list(set([str(item) for sublist in list_of_lists for item in sublist]))
Make a single list out of a list of lists, and drop all duplicates. Note that items are passed through str() for deduplication, so the result contains strings in arbitrary order. Args: list_of_lists: List of lists. Returns: list: Flattened list of unique items (as strings).
juraj-google-style
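A quick usage sketch for flatlist_dropdup: because every item is passed through str() before set-based deduplication, the result is a list of strings in arbitrary order (sorted here only to make the output stable).
>>> sorted(flatlist_dropdup([[1, 2, 2], [2, 3], ["3"]]))
['1', '2', '3']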
def __init__(self, iterator_resource, initializer, output_types, output_shapes, output_classes): self._iterator_resource = iterator_resource self._initializer = initializer if output_types is None or output_shapes is None or output_classes is None: raise ValueError(f'All of `output_types`, `output_shapes`, and `output_classes` must be specified to create an iterator. Got `output_types` = {output_types!r}, `output_shapes` = {output_shapes!r}, `output_classes` = {output_classes!r}.') self._element_spec = structure.convert_legacy_structure(output_types, output_shapes, output_classes) self._flat_tensor_shapes = structure.get_flat_tensor_shapes(self._element_spec) self._flat_tensor_types = structure.get_flat_tensor_types(self._element_spec) self._string_handle = gen_dataset_ops.iterator_to_string_handle(self._iterator_resource) self._get_next_call_count = 0 ops.add_to_collection(GLOBAL_ITERATORS, self._iterator_resource)
Creates a new iterator from the given iterator resource. Note: Most users will not call this initializer directly, and will instead use `Dataset.make_initializable_iterator()` or `Dataset.make_one_shot_iterator()`. Args: iterator_resource: A `tf.resource` scalar `tf.Tensor` representing the iterator. initializer: A `tf.Operation` that should be run to initialize this iterator. output_types: A (nested) structure of `tf.DType` objects corresponding to each component of an element of this iterator. output_shapes: A (nested) structure of `tf.TensorShape` objects corresponding to each component of an element of this iterator. output_classes: A (nested) structure of Python `type` objects corresponding to each component of an element of this iterator. Raises: TypeError: If `output_types`, `output_shapes`, or `output_classes` is not specified.
github-repos
class Monitor(object): def __init__(self, namespace: str, name_prefix: str) -> None: self.namespace = namespace self.name_prefix = name_prefix self.doFn = MonitorDoFn(namespace, name_prefix)
A monitor of elements, with support for later retrieving their metrics. Monitor objects contain a DoFn that records metrics. Args: namespace: the namespace used by all metrics within this Monitor name_prefix: a prefix for this Monitor's metric names, intended to be unique on a per-monitor basis within the pipeline
github-repos
def remove_hairs_decorator(fn=None, hairs=HAIRS): def decorator_wrapper(fn): @wraps(fn) def decorator(*args, **kwargs): out = fn(*args, **kwargs) return remove_hairs(out, hairs) return decorator if fn: return decorator_wrapper(fn) return decorator_wrapper
Parametrized decorator wrapping the :func:`remove_hairs` function. Can be used either bare (``@remove_hairs_decorator``) or with arguments (``@remove_hairs_decorator(hairs=...)``). Args: fn (callable, optional): Decorated function, supplied automatically when the decorator is used without arguments. hairs (str, default HAIRS): List of characters which should be removed. See :attr:`HAIRS` for details.
juraj-google-style
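The decorator supports both the bare and the parametrized form. A minimal self-contained sketch of the same pattern, using a hypothetical strip-based stand-in for remove_hairs and HAIRS (not the library's actual definitions):
from functools import wraps
HAIRS = " .,;"  # hypothetical default character set
def remove_hairs(s, hairs=HAIRS):
    return s.strip(hairs)  # stand-in: strip the given characters from both ends
def remove_hairs_decorator(fn=None, hairs=HAIRS):
    def decorator_wrapper(fn):
        @wraps(fn)
        def decorator(*args, **kwargs):
            return remove_hairs(fn(*args, **kwargs), hairs)
        return decorator
    return decorator_wrapper(fn) if fn else decorator_wrapper
@remove_hairs_decorator              # bare form
def title():
    return "  Title. "
@remove_hairs_decorator(hairs=" #")  # parametrized form
def heading():
    return "# Heading "
assert title() == "Title"
assert heading() == "Heading"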
def get_tqdm_kwargs(**kwargs): default = dict( smoothing=0.5, dynamic_ncols=True, ascii=True, bar_format='{l_bar}{bar}|{n_fmt}/{total_fmt}[{elapsed}<{remaining},{rate_noinv_fmt}]' ) try: interval = float(os.environ['TENSORPACK_PROGRESS_REFRESH']) except KeyError: interval = _pick_tqdm_interval(kwargs.get('file', sys.stderr)) default['mininterval'] = interval default.update(kwargs) return default
Return default arguments to be used with tqdm. Args: kwargs: extra arguments to be used; they override the defaults. Returns: dict: keyword arguments to pass to tqdm.
juraj-google-style
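A typical call site (hypothetical, not from the source), assuming get_tqdm_kwargs is importable from tensorpack's utils and tqdm is installed; the returned defaults are merged with whatever the caller passes:
from tqdm import trange
for _ in trange(100, **get_tqdm_kwargs(desc="training", leave=True)):
    pass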
def add_business_days(self, date_tensor, num_days, roll_convention=constants.BusinessDayConvention.NONE): control_deps = [] if roll_convention == constants.BusinessDayConvention.NONE: message = 'Some dates in date_tensor are not business days. Please specify the roll_convention argument.' is_bus_day = self.is_business_day(date_tensor) control_deps.append(tf.debugging.assert_equal(is_bus_day, True, message=message)) else: date_tensor = self.roll_to_business_day(date_tensor, roll_convention) with tf.control_dependencies(control_deps): cumul_bus_days_table = self._compute_cumul_bus_days_table() cumul_bus_days = self._gather(cumul_bus_days_table, date_tensor.ordinal() - self._ordinal_offset + 1) target_cumul_bus_days = cumul_bus_days + num_days bus_day_ordinals_table = self._compute_bus_day_ordinals_table() ordinals = self._gather(bus_day_ordinals_table, target_cumul_bus_days) with tf.control_dependencies(self._assert_ordinals_in_bounds(ordinals)): return dt.from_ordinals(ordinals, validate=False)
Adds given number of business days to given dates. Note that this is different from calling `add_period_and_roll` with PeriodType.DAY. For example, adding 5 business days to Monday gives the next Monday (unless there are holidays on this week or next Monday). Adding 5 days and rolling means landing on Saturday and then rolling either to next Monday or to Friday of the same week, depending on the roll convention. If any of the dates in `date_tensor` are not business days, they will be rolled to business days before doing the addition. If `roll_convention` is `NONE`, and any dates are not business days, an exception is raised. Args: date_tensor: DateTensor of dates to advance from. num_days: Tensor of int32 type broadcastable to `date_tensor`. roll_convention: BusinessDayConvention. Determines how to roll a date that falls on a holiday. Returns: The resulting DateTensor.
github-repos
def Readdir(self, path, fh=None): if self.DataRefreshRequired(path): self._RunAndWaitForVFSFileUpdate(path) return super(GRRFuse, self).Readdir(path, fh=None)
Updates the directory listing from the client. Args: path: The path to the directory to update. Client is inferred from this. fh: A file handler. Not used. Returns: A list of filenames.
codesearchnet
def parse_config_input_output(args=sys.argv): parser = argparse.ArgumentParser( description='Process the input files using the given config') parser.add_argument( 'config_file', help='Configuration file.', metavar='FILE', type=extant_file) parser.add_argument( 'input_dir', help='Directory containing the input files.', metavar='DIR', type=extant_dir) parser.add_argument( 'output_dir', help='Directory where the output files should be saved.', metavar='DIR', type=extant_dir) return parser.parse_args(args[1:])
Parse the args using the config_file, input_dir, output_dir pattern Args: args: sys.argv Returns: The populated namespace object from parser.parse_args(). Raises: TBD
juraj-google-style
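Example invocation of parse_config_input_output (paths below are placeholders and must already exist, since the extant_file / extant_dir type validators check for them):
ns = parse_config_input_output(["prog", "settings.cfg", "data/input", "data/output"])
print(ns.config_file, ns.input_dir, ns.output_dir)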
def underlying_variable_ref(t): while (t.op.type in ['Identity', 'ReadVariableOp', 'Enter']): t = t.op.inputs[0] op_type = t.op.type if (('Variable' in op_type) or ('VarHandle' in op_type)): return t else: return None
Find the underlying variable ref. Traverses through Identity, ReadVariableOp, and Enter ops. Stops when op type has Variable or VarHandle in name. Args: t: a Tensor Returns: a Tensor that is a variable ref, or None on error.
codesearchnet
def save_results(vcs, signature, result_path, patterns): results_directory = _get_results_directory(vcs, signature) if not os.path.exists(results_directory): os.makedirs(results_directory) with open(os.path.join(results_directory, 'patterns'), 'w') as f: f.write('\n'.join(patterns)) if not os.path.exists(os.path.join(results_directory, 'results')): os.mkdir(os.path.join(results_directory, 'results')) includes = ['--include={}'.format(x) for x in patterns] cmd = ['rsync', '-r'] + includes + ['--exclude=*', os.path.join(result_path, ''), os.path.join(results_directory, 'results', '')] subprocess.check_call(cmd)
Save results matching `patterns` at `result_path`. Args: vcs (easyci.vcs.base.Vcs) - the VCS object for the actual project (not the disposable copy) signature (str) - the project state signature result_path (str) - the path containing the result, usually a disposable copy of the project patterns (list of str) - `rsync`-compatible patterns matching test results to save.
juraj-google-style
def findLabel(self, query, create=False): if isinstance(query, six.string_types): query = query.lower() for label in self._labels.values(): if ((isinstance(query, six.string_types) and (query == label.name.lower())) or (isinstance(query, Pattern) and query.search(label.name))): return label return (self.createLabel(query) if (create and isinstance(query, six.string_types)) else None)
Find a label with the given name. Args: query (Union[_sre.SRE_Pattern, str]): A str or regular expression to match against the name. create (bool): Whether to create the label if it doesn't exist (only if query is a str). Returns: Union[gkeepapi.node.Label, None]: The label.
codesearchnet
def GetIndentLevel(line): indent = Match('^( *)\\S', line) if indent: return len(indent.group(1)) else: return 0
Return the number of leading spaces in line. Args: line: A string to check. Returns: An integer count of leading spaces, possibly zero.
codesearchnet
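The Match helper above comes from cpplint; an equivalent, self-contained check with the standard re module behaves the same way:
import re
def get_indent_level(line):
    m = re.match(r'^( *)\S', line)  # leading spaces before the first non-whitespace character
    return len(m.group(1)) if m else 0
assert get_indent_level("    int x;") == 4
assert get_indent_level("") == 0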
def GetServiceAccount(self, request, global_params=None): config = self.GetMethodConfig('GetServiceAccount') return self._RunMethod(config, request, global_params=global_params)
Returns the email address of the service account for your project used for interactions with Google Cloud KMS. Args: request: (BigqueryProjectsGetServiceAccountRequest) input message global_params: (StandardQueryParameters, default: None) global arguments Returns: (GetServiceAccountResponse) The response message.
github-repos
def GetKeyByPath(self, key_path): root_key_path, _, key_path = key_path.partition( definitions.KEY_PATH_SEPARATOR) root_key_path = root_key_path.upper() root_key_path = self._ROOT_KEY_ALIASES.get(root_key_path, root_key_path) if root_key_path not in self._ROOT_KEYS: raise RuntimeError('Unsupported root key: {0:s}'.format(root_key_path)) key_path = definitions.KEY_PATH_SEPARATOR.join([root_key_path, key_path]) key_path_upper = key_path.upper() for virtual_key_path, virtual_key_callback in self._VIRTUAL_KEYS: virtual_key_path_upper = virtual_key_path.upper() if key_path_upper.startswith(virtual_key_path_upper): key_path_suffix = key_path[len(virtual_key_path):] callback_function = getattr(self, virtual_key_callback) virtual_key = callback_function(key_path_suffix) if not virtual_key: raise RuntimeError('Unable to resolve virtual key: {0:s}.'.format( virtual_key_path)) return virtual_key key_path_prefix_upper, registry_file = self._GetFileByPath(key_path_upper) if not registry_file: return None if not key_path_upper.startswith(key_path_prefix_upper): raise RuntimeError('Key path prefix mismatch.') key_path_suffix = key_path[len(key_path_prefix_upper):] key_path = key_path_suffix or definitions.KEY_PATH_SEPARATOR return registry_file.GetKeyByPath(key_path)
Retrieves the key for a specific path. Args: key_path (str): Windows Registry key path. Returns: WinRegistryKey: Windows Registry key or None if not available. Raises: RuntimeError: if the root key is not supported.
juraj-google-style
def of(cls, msg_header: MessageHeader) -> 'MessageDecoder': cte_hdr = msg_header.parsed.content_transfer_encoding return cls.of_cte(cte_hdr)
Return a decoder from the message header object. See Also: :meth:`.of_cte` Args: msg_header: The message header object.
codesearchnet
def _run_benchmarks(regex): registry = list(GLOBAL_BENCHMARK_REGISTRY) selected_benchmarks = [] for benchmark in registry: benchmark_name = '%s.%s' % (benchmark.__module__, benchmark.__name__) attrs = dir(benchmark) benchmark_instance = None for attr in attrs: if not attr.startswith('benchmark'): continue candidate_benchmark_fn = getattr(benchmark, attr) if not callable(candidate_benchmark_fn): continue full_benchmark_name = '%s.%s' % (benchmark_name, attr) if regex == 'all' or re.search(regex, full_benchmark_name): selected_benchmarks.append(full_benchmark_name) benchmark_instance = benchmark_instance or benchmark() instance_benchmark_fn = getattr(benchmark_instance, attr) instance_benchmark_fn() if not selected_benchmarks: raise ValueError("No benchmarks matched the pattern: '{}'".format(regex))
Run benchmarks that match regex `regex`. This function goes through the global benchmark registry, and matches benchmark class and method names of the form `module.name.BenchmarkClass.benchmarkMethod` to the given regex. If a method matches, it is run. Args: regex: The string regular expression to match Benchmark classes against. Raises: ValueError: If no benchmarks were selected by the input regex.
github-repos
def info(self, **kwargs): path = self._get_series_id_season_number_path('info') response = self._GET(path, kwargs) self._set_attrs_to_values(response) return response
Get the primary information about a TV season by its season number. Args: language: (optional) ISO 639 code. append_to_response: (optional) Comma separated, any TV series method. Returns: A dict representation of the JSON returned from the API.
codesearchnet
def ws45(msg): d = hex2bin(data(msg)) if (d[3] == '0'): return None ws = bin2int(d[4:6]) return ws
Wind shear. Args: msg (String): 28 bytes hexadecimal message string Returns: int or None: Wind shear level. 0=NIL, 1=Light, 2=Moderate, 3=Severe; None if the status bit indicates the field is not available
codesearchnet
def get_custom_object_name(obj): if hasattr(obj, 'name'): return obj.name elif hasattr(obj, '__name__'): return obj.__name__ elif hasattr(obj, '__class__'): return generic_utils.to_snake_case(obj.__class__.__name__) else: return None
Returns the name to use for a custom loss or metric callable. Args: obj: Custom loss or metric callable Returns: Name to use, or `None` if the object was not recognized.
github-repos
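A rough illustration of the lookup order in get_custom_object_name (the class-instance branch additionally snake-cases the class name via Keras' generic_utils, which is not shown here):
def my_loss(y_true, y_pred):
    return abs(y_true - y_pred)
print(get_custom_object_name(my_loss))  # 'my_loss', via __name__
class WeightedError:
    name = "weighted_error"
print(get_custom_object_name(WeightedError()))  # 'weighted_error', via the name attribute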
def _add_loss_summaries(total_loss): loss_averages = tf.train.ExponentialMovingAverage(0.9, name='avg') losses = tf.get_collection('losses') loss_averages_op = loss_averages.apply(losses + [total_loss]) for l in losses + [total_loss]: tf.summary.scalar(l.op.name + ' (raw)', l) tf.summary.scalar(l.op.name, loss_averages.average(l)) return loss_averages_op
Add summaries for losses in CIFAR-10 model. Generates moving average for all losses and associated summaries for visualizing the performance of the network. Args: total_loss: Total loss from loss(). Returns: loss_averages_op: op for generating moving averages of losses.
juraj-google-style
def _CreateStopsFolder(self, schedule, doc): if not schedule.GetStopList(): return None stop_folder = self._CreateFolder(doc, 'Stops') stop_folder_selection = self._StopFolderSelectionMethod(stop_folder) stop_style_selection = self._StopStyleSelectionMethod(doc) stops = list(schedule.GetStopList()) stops.sort(key=lambda x: x.stop_name) for stop in stops: (folder, pathway_folder) = stop_folder_selection(stop) (style_id, pathway_style_id) = stop_style_selection(stop) self._CreateStopPlacemark(folder, stop, style_id) if (self.show_stop_hierarchy and stop.location_type != transitfeed.Stop.LOCATION_TYPE_STATION and stop.parent_station and stop.parent_station in schedule.stops): placemark = self._CreatePlacemark( pathway_folder, stop.stop_name, pathway_style_id) parent_station = schedule.stops[stop.parent_station] coordinates = [(stop.stop_lon, stop.stop_lat), (parent_station.stop_lon, parent_station.stop_lat)] self._CreateLineString(placemark, coordinates) return stop_folder
Create a KML Folder containing placemarks for each stop in the schedule. If there are no stops in the schedule then no folder is created. Args: schedule: The transitfeed.Schedule instance. doc: The KML Document ElementTree.Element instance. Returns: The Folder ElementTree.Element instance or None if there are no stops.
juraj-google-style
def sia(transition, direction=Direction.BIDIRECTIONAL): validate.direction(direction, allow_bi=True) log.info('Calculating big-alpha for %s...', transition) if (not transition): log.info('Transition %s is empty; returning null SIA immediately.', transition) return _null_ac_sia(transition, direction) if (not connectivity.is_weak(transition.network.cm, transition.node_indices)): log.info('%s is not strongly/weakly connected; returning null SIA immediately.', transition) return _null_ac_sia(transition, direction) log.debug('Finding unpartitioned account...') unpartitioned_account = account(transition, direction) log.debug('Found unpartitioned account.') if (not unpartitioned_account): log.info('Empty unpartitioned account; returning null AC SIA immediately.') return _null_ac_sia(transition, direction) cuts = _get_cuts(transition, direction) engine = ComputeACSystemIrreducibility(cuts, transition, direction, unpartitioned_account) result = engine.run_sequential() log.info('Finished calculating big-ac-phi data for %s.', transition) log.debug('RESULT: \n%s', result) return result
Return the minimal information partition of a transition in a specific direction. Args: transition (Transition): The candidate system. direction (Direction): The temporal direction; defaults to Direction.BIDIRECTIONAL. Returns: AcSystemIrreducibilityAnalysis: A nested structure containing all the data from the intermediate calculations. The top level contains the basic irreducibility information for the given subsystem.
codesearchnet
def set_maintainer(self, maintainer): if isinstance(maintainer, hdx.data.user.User) or isinstance(maintainer, dict): if 'id' not in maintainer: maintainer = hdx.data.user.User.read_from_hdx(maintainer['name'], configuration=self.configuration) maintainer = maintainer['id'] elif not isinstance(maintainer, str): raise HDXError('Type %s cannot be added as a maintainer!' % type(maintainer).__name__) if is_valid_uuid(maintainer) is False: raise HDXError('%s is not a valid user id for a maintainer!' % maintainer) self.data['maintainer'] = maintainer
Set the dataset's maintainer. Args: maintainer (Union[User,Dict,str]): Either a user id or User metadata from a User object or dictionary. Returns: None
juraj-google-style
def from_respecth(cls, filename_xml, file_author='', file_author_orcid=''): properties = ReSpecTh_to_ChemKED(filename_xml, file_author, file_author_orcid, validate=False) return cls(dict_input=properties)
Construct a ChemKED instance directly from a ReSpecTh file. Arguments: filename_xml (`str`): Filename of the ReSpecTh-formatted XML file to be imported file_author (`str`, optional): File author to be added to the list generated from the XML file file_author_orcid (`str`, optional): ORCID for the file author being added to the list of file authors Returns: `ChemKED`: Instance of the `ChemKED` class containing the data in ``filename_xml``. Examples: >>> ck = ChemKED.from_respecth('respecth_file.xml') >>> ck = ChemKED.from_respecth('respecth_file.xml', file_author='Bryan W. Weber') >>> ck = ChemKED.from_respecth('respecth_file.xml', file_author='Bryan W. Weber', file_author_orcid='0000-0000-0000-0000')
codesearchnet
def update_firmware(self, firmware_information, force=False): firmware_uri = "{}/firmware".format(self.data["uri"]) result = self._helper.update(firmware_information, firmware_uri, force=force) self.refresh() return result
Installs firmware to the member interconnects of a SAS Logical Interconnect. Args: firmware_information: Options to install firmware to a SAS Logical Interconnect. force: If set to true, the operation completes despite any problems with network connectivity or errors on the resource itself. Returns: dict: SAS Logical Interconnect Firmware.
juraj-google-style
def GetArtifactPathDependencies(rdf_artifact): deps = set() for source in rdf_artifact.sources: for arg, value in iteritems(source.attributes): paths = [] if arg in ["path", "query"]: paths.append(value) if arg == "key_value_pairs": paths.extend([x["key"] for x in value]) if arg in ["keys", "paths", "path_list", "content_regex_list"]: paths.extend(value) for path in paths: for match in artifact_utils.INTERPOLATED_REGEX.finditer(path): deps.add(match.group()[2:-2]) deps.update(GetArtifactParserDependencies(rdf_artifact)) return deps
Return a set of knowledgebase path dependencies. Args: rdf_artifact: RDF artifact object. Returns: A set of strings for the required kb objects e.g. ["users.appdata", "systemroot"]
juraj-google-style
def generate_plaintext_random(plain_vocab, distribution, train_samples, length): if distribution is not None: assert len(distribution) == len(plain_vocab) train_indices = np.random.choice( range(len(plain_vocab)), (train_samples, length), p=distribution) return train_indices
Generates samples of text from the provided vocabulary. Args: plain_vocab: vocabulary. distribution: distribution over the vocabulary (must have the same length), or None for uniform sampling. train_samples: number of samples for training. length: length of each sample. Returns: train_indices (np.array of Integers): random integers for training. shape = [train_samples, length]
juraj-google-style
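For instance, drawing 4 training sequences of length 6 over a 3-symbol vocabulary with a skewed distribution (a hypothetical call, requiring only numpy):
import numpy as np
np.random.seed(0)
indices = generate_plaintext_random(['a', 'b', 'c'], [0.7, 0.2, 0.1], train_samples=4, length=6)
print(indices.shape)  # (4, 6); entries are integer indices into the vocabulary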
def filter_error(self, error): if error.filename != self._filename or error.line is None: return True if error.name == 'bad-return-type' and error.opcode_name in ('RETURN_VALUE', 'RETURN_CONST') and (error.line not in self.return_lines): _, end = self._function_ranges.find_outermost(error.line) if end: error.set_line(end) line = error.line or sys.maxsize return line not in self._ignore and line not in self._disables[_ALL_ERRORS] and (line not in self._disables[error.name])
Return whether the error should be logged. This method is suitable for use as an error filter. Args: error: An error._Error object. Returns: True iff the error should be included in the log.
github-repos
def add_space(self, line): if (not isinstance(self.last_item, Space)): space = Space(self._structure) self._structure.append(space) self.last_item.add_line(line) return self
Add a Space object to the section. Used mainly during initial parsing. Args: line (str): one line that defines the space; may consist only of whitespace
codesearchnet
def __init__(self, message): super(ItemNotFound, self).__init__( reason=enums.ResultReason.ITEM_NOT_FOUND, message=message )
Create an ItemNotFound exception. Args: message (string): A string containing information about the error.
juraj-google-style
def IsDeletedOrDefault(clean_lines, linenum): open_paren = clean_lines.elided[linenum].find('(') if (open_paren < 0): return False (close_line, _, close_paren) = CloseExpression(clean_lines, linenum, open_paren) if (close_paren < 0): return False return Match('\\s*=\\s*(?:delete|default)\\b', close_line[close_paren:])
Check if current constructor or operator is deleted or default. Args: clean_lines: A CleansedLines instance containing the file. linenum: The number of the line to check. Returns: True if this is a deleted or default constructor.
codesearchnet
def get_ggt(self, n, u): gk = self[0].einsum_sequence([n, u, n, u]) result = ((- ((((2 * gk) * np.outer(u, u)) + self[0].einsum_sequence([n, n])) + self[1].einsum_sequence([n, u, n, u]))) / (2 * gk)) return result
Gets the Generalized Gruneisen tensor for a given third-order elastic tensor expansion. Args: n (3x1 array-like): normal mode direction u (3x1 array-like): polarization direction
codesearchnet
def build_bird_configuration(config): bird_configuration = {} if config.getboolean('daemon', 'ipv4'): if os.path.islink(config.get('daemon', 'bird_conf')): config_file = os.path.realpath(config.get('daemon', 'bird_conf')) print("'bird_conf' is set to a symbolic link ({s} -> {d}, but we " "will use the canonical path of that link" .format(s=config.get('daemon', 'bird_conf'), d=config_file)) else: config_file = config.get('daemon', 'bird_conf') dummy_ip_prefix = config.get('daemon', 'dummy_ip_prefix') if not valid_ip_prefix(dummy_ip_prefix): raise ValueError("invalid dummy IPv4 prefix: {i}" .format(i=dummy_ip_prefix)) bird_configuration[4] = { 'config_file': config_file, 'variable_name': config.get('daemon', 'bird_variable'), 'dummy_ip_prefix': dummy_ip_prefix, 'reconfigure_cmd': config.get('daemon', 'bird_reconfigure_cmd'), 'keep_changes': config.getboolean('daemon', 'bird_keep_changes'), 'changes_counter': config.getint('daemon', 'bird_changes_counter') } if config.getboolean('daemon', 'ipv6'): if os.path.islink(config.get('daemon', 'bird6_conf')): config_file = os.path.realpath(config.get('daemon', 'bird6_conf')) print("'bird6_conf' is set to a symbolic link ({s} -> {d}, but we " "will use the canonical path of that link" .format(s=config.get('daemon', 'bird6_conf'), d=config_file)) else: config_file = config.get('daemon', 'bird6_conf') dummy_ip_prefix = config.get('daemon', 'dummy_ip6_prefix') if not valid_ip_prefix(dummy_ip_prefix): raise ValueError("invalid dummy IPv6 prefix: {i}" .format(i=dummy_ip_prefix)) bird_configuration[6] = { 'config_file': config_file, 'variable_name': config.get('daemon', 'bird6_variable'), 'dummy_ip_prefix': dummy_ip_prefix, 'reconfigure_cmd': config.get('daemon', 'bird6_reconfigure_cmd'), 'keep_changes': config.getboolean('daemon', 'bird6_keep_changes'), 'changes_counter': config.getint('daemon', 'bird6_changes_counter') } return bird_configuration
Build bird configuration structure. First it performs a sanity check against bird settings and then builds a dictionary structure with bird configuration per IP version. Arguments: config (obj): A configparser object which holds our configuration. Returns: A dictionary Raises: ValueError if sanity check fails.
juraj-google-style
def set_boolean(self, option, value): if not isinstance(value, bool): raise TypeError("%s must be a boolean" % option) self.options[option] = str(value).lower()
Set a boolean option. Args: option (str): name of option. value (bool): value of the option. Raises: TypeError: Value must be a boolean.
juraj-google-style
def __init__(self, timestamp=None): super(DelphiDateTime, self).__init__() self._precision = definitions.PRECISION_1_MILLISECOND self._timestamp = timestamp
Initializes a Delphi TDateTime timestamp. Args: timestamp (Optional[float]): Delphi TDateTime timestamp.
juraj-google-style
def _get_metrics_from_layers(layers): metrics = [] layers = layer_utils.filter_empty_layer_containers(layers) for layer in layers: if isinstance(layer, Model): metrics.extend(layer._metrics) metrics.extend(_get_metrics_from_layers(layer.layers)) else: metrics.extend(layer.metrics) return metrics
Returns list of metrics from the given layers. This will not include the `compile` metrics of a model layer. Args: layers: List of layers. Returns: List of metrics.
github-repos
def image(self, tag, image, step=None): image = onp.array(image) if step is None: step = self._step else: self._step = step if len(onp.shape(image)) == 2: image = image[:, :, onp.newaxis] if onp.shape(image)[-1] == 1: image = onp.repeat(image, 3, axis=-1) image_strio = io.BytesIO() plt.imsave(image_strio, image, format='png') image_summary = Summary.Image( encoded_image_string=image_strio.getvalue(), colorspace=3, height=image.shape[0], width=image.shape[1]) summary = Summary(value=[Summary.Value(tag=tag, image=image_summary)]) self.add_summary(summary, step)
Saves RGB image summary from onp.ndarray [H,W], [H,W,1], or [H,W,3]. Args: tag: str: label for this data image: ndarray: [H,W], [H,W,1], or [H,W,3]; saves the image in greyscale or color step: int: training step
juraj-google-style
def launchctl(sub_cmd, *args, **kwargs): return_stdout = kwargs.pop('return_stdout', False) cmd = ['launchctl', sub_cmd] cmd.extend(args) kwargs['python_shell'] = False kwargs = salt.utils.args.clean_kwargs(**kwargs) ret = __salt__['cmd.run_all'](cmd, **kwargs) error = _check_launchctl_stderr(ret) if (ret['retcode'] or error): out = 'Failed to {0} service:\n'.format(sub_cmd) out += 'stdout: {0}\n'.format(ret['stdout']) out += 'stderr: {0}\n'.format(ret['stderr']) out += 'retcode: {0}'.format(ret['retcode']) raise CommandExecutionError(out) else: return (ret['stdout'] if return_stdout else True)
Run a launchctl command and raise an error if it fails Args: sub_cmd (str): Sub command supplied to launchctl additional args are passed to launchctl Kwargs: passed to ``cmd.run_all`` return_stdout (bool): A keyword argument. If true, return the stdout of the launchctl command Returns: bool: ``True`` if successful str: The stdout of the launchctl command if requested Raises: CommandExecutionError: If the command fails Example: .. code-block:: python import salt.utils.mac_service salt.utils.mac_service.launchctl('debug', 'org.cups.cupsd')
codesearchnet
def match_docstring_with_signature(obj: Any) -> Optional[Tuple[str, str]]: if len(getattr(obj, '__doc__', '')) == 0: return try: source, _ = inspect.getsourcelines(obj) except OSError: source = [] idx = 0 while idx < len(source) and '"""' not in source[idx]: idx += 1 ignore_order = False if idx < len(source): line_before_docstring = source[idx - 1] if re.search('^\\s* return elif re.search('^\\s* ignore_order = True signature = inspect.signature(obj).parameters obj_doc_lines = obj.__doc__.split('\n') idx = 0 while idx < len(obj_doc_lines) and _re_args.search(obj_doc_lines[idx]) is None: idx += 1 if idx == len(obj_doc_lines): return if 'kwargs' in signature and signature['kwargs'].annotation != inspect._empty: return indent = find_indent(obj_doc_lines[idx]) arguments = {} current_arg = None idx += 1 start_idx = idx while idx < len(obj_doc_lines) and (len(obj_doc_lines[idx].strip()) == 0 or find_indent(obj_doc_lines[idx]) > indent): if find_indent(obj_doc_lines[idx]) == indent + 4: re_search_arg = _re_parse_arg.search(obj_doc_lines[idx]) if re_search_arg is not None: _, name, description = re_search_arg.groups() current_arg = name if name in signature: default = signature[name].default if signature[name].kind is inspect._ParameterKind.VAR_KEYWORD: default = None new_description = replace_default_in_arg_description(description, default) else: new_description = description init_doc = _re_parse_arg.sub(f'\\1\\2 ({new_description}):', obj_doc_lines[idx]) arguments[current_arg] = [init_doc] elif current_arg is not None: arguments[current_arg].append(obj_doc_lines[idx]) idx += 1 idx -= 1 if current_arg: while len(obj_doc_lines[idx].strip()) == 0: arguments[current_arg] = arguments[current_arg][:-1] idx -= 1 idx += 1 old_doc_arg = '\n'.join(obj_doc_lines[start_idx:idx]) old_arguments = list(arguments.keys()) arguments = {name: '\n'.join(doc) for name, doc in arguments.items()} for name in set(signature.keys()) - set(arguments.keys()): arg = signature[name] if name.startswith('_') or arg.kind in [inspect._ParameterKind.VAR_KEYWORD, inspect._ParameterKind.VAR_POSITIONAL]: arguments[name] = '' else: arg_desc = get_default_description(arg) arguments[name] = ' ' * (indent + 4) + f'{name} ({arg_desc}): <fill_docstring>' if ignore_order: new_param_docs = [arguments[name] for name in old_arguments if name in signature] missing = set(signature.keys()) - set(old_arguments) new_param_docs.extend([arguments[name] for name in missing if len(arguments[name]) > 0]) else: new_param_docs = [arguments[name] for name in signature.keys() if len(arguments[name]) > 0] new_doc_arg = '\n'.join(new_param_docs) return (old_doc_arg, new_doc_arg)
Matches the docstring of an object with its signature. Args: obj (`Any`): The object to process. Returns: `Optional[Tuple[str, str]]`: Returns `None` if there is no docstring or no parameters documented in the docstring, otherwise returns a tuple of two strings: the current documentation of the arguments in the docstring and the one matched with the signature.
github-repos
def is_outlier(df, item_id, segment_id, price): if ((segment_id, item_id) not in df.index): return False mean = df.loc[(segment_id, item_id)]['mean'] std = df.loc[(segment_id, item_id)]['std'] return gaussian_outlier.is_outlier(x=price, mean=mean, standard_deviation=std)
Verify if an item is an outlier compared to the other occurrences of the same item, based on its price. Args: df: DataFrame indexed by (segment_id, item_id) with 'mean' and 'std' columns item_id: idPlanilhaItens segment_id: idSegmento price: VlUnitarioAprovado
codesearchnet
def add_transcript(self, transcript): logger.debug("Adding transcript {0} to variant {1}".format( transcript, self['variant_id'])) self['transcripts'].append(transcript)
Add transcript information. This adds a transcript dict to variant['transcripts']. Args: transcript (dict): A transcript dictionary
juraj-google-style
def get_airports(self, country): url = AIRPORT_BASE.format(country.replace(" ", "-")) return self._fr24.get_airports_data(url)
Returns a list of all the airports. For a given country this returns a list of dicts, one for each airport, with information like the IATA code of the airport, etc. Args: country (str): The country for which the airports will be fetched Example:: from pyflightdata import FlightData f=FlightData() f.get_airports('India')
juraj-google-style
def download(self, resource_id): self.resource_id(str(resource_id)) self._request_uri = '{}/download'.format(self._request_uri)
Update the request URI to download the document for this resource. Args: resource_id (integer): The document resource id.
juraj-google-style
def GetClientURNsForHostnames(hostnames, token=None): if data_store.RelationalDBEnabled(): index = ClientIndex() else: index = CreateClientIndex(token=token) keywords = set() for hostname in hostnames: if hostname.startswith('host:'): keywords.add(hostname) else: keywords.add(('host:%s' % hostname)) results = index.ReadClientPostingLists(keywords) result = {} for (keyword, hits) in iteritems(results): result[keyword[len('host:'):]] = hits return result
Gets all client_ids for a given list of hostnames or FQDNS. Args: hostnames: A list of hostnames / FQDNs. token: An ACL token. Returns: A dict with a list of all known GRR client_ids for each hostname.
codesearchnet
def _process_config_item(item, dirname): item = copy.deepcopy(item) html = item.get('html', None) if (not html): raise UserWarning(("Can't find HTML source for item:\n%s" % str(item))) link = (html if (': del item['html'] for (key, val) in item.items(): if ('notfoundmsg' in val): val['notfoundmsg'] = val['notfoundmsg'].replace('$name', key) return {'html': _get_source(link), 'link': link, 'vars': item}
Process one item from the configuration file, which contains multiple items saved as dictionary. This function reads additional data from the config and does some replacements - for example, if you specify url, it will download data from this url and so on. Args: item (dict): Item, which will be processed. Note: Returned data format:: { "link": "link to html page/file", "html": "html code from file/url", "vars": { "varname": { "data": "matching data..", ... } } } Returns: dict: Dictionary in the format shown above.
codesearchnet
def ParseDom(self, dom, feed): shape_num = 0 for node in dom.getElementsByTagName('Placemark'): p = self.ParsePlacemark(node) if p.IsPoint(): (lon, lat) = p.coordinates[0] m = self.stopNameRe.search(p.name) feed.AddStop(lat, lon, m.group(1)) elif p.IsLine(): self.ConvertPlacemarkToShape(p, feed)
Parses the given kml dom tree and updates the Google transit feed object. Args: dom - kml dom tree feed - an instance of Schedule class to be updated
codesearchnet
def makeDoubleLinked(dom, parent=None): dom.parent = parent for child in dom.childs: child.parent = dom makeDoubleLinked(child, dom)
Standard output from `dhtmlparser` is a single-linked tree. This will make it double-linked. Args: dom (obj): :class:`.HTMLElement` instance. parent (obj, default None): Don't use this; it is used in the recursive call.
juraj-google-style
def decision_points(self) -> List[DecisionPoint]: return self._decision_points
Returns all decision points in their declaration order. Returns: All decision points in current space. For multi-choices, the sub-choice objects will be returned. Users can call `spec.parent_choice` to access the parent multi-choice node.
github-repos
def modify_binding(site, binding, hostheader=None, ipaddress=None, port=None, sslflags=None): if ((sslflags is not None) and (sslflags not in _VALID_SSL_FLAGS)): message = "Invalid sslflags '{0}' specified. Valid sslflags range: {1}..{2}".format(sslflags, _VALID_SSL_FLAGS[0], _VALID_SSL_FLAGS[(- 1)]) raise SaltInvocationError(message) current_sites = list_sites() if (site not in current_sites): log.debug("Site '%s' not defined.", site) return False current_bindings = list_bindings(site) if (binding not in current_bindings): log.debug("Binding '%s' not defined.", binding) return False (i, p, h) = binding.split(':') new_binding = ':'.join([(ipaddress if (ipaddress is not None) else i), (six.text_type(port) if (port is not None) else six.text_type(p)), (hostheader if (hostheader is not None) else h)]) if (new_binding != binding): ps_cmd = ['Set-WebBinding', '-Name', "'{0}'".format(site), '-BindingInformation', "'{0}'".format(binding), '-PropertyName', 'BindingInformation', '-Value', "'{0}'".format(new_binding)] cmd_ret = _srvmgr(ps_cmd) if (cmd_ret['retcode'] != 0): msg = 'Unable to modify binding: {0}\nError: {1}'.format(binding, cmd_ret['stderr']) raise CommandExecutionError(msg) if ((sslflags is not None) and (sslflags != current_sites[site]['bindings'][binding]['sslflags'])): ps_cmd = ['Set-WebBinding', '-Name', "'{0}'".format(site), '-BindingInformation', "'{0}'".format(new_binding), '-PropertyName', 'sslflags', '-Value', "'{0}'".format(sslflags)] cmd_ret = _srvmgr(ps_cmd) if (cmd_ret['retcode'] != 0): msg = 'Unable to modify binding SSL Flags: {0}\nError: {1}'.format(sslflags, cmd_ret['stderr']) raise CommandExecutionError(msg) log.debug('Binding modified successfully: %s', binding) return True
Modify an IIS Web Binding. Use ``site`` and ``binding`` to target the binding. .. versionadded:: 2017.7.0 Args: site (str): The IIS site name. binding (str): The binding to edit. This is a combination of the IP address, port, and hostheader. It is in the following format: ipaddress:port:hostheader. For example, ``*:80:`` or ``*:80:salt.com`` hostheader (str): The host header of the binding. Usually the hostname. ipaddress (str): The IP address of the binding. port (int): The TCP port of the binding. sslflags (str): The flags representing certificate type and storage of the binding. Returns: bool: True if successful, otherwise False CLI Example: The following will set the host header of binding ``*:80:`` for ``site0`` to ``example.com`` .. code-block:: bash salt '*' win_iis.modify_binding site='site0' binding='*:80:' hostheader='example.com'
codesearchnet
def get_pattern_additional_cycles(self, patternnumber): _checkPatternNumber(patternnumber) address = _calculateRegisterAddress('cycles', patternnumber) return self.read_register(address)
Get the number of additional cycles for a given pattern. Args: patternnumber (integer): 0-7 Returns: The number of additional cycles (int).
codesearchnet
def list_depth(list_, func=max, _depth=0): depth_list = [list_depth(item, func=func, _depth=_depth + 1) for item in list_ if util_type.is_listlike(item)] if len(depth_list) > 0: return func(depth_list) else: return _depth
Returns the deepest level of nesting within a list of lists Args: list_ : a nested listlike object func : depth aggregation strategy (defaults to max) _depth : internal var Example: >>> # ENABLE_DOCTEST >>> from utool.util_list import * # NOQA >>> list_ = [[[[[1]]], [3]], [[1], [3]], [[1], [3]]] >>> result = (list_depth(list_, _depth=0)) >>> print(result)
juraj-google-style
def encode(self, s): if s.endswith('.mp3'): out_filepath = (s[:(- 4)] + '.wav') call(['sox', '--guard', s, '-r', '16k', '-b', '16', '-c', '1', out_filepath]) s = out_filepath elif (not s.endswith('.wav')): out_filepath = (s + '.wav') if (not os.path.exists(out_filepath)): call(['sox', '-r', '16k', '-b', '16', '-c', '1', s, out_filepath]) s = out_filepath (rate, data) = wavfile.read(s) assert (rate == self._sample_rate) assert (len(data.shape) == 1) if (data.dtype not in [np.float32, np.float64]): data = (data.astype(np.float32) / np.iinfo(data.dtype).max) return data.tolist()
Transform a string with a filename into a list of float32. Args: s: path to the file with a waveform. Returns: samples: list of float32s normalized to [-1, 1]
codesearchnet
def _create_triangular_filter_bank(fft_freqs: np.ndarray, filter_freqs: np.ndarray) -> np.ndarray: filter_diff = np.diff(filter_freqs) slopes = np.expand_dims(filter_freqs, 0) - np.expand_dims(fft_freqs, 1) down_slopes = -slopes[:, :-2] / filter_diff[:-1] up_slopes = slopes[:, 2:] / filter_diff[1:] return np.maximum(np.zeros(1), np.minimum(down_slopes, up_slopes))
Creates a triangular filter bank. Adapted from *torchaudio* and *librosa*. Args: fft_freqs (`np.ndarray` of shape `(num_frequency_bins,)`): Discrete frequencies of the FFT bins in Hz. filter_freqs (`np.ndarray` of shape `(num_mel_filters,)`): Center frequencies of the triangular filters to create, in Hz. Returns: `np.ndarray` of shape `(num_frequency_bins, num_mel_filters)`
github-repos
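A small numeric check for _create_triangular_filter_bank, building 4 triangular filters over 0-5 kHz for a 257-bin FFT (linear filter spacing here, purely for illustration; real mel banks space filter_freqs on the mel scale):
import numpy as np
fft_freqs = np.linspace(0, 8000, 257)  # FFT bin frequencies in Hz
filter_freqs = np.array([0., 1000., 2000., 3000., 4000., 5000.])  # 6 edge points -> 4 filters
fbank = _create_triangular_filter_bank(fft_freqs, filter_freqs)
print(fbank.shape)  # (257, 4)
print(np.isclose(fbank.max(), 1.0))  # each filter peaks at its centre frequency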
def delete_case(self, case): mongo_case = self.case(case) if not mongo_case: raise CaseError("Tried to delete case {0} but could not find case".format( case.get('case_id') )) LOG.info("Removing case {0} from database".format( mongo_case.get('case_id') )) self.db.case.delete_one({'_id': mongo_case['_id']}) return
Delete case from the database Delete a case from the database Args: case (dict): A case dictionary
juraj-google-style
def serialize_example(transformed_json_data, info_dict): import six import tensorflow as tf def _make_int64_list(x): return tf.train.Feature(int64_list=tf.train.Int64List(value=x)) def _make_bytes_list(x): return tf.train.Feature(bytes_list=tf.train.BytesList(value=x)) def _make_float_list(x): return tf.train.Feature(float_list=tf.train.FloatList(value=x)) if (sorted(six.iterkeys(transformed_json_data)) != sorted(six.iterkeys(info_dict))): raise ValueError(('Keys do not match %s, %s' % (list(six.iterkeys(transformed_json_data)), list(six.iterkeys(info_dict))))) ex_dict = {} for (name, info) in six.iteritems(info_dict): if (info['dtype'] == tf.int64): ex_dict[name] = _make_int64_list(transformed_json_data[name]) elif (info['dtype'] == tf.float32): ex_dict[name] = _make_float_list(transformed_json_data[name]) elif (info['dtype'] == tf.string): ex_dict[name] = _make_bytes_list(transformed_json_data[name]) else: raise ValueError(('Unsupported data type %s' % info['dtype'])) ex = tf.train.Example(features=tf.train.Features(feature=ex_dict)) return ex.SerializeToString()
Makes a serialized tf.example. Args: transformed_json_data: dict of transformed data. info_dict: output of feature_transforms.get_transformed_feature_info() Returns: The serialized tf.example version of transformed_json_data.
codesearchnet
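A minimal round trip through serialize_example, assuming TensorFlow is installed; the feature names and values below are made up, and the keys of the data dict must match info_dict exactly:
import tensorflow as tf
info = {'age': {'dtype': tf.int64}, 'name': {'dtype': tf.string}}
row = {'age': [42], 'name': [b'alice']}
serialized = serialize_example(row, info)
print(tf.train.Example.FromString(serialized))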
def get_gan_loss(self, true_frames, gen_frames, name): with tf.variable_scope(('%s_discriminator' % name), reuse=tf.AUTO_REUSE): (gan_d_loss, _, fake_logits_stop) = self.d_step(true_frames, gen_frames) with tf.variable_scope(('%s_discriminator' % name), reuse=True): (gan_g_loss_pos_d, gan_g_loss_neg_d) = self.g_step(gen_frames, fake_logits_stop) gan_g_loss = (gan_g_loss_pos_d + gan_g_loss_neg_d) tf.summary.scalar(('gan_loss_%s' % name), (gan_g_loss_pos_d + gan_d_loss)) if (self.hparams.gan_optimization == 'joint'): gan_loss = (gan_g_loss + gan_d_loss) else: curr_step = self.get_iteration_num() gan_loss = tf.cond(tf.logical_not(((curr_step % 2) == 0)), (lambda : gan_g_loss), (lambda : gan_d_loss)) return gan_loss
Get the discriminator + generator loss at every step. This performs a 1:1 update of the discriminator and generator at every step. Args: true_frames: 5-D Tensor of shape (num_steps, batch_size, H, W, C) Assumed to be ground truth. gen_frames: 5-D Tensor of shape (num_steps, batch_size, H, W, C) Assumed to be fake. name: discriminator scope. Returns: loss: 0-D Tensor, with d_loss + g_loss
codesearchnet
def RemoveScanNode(self, path_spec): scan_node = self._scan_nodes.get(path_spec, None) if (not scan_node): return None if scan_node.sub_nodes: raise RuntimeError('Scan node has sub nodes.') parent_scan_node = scan_node.parent_node if parent_scan_node: parent_scan_node.sub_nodes.remove(scan_node) if (path_spec == self._root_path_spec): self._root_path_spec = None del self._scan_nodes[path_spec] if path_spec.IsFileSystem(): del self._file_system_scan_nodes[path_spec] return parent_scan_node
Removes a scan node of a certain path specification. Args: path_spec (PathSpec): path specification. Returns: SourceScanNode: parent scan node or None if not available. Raises: RuntimeError: if the scan node has sub nodes.
codesearchnet
def getFingerprintsForTexts(self, strings, sparsity=1.0): body = [{"text": s} for s in strings] return self._text.getRepresentationsForBulkText(self._retina, json.dumps(body), sparsity)
Bulk get Fingerprint for text. Args: strings, list(str): A list of texts to be evaluated (required) sparsity, float: Sparsify the resulting expression to this percentage (optional) Returns: list of Fingerprint Raises: CorticalioException: if the request was not successful
juraj-google-style
def _EvaluateExpressions(self, frame): return [self._FormatExpression(frame, expression) for expression in self._definition.get('expressions') or []]
Evaluates watched expressions into a string form. If expression evaluation fails, the error message is used as evaluated expression string. Args: frame: Python stack frame of breakpoint hit. Returns: Array of strings where each string corresponds to the breakpoint expression with the same index.
juraj-google-style
def scalar_pb(tag, data, description=None): arr = np.array(data) if arr.shape != (): raise ValueError('Expected scalar shape for tensor, got shape: %s.' % arr.shape) if arr.dtype.kind not in ('b', 'i', 'u', 'f'): raise ValueError('Cast %s to float is not supported' % arr.dtype.name) tensor_proto = tensor_util.make_tensor_proto(arr.astype(np.float32)) summary_metadata = metadata.create_summary_metadata( display_name=None, description=description) summary = summary_pb2.Summary() summary.value.add(tag=tag, metadata=summary_metadata, tensor=tensor_proto) return summary
Create a scalar summary_pb2.Summary protobuf. Arguments: tag: String tag for the summary. data: A 0-dimensional `np.array` or a compatible python number type. description: Optional long-form description for this summary, as a `str`. Markdown is supported. Defaults to empty. Raises: ValueError: If the type or shape of the data is unsupported. Returns: A `summary_pb2.Summary` protobuf object.
juraj-google-style
def arcsinh(x): if any_symbolic_tensors((x,)): return Arcsinh().symbolic_call(x) return backend.numpy.arcsinh(x)
Inverse hyperbolic sine, element-wise. Arguments: x: Input tensor. Returns: Output tensor of same shape as `x`. Example: >>> x = keras.ops.convert_to_tensor([1, -1, 0]) >>> keras.ops.arcsinh(x) array([0.88137364, -0.88137364, 0.0], dtype=float32)
github-repos
def decrypt_block(self, cipherText): if (not self.initialized): raise TypeError('CamCrypt object has not been initialized') if (len(cipherText) != BLOCK_SIZE): raise ValueError(('cipherText must be %d bytes long (received %d bytes)' % (BLOCK_SIZE, len(cipherText)))) plain = ctypes.create_string_buffer(BLOCK_SIZE) self.decblock(self.bitlen, cipherText, self.keytable, plain) return plain.raw
Decrypt a 16-byte block of data. NOTE: This function was formerly called `decrypt`, but was changed when support for decrypting arbitrary-length strings was added. Args: cipherText (str): 16-byte data. Returns: 16-byte str. Raises: TypeError if CamCrypt object has not been initialized. ValueError if `cipherText` is not BLOCK_SIZE (i.e. 16) bytes.
codesearchnet
def commit(self, sourcedir, targetdir, abs_config, abs_sourcedir, abs_targetdir): (config_path, config_filename) = os.path.split(abs_config) if (not os.path.exists(config_path)): os.makedirs(config_path) if (not os.path.exists(abs_sourcedir)): os.makedirs(abs_sourcedir) if (not os.path.exists(abs_targetdir)): os.makedirs(abs_targetdir) self.backend_engine.dump({'SOURCES_PATH': sourcedir, 'TARGET_PATH': targetdir, 'LIBRARY_PATHS': [], 'OUTPUT_STYLES': 'nested', 'SOURCE_COMMENTS': False, 'EXCLUDES': []}, abs_config, indent=4)
Commit project structure and configuration file Args: sourcedir (string): Source directory path. targetdir (string): Compiled files target directory path. abs_config (string): Configuration file absolute path. abs_sourcedir (string): ``sourcedir`` expanded as absolute path. abs_targetdir (string): ``targetdir`` expanded as absolute path.
codesearchnet
def layout(mtf_graph, mesh_shape, mtf_outputs=()): mesh_shape = mtf.convert_to_shape(mesh_shape) estimator = memory_estimator.MemoryEstimator(mtf_graph, mesh_shape, mtf_outputs) optimizer = layout_optimizer.LayoutOptimizer(estimator) return mtf.convert_to_layout_rules(optimizer.solve())
Compute layout rules based on a computational graph and mesh shape. Args: mtf_graph: a mtf.Graph. mesh_shape: an mtf.Shape, str, or listlike of mtf.Dimension. mtf_outputs: an optional iterable of mtf.Tensor, representing the outputs of the computation. Returns: a mtf.LayoutRules
juraj-google-style
def _execute_with_retries(conn, function, **kwargs): r = {} max_attempts = 18 max_retry_delay = 10 for attempt in range(max_attempts): log.info('attempt: %s function: %s', attempt, function) try: fn = getattr(conn, function) r['result'] = fn(**kwargs) return r except botocore.exceptions.ClientError as e: error_code = e.response['Error']['Code'] if (('LimitExceededException' in error_code) or ('ResourceInUseException' in error_code)): log.debug('Retrying due to AWS exception', exc_info=True) time.sleep(_jittered_backoff(attempt, max_retry_delay)) else: r['error'] = e.response['Error'] log.error(r['error']) r['result'] = None return r r['error'] = 'Tried to execute function {0} {1} times, but was unable'.format(function, max_attempts) log.error(r['error']) return r
Retry if we're rate limited by AWS or blocked by another call. Give up and return error message if resource not found or argument is invalid. conn The connection established by the calling method via _get_conn() function The function to call on conn. i.e. create_stream **kwargs Any kwargs required by the above function, with their keywords i.e. StreamName=stream_name Returns: The result dict with the HTTP response and JSON data if applicable as 'result', or an error as 'error' CLI example:: salt myminion boto_kinesis._execute_with_retries existing_conn function_name function_kwargs
codesearchnet
def isValidUnit(self, w): bad = set(['point', 'a']) if w in bad: return False try: pq.Quantity(0.0, w) return True except: return w == '/'
Checks if a string represents a valid quantities unit. Args: w (str): A string to be tested against the set of valid quantities units. Returns: True if the string can be used as a unit in the quantities module.
juraj-google-style
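The validity test in isValidUnit boils down to whether the quantities package accepts the unit string as a Quantity; a standalone equivalent sketch (assuming the quantities package is installed and imported as pq):
import quantities as pq
def is_valid_unit(w, bad=frozenset(['point', 'a'])):
    if w in bad:
        return False
    try:
        pq.Quantity(0.0, w)
        return True
    except Exception:
        return w == '/'
print(is_valid_unit('m/s'))    # True
print(is_valid_unit('point'))  # False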
def _constrain_L2_grad(op, grad): inp = op.inputs[0] inp_norm = tf.norm(inp) unit_inp = (inp / inp_norm) grad_projection = dot(unit_inp, grad) parallel_grad = (unit_inp * grad_projection) is_in_ball = tf.less_equal(inp_norm, 1) is_pointed_inward = tf.less(grad_projection, 0) allow_grad = tf.logical_or(is_in_ball, is_pointed_inward) clip_grad = tf.logical_not(allow_grad) clipped_grad = tf.cond(clip_grad, (lambda : (grad - parallel_grad)), (lambda : grad)) return clipped_grad
Gradient for constrained optimization on an L2 unit ball. This function projects the gradient onto the ball if you are on the boundary (or outside!), but leaves it untouched if you are inside the ball. Args: op: the tensorflow op we're computing the gradient for. grad: gradient we need to backprop Returns: (projected if necessary) gradient.
codesearchnet
def retrieve_instance_links(self): instance_links = {} self.log.debug('LINKS IS %s', LINKS) for (key, value) in LINKS.items(): if (value not in self.pipeline_config['instance_links'].values()): instance_links[key] = value return instance_links
Appends to existing instance links Returns: instance_links: A dictionary containing all the instance links in LINKS and not in pipeline_config
codesearchnet
def authenticate(self, request): request = request._request user = getattr(request, 'user', None) if ((not user) or user.is_anonymous): return None self.enforce_csrf(request) return (user, None)
Authenticate the user, requiring a logged-in account and CSRF. This is exactly the same as the `SessionAuthentication` implementation, with the `user.is_active` check removed. Args: request (HttpRequest) Returns: Tuple of `(user, token)` Raises: PermissionDenied: The CSRF token check failed.
codesearchnet
def decode(self, audio_codes: torch.Tensor, audio_scales: torch.Tensor, padding_mask: Optional[torch.Tensor]=None, return_dict: Optional[bool]=None) -> Union[Tuple[torch.Tensor, torch.Tensor], EncodecDecoderOutput]: return_dict = return_dict if return_dict is not None else self.config.return_dict chunk_length = self.config.chunk_length if chunk_length is None: if len(audio_codes) != 1: raise ValueError(f'Expected one frame, got {len(audio_codes)}') audio_values = self._decode_frame(audio_codes[0], audio_scales[0]) else: decoded_frames = [] for frame, scale in zip(audio_codes, audio_scales): frames = self._decode_frame(frame, scale) decoded_frames.append(frames) audio_values = self._linear_overlap_add(decoded_frames, self.config.chunk_stride or 1) if padding_mask is not None and padding_mask.shape[-1] < audio_values.shape[-1]: audio_values = audio_values[..., :padding_mask.shape[-1]] if not return_dict: return (audio_values,) return EncodecDecoderOutput(audio_values)
Decodes the given frames into an output audio waveform. Note that the output might be a bit bigger than the input. In that case, any extra steps at the end can be trimmed. Args: audio_codes (`torch.LongTensor` of shape `(batch_size, nb_chunks, chunk_length)`, *optional*): Discrete code embeddings computed using `model.encode`. audio_scales (`torch.Tensor` of shape `(batch_size, nb_chunks)`, *optional*): Scaling factor for each `audio_codes` input. padding_mask (`torch.Tensor` of shape `(batch_size, channels, sequence_length)`): Padding mask used to pad the `input_values`. return_dict (`bool`, *optional*): Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
github-repos
def to_env_vars(self): env = {'hosts': self.hosts, 'network_interface_name': self.network_interface_name, 'hps': self.hyperparameters, 'user_entry_point': self.user_entry_point, 'framework_params': self.additional_framework_parameters, 'resource_config': self.resource_config, 'input_data_config': self.input_data_config, 'output_data_dir': self.output_data_dir, 'channels': sorted(self.channel_input_dirs.keys()), 'current_host': self.current_host, 'module_name': self.module_name, 'log_level': self.log_level, 'framework_module': self.framework_module, 'input_dir': self.input_dir, 'input_config_dir': self.input_config_dir, 'output_dir': self.output_dir, 'num_cpus': self.num_cpus, 'num_gpus': self.num_gpus, 'model_dir': self.model_dir, 'module_dir': self.module_dir, 'training_env': dict(self), 'user_args': self.to_cmd_args(), 'output_intermediate_dir': self.output_intermediate_dir} for (name, path) in self.channel_input_dirs.items(): env[('channel_%s' % name)] = path for (key, value) in self.hyperparameters.items(): env[('hp_%s' % key)] = value return _mapping.to_env_vars(env)
Environment variable representation of the training environment Returns: dict: an instance of dictionary
codesearchnet
def GetMap(self, map_name, since=None, location=None): if map_name == config.MAP_PASSWORD: return self.GetPasswdMap(since) elif map_name == config.MAP_SSHKEY: return self.GetSshkeyMap(since) elif map_name == config.MAP_GROUP: return self.GetGroupMap(since) elif map_name == config.MAP_SHADOW: return self.GetShadowMap(since) elif map_name == config.MAP_NETGROUP: return self.GetNetgroupMap(since) elif map_name == config.MAP_AUTOMOUNT: return self.GetAutomountMap(since, location=location) raise error.UnsupportedMap('Source can not fetch %s' % map_name)
Get a specific map from this source. Args: map_name: A string representation of the map you want since: optional timestamp for incremental query location: optional field used by automounts to indicate a specific map Returns: A Map child class for the map requested. Raises: UnsupportedMap: for unknown source maps
github-repos
def recipe_manual(config, auth_read): hello(config, {'auth': auth_read, 'hour': [], 'say': 'Hello Manual', 'sleep': 0})
Used by tests. Args: auth_read (authentication) - Credentials used for reading data.
github-repos
def can_acomp(cat_id): url = 'https: auth = Auth() r = _req_with_retries(auth.gbdx_connection, url) try: data = r.json() return data['acompVersion'] is not None except: return False
Checks to see if a CatalogID can be atmos. compensated or not. Args: cat_id (str): The catalog ID from the platform catalog. Returns: available (bool): Whether or not the image can be acomp'd
juraj-google-style
def appliance_node_information(self): if (not self.__appliance_node_information): self.__appliance_node_information = ApplianceNodeInformation(self.__connection) return self.__appliance_node_information
Gets the ApplianceNodeInformation API client. Returns: ApplianceNodeInformation:
codesearchnet
def _make_headers(self, method, path, query={}, headers={}): date = datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT') nonce = self._make_nonce() ctype = headers.get('Content-Type') if headers.get('Content-Type') else 'application/json' auth = self._make_auth(method, date, nonce, path, query=query, ctype=ctype) req_headers = { 'Content-Type': 'application/json', 'Date': date, 'On-Nonce': nonce, 'Authorization': auth, 'User-Agent': 'Onshape Python Sample App', 'Accept': 'application/json' } for h in headers: req_headers[h] = headers[h] return req_headers
Creates a headers object to sign the request Args: - method (str): HTTP method - path (str): Request path, e.g. /api/documents. No query string - query (dict, default={}): Query string in key-value format - headers (dict, default={}): Other headers to pass in Returns: - dict: Dictionary containing all headers
juraj-google-style
def _CanProcessKeyWithPlugin(self, registry_key, plugin):
    for registry_key_filter in plugin.FILTERS:
        if getattr(registry_key_filter, 'key_paths', []):
            continue
        if registry_key_filter.Match(registry_key):
            return True
    return False
Determines if a plugin can process a Windows Registry key or its values. Args: registry_key (dfwinreg.WinRegistryKey): Windows Registry key. plugin (WindowsRegistryPlugin): Windows Registry plugin. Returns: bool: True if the Registry key can be processed with the plugin.
codesearchnet
def _acquire_given_subnet(self, uuid_path, subnet):
    lease = self.create_lease_object_from_subnet(subnet)
    self._take_lease(lease, uuid_path)
    return lease.to_ip_network()
Try to create a lease for subnet

    Args:
        uuid_path (str): Path to the uuid file of a :class:`lago.Prefix`
        subnet (str): dotted ipv4 subnet (for example ```192.168.200.0```)

    Returns:
        netaddr.IPNetwork: Which represents the selected subnet

    Raises:
        LagoSubnetLeaseException: If the requested subnet is not in the range
            of this store or it has already been taken
juraj-google-style
def __mod__(self, other):
    other = as_dimension(other)
    if self._value is None or other.value is None:
        return Dimension(None)
    else:
        return Dimension(self._value % other.value)
Returns `self` modulo `other`. Dimension modulo are computed as follows: ```python tf.compat.v1.Dimension(m) % tf.compat.v1.Dimension(n) == tf.compat.v1.Dimension(m % n) tf.compat.v1.Dimension(m) % tf.compat.v1.Dimension(None) # equiv. to tf.compat.v1.Dimension(None) tf.compat.v1.Dimension(None) % tf.compat.v1.Dimension(n) # equiv. to tf.compat.v1.Dimension(None) tf.compat.v1.Dimension(None) % tf.compat.v1.Dimension(None) # equiv. to tf.compat.v1.Dimension(None) ``` Args: other: Another Dimension, or a value accepted by `as_dimension`. Returns: A Dimension whose value is `self` modulo `other`.
github-repos
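A short usage sketch of the modulo semantics documented above; it assumes a TensorFlow 2.x install where `tf.compat.v1.Dimension` exposes this `__mod__`:

import tensorflow as tf

print((tf.compat.v1.Dimension(10) % tf.compat.v1.Dimension(3)).value)     # 1
print((tf.compat.v1.Dimension(10) % 3).value)                             # 1, via as_dimension
print((tf.compat.v1.Dimension(10) % tf.compat.v1.Dimension(None)).value)  # None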
def AnalyzeEvents(self):
    session = engine.BaseEngine.CreateSession(
        command_line_arguments=self._command_line_arguments,
        preferred_encoding=self.preferred_encoding)
    storage_reader = storage_factory.StorageFactory.CreateStorageReaderForFile(
        self._storage_file_path)
    if not storage_reader:
        logger.error('Format of storage file: {0:s} not supported'.format(
            self._storage_file_path))
        return
    self._number_of_analysis_reports = storage_reader.GetNumberOfAnalysisReports()
    storage_reader.Close()
    configuration = self._CreateProcessingConfiguration(self._knowledge_base)
    counter = collections.Counter()
    if self._output_format != 'null':
        self._status_view.SetMode(self._status_view_mode)
        self._status_view.SetStorageFileInformation(self._storage_file_path)
        status_update_callback = self._status_view.GetAnalysisStatusUpdateCallback()
        storage_reader = storage_factory.StorageFactory.CreateStorageReaderForFile(
            self._storage_file_path)
        analysis_engine = psort.PsortMultiProcessEngine(use_zeromq=self._use_zeromq)
        analysis_engine.ExportEvents(
            self._knowledge_base, storage_reader, self._output_module, configuration,
            deduplicate_events=self._deduplicate_events,
            status_update_callback=status_update_callback,
            time_slice=self._time_slice, use_time_slicer=self._use_time_slicer)
    for item, value in iter(session.analysis_reports_counter.items()):
        counter[item] = value
    if self._quiet_mode:
        return
    self._output_writer.Write('Processing completed.\n')
    table_view = views.ViewsFactory.GetTableView(self._views_format_type, title='Counter')
    for element, count in counter.most_common():
        if not element:
            element = 'N/A'
        table_view.AddRow([element, count])
    table_view.Write(self._output_writer)
    storage_reader = storage_factory.StorageFactory.CreateStorageReaderForFile(
        self._storage_file_path)
    self._PrintAnalysisReportsDetails(storage_reader, self._number_of_analysis_reports)
    self._output_writer.Write('Storage file is {0:s}\n'.format(self._storage_file_path))
Analyzes events from a plaso storage file and generate a report. Raises: BadConfigOption: when a configuration parameter fails validation. RuntimeError: if a non-recoverable situation is encountered.
codesearchnet
def most_uncertain_by_mask(self, mask, y):
    idxs = np.where(mask)[0]
    return idxs[np.argsort(np.abs(self.probs[idxs, y] - 1 / self.num_classes))[:4]]
Extracts the first 4 most uncertain indexes from the ordered list of probabilities Arguments: mask (numpy.ndarray): the mask of probabilities specific to the selected class; a boolean array with shape (num_of_samples,) which contains True where class==selected_class, and False everywhere else y (int): the selected class Returns: idxs (ndarray): An array of indexes of length 4
codesearchnet
def create_state(self, state_manager): pass
Uses the `state_manager` to create state for the FeatureColumn. Args: state_manager: A `StateManager` to create / access resources such as lookup tables and variables.
github-repos
def build_and_pickle_dump(self, abivalidate=False):
    self.build()
    if not abivalidate:
        return self.pickle_dump()
    isok, errors = self.abivalidate_inputs()
    if isok:
        return self.pickle_dump()
    errlines = []
    for i, e in enumerate(errors):
        errlines.append('[%d] %s' % (i, e))
    raise ValueError('\n'.join(errlines))
Build dirs and files of the `Flow` and save the object in pickle format.
    Returns 0 on success.

    Args:
        abivalidate: If True, all the input files are validated by calling
            the abinit parser. If the validation fails, ValueError is raised.
codesearchnet
def to_matrix(xx, yy, zz, xy, yz, xz):
    matrix = np.array([[xx, xy, xz], [xy, yy, yz], [xz, yz, zz]])
    return matrix
Convert a list of matrix components to a symmetric 3x3 matrix. Inputs should be in the order xx, yy, zz, xy, yz, xz. Args: xx (float): xx component of the matrix. yy (float): yy component of the matrix. zz (float): zz component of the matrix. xy (float): xy component of the matrix. yz (float): yz component of the matrix. xz (float): xz component of the matrix. Returns: (np.array): The matrix, as a 3x3 numpy array.
codesearchnet
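A quick check of the symmetric layout built by `to_matrix` above (assumes the function and numpy are importable):

import numpy as np

m = to_matrix(1.0, 2.0, 3.0, 0.1, 0.2, 0.3)
print(np.allclose(m, m.T))        # True: the matrix is symmetric
print(m[0, 1], m[1, 2], m[0, 2])  # 0.1 0.2 0.3 (xy, yz, xz)
print(m[0, 0], m[1, 1], m[2, 2])  # 1.0 2.0 3.0 (xx, yy, zz)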
def dynamics(start, end=None):
    def _(sequence):
        if start in _dynamic_markers_to_velocity:
            start_velocity = _dynamic_markers_to_velocity[start]
            start_marker = start
        else:
            raise ValueError('Unknown start dynamic: %s, must be in %s'
                             % (start, _dynamic_markers_to_velocity.keys()))
        if end is None:
            end_velocity = start_velocity
            end_marker = start_marker
        elif end in _dynamic_markers_to_velocity:
            end_velocity = _dynamic_markers_to_velocity[end]
            end_marker = end
        else:
            raise ValueError('Unknown end dynamic: %s, must be in %s'
                             % (end, _dynamic_markers_to_velocity.keys()))
        retval = sequence.__class__([Point(point) for point in sequence._elements])
        velocity_interval = ((float(end_velocity) - float(start_velocity)) / (len(retval) - 1)
                             if len(retval) > 1 else 0)
        velocities = [int(start_velocity + velocity_interval * pos) for pos in range(len(retval))]
        if start_velocity > end_velocity:
            retval[0]['dynamic'] = 'diminuendo'
            retval[-1]['dynamic'] = end_marker
        elif start_velocity < end_velocity:
            retval[0]['dynamic'] = 'crescendo'
            retval[-1]['dynamic'] = end_marker
        else:
            retval[0]['dynamic'] = start_marker
        for point, velocity in zip(retval, velocities):
            point['velocity'] = velocity
        return retval
    return _
Apply dynamics to a sequence.

    If end is specified, it will crescendo or diminuendo linearly from the start
    to the end dynamic. You can pass any of these strings as dynamic markers:
    ['pppppp', 'ppppp', 'pppp', 'ppp', 'pp', 'p', 'mp', 'mf', 'f', 'ff', 'fff', 'ffff']

    Args:
        start: beginning dynamic marker; if no end is specified all notes will get this marker
        end: ending dynamic marker; if unspecified the entire sequence will get the start dynamic marker

    Example usage:
        s1 | dynamics('p')        # play a sequence in piano
        s2 | dynamics('p', 'ff')  # crescendo from p to ff
        s3 | dynamics('ff', 'p')  # diminuendo from ff to p
codesearchnet
def permutation_matrix(permutation):
    assert check_permutation(permutation)
    n = len(permutation)
    op_matrix = np_zeros((n, n), dtype=int)
    for i, j in enumerate(permutation):
        op_matrix[j, i] = 1
    return Matrix(op_matrix)
Return orthogonal permutation matrix for permutation tuple

    Return an orthogonal permutation matrix :math:`M_\sigma` for a permutation
    :math:`\sigma` defined by the image tuple
    :math:`(\sigma(1), \sigma(2),\dots \sigma(n))`, such that

    .. math::

        M_\sigma \vec{e}_i = \vec{e}_{\sigma(i)}

    where :math:`\vec{e}_k` is the k-th standard basis vector.

    This definition ensures a composition law:

    .. math::

        M_{\sigma \cdot \tau} = M_\sigma M_\tau.

    The column form of :math:`M_\sigma` is thus given by

    .. math::

        M = ( \vec{e}_{\sigma(1)}, \vec{e}_{\sigma(2)}, \dots \vec{e}_{\sigma(n)}).

    Args:
        permutation (tuple): A permutation image tuple (zero-based indices!)
codesearchnet
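An illustrative check of the column form described above, written in plain numpy so it does not depend on the `Matrix` and `check_permutation` helpers (it mirrors the loop in `permutation_matrix`):

import numpy as np

sigma = (2, 0, 1)              # sigma(0)=2, sigma(1)=0, sigma(2)=1 (zero-based)
M = np.zeros((3, 3), dtype=int)
for i, j in enumerate(sigma):  # column i is the standard basis vector e_{sigma(i)}
    M[j, i] = 1

e0 = np.array([1, 0, 0])
print(M @ e0)                  # [0 0 1], i.e. e_2 = e_{sigma(0)}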
def _get_value_from_match(self, key, match):
    value = match.groups(1)[0]
    clean_value = str(value).lstrip().rstrip()
    if clean_value == 'true':
        self._log.info('Got value of "%s" as boolean true.', key)
        return True
    if clean_value == 'false':
        self._log.info('Got value of "%s" as boolean false.', key)
        return False
    try:
        float_value = float(clean_value)
        self._log.info('Got value of "%s" as float "%f".', key, float_value)
        return float_value
    except ValueError:
        self._log.info('Got value of "%s" as string "%s".', key, clean_value)
        return clean_value
Gets the value of the property in the given MatchObject. Args: key (str): Key of the property looked-up. match (MatchObject): The matched property. Return: The discovered value, as a string or boolean.
codesearchnet
def get_key(key, data_structure):
    if key == '/':
        return data_structure
    path = key.split('/')
    path[0] or path.pop(0)
    current_value = data_structure
    while path:
        current_key = path.pop(0)
        try:
            current_key = int(current_key)
        except ValueError:
            pass
        try:
            current_value = current_value[current_key]
        except (KeyError, IndexError):
            LOGGER.debug('failed to extract path {}'.format(key))
            return None
    return current_value
Helper method for extracting values from a nested data structure.

    Args:
        key (str): The path to the values (a series of keys and indexes
            separated by '/')
        data_structure (dict or list): The data structure from which the value
            will be extracted.

    Returns:
        str: The value associated with key
codesearchnet
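Two small lookups against `get_key` above (purely illustrative data):

data = {'jobs': [{'name': 'build', 'status': 'ok'}]}
print(get_key('/jobs/0/status', data))  # 'ok' -- the '0' segment is coerced to a list index
print(get_key('/', data) is data)       # True -- '/' returns the whole structure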
def FromId(architecture_id, error_on_unknown=True):
    if not architecture_id:
        return None
    for arch in Architecture._ALL:
        if arch.id == architecture_id:
            return arch
    if error_on_unknown:
        raise InvalidEnumValue(architecture_id, 'Architecture',
                               [value.id for value in Architecture._ALL])
    return None
Gets the enum corresponding to the given architecture id. Args: architecture_id: str, The architecture id to parse error_on_unknown: bool, True to raise an exception if the id is unknown, False to just return None. Raises: InvalidEnumValue: If the given value cannot be parsed. Returns: ArchitectureTuple, One of the Architecture constants or None if the input is None.
github-repos
def register_with_password(self, username, password):
    response = self.api.register(
        auth_body={"type": "m.login.dummy"},
        kind='user',
        username=username,
        password=password,
    )
    return self._post_registration(response)
Register for a new account on this HS. Args: username (str): Account username password (str): Account password Returns: str: Access Token Raises: MatrixRequestError
juraj-google-style
def partial_derivative_mu(mu, sigma, low, high, data):
    pd_mu = np.sum(data - mu) / sigma ** 2
    pd_mu -= len(data) * ((norm.pdf(low, mu, sigma) - norm.pdf(high, mu, sigma))
                          / (norm.cdf(high, mu, sigma) - norm.cdf(low, mu, sigma)))
    return -pd_mu
The partial derivative with respect to the mean. Args: mu (float): the mean of the truncated normal sigma (float): the std of the truncated normal low (float): the lower truncation bound high (float): the upper truncation bound data (ndarray): the one dimension list of data points for which we want to calculate the likelihood Returns: float: the partial derivative evaluated at the given point
codesearchnet
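A hedged numerical sanity check for the function above (assumes it is importable alongside scipy): the sign flip at the end means the return value should match the gradient of the truncated-normal negative log-likelihood, so it should agree with a central finite difference computed via `scipy.stats.truncnorm`:

import numpy as np
from scipy.stats import truncnorm

mu, sigma, low, high = 0.5, 1.2, -1.0, 2.0
data = np.array([0.1, 0.4, 1.3, 1.9])

def neg_log_lik(m):
    # standardized truncation bounds for scipy's truncnorm parameterization
    a, b = (low - m) / sigma, (high - m) / sigma
    return -np.sum(truncnorm.logpdf(data, a, b, loc=m, scale=sigma))

h = 1e-6
fd = (neg_log_lik(mu + h) - neg_log_lik(mu - h)) / (2 * h)
print(fd, partial_derivative_mu(mu, sigma, low, high, data))  # should agree closely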
def increment(self, size: int):
    assert size >= 0, size
    self.files += 1
    self.size += size
    self.bandwidth_meter.feed(size)
Increment the number of files downloaded. Args: size: The size of the file
juraj-google-style
def experimental_run_functions_eagerly(run_eagerly): return run_functions_eagerly(run_eagerly)
Enables / disables eager execution of `tf.function`s. Calling `tf.config.experimental_run_functions_eagerly(True)` will make all invocations of `tf.function` run eagerly instead of running as a traced graph function. See `tf.config.run_functions_eagerly` for an example. Note: This flag has no effect on functions passed into tf.data transformations as arguments. tf.data functions are never executed eagerly and are always executed as a compiled Tensorflow Graph. Args: run_eagerly: Boolean. Whether to run functions eagerly. Returns: None
github-repos
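A brief sketch of the flag's effect, assuming a TensorFlow 2.x install (the Python-side print fires on every call only when functions run eagerly):

import tensorflow as tf

tf.config.run_functions_eagerly(True)

@tf.function
def add_one(x):
    print('running eagerly')   # traced away when run as a graph function
    return x + 1

add_one(tf.constant(1))
add_one(tf.constant(2))
tf.config.run_functions_eagerly(False)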
def find_divisors(n):
    if not isinstance(n, int):
        raise TypeError('Expecting a strictly positive integer')
    if n <= 0:
        raise ValueError('Expecting a strictly positive integer')
    for i in range(1, int(n ** 0.5) + 1):
        if n % i == 0:
            # i and its complement n 
            divisors = {i, n 
            for divisor in divisors:
                yield divisor
Find all the positive divisors of the given integer n. Args: n (int): strictly positive integer Returns: A generator of all the positive divisors of n Raises: TypeError: if n is not an integer ValueError: if n is negative
codesearchnet
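A usage sketch for the generator above:

print(sorted(find_divisors(12)))  # [1, 2, 3, 4, 6, 12]
print(sorted(find_divisors(49)))  # [1, 7, 49] -- the set drops the duplicate 7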
def _extend_op(values, leaf_op, empty_st_op=None):
    if not isinstance(values, Sequence):
        raise ValueError('Expected a list')
    if not values:
        raise ValueError('List cannot be empty')
    if empty_st_op is None:
        empty_st_op = empty_st_op_like_zeros(leaf_op)
    value = values[0]
    if isinstance(value, StructuredTensor):
        empty_result = empty_st_op(values)
        if not value.field_names():
            return empty_result
        new_fields = {}
        for k in value.field_names():
            new_fields[k] = _extend_op([v.field_value(k) for v in values], leaf_op, empty_st_op)
        return StructuredTensor.from_fields(new_fields, shape=empty_result.shape)
    else:
        return leaf_op(values)
Extend an op from RaggedTensor and Tensor to StructuredTensor. Visits all children of the structured tensor, and children of children, applying leaf_op whenever it reaches a leaf, and empty_st_op whenever it reaches an internal node without children. Args: values: a list of structured tensors, ragged tensors, or tensors. All must have the same type. If they are structured tensors, they must have the same paths. leaf_op: an op for handling non-structured tensor. empty_st_op: op to create a structured tensor without fields. Returns: the result of the extended op (a StructuredTensor, RaggedTensor, or Tensor) Raises: ValueError: If values is not a Sequence or is empty.
github-repos
def read_classification_results(storage_client, file_path):
    if storage_client:
        success = False
        retry_count = 0
        while retry_count < 4:
            try:
                blob = storage_client.get_blob(file_path)
                if not blob:
                    return {}
                if blob.size > MAX_ALLOWED_CLASSIFICATION_RESULT_SIZE:
                    logging.warning('Skipping classification result because its too big: '
                                    '%d bytes for %s', blob.size, file_path)
                    return None
                buf = BytesIO()
                blob.download_to_file(buf)
                buf.seek(0)
                success = True
                break
            except Exception:
                retry_count += 1
                time.sleep(5)
        if not success:
            return None
    else:
        try:
            with open(file_path, 'rb') as f:
                buf = BytesIO(f.read())
        except IOError:
            return None
    result = {}
    if PY3:
        buf = StringIO(buf.read().decode('UTF-8'))
    for row in csv.reader(buf):
        try:
            image_filename = row[0]
            if image_filename.endswith('.png') or image_filename.endswith('.jpg'):
                image_filename = image_filename[:image_filename.rfind('.')]
            label = int(row[1])
        except (IndexError, ValueError):
            continue
        result[image_filename] = label
    return result
Reads classification results from the file in Cloud Storage.

    This method reads a file with classification results produced by running a
    defense on a single batch of adversarial images.

    Args:
        storage_client: instance of CompetitionStorageClient or None for local file
        file_path: path of the file with results

    Returns:
        dictionary where keys are image names or IDs and values are classification labels
codesearchnet
def load_mutation_rates(path=None):
    if path is None:
        path = resource_filename(__name__, "data/rates.txt")
    rates = []
    with open(path) as handle:
        for line in handle:
            if line.startswith("from"):
                continue
            line = [x.encode('utf8') for x in line.strip().split()]
            rates.append(line)
    return rates
load sequence context-based mutation rates Args: path: path to table of sequence context-based mutation rates. If None, this defaults to per-trinucleotide rates provided by Kaitlin Samocha (Broad Institute). Returns: list of [initial, changed, rate] lists e.g. [['AGA', 'ATA', '5e-8']]
juraj-google-style
def set_server_def(self, server_def, keep_alive_secs=_KEEP_ALIVE_SECS):
    if not server_def:
        raise ValueError('server_def is None.')
    self._server_def = server_def
    if self._context_handle:
        server_def_str = server_def.SerializeToString()
        pywrap_tfe.TFE_ContextSetServerDef(self._context_handle, keep_alive_secs, server_def_str)
        self._initialize_logical_devices()
    self._clear_caches()
    _device_parsing_cache.clear()
Allow setting a server_def on the context. When a server def is replaced, it effectively clears a bunch of caches within the context. If you attempt to use a tensor object that was pointing to a tensor on the remote device, it will raise an error. Args: server_def: A tensorflow::ServerDef proto. Enables execution on remote devices. keep_alive_secs: Num. seconds after which the remote end will hang up. As long as the client is still alive, the server state for the context will be kept alive. If the client is killed (or there is some failure), the server will clean up its context keep_alive_secs after the final RPC it receives. Raises: ValueError: if server_def is None.
github-repos
def find(self, *index):
    assert self.wrapFunction is not None
    if len(index) == 1 and isinstance(index[0], (tuple, list)):
        index = index[0]
    it = self._impl.find(Tuple(index)._impl)
    if it == self._impl.end():
        return None
    else:
        return self.wrapFunction(it)
Searches the current entity for an instance with the specified index. Returns: The wanted instance if found, otherwise it returns `None`.
codesearchnet
def recursive_import(root):
    for _, name, _ in pkgutil.walk_packages(root.__path__, prefix=root.__name__ + '.'):
        try:
            importlib.import_module(name)
        except (AttributeError, ImportError):
            pass
Recursively imports all the sub-modules under a root package. Args: root: A python package.
github-repos
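A hypothetical usage example; it assumes a package named `plugins` is importable on sys.path:

import importlib

plugins = importlib.import_module('plugins')
recursive_import(plugins)   # eagerly imports plugins.*, silently skipping broken modules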