Dataset columns:
code: string (lengths 20 to 4.93k)
docstring: string (lengths 33 to 1.27k)
source: string (3 classes)
def _from_tensor_shape(cls, shape: Any, num_row_partitions: int, dtype: dtypes.DType) -> 'DynamicRaggedShape.Spec': if dtype != dtypes.int32 and dtype != dtypes.int64: raise ValueError('dtype must be tf.int32 or tf.int64') shape = tensor_shape.as_shape(shape) if shape.rank is None: row_partitions = [RowPartitionSpec(dtype=dtype) for _ in range(num_row_partitions)] return DynamicRaggedShape.Spec(row_partitions=row_partitions, static_inner_shape=tensor_shape.TensorShape(None), dtype=dtype) if shape.rank <= 1: if num_row_partitions: raise ValueError('num_row_partitions should be zero ' + 'if shape is a scalar or vector.') return DynamicRaggedShape.Spec(row_partitions=[], static_inner_shape=shape, dtype=dtype) if shape.rank <= num_row_partitions: raise ValueError('num_row_partitions must be less than rank') num_elements_so_far = tensor_shape.dimension_value(shape[0]) rp_specs = [] for i in range(num_row_partitions): current_dim = tensor_shape.dimension_value(shape[i + 1]) if current_dim is None or num_elements_so_far is None: nvals = None else: nvals = num_elements_so_far * current_dim rp_specs.append(RowPartitionSpec(nrows=num_elements_so_far, nvals=nvals, uniform_row_length=current_dim, dtype=dtype)) num_elements_so_far = nvals static_inner_shape = tensor_shape.TensorShape([num_elements_so_far]) + shape[num_row_partitions + 1:] return DynamicRaggedShape.Spec(row_partitions=rp_specs, static_inner_shape=static_inner_shape, dtype=dtype)
Creates a `DynamicRaggedShape.Spec` corresponding to a `tf.TensorShape`. It is assumed that this is a `tf.TensorShape` coming from a `tf.TensorSpec`, not from `RaggedTensor.shape`. In addition to the shape, we need to know the number of row partitions, and the dtype used in the shape (tf.int32 or tf.int64). Within the dimensions that are partitioned, all dimensions are assumed to be uniform. Args: shape: a TensorShape. num_row_partitions: the ragged rank of the RaggedShape. dtype: the dtype of the shape (not the tensor); tf.int64 or tf.int32. Returns: a DynamicRaggedShape.Spec representing a TensorShape.
github-repos
def get_cost_per_mol(self, comp): comp = comp if isinstance(comp, Composition) else Composition(comp) decomp = self.get_lowest_decomposition(comp) return sum(k.energy_per_atom * v * comp.num_atoms for k, v in decomp.items())
Get best estimate of minimum cost/mol based on known data Args: comp: Composition as a pymatgen.core.structure.Composition Returns: float of cost/mol
juraj-google-style
def error(message): fail = '\033[91m' end = '\033[0m' sys.exit(fail + "Error: {}".format(message) + end)
Throw an error with the given message and immediately quit. Args: message(str): The message to display.
juraj-google-style
def parse_ranges(range_string): range_string = range_string.strip() if not range_string: return [] if 'inf' in range_string: range_string = re.sub('inf', repr(sys.float_info.max), range_string) ranges = ast.literal_eval(range_string) if isinstance(ranges, list) and (not isinstance(ranges[0], list)): ranges = [ranges] for item in ranges: if len(item) != 2: raise ValueError('Incorrect number of elements in range') elif not isinstance(item[0], (int, float)): raise ValueError('Incorrect type in the 1st element of range: %s' % type(item[0])) elif not isinstance(item[1], (int, float)): raise ValueError('Incorrect type in the 2nd element of range: %s' % type(item[1])) return ranges
Parse a string representing numerical range(s). Args: range_string: (str) A string representing a numerical range or a list of them. For example: "[-1.0,1.0]", "[-inf, 0]", "[[-inf, -1.0], [1.0, inf]]" Returns: (list of list of float) A list of numerical ranges parsed from the input string. Raises: ValueError: If the input doesn't represent a range or a list of ranges.
github-repos
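A minimal usage sketch of the parsing logic above, copied into a standalone function so it runs on its own; the name `parse_ranges_sketch` is illustrative only.

```python
import ast
import re
import sys

def parse_ranges_sketch(range_string):
    # Substitute 'inf' with the largest float so ast.literal_eval can parse it.
    range_string = range_string.strip()
    if not range_string:
        return []
    if 'inf' in range_string:
        range_string = re.sub('inf', repr(sys.float_info.max), range_string)
    ranges = ast.literal_eval(range_string)
    # Promote a single [lo, hi] pair to a list of pairs.
    if isinstance(ranges, list) and not isinstance(ranges[0], list):
        ranges = [ranges]
    return ranges

print(parse_ranges_sketch("[-1.0, 1.0]"))                 # [[-1.0, 1.0]]
print(parse_ranges_sketch("[[-inf, -1.0], [1.0, inf]]"))  # 'inf' replaced by the max float
```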
def _get_class(self): class_parts = [self._prefix, self._known_keys[_InstrumentationKnownStatusKeys.CLASS]] return '.'.join(filter(None, class_parts))
Gets the class name of the test method for the instrumentation method block. Returns: A string containing the class name of the instrumentation test method's test or empty string if no name was parsed. If a prefix was specified, then the prefix will be prepended to the class name.
github-repos
def abort_class(reason, extras=None): raise signals.TestAbortClass(reason, extras)
Abort all subsequent tests within the same test class in one iteration. If one test class is requested multiple times in a test run, this can only abort one of the requested executions, NOT all. Args: reason: The reason to abort. extras: An optional field for extra information to be included in test result. Raises: signals.TestAbortClass: Abort all subsequent tests in a test class.
github-repos
def aes_decrypt(base64_encryption_key, base64_data): data = from_base64(base64_data) (aes_key_bytes, hmac_key_bytes) = _extract_keys(base64_encryption_key) (data, hmac_signature) = (data[:(- HMAC_SIG_SIZE)], data[(- HMAC_SIG_SIZE):]) if (hmac.new(hmac_key_bytes, data, hashlib.sha256).digest() != hmac_signature): raise AuthenticationError('HMAC authentication failed') (iv_bytes, data) = (data[:AES_BLOCK_SIZE], data[AES_BLOCK_SIZE:]) cipher = AES.new(aes_key_bytes, AES.MODE_CBC, iv_bytes) data = cipher.decrypt(data) return _unpad(data)
Verify HMAC-SHA256 signature and decrypt data with AES-CBC Arguments: base64_encryption_key (str): a base64-encoded string containing an AES encryption key and HMAC signing key as generated by generate_encryption_key() base64_data (str): a base64-encoded string containing the AES-CBC encrypted data with an HMAC-SHA256 signature appended to the end Returns: str: a byte string containing the data that was originally encrypted Raises: AuthenticationError: when the HMAC-SHA256 signature authentication fails
codesearchnet
def human_timestamp_to_datetime(human_timestamp, to_utc=False): settings = {} if to_utc: settings = {"TO_TIMEZONE": "UTC"} return dateparser.parse(human_timestamp, settings=settings)
Converts a human-readable timestamp into a Python ``DateTime`` object Args: human_timestamp (str): A timestamp string to_utc (bool): Convert the timestamp to UTC Returns: DateTime: The converted timestamp
juraj-google-style
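A short usage sketch for the helper above; it assumes the third-party `dateparser` package is installed and accepts the `settings` mapping shown in the snippet.

```python
import dateparser  # third-party dependency used by the helper above

# Relative, human-readable timestamps are resolved against "now".
local_dt = dateparser.parse("2 hours ago")
utc_dt = dateparser.parse("2 hours ago", settings={"TO_TIMEZONE": "UTC"})
print(local_dt, utc_dt)
```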
def _get_default_initializer(self, name, shape=None, dtype=dtypes.float32): del shape if dtype.is_floating: initializer = init_ops.glorot_uniform_initializer() initializing_from_value = False elif dtype.is_integer or dtype.is_unsigned or dtype.is_bool or (dtype == dtypes.string): initializer = init_ops.zeros_initializer() initializing_from_value = False else: raise ValueError('An initializer for variable %s of %s is required' % (name, dtype.base_dtype)) return (initializer, initializing_from_value)
Provide a default initializer and a corresponding value. Args: name: see get_variable. shape: see get_variable. dtype: see get_variable. Returns: initializer and initializing_from_value. See get_variable above. Raises: ValueError: When giving unsupported dtype.
github-repos
def Create(conf): global _source_implementations if not _source_implementations: raise RuntimeError('no source implementations exist') source_name = conf['name'] if source_name not in list(_source_implementations.keys()): raise RuntimeError('source not implemented: %r' % (source_name,)) return _source_implementations[source_name](conf)
Source creation factory method. Args: conf: a dictionary of configuration key/value pairs, including one required attribute 'name'. Returns: A Source instance. Raises: RuntimeError: no sources are registered with RegisterImplementation
github-repos
def swap_tensor_content_in_graph_function(graph_def, from_endiness, to_endiness): if isinstance(graph_def, meta_graph_pb2.MetaGraphDef): functions = graph_def.graph_def.library.function elif isinstance(graph_def, graph_pb2.GraphDef): functions = graph_def.library.function else: return for function in functions: node_def = function.node_def for node in node_def: if node.op == 'Const': tensor = node.attr['value'].tensor byte_swap_tensor_content(tensor, from_endiness, to_endiness)
Fix endianness of tensor contents. Args: graph_def: Target graph_def to change endianness. from_endiness: The original endianness format, "big" or "little". to_endiness: The target endianness format, "big" or "little".
github-repos
def get_help_data(filepath): try: with open(filepath, 'r') as file: return _json.load(file, object_pairs_hook=OrderedDict) except Exception as e: logger.error("Could not load file {}".format(filepath)) logger.exception(e) return {}
Get the json data from a help file Args: filepath (str): The file path for the help file Returns: data: The json data from a help file
juraj-google-style
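A minimal sketch of the same pattern: load a JSON help file into an `OrderedDict` and fall back to an empty dict on failure. The file name `help_example.json` and its contents are made up for the example.

```python
import json
import logging
from collections import OrderedDict

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger(__name__)

def get_help_data_sketch(filepath):
    try:
        with open(filepath, 'r') as f:
            return json.load(f, object_pairs_hook=OrderedDict)  # keep key order
    except Exception as e:
        logger.error("Could not load file %s", filepath)
        logger.exception(e)
        return {}

with open("help_example.json", "w") as f:
    json.dump({"command": "run", "description": "Runs the task"}, f)

print(get_help_data_sketch("help_example.json"))  # OrderedDict with two keys
print(get_help_data_sketch("missing.json"))       # {} plus an error log
```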
def HasTable(self, table_name): if not self._connection: raise IOError('Not opened.') if not table_name: return False if self._table_names is None: self._table_names = [] self._cursor.execute(self._HAS_TABLE_QUERY) for row in self._cursor.fetchall(): if not row[0]: continue row_table_name = row[0] if isinstance(row_table_name, bytes): row_table_name = row_table_name.decode('utf-8') self._table_names.append(row_table_name.lower()) table_name = table_name.lower() return table_name in self._table_names
Determines if a specific table exists. Args: table_name (str): name of the table. Returns: bool: True if the table exists. Raises: IOError: if the database file is not opened. OSError: if the database file is not opened.
juraj-google-style
def add_severity(self, name, value): logger.debug('Adding severity {0} with value {1} to variant {2}'.format(name, value, self['variant_id'])) self['severities'].append({name: value})
Add a severity to the variant Args: name (str): The name of the severity value : The value of the severity
codesearchnet
def checksum(self, path): if not self.exists(path): raise BeamIOError('Path does not exist: %s' % path) return str(os.path.getsize(path))
Fetch checksum metadata of a file on the :class:`~apache_beam.io.filesystem.FileSystem`. Args: path: string path of a file. Returns: string containing file size. Raises: ``BeamIOError``: if path isn't a file or doesn't exist.
github-repos
def hex_is_dark(hexx, percent=50): (r, g, b) = hex_to_rgb(hexx) luma = ((((0.2126 * r) + (0.7152 * g)) + (0.0722 * b)) / 2.55) return (luma < percent)
Function to decide if a hex colour is dark. Args: hexx (str): A hexadecimal colour, starting with '#'. percent (int): The brightness threshold on a 0-100 scale; defaults to 50. Returns: bool: True if the colour's brightness (luma) is less than the given percent.
codesearchnet
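A self-contained sketch of the function above; `hex_to_rgb` is not shown in the snippet, so a plausible stand-in is included here.

```python
def hex_to_rgb(hexx):
    # Hypothetical helper matching what the original relies on.
    h = hexx.lstrip('#')
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

def hex_is_dark(hexx, percent=50):
    r, g, b = hex_to_rgb(hexx)
    # Rec. 709 luma, rescaled from 0-255 to a 0-100 "percent" scale.
    luma = (0.2126 * r + 0.7152 * g + 0.0722 * b) / 2.55
    return luma < percent

print(hex_is_dark('#101010'))  # True  (near black)
print(hex_is_dark('#f0f0f0'))  # False (near white)
```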
def _FlushCache(cls, format_categories): if (definitions.FORMAT_CATEGORY_ARCHIVE in format_categories): cls._archive_remainder_list = None cls._archive_scanner = None cls._archive_store = None if (definitions.FORMAT_CATEGORY_COMPRESSED_STREAM in format_categories): cls._compressed_stream_remainder_list = None cls._compressed_stream_scanner = None cls._compressed_stream_store = None if (definitions.FORMAT_CATEGORY_FILE_SYSTEM in format_categories): cls._file_system_remainder_list = None cls._file_system_scanner = None cls._file_system_store = None if (definitions.FORMAT_CATEGORY_STORAGE_MEDIA_IMAGE in format_categories): cls._storage_media_image_remainder_list = None cls._storage_media_image_scanner = None cls._storage_media_image_store = None if (definitions.FORMAT_CATEGORY_VOLUME_SYSTEM in format_categories): cls._volume_system_remainder_list = None cls._volume_system_scanner = None cls._volume_system_store = None
Flushes the cached objects for the specified format categories. Args: format_categories (set[str]): format categories.
codesearchnet
def __init__(self, default_value, initializer): super(InitializableLookupTableBase, self).__init__(initializer.key_dtype, initializer.value_dtype) self._default_value = ops.convert_to_tensor(default_value, dtype=self._value_dtype) self._default_value.get_shape().merge_with(tensor_shape.TensorShape([])) if isinstance(initializer, trackable_base.Trackable): self._initializer = self._track_trackable(initializer, '_initializer') with ops.init_scope(): self._resource_handle = self._create_resource() if not context.executing_eagerly() and ops.get_default_graph()._get_control_flow_context() is not None: with ops.init_scope(): self._init_op = self._initialize() else: self._init_op = self._initialize()
Construct a table object from a table reference. It requires a table initializer object (subclass of `TableInitializerBase`). It provides the table key and value types, as well as the op to initialize the table. The caller is responsible to execute the initialization op. Args: default_value: The value to use if a key is missing in the table. initializer: The table initializer to use.
github-repos
def ensure_scheme(url, default_scheme='http'): parsed = urlsplit(url, scheme=default_scheme) if (not parsed.netloc): parsed = SplitResult(scheme=parsed.scheme, netloc=parsed.path, path='', query=parsed.query, fragment=parsed.fragment) return urlunsplit(parsed)
Adds a scheme to a url if not present. Args: url (string): a url, assumed to start with netloc default_scheme (string): a scheme to be added Returns: string: URL with a scheme
codesearchnet
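A runnable copy of the helper with two illustrative calls, using only the standard library.

```python
from urllib.parse import urlsplit, urlunsplit, SplitResult

def ensure_scheme(url, default_scheme='http'):
    parsed = urlsplit(url, scheme=default_scheme)
    if not parsed.netloc:
        # 'example.com/x' parses with the host in .path, so move it to .netloc.
        parsed = SplitResult(scheme=parsed.scheme, netloc=parsed.path,
                             path='', query=parsed.query, fragment=parsed.fragment)
    return urlunsplit(parsed)

print(ensure_scheme('example.com/a?b=1'))    # http://example.com/a?b=1
print(ensure_scheme('https://example.com'))  # unchanged
```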
def status(self, **kwargs): path = ('/geo_nodes/%s/status' % self.get_id()) return self.manager.gitlab.http_get(path, **kwargs)
Get the status of the geo node. Args: **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabGetError: If the server failed to perform the request Returns: dict: The status of the geo node
codesearchnet
def NewFromContent(cls, content, urn, chunk_size=1024, token=None, private_key=None, public_key=None): aff4.FACTORY.Delete(urn, token=token) with data_store.DB.GetMutationPool() as pool: with aff4.FACTORY.Create(urn, cls, mode='w', mutation_pool=pool, token=token) as fd: for start_of_chunk in range(0, len(content), chunk_size): chunk = content[start_of_chunk:(start_of_chunk + chunk_size)] blob_rdf = rdf_crypto.SignedBlob() blob_rdf.Sign(chunk, private_key, public_key) fd.Add(blob_rdf, mutation_pool=pool) return urn
Alternate constructor for GRRSignedBlob. Creates a GRRSignedBlob from a content string by chunking it and signing each chunk. Args: content: The data to be stored in the GRRSignedBlob. urn: The AFF4 URN to create. chunk_size: Data will be chunked into this size (each chunk is individually signed). token: The ACL Token. private_key: An rdf_crypto.RSAPrivateKey() instance. public_key: An rdf_crypto.RSAPublicKey() instance. Returns: the URN of the new object written.
codesearchnet
def _find_and_replace(text, start_string, end_string, replace_fn): ret = u"" current_pos = 0 while True: start_pos = text.find(start_string, current_pos) if start_pos == -1: ret += text[current_pos:] break ret += text[current_pos:start_pos] end_pos = text.find(end_string, start_pos + len(start_string)) if end_pos == -1: break ret += replace_fn(text[start_pos + len(start_string):end_pos]) current_pos = end_pos + len(end_string) return ret
Remove everything found between instances of start_string and end_string. Replace each such instance with replace_fn(removed_text) e.g. _find_and_replace(u"the [[fat]] cat [[sat]]", u"[[", u"]]", lambda x: x) = u"the fat cat sat" Args: text: a unicode string start_string: a unicode string end_string: a unicode string replace_fn: a unary function from unicode string to unicode string Returns: a string
juraj-google-style
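A runnable copy of the helper demonstrating the docstring example, plus a replacement function that is not the identity.

```python
def _find_and_replace(text, start_string, end_string, replace_fn):
    ret = u""
    current_pos = 0
    while True:
        start_pos = text.find(start_string, current_pos)
        if start_pos == -1:
            ret += text[current_pos:]
            break
        ret += text[current_pos:start_pos]
        end_pos = text.find(end_string, start_pos + len(start_string))
        if end_pos == -1:
            break
        ret += replace_fn(text[start_pos + len(start_string):end_pos])
        current_pos = end_pos + len(end_string)
    return ret

print(_find_and_replace(u"the [[fat]] cat [[sat]]", u"[[", u"]]", lambda x: x))
# -> the fat cat sat
print(_find_and_replace(u"a [[b]] c", u"[[", u"]]", lambda x: x.upper()))
# -> a B c
```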
def run_step(self, representer): assert representer, ("ObjectRepresenter instance required to run " "ObjectRewriterStep.") rewriter = ObjectRewriter(self.context.get_formatted_iterable, representer) super().run_step(rewriter)
Do the object in-out rewrite. Args: representer: A pypyr.filesystem.ObjectRepresenter instance.
juraj-google-style
def Next(self): stacktop = self.stack[(- 1)] if (stacktop.index == (- 1)): stacktop = _Frame(None, index=0) self.stack.append(stacktop) context_array = self.stack[(- 2)].context if (stacktop.index == len(context_array)): self.stack.pop() raise StopIteration stacktop.context = context_array[stacktop.index] stacktop.index += 1 return True
Advance to the next item in a repeated section. Raises: StopIteration if there are no more elements
codesearchnet
class _ConvBlock(tf.keras.Model): def __init__(self, kernel_size, filters, stage, block, data_format, strides=(2, 2)): super(_ConvBlock, self).__init__(name='') filters1, filters2, filters3 = filters conv_name_base = 'res' + str(stage) + block + '_branch' bn_name_base = 'bn' + str(stage) + block + '_branch' bn_axis = 1 if data_format == 'channels_first' else 3 self.conv2a = layers.Conv2D(filters1, (1, 1), strides=strides, name=conv_name_base + '2a', data_format=data_format) self.bn2a = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2a') self.conv2b = layers.Conv2D(filters2, kernel_size, padding='same', name=conv_name_base + '2b', data_format=data_format) self.bn2b = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2b') self.conv2c = layers.Conv2D(filters3, (1, 1), name=conv_name_base + '2c', data_format=data_format) self.bn2c = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '2c') self.conv_shortcut = layers.Conv2D(filters3, (1, 1), strides=strides, name=conv_name_base + '1', data_format=data_format) self.bn_shortcut = layers.BatchNormalization(axis=bn_axis, name=bn_name_base + '1') def call(self, input_tensor, training=False): x = self.conv2a(input_tensor) x = self.bn2a(x, training=training) x = tf.nn.relu(x) x = self.conv2b(x) x = self.bn2b(x, training=training) x = tf.nn.relu(x) x = self.conv2c(x) x = self.bn2c(x, training=training) shortcut = self.conv_shortcut(input_tensor) shortcut = self.bn_shortcut(shortcut, training=training) x += shortcut return tf.nn.relu(x)
_ConvBlock is the block that has a conv layer at shortcut. Args: kernel_size: the kernel size of middle conv layer at main path filters: list of integers, the filters of 3 conv layer at main path stage: integer, current stage label, used for generating layer names block: 'a','b'..., current block label, used for generating layer names data_format: data_format for the input ('channels_first' or 'channels_last'). strides: strides for the convolution. Note that from stage 3, the first conv layer at main path is with strides=(2,2), and the shortcut should have strides=(2,2) as well.
github-repos
def cumulative_distribution(self, X): self.check_fit() def func(*args): return self.probability_density(list(args)) lower_bound = self.get_lower_bound() ranges = [[lower_bound, val] for val in X] return integrate.nquad(func, ranges)[0]
Computes the cumulative distribution function for the copula Args: X: `numpy.ndarray` or `pandas.DataFrame` Returns: np.array: cumulative probability
codesearchnet
def add_activation_summary(x, types=None, name=None, collections=None): ndim = x.get_shape().ndims if ndim < 2: logger.warn("Cannot summarize scalar activation {}".format(x.name)) return if types is None: types = ['sparsity', 'rms', 'histogram'] with cached_name_scope('activation-summary'): add_tensor_summary(x, types, name=name, collections=collections)
Call :func:`add_tensor_summary` under a reused 'activation-summary' name scope. This function is a no-op if not calling from main training tower. Args: x (tf.Tensor): the tensor to summary. types (list[str]): summary types, defaults to ``['sparsity', 'rms', 'histogram']``. name (str): if is None, use x.name. collections (list[str]): collections of the summary ops.
juraj-google-style
def get_size_with_aspect_ratio(image_size: Tuple[int, int], size: int, max_size: Optional[int]=None, mod_size: int=16) -> Tuple[int, int]: height, width = image_size raw_size = None if max_size is not None: min_original_size = float(min((height, width))) max_original_size = float(max((height, width))) if max_original_size / min_original_size * size > max_size: raw_size = max_size * min_original_size / max_original_size size = int(round(raw_size)) if width < height: ow = size if max_size is not None and raw_size is not None: oh = int(raw_size * height / width) else: oh = int(size * height / width) elif height <= width and height == size or (width <= height and width == size): oh, ow = (height, width) else: oh = size if max_size is not None and raw_size is not None: ow = int(raw_size * width / height) else: ow = int(size * width / height) if mod_size is not None: ow_mod = torch.remainder(torch.tensor(ow), mod_size).item() oh_mod = torch.remainder(torch.tensor(oh), mod_size).item() ow = ow - ow_mod oh = oh - oh_mod return (oh, ow)
Computes the output image size given the input image size and the desired output size, constrained to be a multiple of `mod_size`. Args: image_size (`Tuple[int, int]`): The input image size. size (`int`): The desired output size. max_size (`int`, *optional*): The maximum allowed output size. mod_size (`int`, *optional*): The value the output dimensions are rounded down to a multiple of.
github-repos
def install_package(self, name, index=None, force=False, update=False): cmd = 'install' if force: cmd = '{0} {1}'.format(cmd, '--force-reinstall') if update: cmd = '{0} {1}'.format(cmd, '--update') if index: cmd = '{0} {1}'.format(cmd, '--index-url {0}'.format(index)) self.pip('{0} {1}'.format(cmd, name))
Install a given package. Args: name (str): The package name to install. This can be any valid pip package specification. index (str): The URL for a pypi index to use. force (bool): Force the reinstall of packages during updates. update (bool): Update the package if it is out of date.
codesearchnet
def resource_path(package: Union[str, types.ModuleType]) -> abstract_path.Path: try: path = importlib_resources.files(package) except AttributeError: is_adhoc = True else: if isinstance(path, importlib_resources._adapters.CompatibilityFiles.SpecPath): is_adhoc = True else: is_adhoc = False if is_adhoc: if isinstance(package, types.ModuleType): package = getattr(package.__spec__, 'name', package.__name__) path = pathlib.Path(sys.modules[package].__file__) if path.name == '__init__.py': path = path.parent if isinstance(path, pathlib.Path): return abstract_path.Path(path) elif isinstance(path, zipfile.Path): path = ResourcePath(path.root, path.at) return typing.cast(abstract_path.Path, path) elif isinstance(path, importlib_resources.abc.Traversable): return typing.cast(abstract_path.Path, path) else: raise TypeError(f'Unknown resource path: {type(path)}: {path}')
Returns read-only root directory path of the module. Used to access module resource files. Usage: ```python path = epath.resource_path('tensorflow_datasets') / 'README.md' content = path.read_text() ``` This is compatible with everything, including zipapp (`.par`). Resource files should be in the `data=` of the `py_library(` (when using bazel). To write to your project (e.g. automatically update your code), read-only resource paths can be converted to read-write paths with `epath.to_write_path(path)`. Args: package: Module or module name. Returns: The read-only path to the root module directory
github-repos
def valid_as_v2_0(voevent): _return_to_standard_xml(voevent) valid_bool = voevent_v2_0_schema.validate(voevent) _remove_root_tag_prefix(voevent) return valid_bool
Tests if a voevent conforms to the schema. Args: voevent(:class:`Voevent`): Root node of a VOEvent etree. Returns: bool: Whether VOEvent is valid
codesearchnet
def options(self): response = self.repo.api.http_request('OPTIONS', self.uri) return response.headers
Small method to return headers of an OPTIONS request to self.uri Args: None Return: (dict) response headers from OPTIONS request
juraj-google-style
def set_default_by_alias(self, alias): if alias not in self._aliases: raise DataInvalidAlias('A dataset with alias {} does not exist'.format(alias)) self._default_index = self._aliases[alias]
Set the default dataset by its alias. After changing the default dataset, all calls without explicitly specifying the dataset by index or alias will be redirected to this dataset. Args: alias (str): The alias of the dataset that should be made the default. Raises: DataInvalidAlias: If the alias does not represent a valid dataset.
juraj-google-style
def cast_to_type(obj, out_type): in_type = type(obj) if out_type is in_type: return obj else: return out_type(obj)
Cast obj to out_type if it's not out_type already. If the obj happens to be out_type already, it just returns obj as is. Args: obj: input object out_type: type. Returns: obj cast to out_type. Usual python conversion / casting rules apply.
juraj-google-style
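A compact equivalent of the helper with a few illustrative calls; behaviour matches the original for these simple built-in types.

```python
def cast_to_type(obj, out_type):
    # Return obj untouched if it is already the requested type.
    return obj if type(obj) is out_type else out_type(obj)

print(cast_to_type("3", int))    # 3
print(cast_to_type(3.0, float))  # 3.0, returned unchanged (already a float)
print(cast_to_type(7, str))      # '7'
```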
def from_api_repr(cls, api_repr): api_repr = api_repr.strip() if not api_repr: raise ValueError("Field path API representation cannot be empty.") return cls(*parse_field_path(api_repr))
Factory: create a FieldPath from the string formatted per the API. Args: api_repr (str): a string path, with non-identifier elements quoted It cannot exceed 1500 characters, and cannot be empty. Returns: (:class:`FieldPath`) An instance parsed from ``api_repr``. Raises: ValueError if the parsing fails
juraj-google-style
def init(config, workdir=None, logfile=None, loglevel=logging.INFO, **kwargs): setup_sdk_logging(logfile, loglevel) defaults = lago_config.get_section('init') if workdir is None: workdir = os.path.abspath('.lago') defaults['workdir'] = workdir defaults['virt_config'] = config defaults.update(kwargs) workdir, prefix = cmd.do_init(**defaults) return SDK(workdir, prefix)
Initialize the Lago environment Args: config(str): Path to LagoInitFile workdir(str): Path to initialize the workdir, defaults to "$PWD/.lago" **kwargs(dict): Pass arguments to :func:`~lago.cmd.do_init` logfile(str): A path to setup a log file. loglevel(int): :mod:`logging` log level. Returns: :class:`~lago.sdk.SDK`: Initialized Lago environment Raises: :exc:`~lago.utils.LagoException`: If initialization failed
juraj-google-style
def read_structs(fstream): struct = read_struct(fstream) while (struct is not None): (yield struct) struct = read_struct(fstream)
Read all structs from likwid's file stream. Args: fstream: Likwid's output file stream. Returns: A generator that can be used to iterate over all structs in the fstream.
codesearchnet
def unsafe_peek(init): def peek(store, container, _stack=None): return init(*[store.peek(attr, container, _stack=_stack) for attr in container]) return peek
Deserialize all the attributes available in the container and pass them in the same order as they come in the container. This is a factory function; returns the actual `peek` routine. Arguments: init: type constructor. Returns: callable: deserializer (`peek` routine).
codesearchnet
def onTagAdd(self, name, func): if ('*' in name): self.ontagaddglobs.add(name, func) else: self.ontagadds[name].append(func)
Register a callback for tag addition. Args: name (str): The name of the tag or tag glob. func (function): The callback func(node, tagname, tagval).
codesearchnet
def save_images(images, filenames, output_dir): for i, filename in enumerate(filenames): with tf.gfile.Open(os.path.join(output_dir, filename), 'w') as f: img = (((images[i, :, :, :] + 1.0) * 0.5) * 255.0).astype(np.uint8) Image.fromarray(img).save(f, format='PNG')
Saves images to the output directory. Args: images: array with minibatch of images filenames: list of filenames without path If the number of file names in this list is less than the number of images in the minibatch, then only the first len(filenames) images will be saved. output_dir: directory where to save images
juraj-google-style
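A minimal sketch of just the pixel-range conversion used above (floats in [-1, 1] to uint8), leaving out the tf.gfile/PIL file writing.

```python
import numpy as np

# One image in [-1, 1], as produced by many GAN generators.
img_float = np.random.uniform(-1.0, 1.0, size=(4, 4, 3))

# Same mapping as in save_images: [-1, 1] -> [0, 255] uint8.
img_uint8 = ((img_float + 1.0) * 0.5 * 255.0).astype(np.uint8)
assert img_uint8.min() >= 0 and img_uint8.max() <= 255
```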
def __init__(self, shape, dtype=dtypes.float32, name=None): self._shape = tensor_shape.TensorShape(shape) try: self._shape_tuple = tuple(self.shape.as_list()) except ValueError: self._shape_tuple = None self._dtype = dtypes.as_dtype(dtype) self._name = name
Creates a TensorSpec. Args: shape: Value convertible to `tf.TensorShape`. The shape of the tensor. dtype: Value convertible to `tf.DType`. The type of the tensor values. name: Optional name for the Tensor. Raises: TypeError: If shape is not convertible to a `tf.TensorShape`, or dtype is not convertible to a `tf.DType`.
juraj-google-style
def get_edgestore_handle( client: arango.client.ArangoClient, username=None, password=None, edgestore_db_name: str = edgestore_db_name, edgestore_edges_name: str = edgestore_edges_name, edgestore_nodes_name: str = edgestore_nodes_name, edgestore_pipeline_name: str = edgestore_pipeline_name, edgestore_pipeline_stats_name: str = edgestore_pipeline_stats_name, edgestore_pipeline_errors_name: str = edgestore_pipeline_errors_name, ) -> arango.database.StandardDatabase: (username, password) = get_user_creds(username, password) sys_db = client.db("_system", username=username, password=password) try: if username and password: edgestore_db = sys_db.create_database( name=edgestore_db_name, users=[{"username": username, "password": password, "active": True}], ) else: edgestore_db = sys_db.create_database(name=edgestore_db_name) except arango.exceptions.DatabaseCreateError: if username and password: edgestore_db = client.db( edgestore_db_name, username=username, password=password ) else: edgestore_db = client.db(edgestore_db_name) try: nodes = edgestore_db.create_collection( edgestore_nodes_name, index_bucket_count=64 ) nodes.add_hash_index(fields=["name"], unique=False) nodes.add_hash_index( fields=["components"], unique=False ) except Exception: pass try: edges = edgestore_db.create_collection( edgestore_edges_name, edge=True, index_bucket_count=64 ) edges.add_hash_index(fields=["relation"], unique=False) edges.add_hash_index(fields=["edge_types"], unique=False) edges.add_hash_index(fields=["nanopub_id"], unique=False) edges.add_hash_index(fields=["metadata.project"], unique=False) edges.add_hash_index(fields=["annotations[*].id"], unique=False) except Exception: pass try: edgestore_db.create_collection(edgestore_pipeline_name) except Exception: pass try: edgestore_db.create_collection(edgestore_pipeline_errors_name) except Exception: pass try: edgestore_db.create_collection(edgestore_pipeline_stats_name) except arango.exceptions.CollectionCreateError as e: pass return edgestore_db
Get Edgestore arangodb database handle Args: client (arango.client.ArangoClient): ArangoDB client used for the connection username (None, optional): ArangoDB username password (None, optional): ArangoDB password edgestore_db_name (str, optional): name of the edgestore database edgestore_edges_name (str, optional): name of the edges collection edgestore_nodes_name (str, optional): name of the nodes collection Returns: arango.database.StandardDatabase: handle to the edgestore database
juraj-google-style
def _check_response(response, expected): response_code = response.status_code if expected == response_code: return if response_code < 400: raise ex.UnexpectedResponseCodeException(response.text) elif response_code == 401: raise ex.UnauthorizedException(response.text) elif response_code == 400: raise ex.BadRequestException(response.text) elif response_code == 403: raise ex.ForbiddenException(response.text) elif response_code == 404: raise ex.NotFoundException(response.text) elif response_code == 429: raise ex.RateLimitedException(response.text) else: raise ex.InternalServerErrorException(response.text)
Checks if the expected response code matches the actual response code. If they're not equal, raises the appropriate exception. Args: response: The HTTP response object whose status code is checked expected: (int) Expected status code
juraj-google-style
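A minimal sketch of the dispatch pattern above, with stand-in exception classes because the real `ex` module is not shown in the snippet.

```python
# Stand-in exception classes for illustration only.
class UnauthorizedException(Exception): pass
class NotFoundException(Exception): pass
class UnexpectedResponseCodeException(Exception): pass

def check_response_sketch(status_code, expected, body=""):
    if status_code == expected:
        return
    if status_code < 400:
        raise UnexpectedResponseCodeException(body)
    if status_code == 401:
        raise UnauthorizedException(body)
    if status_code == 404:
        raise NotFoundException(body)
    raise Exception(body)  # catch-all for the remaining 4xx/5xx codes

check_response_sketch(200, 200)    # passes silently
# check_response_sketch(404, 200)  # would raise NotFoundException
```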
def store_container(self, container): with self._store_lock: self.store.setdefault(container.CONTAINER_TYPE, []).append(container)
Thread-safe method to store data in the state's store. Args: container (containers.interface.AttributeContainer): The data to store.
codesearchnet
def rtt_get_num_up_buffers(self): cmd = enums.JLinkRTTCommand.GETNUMBUF dir = ctypes.c_int(enums.JLinkRTTDirection.UP) return self.rtt_control(cmd, dir)
After starting RTT, get the current number of up buffers. Args: self (JLink): the ``JLink`` instance Returns: The number of configured up buffers on the target. Raises: JLinkRTTException if the underlying JLINK_RTTERMINAL_Control call fails.
juraj-google-style
def get_text(revision, strip=True): start_pos = revision.find('<text') assert (start_pos != (- 1)) end_tag_pos = revision.find('>', start_pos) assert (end_tag_pos != (- 1)) end_tag_pos += len('>') end_pos = revision.find('</text>') if (end_pos == (- 1)): ret = '' else: ret = revision[end_tag_pos:end_pos] if strip: ret = strip_text(ret) ret = text_encoder.to_unicode_utf8(ret) return ret
Extract the text from a revision. Args: revision: a string strip: a boolean Returns: a string
codesearchnet
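A reduced sketch of the `<text ...>` extraction above; the `strip_text` and `text_encoder` helpers used by the original are omitted.

```python
def get_text_sketch(revision):
    # Slice out everything between the opening <text ...> tag and </text>.
    start_pos = revision.find('<text')
    end_tag_pos = revision.find('>', start_pos) + len('>')
    end_pos = revision.find('</text>')
    return '' if end_pos == -1 else revision[end_tag_pos:end_pos]

rev = '<revision><text xml:space="preserve">Hello wiki</text></revision>'
print(get_text_sketch(rev))  # Hello wiki
```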
def identity(x, name=None): return array_ops.identity(x, name=name)
Returns a tensor with the same content as the input tensor. Args: x: The input tensor. name: String, name for the variable to create. Returns: A tensor of the same shape, type and content.
github-repos
def list_dir(root, prefix=False): root = os.path.expanduser(root) directories = list( filter( lambda p: os.path.isdir(os.path.join(root, p)), os.listdir(root) ) ) if prefix is True: directories = [os.path.join(root, d) for d in directories] return directories
List all directories at a given root Args: root (str): Path to directory whose folders need to be listed prefix (bool, optional): If true, prepends the path to each result, otherwise only returns the name of the directories found
juraj-google-style
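A compact, runnable equivalent of the helper, exercised against a temporary directory.

```python
import os
import tempfile

def list_dir(root, prefix=False):
    root = os.path.expanduser(root)
    dirs = [p for p in os.listdir(root) if os.path.isdir(os.path.join(root, p))]
    return [os.path.join(root, d) for d in dirs] if prefix else dirs

root = tempfile.mkdtemp()
os.mkdir(os.path.join(root, 'a'))
os.mkdir(os.path.join(root, 'b'))
print(sorted(list_dir(root)))               # ['a', 'b']
print(sorted(list_dir(root, prefix=True)))  # absolute paths
```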
def memory_write(self, addr, data, zone=None, nbits=None): buf_size = len(data) buf = None access = 0 if (nbits is None): packed_data = map((lambda d: reversed(binpacker.pack(d))), data) packed_data = list(itertools.chain(*packed_data)) buf_size = len(packed_data) buf = (ctypes.c_uint8 * buf_size)(*packed_data) access = 0 elif (nbits == 8): buf = (ctypes.c_uint8 * buf_size)(*data) access = 1 elif (nbits == 16): buf = (ctypes.c_uint16 * buf_size)(*data) access = 2 buf_size = (buf_size * access) elif (nbits == 32): buf = (ctypes.c_uint32 * buf_size)(*data) access = 4 buf_size = (buf_size * access) else: raise ValueError(('Given bit size is invalid: %s' % nbits)) args = [addr, buf_size, buf, access] method = self._dll.JLINKARM_WriteMemEx if (zone is not None): method = self._dll.JLINKARM_WriteMemZonedEx args.append(zone.encode()) units_written = method(*args) if (units_written < 0): raise errors.JLinkWriteException(units_written) return units_written
Writes memory to a target system or specific memory zone. The optional ``zone`` specifies a memory zone to access to write to, e.g. ``IDATA``, ``DDATA``, or ``CODE``. The given number of bits, if provided, must be either ``8``, ``16``, or ``32``. Args: self (JLink): the ``JLink`` instance addr (int): start address to write to data (list): list of data units to write zone (str): optional memory zone name to access nbits (int): number of bits to use for each unit Returns: Number of units written. Raises: JLinkException: on write hardware failure. ValueError: if ``nbits`` is not ``None``, and not in ``8``, ``16`` or ``32``.
codesearchnet
def from_index_amount(cls, idx, amount): if (np.array(idx).ndim == 0): v = np.zeros(6) v[idx] = amount return cls.from_voigt(v) elif (np.array(idx).ndim == 1): v = np.zeros((3, 3)) for i in itertools.permutations(idx): v[i] = amount return cls(v) else: raise ValueError('Index must either be 2-tuple or integer corresponding to full-tensor or voigt index')
Like Deformation.from_index_amount, except generates a strain from the zero 3x3 tensor or voigt vector with the amount specified in the index location. Ensures symmetric strain. Args: idx (tuple or integer): index to be perturbed, can be voigt or full-tensor notation amount (float): amount to perturb selected index
codesearchnet
def noise_new( dim: int, h: float = NOISE_DEFAULT_HURST, l: float = NOISE_DEFAULT_LACUNARITY, random: Optional[tcod.random.Random] = None, ) -> tcod.noise.Noise: return tcod.noise.Noise(dim, hurst=h, lacunarity=l, seed=random)
Return a new Noise instance. Args: dim (int): Number of dimensions. From 1 to 4. h (float): The hurst exponent. Should be in the 0.0-1.0 range. l (float): The noise lacunarity. random (Optional[Random]): A Random instance, or None. Returns: Noise: The new Noise instance.
juraj-google-style
def plot_cv(self, tmin, tmax, ntemp, ylim=None, **kwargs): temperatures = np.linspace(tmin, tmax, ntemp) if self.structure: ylabel = '$C_v$ (J/K/mol)' else: ylabel = '$C_v$ (J/K/mol-c)' fig = self._plot_thermo(self.dos.cv, temperatures, ylabel=ylabel, ylim=ylim, **kwargs) return fig
Plots the constant volume specific heat C_v in a temperature range. Args: tmin: minimum temperature tmax: maximum temperature ntemp: number of steps ylim: tuple specifying the y-axis limits. kwargs: kwargs passed to the matplotlib function 'plot'. Returns: matplotlib figure
codesearchnet
def _FormatServiceText(self, service): string_segments = [service.name, '\tImage Path = {0:s}'.format(service.image_path), '\tService Type = {0:s}'.format(service.HumanReadableType()), '\tStart Type = {0:s}'.format(service.HumanReadableStartType()), '\tService Dll = {0:s}'.format(service.service_dll), '\tObject Name = {0:s}'.format(service.object_name), '\tSources:'] for source in service.sources: string_segments.append('\t\t{0:s}:{1:s}'.format(source[0], source[1])) return '\n'.join(string_segments)
Produces a human readable multi-line string representing the service. Args: service (WindowsService): service to format. Returns: str: human readable representation of a Windows Service.
codesearchnet
def json_to_pybel(data, infer_bonds=False): obmol = ob.OBMol() obmol.BeginModify() for atom in data['atoms']: obatom = obmol.NewAtom() obatom.SetAtomicNum(table.GetAtomicNum(str(atom['element']))) obatom.SetVector(*atom['location']) if ('label' in atom): pd = ob.OBPairData() pd.SetAttribute('_atom_site_label') pd.SetValue(atom['label']) obatom.CloneData(pd) if (('bonds' not in data) or (not data['bonds'])): if infer_bonds: obmol.ConnectTheDots() obmol.PerceiveBondOrders() else: for bond in data['bonds']: if ('atoms' not in bond): continue obmol.AddBond((bond['atoms'][0] + 1), (bond['atoms'][1] + 1), bond['order']) if ('unitcell' in data): uc = ob.OBUnitCell() uc.SetData(*(ob.vector3(*v) for v in data['unitcell'])) uc.SetSpaceGroup('P1') obmol.CloneData(uc) obmol.EndModify() mol = pybel.Molecule(obmol) if ('charge' in data['atoms'][0]): mol.OBMol.SetPartialChargesPerceived() for (atom, pyatom) in zip(data['atoms'], mol.atoms): pyatom.OBAtom.SetPartialCharge(atom['charge']) return mol
Converts python data structure to pybel.Molecule. This will infer bond data if not specified. Args: data: The loaded json data of a molecule, as a Python object infer_bonds (Optional): If no bonds specified in input, infer them Returns: An instance of `pybel.Molecule`
codesearchnet
async def start(self, name='websocket_client'): self._con = (await websockets.connect(self.url)) self._connection_task = self._loop.add_task(self._manage_connection(), name=name)
Connect to the websocket server. This method will spawn a background task in the designated event loop that will run until stop() is called. You can control the name of the background task for debugging purposes using the name parameter. The name is not used in anyway except for debug logging statements. Args: name (str): Optional name for the background task.
codesearchnet
def add_time_dimension(padded_inputs, seq_lens): padded_batch_size = tf.shape(padded_inputs)[0] max_seq_len = padded_batch_size // tf.shape(seq_lens)[0] new_batch_size = padded_batch_size // max_seq_len new_shape = ([new_batch_size, max_seq_len] + padded_inputs.get_shape().as_list()[1:]) return tf.reshape(padded_inputs, new_shape)
Adds a time dimension to padded inputs. Arguments: padded_inputs (Tensor): a padded batch of sequences. That is, for seq_lens=[1, 2, 2], then inputs=[A, *, B, B, C, C], where A, B, C are sequence elements and * denotes padding. seq_lens (Tensor): the sequence lengths within the input batch, suitable for passing to tf.nn.dynamic_rnn(). Returns: Reshaped tensor of shape [NUM_SEQUENCES, MAX_SEQ_LEN, ...].
juraj-google-style
def insert_query_m(data, table, conn, columns=None, db_type='mysql'): if len(data) > 10000: _chunk_query(data, 10000, columns, conn, table, db_type) else: if db_type == 'sqlite': type_sign = '?' else: type_sign = '%s' type_com = type_sign + ", " type = type_com * (len(data[0]) - 1) type = type + type_sign if columns: stmt = "INSERT INTO " + table + "( " + columns + ") VALUES (" + type + ")" else: stmt = "INSERT INTO " + table + " VALUES (" + type + ")" cursor = conn.cursor() cursor.executemany(stmt, data) conn.commit()
Insert python list of tuples into SQL table Args: data (list): List of tuples table (str): Name of database table conn (connection object): database connection object columns (str): String of column names to use if not assigned then all columns are presumed to be used [Optional] db_type (str): Either "sqlite" or "mysql"
juraj-google-style
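A minimal end-to-end sketch of the sqlite branch above using `executemany`; the table and column names are made up for the example.

```python
import sqlite3

# In-memory database standing in for the `conn` argument.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE peaks (mz REAL, intensity REAL)')

data = [(100.1, 5.0), (200.2, 7.5), (300.3, 1.2)]
placeholders = ', '.join('?' * len(data[0]))  # '?, ?' for the sqlite branch
stmt = 'INSERT INTO peaks VALUES ({})'.format(placeholders)
cur = conn.cursor()
cur.executemany(stmt, data)
conn.commit()
print(conn.execute('SELECT COUNT(*) FROM peaks').fetchone()[0])  # 3
```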
def get_additional_charge_by_identifier(self, recurring_billing_id): fmt = 'recurringBillItems/{}'.format(recurring_billing_id) return self.client._get((self.url + fmt), headers=self.get_headers())
Query extra charge information of an invoice from its identifier. Args: recurring_billing_id: Identifier of the additional charge. Returns: The API response with the additional charge information.
codesearchnet
def plot_bloch_multivector(rho, title='', figsize=None): if not HAS_MATPLOTLIB: raise ImportError('Must have Matplotlib installed.') rho = _validate_input_state(rho) num = int(np.log2(len(rho))) width, height = plt.figaspect(1/num) fig = plt.figure(figsize=(width, height)) for i in range(num): ax = fig.add_subplot(1, num, i + 1, projection='3d') pauli_singles = [ Pauli.pauli_single(num, i, 'X'), Pauli.pauli_single(num, i, 'Y'), Pauli.pauli_single(num, i, 'Z') ] bloch_state = list( map(lambda x: np.real(np.trace(np.dot(x.to_matrix(), rho))), pauli_singles)) plot_bloch_vector(bloch_state, "qubit " + str(i), ax=ax, figsize=figsize) fig.suptitle(title, fontsize=16) plt.close(fig) return fig
Plot the Bloch sphere. Plot a sphere, axes, the Bloch vector, and its projections onto each axis. Args: rho (ndarray): Numpy array for state vector or density matrix. title (str): a string that represents the plot title figsize (tuple): Has no effect, here for compatibility only. Returns: Figure: A matplotlib figure instance if `ax = None`. Raises: ImportError: Requires matplotlib.
juraj-google-style
def __init__(self, class_number, train_examples, test_examples, **kwargs): super(EMNISTConfig, self).__init__(**kwargs) self.class_number = class_number self.train_examples = train_examples self.test_examples = test_examples
BuilderConfig for EMNIST class number. Args: class_number: There are six different splits provided in this dataset. And have different class numbers. train_examples: number of train examples test_examples: number of test examples **kwargs: keyword arguments forwarded to super.
juraj-google-style
def normalize(self, image: np.ndarray, data_format: Optional[Union[str, ChannelDimension]]=None, input_data_format: Optional[Union[str, ChannelDimension]]=None) -> np.ndarray: image = rescale(image=image, scale=1 / 127.5, data_format=data_format, input_data_format=input_data_format) image = image - 1 return image
Normalizes an images' pixel values to between [-1, 1]. Args: image (`np.ndarray`): Image to normalize. data_format (`str` or `ChannelDimension`, *optional*): The channel dimension format of the image. If not provided, it will be the same as the input image. input_data_format (`ChannelDimension` or `str`, *optional*): The channel dimension format of the input image. If not provided, it will be inferred.
github-repos
def GetHelp(self, prefix='', include_special_flags=True): helplist = [] flags_by_module = self.FlagsByModuleDict() if flags_by_module: modules = sorted(flags_by_module) main_module = sys.argv[0] if (main_module in modules): modules.remove(main_module) modules = ([main_module] + modules) for module in modules: self.__RenderOurModuleFlags(module, helplist) if include_special_flags: self.__RenderModuleFlags('gflags', _helpers.SPECIAL_FLAGS.FlagDict().values(), helplist) else: values = self.FlagDict().values() if include_special_flags: values.append(_helpers.SPECIAL_FLAGS.FlagDict().values()) self.__RenderFlagList(values, helplist, prefix) return '\n'.join(helplist)
Generates a help string for all known flags. Args: prefix: str, per-line output prefix. include_special_flags: bool, whether to include description of _SPECIAL_FLAGS, i.e. --flagfile and --undefok. Returns: str, formatted help message.
codesearchnet
def download_software_file(filename=None, synch=False): if not filename: raise CommandExecutionError("Filename option must not be none.") if not isinstance(synch, bool): raise CommandExecutionError("Synch option must be boolean..") if synch is True: query = {'type': 'op', 'cmd': '<request><system><software><download>' '<file>{0}</file></download></software></system></request>'.format(filename)} else: query = {'type': 'op', 'cmd': '<request><system><software><download><sync-to-peer>yes</sync-to-peer>' '<file>{0}</file></download></software></system></request>'.format(filename)} return _get_job_results(query)
Download software packages by filename. Args: filename(str): The filename of the PANOS file to download. synch (bool): If true then the file will synch to the peer unit. CLI Example: .. code-block:: bash salt '*' panos.download_software_file PanOS_5000-8.0.0 salt '*' panos.download_software_file PanOS_5000-8.0.0 True
juraj-google-style
def from_dict(cls, config_dict, **kwargs): config = cls(**config_dict) to_remove = [] for key, value in kwargs.items(): if hasattr(config, key): setattr(config, key, value) to_remove.append(key) for key in to_remove: kwargs.pop(key, None) return config
Constructs a BaseWatermarkingConfig instance from a dictionary of parameters. Args: config_dict (Dict[str, Any]): Dictionary containing configuration parameters. **kwargs: Additional keyword arguments to override dictionary values. Returns: BaseWatermarkingConfig: Instance of BaseWatermarkingConfig constructed from the dictionary.
github-repos
def determine_opening_indent(indent_texts): num_lines = len(indent_texts) if num_lines < 1: return 0 assert num_lines >= 1 first_line_indent = indent_texts[0][0] if num_lines == 1: return first_line_indent assert num_lines >= 2 second_line_indent = indent_texts[1][0] second_line_text = indent_texts[1][1] if len(second_line_text) == 0: return first_line_indent return second_line_indent
Determine the opening indent level for a docstring. The opening indent level is the first non-zero indent level of a non-empty line in the docstring. Args: indent_texts: The lines of the docstring as an iterable over 2-tuples each containing an integer indent level as the first element and the text as the second element. Returns: The opening indent level as an integer.
juraj-google-style
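A runnable copy of the function with two illustrative inputs, using (indent, text) pairs as described in the docstring.

```python
def determine_opening_indent(indent_texts):
    if len(indent_texts) < 1:
        return 0
    first_line_indent = indent_texts[0][0]
    if len(indent_texts) == 1:
        return first_line_indent
    second_line_indent, second_line_text = indent_texts[1]
    return first_line_indent if len(second_line_text) == 0 else second_line_indent

# (indent, text) pairs for a docstring whose summary starts flush and whose
# body is indented by 4.
print(determine_opening_indent([(0, 'Summary line.'), (4, 'Body text.')]))  # 4
print(determine_opening_indent([(0, 'Summary line.'), (0, '')]))            # 0
```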
def get_nearest_site(self, coords, site, r=None): index = self.index(site) if r is None: r = np.linalg.norm(np.sum(self.lattice.matrix, axis=0)) ns = self.get_sites_in_sphere(coords, r, include_index=True) ns = [n for n in ns if n[2] == index] ns.sort(key=lambda x: x[1]) return ns[0][0:2]
Given coords and a site, find closet site to coords. Args: coords (3x1 array): cartesian coords of center of sphere site: site to find closest to coords r: radius of sphere. Defaults to diagonal of unit cell Returns: Closest site and distance.
juraj-google-style
def draw(vertexes, edges): Xs = [] Ys = [] sug = _build_sugiyama_layout(vertexes, edges) for vertex in sug.g.sV: Xs.append((vertex.view.xy[0] - (vertex.view.w / 2.0))) Xs.append((vertex.view.xy[0] + (vertex.view.w / 2.0))) Ys.append(vertex.view.xy[1]) Ys.append((vertex.view.xy[1] + vertex.view.h)) for edge in sug.g.sE: for (x, y) in edge.view._pts: Xs.append(x) Ys.append(y) minx = min(Xs) miny = min(Ys) maxx = max(Xs) maxy = max(Ys) canvas_cols = (int(math.ceil((math.ceil(maxx) - math.floor(minx)))) + 1) canvas_lines = int(round((maxy - miny))) canvas = AsciiCanvas(canvas_cols, canvas_lines) for edge in sug.g.sE: assert (len(edge.view._pts) > 1) for index in range(1, len(edge.view._pts)): start = edge.view._pts[(index - 1)] end = edge.view._pts[index] start_x = int(round((start[0] - minx))) start_y = int(round((start[1] - miny))) end_x = int(round((end[0] - minx))) end_y = int(round((end[1] - miny))) assert (start_x >= 0) assert (start_y >= 0) assert (end_x >= 0) assert (end_y >= 0) canvas.line(start_x, start_y, end_x, end_y, '*') for vertex in sug.g.sV: x = (vertex.view.xy[0] - (vertex.view.w / 2.0)) y = vertex.view.xy[1] canvas.box(int(round((x - minx))), int(round((y - miny))), vertex.view.w, vertex.view.h) canvas.text((int(round((x - minx))) + 1), (int(round((y - miny))) + 1), vertex.data) canvas.draw()
Build a DAG and draw it in ASCII. Args: vertexes (list): list of graph vertexes. edges (list): list of graph edges.
codesearchnet
def adaptive_gaussian_prior_builder(getter, name, *args, **kwargs): kwargs['shape'] = () loc_var = getter((name + '_prior_loc'), *args, **kwargs) kwargs['initializer'] = scale_variable_initializer(0.01) scale_var = getter((name + '_prior_scale'), *args, **kwargs) prior = tfp.distributions.Normal(loc=loc_var, scale=tf.nn.softplus(scale_var), name='{}_prior_dist'.format(name)) return prior
A pre-canned builder for adaptive scalar gaussian prior distributions. Given a true `getter` function and arguments forwarded from `tf.get_variable`, return a distribution object for a scalar-valued adaptive gaussian prior which will be broadcast over a variable of the requisite shape. This prior's parameters (e.g `loc` and `scale` for a gaussian) will consist of a single learned scalar for the entire `tf.Variable` for which it serves as the prior, regardless of that `tf.Variable`'s shape. Args: getter: The `getter` passed to a `custom_getter`. Please see the documentation for `tf.get_variable`. name: The `name` argument passed to `tf.get_variable`. *args: See positional arguments passed to `tf.get_variable`. **kwargs: See keyword arguments passed to `tf.get_variable`. Returns: An instance of `tfp.distributions.Normal` representing the prior distribution over the variable in question.
codesearchnet
def merge_bindings(program: cfg.Program, node: cfg.CFGNode, bindings: Sequence[cfg.Binding]) -> cfg.Variable: v = program.NewVariable() for b in bindings: v.PasteBinding(b, node) return v
Create a combined Variable for a list of bindings. Args: program: A cfg.Program instance. node: The current CFG node. bindings: A list of cfg.Bindings. Returns: A cfg.Variable.
github-repos
def _ParseIndex(self, preread, precompile): self.index = texttable.TextTable() self.index.CsvToTable(self._index_handle) if preread: for row in self.index: for col in row.header: row[col] = preread(col, row[col]) self.compiled = copy.deepcopy(self.index) for row in self.compiled: for col in row.header: if precompile: row[col] = precompile(col, row[col]) if row[col]: row[col] = copyable_regex_object.CopyableRegexObject(row[col])
Reads index file and stores entries in TextTable. For optimisation reasons, a second table is created with compiled entries. Args: preread: func, Pre-processing, applied to each field as it is read. precompile: func, Pre-compilation, applied to each field before compiling. Raises: IndexTableError: If the column headers has illegal column labels.
juraj-google-style
def create_test_method(pipeline_spec_file: str, custom_preprocessors: List[Callable[..., Union[Dict, List]]]): @mock.patch('apache_beam.Pipeline', TestPipeline) def test_yaml_example(self): with open(pipeline_spec_file, encoding='utf-8') as f: lines = f.readlines() expected_key = ' if expected_key in lines: expected = lines[lines.index(' else: raise ValueError(f"Missing ' for i, line in enumerate(expected): expected[i] = line.replace(' pipeline_spec = yaml.load(''.join(lines), Loader=yaml_transform.SafeLineLoader) with TestEnvironment() as env: for fn in custom_preprocessors: pipeline_spec = fn(pipeline_spec, expected, env) with beam.Pipeline(options=PipelineOptions(pickle_library='cloudpickle', **yaml_transform.SafeLineLoader.strip_metadata(pipeline_spec.get('options', {})))) as p: actual = [yaml_transform.expand_pipeline(p, pipeline_spec, [yaml_provider.InlineProvider(TEST_PROVIDERS, INPUT_TRANSFORM_TEST_PROVIDERS)])] if not actual[0]: actual = list(p.transforms_stack[0].parts[-1].outputs.values()) for transform in p.transforms_stack[0].parts[:-1]: if transform.transform.label == 'log_for_testing': actual += list(transform.outputs.values()) check_output(expected)(actual) if 'deps' in pipeline_spec_file: test_yaml_example = pytest.mark.no_xdist(test_yaml_example) test_yaml_example = unittest.skipIf(sys.platform == 'win32', 'Github virtualenv permissions issues.')(test_yaml_example) test_yaml_example = unittest.skipIf('-cloud' in os.environ.get('TOX_ENV_NAME', ''), 'Github actions environment issue.')(test_yaml_example) if 'java_deps' in pipeline_spec_file: test_yaml_example = pytest.mark.xlang_sql_expansion_service(test_yaml_example) test_yaml_example = unittest.skipIf(not os.path.exists(subprocess_server.JavaJarServer.path_to_dev_beam_jar('sdks:java:extensions:sql:expansion-service:shadowJar')), 'Requires expansion service jars.')(test_yaml_example) return test_yaml_example
Generates a test method for a given YAML pipeline specification file. This function reads the YAML file, extracts the expected output (if present), and creates a test function that uses `TestPipeline` to run the pipeline defined in the YAML file. It also applies any custom preprocessors registered for this test. Args: pipeline_spec_file: The path to the YAML file containing the pipeline specification. custom_preprocessors: A list of preprocessor functions to apply before running the test. Returns: A test method (Callable) that can be added to a unittest.TestCase class.
github-repos
def get_likelihood(self, uni_matrix): if (self.parents is None): left_u = uni_matrix[:, self.L] right_u = uni_matrix[:, self.R] else: left_ing = list((self.D - self.parents[0].D))[0] right_ing = list((self.D - self.parents[1].D))[0] left_u = uni_matrix[(self.L, left_ing)] right_u = uni_matrix[(self.R, right_ing)] copula = Bivariate(self.name) copula.theta = self.theta X_left_right = np.array([[left_u, right_u]]) X_right_left = np.array([[right_u, left_u]]) value = np.sum(copula.probability_density(X_left_right)) left_given_right = copula.partial_derivative(X_left_right) right_given_left = copula.partial_derivative(X_right_left) return (value, left_given_right, right_given_left)
Compute likelihood given a U matrix. Args: uni_matrix(numpy.array): Matrix to compute the likelihood. Return: tuple(np.ndarray, np.ndarray, np.array): likelihood and conditional values.
codesearchnet
def ec2_pipeline_setup(generated=None, project='', settings=None, env='', pipeline_type='', region='', region_subnets=None): data = copy.deepcopy(settings) user_data = generate_encoded_user_data(env=env, region=region, generated=generated, group_name=project, pipeline_type=pipeline_type) instance_security_groups = sorted(DEFAULT_EC2_SECURITYGROUPS[env]) instance_security_groups.append(generated.security_group_app) instance_security_groups.extend(settings['security_group']['instance_extras']) instance_security_groups = remove_duplicate_sg(instance_security_groups) LOG.info('Instance security groups to attach: %s', instance_security_groups) if settings['asg']['scaling_policy']: scalingpolicy = True LOG.info('Found scaling policy') else: scalingpolicy = False LOG.info('No scaling policy found') if settings['app']['eureka_enabled']: elb = [] else: elb = [generated.elb_app] LOG.info('Attaching the following ELB: %s', elb) health_checks = check_provider_healthcheck(settings) if ((env == 'dev') or settings['app']['eureka_enabled']): data['asg'].update({'hc_type': 'EC2'}) LOG.info('Switching health check type to: EC2') hc_grace_period = data['asg'].get('hc_grace_period') app_grace_period = data['asg'].get('app_grace_period') grace_period = (hc_grace_period + app_grace_period) ssh_keypair = data['asg'].get('ssh_keypair', None) if (not ssh_keypair): ssh_keypair = '{0}_{1}_default'.format(env, region) LOG.info('SSH keypair (%s) used', ssh_keypair) if settings['app']['canary']: canary_user_data = generate_encoded_user_data(env=env, region=region, generated=generated, group_name=project, canary=True) data['app'].update({'canary_encoded_user_data': canary_user_data}) data['asg'].update({'hc_type': data['asg'].get('hc_type').upper(), 'hc_grace_period': grace_period, 'ssh_keypair': ssh_keypair, 'provider_healthcheck': json.dumps(health_checks.providers), 'enable_public_ips': json.dumps(settings['asg']['enable_public_ips']), 'has_provider_healthcheck': health_checks.has_healthcheck, 'asg_whitelist': ASG_WHITELIST}) data['app'].update({'az_dict': json.dumps(region_subnets), 'encoded_user_data': user_data, 'instance_security_groups': json.dumps(instance_security_groups), 'elb': json.dumps(elb), 'scalingpolicy': scalingpolicy}) return data
Handles ec2 pipeline data setup Args: generated (gogoutils.Generator): Generated naming formats. project (str): Group name of application settings (dict): Environment settings from configurations. env (str): Deploy environment name, e.g. dev, stage, prod. pipeline_type (str): Type of Foremast Pipeline to configure. region (str): AWS Region to deploy to. region_subnets (dict): Subnets for a Region, e.g. {'us-west-2': ['us-west-2a', 'us-west-2b', 'us-west-2c']}. Returns: dict: Updated settings to pass to templates for EC2 info
codesearchnet
def get(self, block_id):
    pool = current_app.config['bigchain_pool']

    with pool() as bigchain:
        block = bigchain.get_block(block_id=block_id)

    if not block:
        return make_error(404)

    return block
API endpoint to get details about a block. Args: block_id (str): the id of the block. Return: A JSON string containing the data about the block.
juraj-google-style
def delete(self, url, params=None, **kwargs):
    return self.call_api("DELETE", url, params=params, **kwargs)
Call the API with a DELETE request. Args: url (str): Resource location relative to the base URL. params (dict or None): Query-string parameters. Returns: ResultParser or ErrorParser.
juraj-google-style
def submit_evaluation(self, variant_obj, user_obj, institute_obj, case_obj, link, criteria):
    variant_specific = variant_obj['_id']
    variant_id = variant_obj['variant_id']
    user_id = user_obj['_id']
    user_name = user_obj.get('name', user_obj['_id'])
    institute_id = institute_obj['_id']
    case_id = case_obj['_id']

    evaluation_terms = [evaluation_info['term'] for evaluation_info in criteria]
    classification = get_acmg(evaluation_terms)

    evaluation_obj = build_evaluation(
        variant_specific=variant_specific,
        variant_id=variant_id,
        user_id=user_id,
        user_name=user_name,
        institute_id=institute_id,
        case_id=case_id,
        classification=classification,
        criteria=criteria,
    )

    self._load_evaluation(evaluation_obj)
    self.update_acmg(institute_obj, case_obj, user_obj, link, variant_obj, classification)

    return classification
Submit an evaluation to the database.

    Gather all the relevant information and build an evaluation_obj.

    Args:
        variant_obj(dict)
        user_obj(dict)
        institute_obj(dict)
        case_obj(dict)
        link(str): variant url
        criteria(list(dict)): [
            {
                'term': str,
                'comment': str,
                'links': list(str)
            },
            ...
        ]
codesearchnet
def match_from_mro(self, left, other_type, allow_compat_builtins=True):
    for base in left.mro:
        if isinstance(base, abstract.ParameterizedClass):
            base_cls = base.base_cls
        else:
            base_cls = base
        if isinstance(base_cls, abstract.Class):
            if self._match_base_class_flat(base_cls, other_type, allow_compat_builtins):
                return base
        elif isinstance(base_cls, abstract.AMBIGUOUS):
            return base_cls
        elif isinstance(base_cls, abstract.Empty):
            continue
        else:
            log.warning('Invalid base class %r', base_cls)
            continue
Checks a type's MRO for a match for a formal type. Args: left: The type. other_type: The formal type. allow_compat_builtins: Whether to allow compatible builtins to match - e.g., int against float. Returns: The match, if any, None otherwise.
github-repos
def get_result(self, timeout=None) -> Optional[GenerationOutput]:
    if self._generation_thread is None and self.output_queue.empty():
        return None
    try:
        result = self.output_queue.get(block=True, timeout=timeout)
        logger.debug(f'Retrieved result for request {result.request_id}')
        return result
    except queue.Empty:
        return None
Retrieve one result from the output queue.

    Args:
        timeout: Maximum time to wait for a result.

    Returns:
        Optional[GenerationOutput]: The result data, or None if the timeout expires
            (or nothing is left to consume).
github-repos
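The timeout behaviour above comes straight from queue.Queue.get; this standalone sketch reproduces it with a plain queue (item contents are illustrative, not from the source):

import queue

q = queue.Queue()
q.put({'request_id': 'req-0', 'text': 'hello'})

try:
    print(q.get(block=True, timeout=0.5))   # returns the queued item immediately
    print(q.get(block=True, timeout=0.5))   # queue is now empty: waits 0.5 s, then raises queue.Empty
except queue.Empty:
    print(None)                              # get_result maps this case to None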
def configure_tests(tests, test_run_id):
    print('UPDATE CONFIG')

    os.makedirs(HARNESS_DIRECTORY, exist_ok=True)

    for filename, script in tests:
        script_fields = json_get_fields(script)
        script_name = filename.split('.')[0]

        harness_fields = {}
        harness_path = HARNESS_DIRECTORY + script_name + '.json'
        if os.path.exists(harness_path):
            with open(harness_path, 'r') as f:
                harness_fields = json.load(f)

        new_fields = {}
        for field in script_fields:
            if field['name'] == 'test_run_id':
                new_fields['test_run_id'] = test_run_id
            else:
                new_fields[field['name']] = harness_fields.get(field['name'], field.get('default'))
                new_fields['%s_description' % field['name']] = '(%s) %s' % (field.get('kind', 'string'), field.get('description', 'No description.'))
                if field['name'] not in harness_fields:
                    print('NEW FIELD ADDED', script_name, field['name'])

        if new_fields:
            with open(harness_path, 'w') as f:
                json.dump(new_fields, f, indent=2)
        elif os.path.exists(harness_path):
            os.remove(harness_path)

    print('')
    print('------')
    print('------------')
    print('------------------------')
    print('Some tests require custom values. Update the necessary fields for the tests you wish to run.')
    print('EDIT: ' + HARNESS_DIRECTORY)
    print('------------------------')
    print('Some tests require external assets. Join the following group to gain access.')
    print('VISIT: https:')
    print('------------------------')
    print('------------')
    print('------')
    print('')

    sleep(3)
Initialize the starthinker_assets/tests.json variable harness.

  Read all existing tests from tests/*.json and create a harness file in
  starthinker_assets/tests/*.json so developers can configure tests.

  Args:
    tests: List of (filename, json) pairs containing all the tests.
    test_run_id: Identifier written into each test's 'test_run_id' field.

  Returns:
    None
github-repos
def __init__( self, base_url, username=None, api_key=None, status_endpoint=None, timeout=60 ): self.base_url = base_url self.username = username self.api_key = api_key self.status_endpoint = urljoin(self.base_url, status_endpoint) self.timeout = timeout
Initialise client.

        Args:
            base_url (str): The base URL to the service being used.
            username (str): The username to authenticate with.
            api_key (str): The API key to authenticate with.
            status_endpoint (str): Status endpoint, joined onto the base URL.
            timeout (int): Maximum time before timing out.
juraj-google-style
def _add_session_callback(self, callback_obj, callback, one_shot, originator):
    if one_shot:
        @wraps(callback)
        def remove_then_invoke(*args, **kwargs):
            if callback_obj in self._session_callbacks:
                self._remove_session_callback(callback_obj, originator)
            return callback(*args, **kwargs)
        actual_callback = remove_then_invoke
    else:
        actual_callback = callback

    callback_obj._callback = self._wrap_with_self_as_curdoc(actual_callback)
    self._session_callbacks.add(callback_obj)
    self._callback_objs_by_callable[originator][callback].add(callback_obj)

    self._trigger_on_change(SessionCallbackAdded(self, callback_obj))

    return callback_obj
Internal implementation for adding session callbacks. Args: callback_obj (SessionCallback) : A session callback object that wraps a callable and is passed to ``trigger_on_change``. callback (callable) : A callable to execute when session events happen. one_shot (bool) : Whether the callback should immediately auto-remove itself after one execution. Returns: SessionCallback : passed in as ``callback_obj``. Raises: ValueError, if the callback has been previously added
codesearchnet
def forward(self, hidden_states): hidden_states = self.wi(hidden_states) hidden_states = self.act(hidden_states) hidden_states = self.dropout(hidden_states) hidden_states = self.wo(hidden_states) return hidden_states
Args: hidden_states (`torch.Tensor`) : [num_groups, tokens_per_group, hidden_dim] inputs to send to experts. Returns: torch.Tensor[num_groups, tokens_per_group, hidden_dim]
github-repos
def _oai_to_xml(marc_oai): record = MARCXMLRecord(marc_oai) record.oai_marc = False return record.to_XML()
Convert OAI to MARC XML. Args: marc_oai (str): String with either OAI or MARC XML. Returns: str: String with MARC XML.
juraj-google-style
def sg_producer_func(func):

    @wraps(func)
    def wrapper(**kwargs):
        """Manages arguments of `tf.sg_opt`.

        Args:
            **kwargs:
                source: A source queue list to enqueue
                dtypes: Input data types of each tensor
                out_dtypes: Output data types of each tensor ( If None, same as dtypes )
                capacity: Queue capacity. Default is 32.
                num_threads: Number of threads. Default is 1.
        """
        opt = tf.sg_opt(kwargs) + tf.sg_opt(dtypes=[tf.sg_floatx], capacity=32, num_threads=1)

        assert opt.source is not None, 'source is mandatory.'
        if type(opt.source) is not list and type(opt.source) is not tuple:
            opt.source = [opt.source]
        if type(opt.dtypes) is not list and type(opt.dtypes) is not tuple:
            opt.dtypes = [opt.dtypes]
        if opt.out_dtypes is None:
            opt.out_dtypes = opt.dtypes
        if type(opt.out_dtypes) is not list and type(opt.out_dtypes) is not tuple:
            opt.out_dtypes = [opt.out_dtypes]

        assert len(opt.source) == len(opt.dtypes), 'Source and dtypes should have same length.'

        def enqueue_func(sess, op):
            data = func(sess.run(opt.source))
            feed_dict = {}
            for ph, col in zip(placeholders, data):
                feed_dict[ph] = col
            sess.run(op, feed_dict=feed_dict)

        placeholders = []
        for dtype in opt.dtypes:
            placeholders.append(tf.placeholder(dtype=dtype))

        queue = tf.FIFOQueue(opt.capacity, dtypes=opt.out_dtypes)
        enqueue_op = queue.enqueue(placeholders)
        runner = _FuncQueueRunner(enqueue_func, queue, [enqueue_op] * opt.num_threads)
        tf.train.add_queue_runner(runner)

        return queue.dequeue()

    return wrapper
r"""Decorates a function `func` as sg_producer_func. Args: func: A function to decorate.
codesearchnet
def __init__(self, excluded_sites=None, **kwargs): super().__init__(**kwargs) self.excluded_site = excluded_sites if excluded_sites is None: self.excluded_site = []
Constructor. Args: excluded_sites(list): sites to forget about when reloading the jobs. The primary use case was to exclude unreachable sites and allow the program to go on.
juraj-google-style
def read_as_base64(fn): with open(fn) as unpacked_file: with tempfile.TemporaryFile() as b64_file: base64.encode(unpacked_file, b64_file) b64_file.flush() b64_file.seek(0) return b64_file.read()
Convert given `fn` to base64 and return it. This method does the process in not-so-much memory consuming way. Args: fn (str): Path to the file which should be converted. Returns: str: File encoded as base64.
juraj-google-style
def parse_pv(header): order_fit = parse_order_fit(header) def parse_with_base(i): key_base = "PV%d_" % i pvi_x = [header[key_base + "0"]] def parse_range(lower, upper): for j in range(lower, upper + 1): pvi_x.append(header[key_base + str(j)]) if order_fit >= 1: parse_range(1, 3) if order_fit >= 2: parse_range(4, 6) if order_fit >= 3: parse_range(7, 10) return pvi_x return [parse_with_base(1), parse_with_base(2)]
Parses the PV array from an astropy FITS header.

    Args:
      header: astropy.io.fits.header.Header
        The header containing the PV values.

    Returns:
      pv: 2d array (list(list(float)))
        [[PV1_0, PV1_1, ... PV1_N],
         [PV2_0, PV2_1, ... PV2_N]]
        Note that N depends on the order of the fit. For example, an order 3
        fit goes up to PV?_10.
juraj-google-style
def strip_prefix_from_items(prefix, items): items_no_prefix = [] for item in items: if item.startswith(prefix): items_no_prefix.append(item[len(prefix):]) else: items_no_prefix.append(item) return items_no_prefix
Strips out the prefix from each of the items if it is present.

  Args:
    prefix: the prefix string you wish to strip from the beginning of each of the items.
    items: a list of strings that may or may not contain the prefix you want to strip out.

  Returns:
    items_no_prefix: a copy of the list of items (same order) without the prefix (if present).
juraj-google-style
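A quick illustrative check of strip_prefix_from_items above (the paths are made up for demonstration):

items = ['gs://bucket/a.txt', 'gs://bucket/b.txt', 'local/c.txt']
print(strip_prefix_from_items('gs://bucket/', items))
# ['a.txt', 'b.txt', 'local/c.txt'] -- the item without the prefix is returned unchanged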
def get_niggli_reduced_lattice(self, tol: float = 1e-5) -> "Lattice": matrix = self.lll_matrix a = matrix[0] b = matrix[1] c = matrix[2] e = tol * self.volume ** (1 / 3) G = [ [dot(a, a), dot(a, b), dot(a, c)], [dot(a, b), dot(b, b), dot(b, c)], [dot(a, c), dot(b, c), dot(c, c)], ] G = np.array(G) for count in range(100): (A, B, C, E, N, Y) = ( G[0, 0], G[1, 1], G[2, 2], 2 * G[1, 2], 2 * G[0, 2], 2 * G[0, 1], ) if A > B + e or (abs(A - B) < e and abs(E) > abs(N) + e): M = [[0, -1, 0], [-1, 0, 0], [0, 0, -1]] G = dot(transpose(M), dot(G, M)) if (B > C + e) or (abs(B - C) < e and abs(N) > abs(Y) + e): M = [[-1, 0, 0], [0, 0, -1], [0, -1, 0]] G = dot(transpose(M), dot(G, M)) continue l = 0 if abs(E) < e else E / abs(E) m = 0 if abs(N) < e else N / abs(N) n = 0 if abs(Y) < e else Y / abs(Y) if l * m * n == 1: i = -1 if l == -1 else 1 j = -1 if m == -1 else 1 k = -1 if n == -1 else 1 M = [[i, 0, 0], [0, j, 0], [0, 0, k]] G = dot(transpose(M), dot(G, M)) elif l * m * n == 0 or l * m * n == -1: i = -1 if l == 1 else 1 j = -1 if m == 1 else 1 k = -1 if n == 1 else 1 if i * j * k == -1: if n == 0: k = -1 elif m == 0: j = -1 elif l == 0: i = -1 M = [[i, 0, 0], [0, j, 0], [0, 0, k]] G = dot(transpose(M), dot(G, M)) (A, B, C, E, N, Y) = ( G[0, 0], G[1, 1], G[2, 2], 2 * G[1, 2], 2 * G[0, 2], 2 * G[0, 1], ) if ( abs(E) > B + e or (abs(E - B) < e and 2 * N < Y - e) or (abs(E + B) < e and Y < -e) ): M = [[1, 0, 0], [0, 1, -E / abs(E)], [0, 0, 1]] G = dot(transpose(M), dot(G, M)) continue if ( abs(N) > A + e or (abs(A - N) < e and 2 * E < Y - e) or (abs(A + N) < e and Y < -e) ): M = [[1, 0, -N / abs(N)], [0, 1, 0], [0, 0, 1]] G = dot(transpose(M), dot(G, M)) continue if ( abs(Y) > A + e or (abs(A - Y) < e and 2 * E < N - e) or (abs(A + Y) < e and N < -e) ): M = [[1, -Y / abs(Y), 0], [0, 1, 0], [0, 0, 1]] G = dot(transpose(M), dot(G, M)) continue if E + N + Y + A + B < -e or (abs(E + N + Y + A + B) < e < Y + (A + N) * 2): M = [[1, 0, 1], [0, 1, 1], [0, 0, 1]] G = dot(transpose(M), dot(G, M)) continue break A = G[0, 0] B = G[1, 1] C = G[2, 2] E = 2 * G[1, 2] N = 2 * G[0, 2] Y = 2 * G[0, 1] a = math.sqrt(A) b = math.sqrt(B) c = math.sqrt(C) alpha = math.acos(E / 2 / b / c) / math.pi * 180 beta = math.acos(N / 2 / a / c) / math.pi * 180 gamma = math.acos(Y / 2 / a / b) / math.pi * 180 latt = Lattice.from_parameters(a, b, c, alpha, beta, gamma) mapped = self.find_mapping(latt, e, skip_rotation_matrix=True) if mapped is not None: if np.linalg.det(mapped[0].matrix) > 0: return mapped[0] else: return Lattice(-mapped[0].matrix) raise ValueError("can't find niggli")
Get the Niggli reduced lattice using the numerically stable algo proposed by R. W. Grosse-Kunstleve, N. K. Sauter, & P. D. Adams, Acta Crystallographica Section A Foundations of Crystallography, 2003, 60(1), 1-6. doi:10.1107/S010876730302186X Args: tol (float): The numerical tolerance. The default of 1e-5 should result in stable behavior for most cases. Returns: Niggli-reduced lattice.
juraj-google-style
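A hedged usage sketch for get_niggli_reduced_lattice, assuming a pymatgen installation where Lattice.from_parameters is available (the import path may differ between versions; the cell parameters are illustrative):

from pymatgen.core.lattice import Lattice  # assumed import path

latt = Lattice.from_parameters(3.0, 5.2, 3.0, 90, 110, 90)
print(latt.get_niggli_reduced_lattice())  # prints the Niggli-reduced lattice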
def GetMountPoint(self, path=None):
    path = os.path.abspath(client_utils.CanonicalPathToLocalPath(path or self.path))
    while not os.path.ismount(path):
        path = os.path.dirname(path)
    return path
Walk back from the path to find the mount point. Args: path: a Unicode string containing the path or None. If path is None the value in self.path is used. Returns: path string of the mount point
codesearchnet
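The core of GetMountPoint is the os.path.ismount walk; this standalone sketch shows the same loop over a plain path string on a POSIX system (the path is illustrative):

import os

def find_mount_point(path):
    path = os.path.abspath(path)
    while not os.path.ismount(path):
        path = os.path.dirname(path)
    return path

print(find_mount_point('/usr/local/bin'))  # e.g. '/' or a mounted '/usr', depending on the system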
def _args_to_val(func, args):
    from .google_imports import gql

    vals = []
    for arg in args:
        if isinstance(arg, (int, long, basestring)):
            val = Parameter(arg)
        elif isinstance(arg, gql.Literal):
            val = arg.Get()
        else:
            raise TypeError('Unexpected arg (%r)' % arg)
        vals.append(val)
    if func == 'nop':
        if len(vals) != 1:
            raise TypeError('"nop" requires exactly one value')
        return vals[0]
    pfunc = ParameterizedFunction(func, vals)
    if pfunc.is_parameterized():
        return pfunc
    else:
        return pfunc.resolve({}, {})
Helper for GQL parsing to extract values from GQL expressions. This can extract the value from a GQL literal, return a Parameter for a GQL bound parameter (:1 or :foo), and interprets casts like KEY(...) and plain lists of values like (1, 2, 3). Args: func: A string indicating what kind of thing this is. args: One or more GQL values, each integer, string, or GQL literal.
codesearchnet
def _minigui_report_search_status(self, leaves):
    root = self._player.get_root()

    msg = {'id': hex(id(root)), 'n': int(root.N), 'q': float(root.Q)}
    msg['childQ'] = [int(round(q * 1000)) for q in root.child_Q]
    msg['childN'] = [int(n) for n in root.child_N]

    ranked_children = root.rank_children()
    variations = {}
    for i in ranked_children[:15]:
        if root.child_N[i] == 0 or i not in root.children:
            break
        c = coords.to_gtp(coords.from_flat(i))
        child = root.children[i]
        nodes = child.most_visited_path_nodes()
        moves = [coords.to_gtp(coords.from_flat(m.fmove)) for m in nodes]
        variations[c] = {
            'n': int(root.child_N[i]),
            'q': float(root.child_Q[i]),
            'moves': [c] + moves,
        }

    if leaves:
        path = []
        leaf = leaves[0]
        while leaf != root:
            path.append(leaf.fmove)
            leaf = leaf.parent
        if path:
            path.reverse()
            variations['live'] = {
                'n': int(root.child_N[path[0]]),
                'q': float(root.child_Q[path[0]]),
                'moves': [coords.to_gtp(coords.from_flat(m)) for m in path],
            }

    if variations:
        msg['variations'] = variations

    dbg('mg-update:%s' % json.dumps(msg, sort_keys=True))
Prints the current MCTS search status to stderr. Reports the current search path, root node's child_Q, root node's child_N, the most visited path in a format that can be parsed by one of the STDERR_HANDLERS in minigui.ts. Args: leaves: list of leaf MCTSNodes returned by tree_search().
codesearchnet
def _remove_double_brackets(text): def replacement_fn(s): if ":" in s: return "" bar_pos = s.find("|") if bar_pos == -1: return s return s[bar_pos + 1:] return _find_and_replace(text, "[[", "]]", replacement_fn)
Remove double brackets, but leave the viewable text. Args: text: a string Returns: a string
juraj-google-style
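The helper _find_and_replace is not shown above, so the sketch below uses a hypothetical regex-based stand-in with the assumed contract (apply replacement_fn to the text between each pair of markers) purely to demonstrate the behaviour:

import re

def _find_and_replace(text, start, end, replacement_fn):
    # hypothetical stand-in for the real helper, not the original implementation
    pattern = re.escape(start) + r'(.*?)' + re.escape(end)
    return re.sub(pattern, lambda m: replacement_fn(m.group(1)), text)

print(_remove_double_brackets('See [[Article|the article]] and [[File:x.png]].'))
# 'See the article and .' -- the piped link keeps its label, the namespaced link is dropped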
def checksum(self, path): raise NotImplementedError
Fetch checksum metadata of a file on the :class:`~apache_beam.io.filesystem.FileSystem`. This operation returns checksum metadata as stored in the underlying FileSystem. It should not need to read file data to obtain this value. Checksum type and format are FileSystem dependent and are not compatible between FileSystems. FileSystem implementations may return file size if a checksum isn't available. Args: path: string path of a file. Returns: string containing checksum Raises: ``BeamIOError``: if path isn't a file or doesn't exist.
github-repos
def short_repr(obj, max_len=40):
    obj_repr = repr(obj)
    if len(obj_repr) <= max_len:
        return obj_repr
    return '<{} of length {}>'.format(type(obj).__name__, len(obj_repr))
Returns a short, term-friendly string representation of the object. Args: obj: An object for which to return a string representation. max_len: Maximum length of the returned string. Longer reprs will be turned into a brief descriptive string giving the type and length of obj.
codesearchnet
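Illustrative calls to short_repr above (the values are chosen for demonstration):

print(short_repr([1, 2, 3]))   # '[1, 2, 3]' -- fits within max_len, returned verbatim
print(short_repr('x' * 100))   # '<str of length 102>' -- the repr includes the surrounding quotes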
def _validate_cidr(self, rule):
    try:
        network = ipaddress.IPv4Network(rule['app'])
    except (ipaddress.NetmaskValueError, ValueError) as error:
        raise SpinnakerSecurityGroupCreationFailed(error)
    self.log.debug('Validating CIDR: %s', network.exploded)
    return True
Validate the CIDR block in a rule.

        Args:
            rule (dict): Security Group rule with the CIDR stored under the 'app' key.

        Returns:
            True: Upon successful completion.

        Raises:
            SpinnakerSecurityGroupCreationFailed: CIDR definition is invalid or
                the network range is too wide.
codesearchnet
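Underneath, the validation above is just IPv4Network parsing; this standalone sketch shows which inputs pass and fail (the example CIDRs are illustrative):

import ipaddress

for cidr in ('10.0.0.0/24', '0.0.0.0/0', 'not-a-cidr'):
    try:
        network = ipaddress.IPv4Network(cidr)
        print(cidr, '->', network.exploded)
    except (ipaddress.NetmaskValueError, ValueError) as error:
        print(cidr, '-> invalid:', error)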
def build_shuffle_all_reduce(input_tensors, gather_devices, red_op, un_op=None):
    input_tensors, shape = _flatten_tensors(input_tensors)
    dst_devices = [t.device for t in input_tensors]
    reduced_shards = _build_shuffle_gather(input_tensors, gather_devices, red_op, un_op)
    output_tensors = _build_shuffle_scatter(reduced_shards, dst_devices)
    if len(shape) != 1:
        output_tensors = _reshape_tensors(output_tensors, shape)
    return output_tensors
Construct a subgraph for shuffle all-reduce. Shuffle reduce is essentially the algorithm implemented when using parameter servers. Suppose tensor length is n, there are d devices and g gather shards. Each device sends a n/g length sub-tensor to each gather shard. The gather shards perform a reduction across d fragments, then broadcast the result back to each device. The devices then join the g fully reduced fragments they receive from the shards. The gather shards could perform d-1 pairwise reductions, or one d-way reduction. The first is better where reduction Op time is low compared to transmission time, the second better in the other case. Args: input_tensors: list of `tf.Tensor` values to be reduced. gather_devices: list of names of devices on which reduction shards should be placed. red_op: an n-array elementwise reduction Op un_op: optional elementwise unary Op to be applied to fully-reduced values. Returns: list of `tf.Tensor` which are the fully reduced tensors.
github-repos
def redraw(self, reset_camera=False): self.ren.RemoveAllViewProps() self.picker = None self.add_picker_fixed() self.helptxt_mapper = vtk.vtkTextMapper() tprops = self.helptxt_mapper.GetTextProperty() tprops.SetFontSize(14) tprops.SetFontFamilyToTimes() tprops.SetColor(0, 0, 0) if self.structure is not None: self.set_structure(self.structure, reset_camera) self.ren_win.Render()
Redraw the render window. Args: reset_camera: Set to True to reset the camera to a pre-determined default for each structure. Defaults to False.
juraj-google-style
def from_pb(cls, policy_pb): policy = cls(policy_pb.etag, policy_pb.version) for binding in policy_pb.bindings: policy[binding.role] = sorted(binding.members) return policy
Factory: create a policy from a protobuf message. Args: policy_pb (google.iam.policy_pb2.Policy): message returned by ``get_iam_policy`` gRPC API. Returns: :class:`Policy`: the parsed policy
juraj-google-style
def per_device_batch_size(batch_size, num_gpus):
    if num_gpus <= 1:
        return batch_size

    remainder = batch_size % num_gpus
    if remainder:
        err = ('When running with multiple GPUs, batch size must be a multiple of the '
               'number of available GPUs. Found {} GPUs with a batch size of {}; try '
               '--batch_size={} instead.').format(num_gpus, batch_size, batch_size - remainder)
        raise ValueError(err)
    return int(batch_size / num_gpus)
For multi-gpu, batch-size must be a multiple of the number of GPUs. Note that this should eventually be handled by DistributionStrategies directly. Multi-GPU support is currently experimental, however, so doing the work here until that feature is in place. Args: batch_size: Global batch size to be divided among devices. This should be equal to num_gpus times the single-GPU batch_size for multi-gpu training. num_gpus: How many GPUs are used with DistributionStrategies. Returns: Batch size per device. Raises: ValueError: if batch_size is not divisible by number of devices
codesearchnet
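Worked examples for per_device_batch_size above (the numbers are illustrative):

print(per_device_batch_size(256, num_gpus=8))   # 32
print(per_device_batch_size(256, num_gpus=1))   # 256 -- the single-GPU path returns the global size
# per_device_batch_size(100, num_gpus=8) raises ValueError: 100 is not divisible by 8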