Columns: code (string, length 20 to 4.93k), docstring (string, length 33 to 1.27k), source (string, 3 classes).
def _retry_on_appropriate_gcp_error(exception): return isinstance(exception, (TooManyRequests, ServerError))
Retry filter that returns True if a returned HTTP error code is 5xx or 429. This is used to retry remote requests that fail, most notably 429 (TooManyRequests). Args: exception: the returned exception encountered during the request/response loop. Returns: boolean indicating whether the exception is a ServerError (5xx) or a TooManyRequests (429) error.
github-repos
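A minimal sketch of how a retry predicate like _retry_on_appropriate_gcp_error is typically wired into a retry loop; the helper name, delay, and attempt count below are illustrative assumptions, not part of the original code.

import time

def call_with_retries(request_fn, should_retry, max_attempts=5, delay_secs=1.0):
    # Retry request_fn while should_retry(exception) says the failure is transient.
    for attempt in range(1, max_attempts + 1):
        try:
            return request_fn()
        except Exception as exc:
            if attempt == max_attempts or not should_retry(exc):
                raise
            time.sleep(delay_secs * attempt)  # simple linear backoff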
def _maybe_extract(compressed_filename, directory, extension=None): logger.info('Extracting {}'.format(compressed_filename)) if (extension is None): basename = os.path.basename(compressed_filename) extension = basename.split('.', 1)[1] if ('zip' in extension): with zipfile.ZipFile(compressed_filename, 'r') as zip_: zip_.extractall(directory) elif (('tar' in extension) or ('tgz' in extension)): with tarfile.open(compressed_filename, mode='r') as tar: tar.extractall(path=directory) logger.info('Extracted {}'.format(compressed_filename))
Extract a compressed file to ``directory``. Args: compressed_filename (str): Compressed file. directory (str): Extract to directory. extension (str, optional): Extension of the file; Otherwise, attempts to extract extension from the filename.
codesearchnet
def dispatch(self, message): for (validator, callback) in self.validators: if (not validator.matches(message)): continue callback(message) return raise ArgumentError('No handler was registered for message', message=message)
Dispatch a message to a callback based on its schema. Args: message (dict): The message to dispatch. Raises: ArgumentError: If no handler was registered for the message.
codesearchnet
def describe_training_job(self, TrainingJobName): if (TrainingJobName not in LocalSagemakerClient._training_jobs): error_response = {'Error': {'Code': 'ValidationException', 'Message': 'Could not find local training job'}} raise ClientError(error_response, 'describe_training_job') else: return LocalSagemakerClient._training_jobs[TrainingJobName].describe()
Describe a local training job. Args: TrainingJobName (str): Training job name to describe. Returns: (dict) DescribeTrainingJob Response.
codesearchnet
def unlock_kinetis_read_until_ack(jlink, address): request = swd.ReadRequest(address, ap=True) response = None while True: response = request.send(jlink) if response.ack(): break elif response.wait(): continue raise KinetisException('Read exited with status: %s', response.status) return response
Polls the device until the request is acknowledged. Sends a read request to the connected device to read the register at the given 'address'. Polls indefinitely until either the request is ACK'd or the request ends in a fault. Args: jlink (JLink): the connected J-Link. address (int): the address of the register to poll. Returns: ``SWDResponse`` object on success. Raises: KinetisException: when read exits with non-ack or non-wait status. Note: This function is required in order to avoid reading corrupt or otherwise invalid data from registers when communicating over SWD.
codesearchnet
def call_with_layout(fn: Callable[..., Any], layout: Optional[layout_lib.Layout], *args, **kwargs) -> Any: if layout is not None: if context.executing_eagerly(): with default_mesh(layout.mesh): with _dtensor_device()._default_layout(layout): return fn(*args, **kwargs) else: return relayout(fn(*args, **kwargs), layout) return fn(*args, **kwargs)
Calls a function in the DTensor device scope if `layout` is not None. If `layout` is not None, `fn` consumes DTensor(s) as input and produces a DTensor as output; a DTensor is a tf.Tensor with layout-related attributes. If `layout` is None, `fn` consumes and produces regular tf.Tensors. Args: fn: A supported TF API function such as tf.zeros. layout: Optional, the layout of the output DTensor. *args: Arguments given to `fn`. **kwargs: Keyword arguments given to `fn`. Returns: The return value of `fn` transformed to a DTensor if requested.
github-repos
def set_memcache_policy(self, func): if (func is None): func = self.default_memcache_policy elif isinstance(func, bool): func = (lambda unused_key, flag=func: flag) self._memcache_policy = func
Set the memcache policy function. Args: func: A function that accepts a Key instance as argument and returns a bool indicating if it should be cached. May be None.
codesearchnet
def add(self, full_name: str, alias: str | None=None): if not self.track_imports: return alias = alias or full_name if '.' not in full_name or (full_name == alias and (not alias.endswith('.*'))): self._direct_imports[alias] = full_name else: module, name = full_name.rsplit('.', 1) if name == '*': alias = '*' if module == 'typing': self._typing.add(name, alias) else: self._from_imports.setdefault(module, {})[alias] = name self._reverse_alias_map[full_name] = alias
Adds an import. Examples: ------------------------------------------------------- Import Statement | Method Call ------------------------------------------------------- import abc | add('abc') import abc as xyz | add('abc', 'xyz') import foo.bar | add('foo.bar') from foo import bar | add('foo.bar', 'bar') from foo import bar as baz | add('foo.bar', 'baz') Args: full_name: The full name of the thing being imported. alias: The name that the imported thing is assigned to.
github-repos
def unstack(self, value, name=None): return self._implementation.unstack(value, name=name)
Unstack the values of a `Tensor` in the TensorArray. If input value shapes have rank-`R`, then the output TensorArray will contain elements whose shapes are rank-`(R-1)`. Args: value: (N+1)-D. Tensor of type `dtype`. The Tensor to unstack. name: A name for the operation (optional). Returns: A new TensorArray object with flow that ensures the unstack occurs. Use this object for all subsequent operations. Raises: ValueError: if the shape inference fails.
github-repos
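A brief usage sketch of unstack through the public tf.TensorArray API (eager mode assumed):

import tensorflow as tf

values = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])  # rank-2 input
ta = tf.TensorArray(dtype=tf.float32, size=3)
ta = ta.unstack(values)   # each element is now a rank-1 tensor
print(ta.read(1))         # tf.Tensor([3. 4.], ...)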
def mom(self, K, **kws): K = numpy.asarray(K, dtype=int) shape = K.shape dim = len(self) if (dim > 1): shape = shape[1:] size = int((K.size / dim)) K = K.reshape(dim, size) cache = {} out = [evaluation.evaluate_moment(self, kdata, cache) for kdata in K.T] out = numpy.array(out) return out.reshape(shape)
Raw statistical moments. Creates non-centralized raw moments from the random variable. If analytical options can not be utilized, Monte Carlo integration will be used. Args: K (numpy.ndarray): Index of the raw moments. K.shape must be compatible with distribution shape. rule (str): Sampling scheme/rule for estimating the moment if the analytical method fails. composite (numpy.ndarray): If provided, composite quadrature will be used. Ignored if gaussian=True. If int provided, determines number of even domain splits. If array of ints, determines number of even domain splits along each axis. If array of arrays/floats, determines location of splits. antithetic (numpy.ndarray): List of bool. Represents the axes to mirror using antithetic variable during MCI. Returns: (numpy.ndarray): Shapes are related through the identity ``k.shape == dist.shape+k.shape``.
codesearchnet
def makecontinuum(cube, **kwargs): inchs = kwargs.pop('inchs', None) exchs = kwargs.pop('exchs', None) if ((inchs is not None) or (exchs is not None)): raise KeyError('Inchs and exchs are no longer supported. Use weight instead.') weight = kwargs.pop('weight', None) if (weight is None): weight = 1.0 cont = ((cube * (1 / (weight ** 2))).sum(dim='ch') / (1 / (weight ** 2)).sum(dim='ch')) xcoords = {'x': cube.x.values} ycoords = {'y': cube.y.values} chcoords = {'masterid': np.array([0]), 'kidid': np.array([0]), 'kidfq': np.array([0]), 'kidtp': np.array([1])} scalarcoords = {'coordsys': cube.coordsys.values, 'datatype': cube.datatype.values, 'xref': cube.xref.values, 'yref': cube.yref.values} return dc.cube(cont.values, xcoords=xcoords, ycoords=ycoords, chcoords=chcoords, scalarcoords=scalarcoords)
Make a continuum array. Args: cube (decode.cube): Decode cube which will be averaged over channels. weight (optional): Channel weight(s) used for the weighted average; defaults to 1.0. kwargs (optional): Other arguments. Note: inchs and exchs are no longer supported; use weight instead. Returns: decode cube (decode.cube): Decode cube (2d).
codesearchnet
def generator_consumer(coro): if not asyncio.iscoroutinefunction(coro): raise TypeError('paco: coro must be a coroutine function') @functools.wraps(coro) @asyncio.coroutine def wrapper(*args, **kw): if len(args) > 1 and isgenerator(args[1]): args = list(args) args[1] = (yield from consume(args[1]) if hasattr(args[1], '__anext__') else list(args[1])) args = tuple(args) return (yield from coro(*args, **kw)) return wrapper
Decorator wrapper that consumes sync/async generators provided as an iterable input argument. This function is only intended to be used internally. Arguments: coro (coroutinefunction): coroutine function to decorate. Raises: TypeError: if the argument is not a coroutine function. Returns: function: decorated function.
juraj-google-style
def broadcast_to(rt_input, shape: DynamicRaggedShape): if not isinstance(shape, DynamicRaggedShape): raise TypeError('shape must be a DynamicRaggedShape') rt_input = ragged_tensor.convert_to_tensor_or_ragged_tensor(rt_input) origin_shape = None if ragged_tensor.is_ragged(rt_input): if shape.num_row_partitions != 0: if rt_input.row_splits.dtype != shape.dtype: raise ValueError('Cannot coerce row_splits.dtype') else: shape = shape.with_dtype(rt_input.row_splits.dtype) origin_shape = DynamicRaggedShape.from_tensor(rt_input) elif shape.num_row_partitions != 0: origin_shape = DynamicRaggedShape.from_tensor(rt_input, dtype=shape.dtype) else: origin_shape = DynamicRaggedShape.from_tensor(rt_input, dtype=dtypes.int64) shape = shape.with_dtype(dtype=dtypes.int64) broadcaster = _get_broadcaster(origin_shape, shape) return broadcaster.broadcast(rt_input)
Broadcasts a potentially ragged tensor to a ragged shape. Tiles `rt_input` as necessary to match the given shape. Behavior is undefined if `rt_input` is not broadcast-compatible with `shape`. Args: rt_input: The potentially ragged tensor to broadcast. shape: A `DynamicRaggedShape` Returns: A potentially ragged tensor whose values are taken from `rt_input`, and whose shape matches `shape`.
github-repos
def _is_disk_usage_reset_each_run(self): return False
Indicates whether disk usage is reset after each Session.run. Subclasses that clean up the disk usage after every run should override this protected method. Returns: (`bool`) Whether the disk usage amount is reset to zero after each Session.run.
github-repos
def get(self, service_id, insert_defaults=None): return self.prepare_model(self.client.api.inspect_service(service_id, insert_defaults))
Get a service. Args: service_id (str): The ID of the service. insert_defaults (boolean): If true, default values will be merged into the output. Returns: :py:class:`Service`: The service. Raises: :py:class:`docker.errors.NotFound` If the service does not exist. :py:class:`docker.errors.APIError` If the server returns an error. :py:class:`docker.errors.InvalidVersion` If one of the arguments is not supported with the current API version.
codesearchnet
def write(self, ostream, kmip_version=enums.KMIPVersion.KMIP_1_0): super(Boolean, self).write(ostream, kmip_version=kmip_version) self.write_value(ostream, kmip_version=kmip_version)
Write the encoding of the Boolean object to the output stream. Args: ostream (Stream): A buffer to contain the encoded bytes of a Boolean object. Usually a BytearrayStream object. Required. kmip_version (KMIPVersion): An enumeration defining the KMIP version with which the object will be encoded. Optional, defaults to KMIP 1.0.
juraj-google-style
def deepgetattr(obj, name, default=_UNSPECIFIED): try: if '.' in name: attr, subname = name.split('.', 1) return deepgetattr(getattr(obj, attr), subname, default) else: return getattr(obj, name) except AttributeError: if default is _UNSPECIFIED: raise else: return default
Try to retrieve the given attribute of an object, digging on '.'. This is an extended getattr, digging deeper if '.' is found. Args: obj (object): the object of which an attribute should be read name (str): the name of an attribute to look up. default (object): the default value to use if the attribute wasn't found Returns: the attribute pointed to by 'name', splitting on '.'. Raises: AttributeError: if obj has no 'name' attribute.
juraj-google-style
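An illustrative use of deepgetattr on a small namespace object; the objects and attribute names here are made up for the example.

from types import SimpleNamespace

cfg = SimpleNamespace(db=SimpleNamespace(host='localhost', port=5432))
print(deepgetattr(cfg, 'db.host'))           # 'localhost'
print(deepgetattr(cfg, 'db.user', 'admin'))  # missing attribute falls back to the default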
def __init__(self, node, function, enclosing_graph, first_function_input, type_attribute, function_attributes): super(_FunctionCaller, self).__init__(node, function, enclosing_graph) self._first_function_input = first_function_input self._type_attribute = type_attribute self._function_attributes = function_attributes
Initializes a _FunctionCaller. Args: node: As in _Node. function: As in _Node. enclosing_graph: As in _Node. first_function_input: The index of the first NodeDef input that is tied to the function inputs. It is assumed that the rest of the NodeDef inputs map one to one to function inputs. type_attribute: The name of the NodeDef attribute that defines the input types. It is assumed that the types listed here map one-to-one with the function inputs (that is, they do _not_ specify types for inputs that are not passed to functions). function_attributes: The names of the NodeDef attributes containing references to functions.
github-repos
def List(self, request, global_params=None): config = self.GetMethodConfig('List') return self._RunMethod(config, request, global_params=global_params)
Lists all tables in the specified dataset. Requires the READER dataset role. Args: request: (BigqueryTablesListRequest) input message global_params: (StandardQueryParameters, default: None) global arguments Returns: (TableList) The response message.
github-repos
def pulse_drawer(samples, duration, dt=None, interp_method='None', filename=None, interactive=False, dpi=150, nop=1000, size=(6, 5)): try: from matplotlib import pyplot as plt except ImportError: raise ImportError('pulse_drawer need matplotlib. Run "pip install matplotlib" before.') if dt: _dt = dt else: _dt = 1 re_y = np.real(samples) im_y = np.imag(samples) image = plt.figure(figsize=size) ax0 = image.add_subplot(111) if (interp_method == 'CubicSpline'): time = ((np.arange(0, (duration + 1)) * _dt) + (0.5 * _dt)) cs_ry = CubicSpline(time[:(- 1)], re_y) cs_iy = CubicSpline(time[:(- 1)], im_y) _time = np.linspace(0, (duration * _dt), nop) _re_y = cs_ry(_time) _im_y = cs_iy(_time) elif (interp_method == 'None'): time = (np.arange(0, (duration + 1)) * _dt) _time = np.r_[(time[0], np.repeat(time[1:(- 1)], 2), time[(- 1)])] _re_y = np.repeat(re_y, 2) _im_y = np.repeat(im_y, 2) else: raise QiskitError(('Invalid interpolation method "%s"' % interp_method)) ax0.fill_between(x=_time, y1=_re_y, y2=np.zeros_like(_time), facecolor='red', alpha=0.3, edgecolor='red', linewidth=1.5, label='real part') ax0.fill_between(x=_time, y1=_im_y, y2=np.zeros_like(_time), facecolor='blue', alpha=0.3, edgecolor='blue', linewidth=1.5, label='imaginary part') ax0.set_xlim(0, (duration * _dt)) ax0.grid(b=True, linestyle='-') ax0.legend(bbox_to_anchor=(0.5, 1.0), loc='lower center', ncol=2, frameon=False, fontsize=14) if filename: image.savefig(filename, dpi=dpi, bbox_inches='tight') plt.close(image) if (image and interactive): plt.show(image) return image
Plot the interpolated envelope of a pulse. Args: samples (ndarray): Data points of complex pulse envelope. duration (int): Pulse length (number of points). dt (float): Time interval of samples. interp_method (str): Method of interpolation (set `None` to turn off the interpolation). filename (str): Name required to save pulse image. interactive (bool): When set true, show the plot in a new window (this depends on the matplotlib backend being used supporting this). dpi (int): Resolution of saved image. nop (int): Data points for interpolation. size (tuple): Size of figure. Returns: matplotlib.figure: A matplotlib figure object for the pulse envelope. Raises: ImportError: when the output method requires non-installed libraries. QiskitError: when an invalid interpolation method is specified.
codesearchnet
def threat(self, name, **kwargs): group_obj = Threat(name, **kwargs) return self._group(group_obj)
Add Threat data to Batch object Args: name (str): The name for this Group. date_added (str, kwargs): The date timestamp the Indicator was created. xid (str, kwargs): The external id for this Group. Returns: obj: An instance of Threat.
codesearchnet
def index(self, value, start=0, end=None): try: index = self._dict[value] except KeyError: raise ValueError else: start = self._fix_neg_index(start) end = self._fix_end_index(end) if ((start <= index) and (index < end)): return index else: raise ValueError
Return the index of value between start and end. By default, the entire setlist is searched. This runs in O(1) Args: value: The value to find the index of start (int): The index to start searching at (defaults to 0) end (int): The index to stop searching at (defaults to the end of the list) Returns: int: The index of the value Raises: ValueError: If the value is not in the list or outside of start - end IndexError: If start or end are out of range
codesearchnet
def delete(self, entity): key = _normalize_key(entity) if key is None: return self.ndb_delete(entity) self.deletes.append(key)
Registers entity to delete from datastore. Args: entity: an entity, model instance, or key to delete.
juraj-google-style
def put_path(self, url, path): cache_path = self._url_to_path(url) try: dir = os.path.dirname(cache_path) os.makedirs(dir) except OSError as e: if e.errno != errno.EEXIST: raise Error('Failed to create cache directories for %s' % cache_path) try: os.unlink(cache_path) except OSError: pass try: os.link(path, cache_path) except OSError: try: shutil.copyfile(path, cache_path) except IOError: raise Error('Failed to cache %s as %s for %s' % (path, cache_path, url))
Puts a resource already on disk into the disk cache. Args: url: The original url of the resource path: The resource already available on disk Raises: CacheError: If the file cannot be put in cache
juraj-google-style
def capture(self, payment_id, amount, data={}, **kwargs): url = '{}/{}/capture'.format(self.base_url, payment_id) data['amount'] = amount return self.post_url(url, data, **kwargs)
Capture Payment for given Id. Args: payment_id: Id of the payment to capture. amount: Amount to capture for the payment. Returns: Payment dict after getting captured.
codesearchnet
def _get_updated_values(before_values, after_values): assert before_values.keys() == after_values.keys() return dict([(k, [before_values[k], after_values[k]]) for k in before_values.keys() if before_values[k] != after_values[k]])
Get updated values from 2 dicts of values Args: before_values (dict): values before update after_values (dict): values after update Returns: dict: a diff dict with key is field key, value is tuple of (before_value, after_value)
juraj-google-style
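A quick illustrative call, assuming both dicts share the same keys as the assertion requires; the field names are made up for the example.

before = {'name': 'alice', 'age': 30, 'city': 'Oslo'}
after = {'name': 'alice', 'age': 31, 'city': 'Bergen'}
print(_get_updated_values(before, after))
# {'age': [30, 31], 'city': ['Oslo', 'Bergen']}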
def span(self): other = VersionRange(None) bound = _Bound(self.bounds[0].lower, self.bounds[(- 1)].upper) other.bounds = [bound] return other
Return a contiguous range that is a superset of this range. Returns: A VersionRange object representing the span of this range. For example, the span of "2+<4|6+<8" would be "2+<8".
codesearchnet
def getConParams(virtualhost): return pika.ConnectionParameters( host=settings.RABBITMQ_HOST, port=int(settings.RABBITMQ_PORT), virtual_host=virtualhost, credentials=pika.PlainCredentials( settings.RABBITMQ_USER_NAME, settings.RABBITMQ_USER_PASSWORD ) )
Connection object builder. Args: virtualhost (str): selected virtualhost in rabbitmq Returns: pika.ConnectionParameters: object filled by `constants` from :class:`edeposit.amqp.settings`.
juraj-google-style
def generate_stack_policy_args(stack_policy=None): args = {} if stack_policy: logger.debug('Stack has a stack policy') if stack_policy.url: raise NotImplementedError else: args['StackPolicyBody'] = stack_policy.body return args
Converts a stack policy object into keyword args. Args: stack_policy (:class:`stacker.providers.base.Template`): A template object representing a stack policy. Returns: dict: A dictionary of keyword arguments to be used elsewhere.
codesearchnet
def duplicate(script, layer_num=None): filter_xml = ' <filter name="Duplicate Current layer"/>\n' if isinstance(script, mlx.FilterScript): if (layer_num is None) or (layer_num == script.current_layer()): util.write_filter(script, filter_xml) script.add_layer('{}_copy'.format(script.layer_stack[script.current_layer()]), True) else: change(script, layer_num) util.write_filter(script, filter_xml) script.add_layer('{}_copy'.format(script.layer_stack[layer_num]), True) else: util.write_filter(script, filter_xml) return None
Duplicate a layer. New layer label is '*_copy'. Args: script: the mlx.FilterScript object or script filename to write the filter to. layer_num (int): layer number to duplicate. Default is the current layer. Not supported on the file base API. Layer stack: Creates a new layer Changes current layer to the new layer MeshLab versions: 2016.12 1.3.4BETA
juraj-google-style
def add_string_pairs_from_label_element(xib_file, results, label, special_ui_components_prefix): label_entry_comment = extract_element_internationalized_comment(label) if label_entry_comment is None: return warn_if_element_not_of_class(label, 'Label', special_ui_components_prefix) if label.hasAttribute('usesAttributedText') and label.attributes['usesAttributedText'].value == 'YES': add_string_pairs_from_attributed_ui_element(results, label, label_entry_comment) else: try: label_entry_key = label.attributes['text'].value except KeyError: try: label_entry_key = label.getElementsByTagName('string')[0].firstChild.nodeValue except Exception: label_entry_key = 'N/A' logging.warn("%s: Missing text entry in %s", xib_file, label.toxml('UTF8')) results.append((label_entry_key, label_entry_comment))
Adds string pairs from a label element. Args: xib_file (str): Path to the xib file. results (list): The list to add the results to. label (element): The label element from the xib, to extract the string pairs from. special_ui_components_prefix (str): If not None, extraction will not warn about internationalized UI components with this class prefix.
juraj-google-style
def attach(self, container, stdout=True, stderr=True, stream=False, logs=False, demux=False): params = {'logs': ((logs and 1) or 0), 'stdout': ((stdout and 1) or 0), 'stderr': ((stderr and 1) or 0), 'stream': ((stream and 1) or 0)} headers = {'Connection': 'Upgrade', 'Upgrade': 'tcp'} u = self._url('/containers/{0}/attach', container) response = self._post(u, headers=headers, params=params, stream=True) output = self._read_from_socket(response, stream, self._check_is_tty(container), demux=demux) if stream: return CancellableStream(output, response) else: return output
Attach to a container. The ``.logs()`` function is a wrapper around this method, which you can use instead if you want to fetch/stream container output without first retrieving the entire backlog. Args: container (str): The container to attach to. stdout (bool): Include stdout. stderr (bool): Include stderr. stream (bool): Return container output progressively as an iterator of strings, rather than a single string. logs (bool): Include the container's previous output. demux (bool): Keep stdout and stderr separate. Returns: By default, the container's output as a single string (two if ``demux=True``: one for stdout and one for stderr). If ``stream=True``, an iterator of output strings. If ``demux=True``, two iterators are returned: one for stdout and one for stderr. Raises: :py:class:`docker.errors.APIError` If the server returns an error.
codesearchnet
def _resize_image(image, height, width): return tf.image.resize_images(image, [height, width], method=tf.image.ResizeMethod.BILINEAR, align_corners=False)
Simple wrapper around tf.resize_images. This is primarily to make sure we use the same `ResizeMethod` and other details each time. Args: image: A 3-D image `Tensor`. height: The target height for the resized image. width: The target width for the resized image. Returns: resized_image: A 3-D tensor containing the resized image. The first two dimensions have the shape [height, width].
codesearchnet
def update_(self, conf_dict, conf_arg=True): for (section, secdict) in conf_dict.items(): self[section].update_(secdict, conf_arg)
Update values of configuration options with dict. Args: conf_dict (dict): dict of dict indexed with section and option names. conf_arg (bool): if True, only options that can be set in a config file are updated.
codesearchnet
def _parse_meta_info(self, line): if self.mslevel: self.meta_info['ms_level'] = self.mslevel if self.polarity: self.meta_info['polarity'] = self.polarity for k, regexes in six.iteritems(self.meta_regex): for reg in regexes: m = re.search(reg, line, re.IGNORECASE) if m: self.meta_info[k] = m.group(1).strip()
Parse and extract all meta data by looping through the dictionary of meta_info regexs updates self.meta_info Args: line (str): line of the msp file
juraj-google-style
def sort(expr, field=None, keytype=None, ascending=True): weld_obj = WeldObject(encoder_, decoder_) expr_var = weld_obj.update(expr) if isinstance(expr, WeldObject): expr_var = expr.obj_id weld_obj.dependencies[expr_var] = expr if (field is not None): key_str = ('x.$%s' % field) else: key_str = 'x' if (not ascending): key_str = (key_str + ('* %s(-1)' % keytype)) weld_template = '\n sort(%(expr)s, |x| %(key)s)\n ' weld_obj.weld_code = (weld_template % {'expr': expr_var, 'key': key_str}) return weld_obj
Sorts the vector. If the field parameter is provided then the sort operators on a vector of structs where the sort key is the field of the struct. Args: expr (WeldObject) field (Int)
codesearchnet
def as_json(self, entity_url, context=None): try: urllib.request.urlopen(entity_url) except urllib.error.HTTPError: raise ValueError("Cannot open {}".format(entity_url)) entity_graph = self.read(entity_url) entity_json = json.loads( entity_graph.serialize( format='json-ld', context=context).decode()) return json.dumps(entity_json)
Method takes an entity URI and attempts to return the Fedora Object as JSON-LD. Args: entity_url(str): Fedora Commons URL of Entity context(None): Returns JSON-LD with Context, default is None Returns: str: JSON-LD of Fedora Object
juraj-google-style
def docx_text_from_xml_node(node: ElementTree.Element, level: int, config: TextProcessingConfig) -> str: text = '' if (node.tag == DOCX_TEXT): text += (node.text or '') elif (node.tag == DOCX_TAB): text += '\t' elif (node.tag in DOCX_NEWLINES): text += '\n' elif (node.tag == DOCX_NEWPARA): text += '\n\n' if (node.tag == DOCX_TABLE): text += ('\n\n' + docx_table_from_xml_node(node, level, config)) else: for child in node: text += docx_text_from_xml_node(child, (level + 1), config) return text
Returns text from an XML node within a DOCX file. Args: node: an XML node level: current level in XML hierarchy (used for recursion; start level is 0) config: :class:`TextProcessingConfig` control object Returns: contents as a string
codesearchnet
def sibling(self, name: InstanceName) -> "ObjectMember": ssn = self.parinst._member_schema_node(name) try: sibs = self.siblings.copy() newval = sibs.pop(name) sibs[self.name] = self.value return ObjectMember(name, sibs, newval, self.parinst, ssn, self.timestamp) except KeyError: raise NonexistentInstance(self.json_pointer(), f"member '{name}'") from None
Return an instance node corresponding to a sibling member. Args: name: Instance name of the sibling member. Raises: NonexistentSchemaNode: If member `name` is not permitted by the schema. NonexistentInstance: If sibling member `name` doesn't exist.
juraj-google-style
def nsx_controller_name(self, **kwargs): name = kwargs.pop('name') name_args = dict(name=name) method_name = 'nsx_controller_name' method_class = self._brocade_tunnels nsxcontroller_attr = getattr(method_class, method_name) config = nsxcontroller_attr(**name_args) if kwargs.pop('get', False): output = self._callback(config, handler='get_config') else: output = self._callback(config) return output
Get/Set nsx controller name Args: name: (str) : Name of the nsx controller get (bool) : Get nsx controller config(True,False) callback (function): A function executed upon completion of the method. Returns: Return value of `callback`. Raises: None
juraj-google-style
def bytes_to_readable_str(num_bytes, include_b=False): if num_bytes is None: return str(num_bytes) if num_bytes < 1024: result = '%d' % num_bytes elif num_bytes < 1048576: result = '%.2fk' % (num_bytes / 1024.0) elif num_bytes < 1073741824: result = '%.2fM' % (num_bytes / 1048576.0) else: result = '%.2fG' % (num_bytes / 1073741824.0) if include_b: result += 'B' return result
Generate a human-readable string representing number of bytes. The units B, kB, MB and GB are used. Args: num_bytes: (`int` or None) Number of bytes. include_b: (`bool`) Include the letter B at the end of the unit. Returns: (`str`) A string representing the number of bytes in a human-readable way, including a unit at the end.
github-repos
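A few illustrative calls showing the unit boundaries:

print(bytes_to_readable_str(512))                  # '512'
print(bytes_to_readable_str(2048))                 # '2.00k'
print(bytes_to_readable_str(5 * 1024 ** 2, True))  # '5.00MB'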
def _pack_with_tf_ops(dataset, keys, length): empty_example = {} for k in keys: empty_example[k] = tf.zeros([0], dtype=tf.int32) empty_example[k + "_position"] = tf.zeros([0], dtype=tf.int32) keys_etc = empty_example.keys() def write_packed_example(partial, outputs): new_partial = empty_example.copy() new_outputs = {} for k in keys_etc: new_outputs[k] = outputs[k].write( outputs[k].size(), tf.pad(partial[k], [[0, length - tf.size(partial[k])]])) return new_partial, new_outputs def map_fn(x): partial = empty_example.copy() i = tf.zeros([], dtype=tf.int32) dynamic_batch_size = tf.shape(x[keys[0]])[0] outputs = {} for k in keys: outputs[k] = tf.TensorArray( tf.int32, size=0, dynamic_size=True, element_shape=[length]) outputs[k + "_position"] = tf.TensorArray( tf.int32, size=0, dynamic_size=True, element_shape=[length]) def cond_fn(i, partial, outputs): del partial, outputs return i < dynamic_batch_size def body_fn(i, partial, outputs): can_append = True one_example = {} for k in keys: val = tf.cast(x[k][i], tf.int32) val = val[:tf.reduce_sum(tf.cast(tf.not_equal(val, 0), tf.int32))] one_example[k] = val for k in keys: can_append = tf.logical_and( can_append, tf.less_equal( tf.size(partial[k]) + tf.size(one_example[k]), length)) def false_fn(): return write_packed_example(partial, outputs) def true_fn(): return partial, outputs partial, outputs = tf.cond(can_append, true_fn, false_fn) new_partial = {} for k in keys: new_seq = one_example[k][:length] new_seq_len = tf.size(new_seq) new_partial[k] = tf.concat([partial[k], new_seq], 0) new_partial[k + "_position"] = tf.concat( [partial[k + "_position"], tf.range(new_seq_len, dtype=tf.int32)], 0) partial = new_partial return i+1, partial, outputs i, partial, outputs = tf.while_loop( cond_fn, body_fn, (i, partial, outputs), back_prop=False, shape_invariants=( tf.TensorShape([]), {k: tf.TensorShape([None]) for k in keys_etc}, {k: tf.TensorShape(None) for k in keys_etc}, )) partial, outputs = write_packed_example(partial, outputs) packed = {k: outputs[k].stack() for k in keys_etc} for k in keys: packed[k + "_segmentation"] = ( tf.cumsum( tf.cast(tf.equal(packed[k + "_position"], 0), tf.int32), axis=1) * tf.cast(tf.not_equal(packed[k], 0), tf.int32)) return packed dataset = dataset.map(map_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE) return dataset.flat_map(tf.data.Dataset.from_tensor_slices)
Helper-function for packing a dataset which has already been batched. See pack_dataset() Uses tf.while_loop. Slow. Args: dataset: a dataset containing padded batches of examples. keys: a list of strings length: an integer Returns: a dataset.
juraj-google-style
def shortcut_string_merge(self, node_def): device = node_def.device or '' merge_key = (self._spec, device) result = _string_merge_cache.get(merge_key) if result is None: result = self.__call__(node_def).to_string() _string_merge_cache[merge_key] = result return result
Merge a node def without materializing a full DeviceSpec object. Often a device merge is invoked in order to generate a string which can be passed into the c api. In such a case, we can cache the node_def.device -> merge_result_string map, and in most cases avoid: - Materializing a copy of self._spec (In the case of DeviceSpecV1) - Materializing a DeviceSpec for node_def.device - A DeviceSpec.merge_from invocation In practice the cache hit rate for this function is very high, because the number of invocations when iterating through the device stack is much larger than the number of devices. Args: node_def: An Operation (or Operation-like) to merge device constraints with self._spec Returns: A string containing the merged device specification.
github-repos
def _selection(candidate): sample_index1 = np.random.choice(len(candidate)) sample_index2 = np.random.choice(len(candidate)) sample_1 = candidate[sample_index1] sample_2 = candidate[sample_index2] select_index = np.random.choice(len(sample_1)) logger.info((LOGGING_PREFIX + 'Perform selection from %sth to %sth at index=%s'), sample_index2, sample_index1, select_index) next_gen = [] for i in range(len(sample_1)): if (i == select_index): next_gen.append(sample_2[i]) else: next_gen.append(sample_1[i]) return next_gen
Perform selection action to candidates. For example, new gene = sample_1 + the 5th bit of sample2. Args: candidate: List of candidate genes (encodings). Examples: >>> # Genes that represent 3 parameters >>> gene1 = np.array([[0, 0, 1], [0, 1], [1, 0]]) >>> gene2 = np.array([[0, 1, 0], [1, 0], [0, 1]]) >>> new_gene = _selection([gene1, gene2]) >>> # new_gene could be gene1 overwritten with the >>> # 2nd parameter of gene2 >>> # in which case: >>> # new_gene[0] = gene1[0] >>> # new_gene[1] = gene2[1] >>> # new_gene[2] = gene1[0] Returns: New gene (encoding)
codesearchnet
def stop_worker(config, *, worker_ids=None): if ((worker_ids is not None) and (not isinstance(worker_ids, list))): worker_ids = [worker_ids] celery_app = create_app(config) celery_app.control.shutdown(destination=worker_ids)
Stop a worker process. Args: config (Config): Reference to the configuration object from which the settings for the worker are retrieved. worker_ids (list): An optional list of ids for the worker that should be stopped.
codesearchnet
def _SerializeRequest(self, request): parsed = urllib_parse.urlsplit(request.url) request_line = urllib_parse.urlunsplit(('', '', parsed.path, parsed.query, '')) if (not isinstance(request_line, six.text_type)): request_line = request_line.decode('utf-8') status_line = u' '.join((request.http_method, request_line, u'HTTP/1.1\n')) (major, minor) = request.headers.get('content-type', 'application/json').split('/') msg = mime_nonmultipart.MIMENonMultipart(major, minor) for (key, value) in request.headers.items(): if (key == 'content-type'): continue msg[key] = value msg['Host'] = parsed.netloc msg.set_unixfrom(None) if (request.body is not None): msg.set_payload(request.body) str_io = six.StringIO() gen = generator.Generator(str_io, maxheaderlen=0) gen.flatten(msg, unixfrom=False) body = str_io.getvalue() return (status_line + body)
Convert a http_wrapper.Request object into a string. Args: request: A http_wrapper.Request to serialize. Returns: The request as a string in application/http format.
codesearchnet
def ts_to_str(jwt_dict): d = ts_to_dt(jwt_dict) for k, v in list(d.items()): if isinstance(v, datetime.datetime): d[k] = v.isoformat().replace('T', ' ') return d
Convert timestamps in JWT to human readable dates. Args: jwt_dict: dict JWT with some keys containing timestamps. Returns: dict: Copy of input dict where timestamps have been replaced with human readable dates.
juraj-google-style
def _handle_port_request(self, client_data, writer): try: pid = int(client_data) except ValueError as error: self._client_request_errors += 1 log.warning('Could not parse request: %s', error) return log.info('Request on behalf of pid %d.', pid) log.info('cmdline: %s', _get_process_command_line(pid)) if (not _should_allocate_port(pid)): self._denied_allocations += 1 return port = self._port_pool.get_port_for_process(pid) if (port > 0): self._total_allocations += 1 writer.write('{:d}\n'.format(port).encode('utf-8')) log.debug('Allocated port %d to pid %d', port, pid) else: self._denied_allocations += 1
Given a port request body, parse it and respond appropriately. Args: client_data: The request bytes from the client. writer: The asyncio Writer for the response to be written to.
codesearchnet
def GetObject(self, identifier): cache_value = self._values.get(identifier, None) if (not cache_value): return None return cache_value.vfs_object
Retrieves a cached object based on the identifier. This method ignores the cache value reference count. Args: identifier (str): VFS object identifier. Returns: object: cached VFS object or None if not cached.
codesearchnet
def dropout(x, level, noise_shape=None, seed=None): if seed is None: seed = np.random.randint(10000000.0) return nn.dropout_v2(x, rate=level, noise_shape=noise_shape, seed=seed)
Sets entries in `x` to zero at random, while scaling the entire tensor. Args: x: tensor level: fraction of the entries in the tensor that will be set to 0. noise_shape: shape for randomly generated keep/drop flags, must be broadcastable to the shape of `x` seed: random seed to ensure determinism. Returns: A tensor.
github-repos
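For reference, a minimal call through the public tf.nn.dropout API that nn.dropout_v2 corresponds to (eager mode assumed; rate, noise_shape and seed keep the same meaning as above):

import tensorflow as tf

x = tf.ones((2, 4))
y = tf.nn.dropout(x, rate=0.5, seed=1)  # surviving entries are scaled by 1 / (1 - rate)
print(y)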
def _transform_indices(self, key): ndims = self.ndims if all(((not (isinstance(el, slice) or callable(el))) for el in key)): dim_inds = [] for dim in self.kdims: dim_type = self.get_dimension_type(dim) if (isinstance(dim_type, type) and issubclass(dim_type, Number)): dim_inds.append(self.get_dimension_index(dim)) str_keys = iter((key[i] for i in range(self.ndims) if (i not in dim_inds))) num_keys = [] if len(dim_inds): keys = list({tuple(((k[i] if (ndims > 1) else k) for i in dim_inds)) for k in self.keys()}) q = np.array([tuple(((key[i] if (ndims > 1) else key) for i in dim_inds))]) idx = np.argmin([(np.inner((q - np.array(x)), (q - np.array(x))) if (len(dim_inds) == 2) else np.abs((q - x))) for x in keys]) num_keys = iter(keys[idx]) key = tuple(((next(num_keys) if (i in dim_inds) else next(str_keys)) for i in range(self.ndims))) elif any(((not (isinstance(el, slice) or callable(el))) for el in key)): keys = self.keys() for (i, k) in enumerate(key): if isinstance(k, slice): continue dim_keys = np.array([ke[i] for ke in keys]) if (dim_keys.dtype.kind in 'OSU'): continue snapped_val = dim_keys[np.argmin(np.abs((dim_keys - k)))] key = list(key) key[i] = snapped_val key = tuple(key) return key
Snaps indices into the GridSpace to the closest coordinate. Args: key: Tuple index into the GridSpace Returns: Transformed key snapped to closest numeric coordinates
codesearchnet
def update_state(self, state_arr, action_arr): (x, y) = np.where((action_arr[(- 1)] == 1)) self.__agent_pos = (x[0], y[0]) self.__route_memory_list.append((x[0], y[0])) self.__route_long_memory_list.append((x[0], y[0])) self.__route_long_memory_list = list(set(self.__route_long_memory_list)) while (len(self.__route_memory_list) > self.__memory_num): self.__route_memory_list = self.__route_memory_list[1:] return self.extract_now_state()
Update state. Override. Args: state_arr: `np.ndarray` of state in `self.t`. action_arr: `np.ndarray` of action in `self.t`. Returns: `np.ndarray` of state in `self.t+1`.
codesearchnet
def add_network(self, network, netmask, area=0): if ((network == '') or (netmask == '')): raise ValueError('network and mask values may not be empty') cmd = 'network {}/{} area {}'.format(network, netmask, area) return self.configure_ospf(cmd)
Adds a network to be advertised by OSPF Args: network (str): The network to be advertised in dotted decimal notation netmask (str): The netmask to configure area (str): The area the network belongs to. By default this value is 0 Returns: bool: True if the command completes successfully Exception: ValueError: This will get raised if network or netmask are not passed to the method
codesearchnet
def maybe_download_image_dataset(image_ids, target_dir): tf.gfile.MakeDirs(target_dir) num_images = len(image_ids) for (i, image_id) in enumerate(image_ids): destination = os.path.join(target_dir, ('%s.jpg' % i)) tmp_destination = ('%s.temp' % destination) source_url = ('http://api.brain-map.org/api/v2/section_image_download/%s' % image_id) if tf.gfile.Exists(destination): tf.logging.info(('Image with ID already present, skipping download (%s of %s).' % ((i + 1), num_images))) continue tf.logging.info(('Downloading image with id %s (%s of %s)' % (image_id, (i + 1), num_images))) response = requests.get(source_url, stream=True) response.raise_for_status() with tf.gfile.Open(tmp_destination, 'w') as f: for block in response.iter_content(1024): f.write(block) tf.gfile.Rename(tmp_destination, destination)
Download a set of images from api.brain-map.org to `target_dir`. Args: image_ids: list, a list of image ids. target_dir: str, a directory to which to download the images.
codesearchnet
def protected_branches(): master = conf.get('git.master_branch', 'master') develop = conf.get('git.devel_branch', 'develop') return conf.get('git.protected_branches', (master, develop))
Return branches protected from deletion. By default those are the master and devel branches as configured in pelconf. Returns: list[str]: Names of important branches that should not be deleted.
codesearchnet
def _retry_on_appropriate_openai_error(exception): return isinstance(exception, (RateLimitError, APIError))
Retry filter that returns True for rate limit (429) or server (5xx) errors. Args: exception: the returned exception encountered during the request/response loop. Returns: boolean indication whether or not the exception is a Server Error (5xx) or a RateLimitError (429) error.
github-repos
def _indicator(self, indicator_data): if isinstance(indicator_data, dict): xid = indicator_data.get('xid') else: xid = indicator_data.xid if self.indicators.get(xid) is not None: indicator_data = self.indicators.get(xid) elif self.indicators_shelf.get(xid) is not None: indicator_data = self.indicators_shelf.get(xid) else: self.indicators[xid] = indicator_data return indicator_data
Return previously stored indicator or new indicator. Args: indicator_data (dict|obj): An Indicator dict or instance of Indicator object. Returns: dict|obj: The new Indicator dict/object or the previously stored dict/object.
juraj-google-style
def __init__(self, destination, transport): self._destination = destination self._transport = transport
Create a new ADB stream. Args: destination: String identifier for the destination of this stream. transport: AdbStreamTransport to use for reads/writes.
juraj-google-style
def list_matching(self, ref_name: str, filter_: str) \ -> Iterable[ListEntry]: canonical, canonical_i = self._get_pattern(ref_name + filter_) for entry in self.list(): if entry.name == 'INBOX': if canonical_i.match('INBOX'): yield entry elif canonical.match(entry.name): yield entry
Return all the entries in the list tree that match the given query. Args: ref_name: Mailbox reference name. filter_: Mailbox name with possible wildcards.
juraj-google-style
def timestr2time(time_str): if any(c not in '0123456789:' for c in time_str): raise ValueError('Illegal character in time string') if time_str.count(':') == 2: h, m, s = time_str.split(':') elif time_str.count(':') == 1: h, m = time_str.split(':') s = '00' elif len(time_str) == 6: h = time_str[:2] m = time_str[2:4] s = time_str[4:] else: raise ValueError('Time format not recognised. {}'.format( VALID_TIME_FORMATS_TEXT)) if len(m) == 2 and len(s) == 2: mins = int(m) sec = int(s) else: raise ValueError('m and s must be 2 digits') try: return datetime.time(int(h), mins, sec) except ValueError: raise ValueError('Invalid time {}. {}'.format(time_str, VALID_TIME_FORMATS_TEXT))
Turns a string into a datetime.time object. This will only work if the format can be "guessed", so the string must have one of the formats from VALID_TIME_FORMATS_TEXT. Args: time_str (str) a string that represents a date Returns: datetime.time object Raises: ValueError if the input string does not have a valid format.
juraj-google-style
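Illustrative calls covering the three accepted time formats:

print(timestr2time('09:30:15'))  # datetime.time(9, 30, 15)
print(timestr2time('09:30'))     # datetime.time(9, 30)
print(timestr2time('093015'))    # datetime.time(9, 30, 15)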
def select(self, selector): if self._is_single_string_selector(selector, 'name'): return self._all_models_by_name.get_all(selector['name']) else: return find(self._all_models.values(), selector)
Query this document for objects that match the given selector. Args: selector (JSON-like query dictionary) : you can query by type or by name, e.g. ``{"type": HoverTool}``, ``{"name": "mycircle"}`` Returns: seq[Model]
codesearchnet
def parse_author(cls, marc): name = None code = None linked_forms = None is_corporation = None record = None if marc["100a"]: name = _first_or_none(marc["100a"]) code = _first_or_none(marc["1007"]) is_corporation = False record = marc.datafields["100"][0] elif marc["110a"]: name = _first_or_none(marc["110a"]) code = _first_or_none(marc["1107"]) linked_forms = marc["410a2 "] is_corporation = True record = marc.datafields["110"][0] else: return None linked_forms = marc["410a2 "] type_descriptor = ["osoba", "organizace"] alt_name = "%s [%s]" % (name, type_descriptor[is_corporation]) if linked_forms: alt_name += " (" + ", ".join(linked_forms) + ")" return cls( name=name, code=code, linked_forms=linked_forms, is_corporation=is_corporation, record=record, alt_name=alt_name, )
Parse author from `marc` data. Args: marc (obj): :class:`.MARCXMLRecord` instance. See module :mod:`.marcxml_parser` for details. Returns: obj: :class:`Author`.
juraj-google-style
def get_experiment_kind(root): properties = {} if root.find('experimentType').text == 'Ignition delay measurement': properties['experiment-type'] = 'ignition delay' else: raise NotImplementedError(root.find('experimentType').text + ' not (yet) supported') properties['apparatus'] = {'kind': '', 'institution': '', 'facility': ''} kind = getattr(root.find('apparatus/kind'), 'text', False) if not kind: raise MissingElementError('apparatus/kind') elif kind in ['shock tube', 'rapid compression machine']: properties['apparatus']['kind'] = kind else: raise NotImplementedError(kind + ' experiment not (yet) supported') return properties
Read common properties from root of ReSpecTh XML file. Args: root (`~xml.etree.ElementTree.Element`): Root of ReSpecTh XML file Returns: properties (`dict`): Dictionary with experiment type and apparatus information.
juraj-google-style
def tensor_list(elements, element_dtype=None, element_shape=None, use_tensor_array=False): _validate_list_constructor(elements, element_dtype, element_shape) if use_tensor_array: return data_structures.tf_tensor_array_new(elements, element_dtype, element_shape) else: return data_structures.tf_tensor_list_new(elements, element_dtype, element_shape)
Creates a tensor list and populates it with the given elements. This function provides a more uniform access to tensor lists and tensor arrays, and allows optional initialization. Note: this function is a simplified wrapper. If you need greater control, it is recommended to use the underlying implementation directly. Args: elements: Iterable[tf.Tensor, ...], the elements to initially fill the list with element_dtype: Optional[tf.DType], data type for the elements in the list; required if the list is empty element_shape: Optional[tf.TensorShape], shape for the elements in the list; required if the list is empty use_tensor_array: bool, whether to use the more compatible but restrictive tf.TensorArray implementation Returns: Union[tf.Tensor, tf.TensorArray], the new list. Raises: ValueError: for invalid arguments
github-repos
def is_link(url, processed, files): if url not in processed: is_file = url.endswith(BAD_TYPES) if is_file: files.add(url) return False return True return False
Determine whether or not a link should be crawled A url should not be crawled if it - Is a file - Has already been crawled Args: url: str Url to be processed processed: list[str] List of urls that have already been crawled Returns: bool If `url` should be crawled
juraj-google-style
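A rough usage sketch; BAD_TYPES is defined elsewhere in the original module, so the tuple below is only an assumed stand-in for the example.

BAD_TYPES = ('.png', '.jpg', '.pdf', '.zip')  # assumed; the real module defines its own list
processed = ['https://example.com/']
files = set()

print(is_link('https://example.com/report.pdf', processed, files))  # False, recorded in files
print(is_link('https://example.com/about', processed, files))       # True, should be crawled
print(is_link('https://example.com/', processed, files))            # False, already crawled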
def find_in_matrix_2d(val, matrix): dim = len(matrix[0]) item_index = 0 for row in matrix: for i in row: if i == val: break item_index += 1 if i == val: break loc = (int(item_index / dim), item_index % dim) return loc
Returns a tuple representing the index of an item in a 2D matrix. Arguments: - val (str) Value to look for - matrix (list) 2D matrix to search for val in Returns: - (tuple) Ordered pair representing location of val
juraj-google-style
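An illustrative lookup in a small square matrix:

palette = [['red', 'green'],
           ['blue', 'white']]
print(find_in_matrix_2d('blue', palette))  # (1, 0)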
def _wait_for_glob(self, pattern, timeout_secs, for_checkpoint=True): end_time = time.time() + timeout_secs while time.time() < end_time: if for_checkpoint: if checkpoint_management.checkpoint_exists(pattern): return elif len(gfile.Glob(pattern)) >= 1: return time.sleep(0.05) self.assertFalse(True, 'Glob never matched any file: %s' % pattern)
Wait for a checkpoint file to appear. Args: pattern: A string. timeout_secs: How long to wait for in seconds. for_checkpoint: whether we're globbing for checkpoints.
github-repos
def dates_in_range(start_date, end_date): return [(start_date + timedelta(n)) for n in range(int((end_date - start_date).days))]
Returns all dates between two dates. Inclusive of the start date but not the end date. Args: start_date (datetime.date) end_date (datetime.date) Returns: (list) of datetime.date objects
codesearchnet
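A short illustrative call (note the end date is excluded):

from datetime import date

print(dates_in_range(date(2024, 3, 1), date(2024, 3, 4)))
# [datetime.date(2024, 3, 1), datetime.date(2024, 3, 2), datetime.date(2024, 3, 3)]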
def Issue(self, state, results): result = CheckResult() if (results and all((isinstance(r, CheckResult) for r in results))): result.ExtendAnomalies(results) else: result.anomaly = [rdf_anomaly.Anomaly(type=anomaly_pb2.Anomaly.AnomalyType.Name(anomaly_pb2.Anomaly.ANALYSIS_ANOMALY), symptom=self.hint.Problem(state), finding=self.hint.Render(results), explanation=self.hint.Fix())] return result
Collect anomalous findings into a CheckResult. Comparisons with anomalous conditions collect anomalies into a single CheckResult message. The contents of the result varies depending on whether the method making the comparison is a Check, Method or Probe. - Probes evaluate raw host data and generate Anomalies. These are condensed into a new CheckResult. - Checks and Methods evaluate the results of probes (i.e. CheckResults). If there are multiple probe results, all probe anomalies are aggregated into a single new CheckResult for the Check or Method. Args: state: A text description of what combination of results were anomalous (e.g. some condition was missing or present.) results: Anomalies or CheckResult messages. Returns: A CheckResult message.
codesearchnet
def stacked_cnn(units: tf.Tensor, n_hidden_list: List, filter_width=3, use_batch_norm=False, use_dilation=False, training_ph=None, add_l2_losses=False): l2_reg = (tf.nn.l2_loss if add_l2_losses else None) for (n_layer, n_hidden) in enumerate(n_hidden_list): if use_dilation: dilation_rate = (2 ** n_layer) else: dilation_rate = 1 units = tf.layers.conv1d(units, n_hidden, filter_width, padding='same', dilation_rate=dilation_rate, kernel_initializer=INITIALIZER(), kernel_regularizer=l2_reg) if use_batch_norm: assert (training_ph is not None) units = tf.layers.batch_normalization(units, training=training_ph) units = tf.nn.relu(units) return units
A stack of convolutional layers applied on top of each other. Args: units: a tensorflow tensor with dimensionality [None, n_tokens, n_features] n_hidden_list: list with number of hidden units at the output of each layer filter_width: width of the kernel in tokens use_batch_norm: whether to use batch normalization between layers use_dilation: use power of 2 dilation scheme [1, 2, 4, 8 .. ] for layers 1, 2, 3, 4 ... training_ph: boolean placeholder determining whether it is the training phase or not. It is used only for batch normalization to determine whether to use the current batch average (std) or the memory-stored average (std) add_l2_losses: whether to add l2 losses on network kernels to tf.GraphKeys.REGULARIZATION_LOSSES or not Returns: units: tensor at the output of the last convolutional layer
codesearchnet
def LSTMLayer(cell_name, weights, m, c, x_seq, pad_seq): if len(x_seq) != len(pad_seq): raise ValueError('length of x_seq(%d) != pad_seq(%d)' % (len(x_seq), len(pad_seq))) out_seq = [] for seq in range(len(x_seq)): with ops.name_scope('%s_%d' % (cell_name, seq)): m, c = LSTMCell(weights, m, c, x_seq[seq], pad_seq[seq]) out_seq.append(array_ops.identity(m, name='out')) return out_seq
Unrolls a layer of LSTM cells forward by the sequence length. The sequence length is determined by the length of x_seq and pad_seq, which must be the same. Args: cell_name: Base name of each cell. weights: Weight matrix with shape LSTMCellWeightsShape. m: Initial m states with shape [batch_size, num_nodes]. c: Initial c states with shape [batch_size, num_nodes]. x_seq: List of inputs, each with shape [batch_size, num_inputs]. The length of the list is the sequence length. pad_seq: List of paddings, each with shape [batch_size, 1]. The length of the list is the sequence length. Each padding value is either 0 or 1, where 1 indicates padding; i.e. the input is shorter than the sequence length. Returns: List of per-sequence-step outputs, each with shape [batch_size, num_nodes]. Raises: ValueError: If len(x_seq) != len(pad_seq).
github-repos
def load(cls, path): with open(path, 'r') as in_file: metadata = json.load(in_file) return cls.from_dict(metadata)
Create a new MLPipeline from a JSON specification. The JSON file format is the same as the one created by the `to_dict` method. Args: path (str): Path of the JSON file to load. Returns: MLPipeline: A new MLPipeline instance with the specification found in the JSON file.
codesearchnet
def write(self, ostream, kmip_version=enums.KMIPVersion.KMIP_1_0): tstream = BytearrayStream() self.revocation_code.write(tstream, kmip_version=kmip_version) if (self.revocation_message is not None): self.revocation_message.write(tstream, kmip_version=kmip_version) self.length = tstream.length() super(RevocationReason, self).write(ostream, kmip_version=kmip_version) ostream.write(tstream.buffer)
Write the data encoding the RevocationReason object to a stream. Args: ostream (Stream): A data stream in which to encode object data, supporting a write method; usually a BytearrayStream object. kmip_version (KMIPVersion): An enumeration defining the KMIP version with which the object will be encoded. Optional, defaults to KMIP 1.0.
codesearchnet
def primitive_wrapper_from_primitive(self, primitive_message: message.Message) -> _primitive_wrappers.PrimitiveWrapper:
Wraps the FHIR protobuf primitive_message to handle parsing/printing. The wrapped FHIR protobuf primitive provides necessary state for printing to the FHIR JSON spec. Args: primitive_message: The FHIR primitive to wrap. Raises: ValueError: In the event that primitive_message is not actually a primitive FHIR type. Returns: A wrapper around primitive_message.
github-repos
def _make_class_weight_map_fn(class_weight): class_ids = list(sorted(class_weight.keys())) expected_class_ids = list(range(len(class_ids))) if class_ids != expected_class_ids: error_msg = 'Expected `class_weight` to be a dict with keys from 0 to one less than the number of classes, found {}'.format(class_weight) raise ValueError(error_msg) class_weight_tensor = tensor_conversion.convert_to_tensor_v2_with_dispatch([class_weight[int(c)] for c in class_ids]) def _class_weights_map_fn(*data): x, y, sw = unpack_x_y_sample_weight(data) if nest.is_nested(y): raise ValueError('`class_weight` is only supported for Models with a single output.') if y.shape.rank > 2: raise ValueError('`class_weight` not supported for 3+ dimensional targets.') y_classes = smart_cond.smart_cond(y.shape.rank == 2 and backend.shape(y)[1] > 1, lambda: backend.argmax(y, axis=1), lambda: math_ops.cast(backend.reshape(y, (-1,)), dtypes.int64)) cw = array_ops.gather_v2(class_weight_tensor, y_classes) if sw is not None: cw = math_ops.cast(cw, sw.dtype) sw, cw = expand_1d((sw, cw)) sw = sw * cw else: sw = cw return (x, y, sw) return _class_weights_map_fn
Applies class weighting to a `Dataset`. The `Dataset` is assumed to be in format `(x, y)` or `(x, y, sw)`, where `y` must be a single `Tensor`. Args: class_weight: A map where the keys are integer class ids and values are the class weights, e.g. `{0: 0.2, 1: 0.6, 2: 0.3}` Returns: A function that can be used with `tf.data.Dataset.map` to apply class weighting.
github-repos
def get_libstdcpp_version(): key = 'libstdcpp_ver' out, err = run_shell_cmd(cmds_all[PLATFORM.lower()][key]) if err and FLAGS.debug: print('Error in detecting libstdc++ version:\n %s' % str(err)) ver = out.split(b'_')[-1].replace(b'\n', b'') return ver
Retrieves version of libstdc++ detected. Returns: String that is the version of libstdc++. e.g. '3.4.25'
github-repos
def Collect( self, knowledge_base, artifact_definition, searcher, file_system): for source in artifact_definition.sources: if source.type_indicator not in ( artifact_definitions.TYPE_INDICATOR_FILE, artifact_definitions.TYPE_INDICATOR_PATH): continue for path in source.paths: path_segments = path.split(source.separator) find_spec = file_system_searcher.FindSpec( location_glob=path_segments[1:], case_sensitive=False) for path_specification in searcher.Find(find_specs=[find_spec]): self._ParsePathSpecification( knowledge_base, searcher, file_system, path_specification, source.separator)
Collects values using a file artifact definition. Args: knowledge_base (KnowledgeBase): to fill with preprocessing information. artifact_definition (artifacts.ArtifactDefinition): artifact definition. searcher (dfvfs.FileSystemSearcher): file system searcher to preprocess the file system. file_system (dfvfs.FileSystem): file system to be preprocessed. Raises: PreProcessFail: if the preprocessing fails.
juraj-google-style
def _run_graph_for_calibration_graph_mode(model_dir: str, tags: Collection[str], representative_dataset_map: rd.RepresentativeDatasetMapping) -> None: _replace_tensors_by_numpy_ndarrays(representative_dataset_map) with ops.Graph().as_default(), session.Session() as sess: meta_graph: meta_graph_pb2.MetaGraphDef = loader_impl.load(sess, tags, export_dir=model_dir) for signature_key, repr_ds in representative_dataset_map.items(): sig_def = meta_graph.signature_def[signature_key] try: _run_function_for_calibration_graph_mode(sess, signature_def=sig_def, representative_dataset=repr_ds) except Exception as ex: raise ValueError(f'Failed to run representative dataset through the function with the signature key: {signature_key}.') from ex
Runs the graph for calibration in graph mode. This function assumes _graph mode_ (used when legacy TF1 is used or when eager mode is explicitly disabled) when running the graph. This step is used in order to collect the statistics in CustomAggregatorOp for quantization using the representative dataset for the actual data provided for inference. Args: model_dir: Path to SavedModel directory. tags: Collection of tags identifying the MetaGraphDef within the SavedModel. representative_dataset_map: A map where signature keys are mapped to corresponding representative datasets. Raises: ValueError: When running the function with the representative dataset fails.
github-repos
def build_polygon_dict(self, path, stroke_color='#FF0000', stroke_opacity=0.8,
                       stroke_weight=2, fill_color='#FF0000', fill_opacity=0.3):
    # The default values above were truncated in the source and are assumed
    # here for illustration only.
    if not isinstance(path, list):
        raise AttributeError('To build a map path a list of dictionaries of latitudes and longitudes is required')
    polygon = {'path': path,
               'stroke_color': stroke_color,
               'stroke_opacity': stroke_opacity,
               'stroke_weight': stroke_weight,
               'fill_color': fill_color,
               'fill_opacity': fill_opacity}
    return polygon
Set a dictionary with the javascript class Polygon parameters.

This function sets a default drawing configuration if the user just passes the polygon path, but also allows each parameter to be set individually if the user wishes so.

Args:
    path (list): A list of latitude and longitude points for the polygon
    stroke_color (str): Sets the color of the polygon border using hexadecimal color notation
    stroke_opacity (float): Sets the opacity of the polygon border in percentage. If stroke_opacity = 0, the border is transparent
    stroke_weight (int): Sets the stroke width in pixels
    fill_color (str): Sets the color of the polygon fill using hexadecimal color notation
    fill_opacity (float): Sets the opacity of the polygon fill

Returns:
    dict: The polygon configuration dictionary expected by the javascript Polygon class.
codesearchnet
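A hedged usage sketch for the polygon helper above. The map object name, the coordinates and the add_polygon call are assumptions for illustration, not taken from the source.

path = [
    {'lat': 33.678, 'lng': -116.243},
    {'lat': 33.679, 'lng': -116.244},
    {'lat': 33.680, 'lng': -116.242},
]
polygon = gmap.build_polygon_dict(path, stroke_color='#FFAB00', fill_opacity=0.5)
# gmap.add_polygon(**polygon)  # hypothetical companion call to register the polygon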
def concat_urls(*urls): normalized_urls = filter(bool, [url.strip('/') for url in urls]) joined_urls = '/'.join(normalized_urls) if not joined_urls: return '/' return '/{}/'.format(joined_urls)
Concatenate URL fragments.

Args:
    *urls: (str) URL fragments to join.

Returns:
    str: The joined URL, starting and ending with /, with the fragments separated by /.
juraj-google-style
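A few illustrative calls for the URL helper above, with results worked out from the code:

concat_urls('admin', 'users/')      # -> '/admin/users/'
concat_urls('/admin/', '/users/1')  # -> '/admin/users/1/'
concat_urls('')                     # empty fragments are dropped -> '/'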
def index_impute2(fn): logger.info("Indexing {} (IMPUTE2)".format(fn)) impute2_index(fn, cols=[0, 1, 2], names=["chrom", "name", "pos"], sep=" ") logger.info("Index generated")
Indexes an IMPUTE2 file. Args: fn (str): The name of the IMPUTE2 file.
juraj-google-style
class XCLIPVisionEncoder(nn.Module): def __init__(self, config: XCLIPConfig): super().__init__() self.config = config self.layers = nn.ModuleList([XCLIPVisionEncoderLayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False def forward(self, inputs_embeds, attention_mask: Optional[torch.Tensor]=None, causal_attention_mask: Optional[torch.Tensor]=None, output_attentions: Optional[bool]=None, output_hidden_states: Optional[bool]=None, return_dict: Optional[bool]=None) -> Union[Tuple, BaseModelOutput]: output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states return_dict = return_dict if return_dict is not None else self.config.use_return_dict encoder_states = () if output_hidden_states else None all_attentions = () if output_attentions else None hidden_states = inputs_embeds for idx, encoder_layer in enumerate(self.layers): if output_hidden_states: encoder_states = encoder_states + (hidden_states,) if self.gradient_checkpointing and self.training: layer_outputs = self._gradient_checkpointing_func(encoder_layer.__call__, hidden_states, attention_mask, causal_attention_mask, output_attentions) else: layer_outputs = encoder_layer(hidden_states, attention_mask, causal_attention_mask, output_attentions=output_attentions) hidden_states = layer_outputs[0] if output_attentions: all_attentions = all_attentions + (layer_outputs[1],) if output_hidden_states: encoder_states = encoder_states + (hidden_states,) if not return_dict: return tuple((v for v in [hidden_states, encoder_states, all_attentions] if v is not None)) return BaseModelOutput(last_hidden_state=hidden_states, hidden_states=encoder_states, attentions=all_attentions)
Transformer encoder consisting of `config.num_hidden_layers` self attention layers. Each layer is a [`XCLIPVisionEncoderLayer`]. Args: config: XCLIPConfig
github-repos
def copy_file_if_newer(src_fs, src_path, dst_fs, dst_path): with manage_fs(src_fs, writeable=False) as _src_fs: with manage_fs(dst_fs, create=True) as _dst_fs: if (_src_fs is _dst_fs): if _source_is_newer(_src_fs, src_path, _dst_fs, dst_path): _src_fs.copy(src_path, dst_path, overwrite=True) return True else: return False else: with _src_fs.lock(), _dst_fs.lock(): if _source_is_newer(_src_fs, src_path, _dst_fs, dst_path): copy_file_internal(_src_fs, src_path, _dst_fs, dst_path) return True else: return False
Copy a file from one filesystem to another, checking times. If the destination exists, and is a file, it will be first truncated. If both source and destination files exist, the copy is executed only if the source file is newer than the destination file. In case modification times of source or destination files are not available, copy is always executed. Arguments: src_fs (FS or str): Source filesystem (instance or URL). src_path (str): Path to a file on the source filesystem. dst_fs (FS or str): Destination filesystem (instance or URL). dst_path (str): Path to a file on the destination filesystem. Returns: bool: `True` if the file copy was executed, `False` otherwise.
codesearchnet
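A hedged usage sketch for the copy helper above; both filesystem arguments are given as FS URLs, and the directory names and file path are illustrative.

# './backup' is created if missing, since the destination is opened with create=True.
if copy_file_if_newer('./reports', '/daily.csv', './backup', '/daily.csv'):
    print('Backup refreshed')
else:
    print('Destination already up to date')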
def read(name, default=None, allow_none=False, fallback=None): raw_value = environ.get(name) if ((raw_value is None) and (fallback is not None)): if ((not isinstance(fallback, builtins.list)) and (not isinstance(fallback, builtins.tuple))): fallback = [fallback] for fall in fallback: raw_value = environ.get(fall) if (raw_value is not None): break if (raw_value or (raw_value == '')): return raw_value elif ((default is not None) or allow_none): return default else: raise KeyError('Set the "{0}" environment variable'.format(name))
Read the raw env value.

Read the raw environment variable or use the default. If the value is not found and no default is set, a KeyError is raised.

Args:
    name: The environment variable name
    default: The default value to use if no environment variable is found
    allow_none: If the return value can be `None` (i.e. optional)
    fallback: A list of fallback env variables to try and read if the primary environment variable is unavailable.

Returns:
    The raw string value of the environment variable (or of the first available fallback), otherwise the default.
codesearchnet
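A hedged usage sketch for the environment reader above; the variable names and default value are illustrative.

# Primary name first, then legacy fallbacks, then a default.
database_url = read('DATABASE_URL',
                    fallback=['DB_URL', 'LEGACY_DB_URL'],
                    default='sqlite:///local.db')

# Optional value: returns None instead of raising KeyError when unset.
sentry_dsn = read('SENTRY_DSN', allow_none=True)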
def retry_until_valid_or_limit_reached(method, limit, validation_fn, sleep_s=1, catch_exceptions=()): assert (limit > 0), 'Limit must be greater than 0' def _execute_method(helper): try: return method() except catch_exceptions: if (not helper.remaining): raise return None helper = RetryHelper((limit - 1)) result = _execute_method(helper) while ((not validation_fn(result)) and helper.retry_if_possible()): time.sleep(sleep_s) result = _execute_method(helper) return result
Executes a method until the retry limit or validation_fn returns True.

The method is always called once so the effective lower limit for 'limit' is 1. Passing in a number less than 1 will still result in the method being called once.

Args:
    method: The method to execute; should take no arguments.
    limit: The number of times to try this method. Must be >0.
    validation_fn: The validation function called on the function result to determine whether to keep looping.
    sleep_s: The time to sleep in between invocations.
    catch_exceptions: Tuple of exception types to catch and count as failures.

Returns:
    Whatever the method last returned; an implicit False indicates the method never succeeded.
codesearchnet
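A hedged usage sketch for the retry helper above; the requests library and the health-check endpoint are assumptions for illustration.

import requests

response = retry_until_valid_or_limit_reached(
    lambda: requests.get('https://example.com/health', timeout=5),
    limit=5,
    validation_fn=lambda resp: resp is not None and resp.status_code == 200,
    sleep_s=2,
    catch_exceptions=(requests.ConnectionError,),
)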
def get_special_tokens_mask(self, token_ids_0: List[int], token_ids_1: Optional[List[int]]=None, already_has_special_tokens: Optional[bool]=False) -> List[int]: if already_has_special_tokens: return super().get_special_tokens_mask(token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True) if token_ids_1 is None: return [0] * len(token_ids_0) + [1] return [0] * len(token_ids_0) + [1] + [0] * len(token_ids_1) + [1]
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. already_has_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not the token list is already formatted with special tokens for the model. Returns: `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
github-repos
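A small worked example for the mask method above; the tokenizer instance and the token id values are illustrative, and the expected outputs follow directly from the code.

tokenizer.get_special_tokens_mask([31, 51, 99])
# -> [0, 0, 0, 1]   (sequence tokens are 0, the trailing special token is 1)

tokenizer.get_special_tokens_mask([31, 51, 99], [15, 5])
# -> [0, 0, 0, 1, 0, 0, 1]   (each sequence is followed by one special token)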
def seek_end(fileobj, offset): if offset < 0: raise ValueError if get_size(fileobj) < offset: fileobj.seek(0, 0) else: fileobj.seek(-offset, 2)
Like fileobj.seek(-offset, 2), but will not try to go beyond the start Needed since file objects from BytesIO will not raise IOError and file objects from open() will raise IOError if going to a negative offset. To make things easier for custom implementations, instead of allowing both behaviors, we just don't do it. Args: fileobj (fileobj) offset (int): how many bytes away from the end backwards to seek to Raises: IOError
juraj-google-style
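A short worked example for seek_end using an in-memory file; it assumes the module's get_size helper is available alongside it.

import io

fileobj = io.BytesIO(b'abcdef')
seek_end(fileobj, 4)
fileobj.read()          # -> b'cdef'

seek_end(fileobj, 100)  # offset beyond the file size: seeks to the start instead
fileobj.read()          # -> b'abcdef'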
def get_generating_ops(ts): ts = make_list_of_t(ts, allow_graph=False) return [t.op for t in ts]
Return all the generating ops of the tensors in `ts`. Args: ts: a list of `tf.Tensor` Returns: A list of all the generating `tf.Operation` of the tensors in `ts`. Raises: TypeError: if `ts` cannot be converted to a list of `tf.Tensor`.
github-repos
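A hedged graph-mode sketch for the helper above; the graph-editor utilities operate on graph tensors, so eager execution is disabled for the example.

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

a = tf.constant(1.0, name='a')
b = tf.constant(2.0, name='b')
c = tf.add(a, b, name='c')

get_generating_ops([c])     # -> [the operation named 'c' that produced the tensor]
get_generating_ops([a, c])  # -> [op 'a', op 'c']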
def dict_head(d, N=5): return {k: d[k] for k in list(d.keys())[:N]}
Return the head of a dictionary.

The selection follows the dictionary's iteration order (insertion order on Python 3.7+). Default is to return the first 5 key/value pairs in a dictionary.

Args:
    d: Dictionary to get head.
    N: Number of elements to display.

Returns:
    dict: the first N items of the dictionary.
juraj-google-style
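A worked example for dict_head; the dictionary contents are illustrative.

scores = {'alice': 10, 'bob': 8, 'carol': 7, 'dave': 5, 'erin': 4, 'frank': 2}
dict_head(scores, N=2)  # -> {'alice': 10, 'bob': 8} on Python 3.7+ (insertion order)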
def forward(self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor]=None, layer_head_mask: Optional[torch.Tensor]=None, position_bias: Optional[torch.Tensor]=None, output_attentions: bool=False): residual = hidden_states hidden_states, attn_weights, _ = self.attention(hidden_states=hidden_states, attention_mask=attention_mask, layer_head_mask=layer_head_mask, position_bias=position_bias, output_attentions=output_attentions) hidden_states = self.dropout(hidden_states) hidden_states = residual + hidden_states hidden_states = self.layer_norm(hidden_states) hidden_states = hidden_states + self.feed_forward(hidden_states) hidden_states = self.final_layer_norm(hidden_states) outputs = (hidden_states,) if output_attentions: outputs += (attn_weights,) return outputs
Args: hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, hidden_size)` attention_mask (`torch.FloatTensor`): attention mask of size `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. layer_head_mask (`torch.FloatTensor`): mask for attention heads in a given layer of size `(config.encoder_attention_heads,)`. position_bias (`torch.FloatTensor`): relative position embeddings of size `(seq_len, seq_len, hidden_size // encoder_attention_heads)` output_attentions (`bool`, *optional*): Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned tensors for more detail.
github-repos
import numpy

def from_voxels(voxels):
    dimensions = len(voxels[0])
    size = []
    for d in range(dimensions):
        # Each axis must be one larger than the largest coordinate along it.
        size.append(max(v[d] for v in voxels) + 1)
    result = numpy.zeros(size)
    for v in voxels:
        result[v] = 1
    return result
Converts a voxel list to an ndarray.

Arguments:
    voxels (tuple[]): A list of coordinates of populated voxels in the output ndarray.

Returns:
    numpy.ndarray: A dense array with ones at the given voxel coordinates and zeros elsewhere.
juraj-google-style
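A worked example of the corrected voxel conversion above; the coordinates are illustrative.

voxels = [(0, 0, 0), (1, 2, 1), (2, 1, 0)]
arr = from_voxels(voxels)
arr.shape     # -> (3, 3, 2): one more than the largest coordinate along each axis
arr[1, 2, 1]  # -> 1.0
arr[0, 1, 0]  # -> 0.0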
def read(self, length=0, timeout_ms=None): return self._transport.read(length, timeouts.PolledTimeout.from_millis(timeout_ms))
Reads data from the remote end of this stream. Internally, this data will have been contained in AdbMessages, but users of streams shouldn't need to care about the transport mechanism. Args: length: If provided, the number of bytes to read, otherwise all available data will be returned (at least one byte). timeout_ms: Time to wait for a message to come in for this stream, in milliseconds (or as a PolledTimeout object). Returns: Data that was read, or None if the end of the stream was reached. Raises: AdbProtocolError: Received an unexpected wonky non-stream packet (like a CNXN ADB message). AdbStreamClosedError: The stream is already closed. AdbTimeoutError: Timed out waiting for a message.
codesearchnet
def std(self): import math chunk_iter = chunks(self.times, self.bestof) times = list(map(min, chunk_iter)) mean = (sum(times) / len(times)) std = math.sqrt((sum((((t - mean) ** 2) for t in times)) / len(times))) return std
The standard deviation of the best results of each trial. Returns: float: standard deviation of measured seconds Note: As mentioned in the timeit source code, the standard deviation is not often useful. Typically the minimum value is most informative. Example: >>> import math >>> self = Timerit(num=10, verbose=1) >>> self.call(math.factorial, 50) >>> assert self.std() >= 0
codesearchnet
def copy_to(self, new_key, bucket=None): if bucket is None: bucket = self._bucket try: new_info = self._api.objects_copy(self._bucket, self._key, bucket, new_key) except Exception as e: raise e return Item(bucket, new_key, new_info, context=self._context)
Copies this item to the specified new key. Args: new_key: the new key to copy this item to. bucket: the bucket of the new item; if None (the default) use the same bucket. Returns: An Item corresponding to new key. Raises: Exception if there was an error copying the item.
juraj-google-style
def loop_until_timeout_or_not_none(timeout_s, function, sleep_s=1): return loop_until_timeout_or_valid(timeout_s, function, (lambda x: (x is not None)), sleep_s)
Loops until the specified function returns non-None or until a timeout. Args: timeout_s: The number of seconds to wait until a timeout condition is reached. As a convenience, this accepts None to mean never timeout. Can also be passed a PolledTimeout object instead of an integer. function: The function to call each iteration. sleep_s: The number of seconds to wait after calling the function. Returns: Whatever the function returned last.
codesearchnet
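A hedged usage sketch; find_attached_device is a hypothetical callable that returns None until a device shows up.

device = loop_until_timeout_or_not_none(30, find_attached_device, sleep_s=0.5)
if device is None:
    raise RuntimeError('No device appeared within 30 seconds.')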
def _get_mean_and_median(hist: Hist) -> Tuple[(float, float)]: x = ctypes.c_double(0) q = ctypes.c_double(0.5) hist.ComputeIntegral() hist.GetQuantiles(1, x, q) mean = hist.GetMean() return (mean, x.value)
Retrieve the mean and median from a ROOT histogram. Note: These values are not so trivial to calculate without ROOT, as they are the bin values weighted by the bin content. Args: hist: Histogram from which the values will be extract. Returns: mean, median of the histogram.
codesearchnet
def set_value(self, value, timeout):
    self.value = value
    # time.clock() was removed in Python 3.8; use a monotonic clock instead.
    self.expiration = time.perf_counter() * 1000 + timeout
Changes the cached value and updates its expiration time.

Args:
    value: the new cached value.
    timeout: time to live for the object in milliseconds

Returns:
    None
juraj-google-style
def _generate_ascii(self, matrix, foreground, background): return '\n'.join([''.join([(foreground if cell else background) for cell in row]) for row in matrix])
Generates an identicon "image" in the ASCII format. The image will just output the matrix used to generate the identicon. Arguments: matrix - Matrix describing which blocks in the identicon should be painted with foreground (background if inverted) colour. foreground - Character which should be used for representing foreground. background - Character which should be used for representing background. Returns: ASCII representation of an identicon image, where one block is one character.
codesearchnet
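A small worked example of the ASCII renderer above, using a 3x3 matrix; generator stands in for an instance of the class defining the method.

matrix = [
    [True, False, True],
    [False, True, False],
    [True, False, True],
]
print(generator._generate_ascii(matrix, foreground='#', background='-'))
# #-#
# -#-
# #-#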
def __init__(self, **kwargs):
    try:
        self.UIStatusMsg = ''
        self.mac = kwargs.get('EUI')
        self.handle = None
        self.AutoDUTEnable = False
        self._is_net = False
        self.logStatus = {'stop': 'stop', 'running': 'running', 'pauseReq': 'pauseReq', 'paused': 'paused'}
        self.logThreadStatus = self.logStatus['stop']
        self.connectType = (kwargs.get('Param5')).strip().lower() if kwargs.get('Param5') is not None else 'usb'
        if self.connectType == 'ip':
            self.dutIpv4 = kwargs.get('TelnetIP')
            self.dutPort = kwargs.get('TelnetPort')
            self.port = self.dutIpv4 + ':' + self.dutPort
        else:
            self.port = kwargs.get('SerialPort')
        self.intialize()
    except Exception as e:
        ModuleHelper.WriteIntoDebugLogger('initialize() Error: ' + str(e))
Initialize the serial port and default network parameters.

Args:
    **kwargs: Arbitrary keyword arguments. Includes 'EUI' and 'SerialPort'.
juraj-google-style
def gen_encoder_output_proposals(self, enc_output, padding_mask, spatial_shapes): batch_size = enc_output.shape[0] proposals = [] _cur = 0 for level, (height, width) in enumerate(spatial_shapes): mask_flatten_ = padding_mask[:, _cur:_cur + height * width].view(batch_size, height, width, 1) valid_height = torch.sum(~mask_flatten_[:, :, 0, 0], 1) valid_width = torch.sum(~mask_flatten_[:, 0, :, 0], 1) grid_y, grid_x = meshgrid(torch.linspace(0, height - 1, height, dtype=enc_output.dtype, device=enc_output.device), torch.linspace(0, width - 1, width, dtype=enc_output.dtype, device=enc_output.device), indexing='ij') grid = torch.cat([grid_x.unsqueeze(-1), grid_y.unsqueeze(-1)], -1) scale = torch.cat([valid_width.unsqueeze(-1), valid_height.unsqueeze(-1)], 1).view(batch_size, 1, 1, 2) grid = (grid.unsqueeze(0).expand(batch_size, -1, -1, -1) + 0.5) / scale width_height = torch.ones_like(grid) * 0.05 * 2.0 ** level proposal = torch.cat((grid, width_height), -1).view(batch_size, -1, 4) proposals.append(proposal) _cur += height * width output_proposals = torch.cat(proposals, 1) output_proposals_valid = ((output_proposals > 0.01) & (output_proposals < 0.99)).all(-1, keepdim=True) output_proposals = torch.log(output_proposals / (1 - output_proposals)) output_proposals = output_proposals.masked_fill(padding_mask.unsqueeze(-1), float('inf')) output_proposals = output_proposals.masked_fill(~output_proposals_valid, float('inf')) object_query = enc_output object_query = object_query.masked_fill(padding_mask.unsqueeze(-1), float(0)) object_query = object_query.masked_fill(~output_proposals_valid, float(0)) object_query = self.enc_output_norm(self.enc_output(object_query)) return (object_query, output_proposals)
Generate the encoder output proposals from encoded enc_output. Args: enc_output (Tensor[batch_size, sequence_length, hidden_size]): Output of the encoder. padding_mask (Tensor[batch_size, sequence_length]): Padding mask for `enc_output`. spatial_shapes (List[Tuple[int, int]]): Spatial shapes of the feature maps. Returns: `tuple(torch.FloatTensor)`: A tuple of feature map and bbox prediction. - object_query (Tensor[batch_size, sequence_length, hidden_size]): Object query features. Later used to directly predict a bounding box. (without the need of a decoder) - output_proposals (Tensor[batch_size, sequence_length, 4]): Normalized proposals, after an inverse sigmoid.
github-repos