Dataset columns: code (string, 20–4.93k characters), docstring (string, 33–1.27k characters), source (3 classes).
def cluster_spec(self): if self._override_client: client = self._override_client else: from kubernetes import config as k8sconfig from kubernetes import client as k8sclient k8sconfig.load_kube_config() client = k8sclient.CoreV1Api() cluster_map = {} for tf_job in self._job_to_label_mapping: all_pods = [] for selector in self._job_to_label_mapping[tf_job]: ret = client.list_pod_for_all_namespaces(label_selector=selector) selected_pods = [] for pod in sorted(ret.items, key=lambda x: x.metadata.name): if pod.status.phase == 'Running': selected_pods.append('%s:%s' % (pod.status.host_ip, self._tf_server_port)) else: raise RuntimeError('Pod "%s" is not running; phase: "%s"' % (pod.metadata.name, pod.status.phase)) all_pods.extend(selected_pods) cluster_map[tf_job] = all_pods return server_lib.ClusterSpec(cluster_map)
Returns a ClusterSpec object based on the latest info from Kubernetes. We retrieve the information from the Kubernetes master every time this method is called. Returns: A ClusterSpec containing host information returned from Kubernetes. Raises: RuntimeError: If any of the pods returned by the master is not in the `Running` phase.
github-repos
def check_configuration(ctx, base_key, needed_keys): if (base_key not in ctx.keys()): exit("[{}ERROR{}] missing configuration for '{}'".format(ERROR_COLOR, RESET_COLOR, base_key)) if (ctx.releaser is None): exit("[{}ERROR{}] empty configuration for '{}' found".format(ERROR_COLOR, RESET_COLOR, base_key)) for my_key in needed_keys: if (my_key not in ctx[base_key].keys()): exit("[{}ERROR{}] missing configuration key '{}.{}'".format(ERROR_COLOR, RESET_COLOR, base_key, my_key))
Confirm a valid configuration. Args: ctx (invoke.context): the invoke context holding the configuration. base_key (str): the base configuration key everything is under. needed_keys (list): sub-keys of the base key that are checked to make sure they exist.
codesearchnet
def execute_before(self, sensor_graph, scope_stack): parent = scope_stack[-1] new_scope = Scope("Configuration Scope", sensor_graph, parent.allocator, parent) new_scope.add_identifier('current_slot', self.slot) scope_stack.append(new_scope)
Execute statement before children are executed. Args: sensor_graph (SensorGraph): The sensor graph that we are building or modifying scope_stack (list(Scope)): A stack of nested scopes that may influence how this statement allocates clocks or other stream resources.
juraj-google-style
def format_counts(counts, header=None): counts_dict = {} for (key, val) in counts.items(): key = format_counts_memory(key, header) counts_dict[key] = val return counts_dict
Format a single experiment result coming from backend to present to the Qiskit user. Args: counts (dict): counts histogram of multiple shots header (dict): the experiment header dictionary containing useful information for postprocessing. Returns: dict: a formatted counts dictionary
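A minimal usage sketch for the `format_counts` helper above. The import path and the exact key expansion are assumptions based on older Qiskit releases, where `format_counts_memory` turns hex keys into bit strings padded to the header's `memory_slots`.

```python
# Hypothetical usage; import path and header keys are assumed from older Qiskit versions.
from qiskit.result.postprocess import format_counts

raw_counts = {'0x0': 512, '0x3': 488}                    # keys as sent by the backend
header = {'memory_slots': 2, 'creg_sizes': [['c', 2]]}   # illustrative header

print(format_counts(raw_counts, header))
# expected under these assumptions: {'00': 512, '11': 488}
```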
codesearchnet
def to_frame(self, *args): if (sys.version_info < (3, 6, 0)): from collections import OrderedDict impls = OrderedDict() for (name, obj) in self.items(): impls[name] = obj._impl else: impls = get_impls(self) return _to_frame_inner(impls, args)
Convert the cells in the view into a DataFrame object. If ``args`` is not given, this method returns a DataFrame that has an Index or a MultiIndex depending on the number of cells parameters and columns each of which corresponds to each cells included in the view. ``args`` can be given to calculate cells values and limit the DataFrame indexes to the given arguments. The cells in this view may have different numbers of parameters, but parameters shared among multiple cells must appear in the same position in all the parameter lists. For example, having ``foo()``, ``bar(x)`` and ``baz(x, y=1)`` is okay because the shared parameter ``x`` is always the first parameter, but this method does not work if the view has ``quz(x, z=2, y=1)`` cells in addition to the first three cells, because ``y`` appears in different positions. Args: args(optional): multiple arguments, or an iterator of arguments to the cells.
codesearchnet
def __init__(self, header, values): assert isinstance(header, Header), \ 'header must be a Ladybug Header object. Got {}'.format(type(header)) assert isinstance(header.analysis_period, AnalysisPeriod), \ 'header of {} must have an analysis_period.'.format(self.__class__.__name__) assert header.analysis_period.st_hour == 0, \ 'analysis_period start hour of {} must be 0. Got {}'.format( self.__class__.__name__, header.analysis_period.st_hour) assert header.analysis_period.end_hour == 23, \ 'analysis_period end hour of {} must be 23. Got {}'.format( self.__class__.__name__, header.analysis_period.end_hour) self._header = header self.values = values self._datetimes = None self._validated_a_period = True
Initialize hourly discontinuous collection. Args: header: A Ladybug Header object. Note that this header must have an AnalysisPeriod on it that aligns with the list of values. values: A list of values. Note that the length of this list must align with the AnalysisPeriod on the header.
juraj-google-style
def prepare_soap_body(self, method, parameters, namespace): tags = [] for name, value in parameters: tag = "<{name}>{value}</{name}>".format( name=name, value=escape("%s" % value, {'"': "&quot;"})) tags.append(tag) wrapped_params = "".join(tags) if namespace is not None: soap_body = ( '<{method} xmlns="{namespace}">' '{params}' '</{method}>'.format( method=method, params=wrapped_params, namespace=namespace )) else: soap_body = ( '<{method}>' '{params}' '</{method}>'.format( method=method, params=wrapped_params )) return soap_body
Prepare the SOAP message body for sending. Args: method (str): The name of the method to call. parameters (list): A list of (name, value) tuples containing the parameters to pass to the method. namespace (str): The XML namespace to use for the method. Returns: str: A properly formatted SOAP Body.
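A small illustration of the body this method produces; the instance (`device`), method name, parameters and namespace below are hypothetical values chosen for the example, not ones mandated by the library.

```python
# Hypothetical call; 'device' is assumed to be an instance of the class owning this method.
body = device.prepare_soap_body(
    method='GetVolume',
    parameters=[('InstanceID', 0), ('Channel', 'Master')],
    namespace='urn:schemas-upnp-org:service:RenderingControl:1',
)
# body == ('<GetVolume xmlns="urn:schemas-upnp-org:service:RenderingControl:1">'
#          '<InstanceID>0</InstanceID><Channel>Master</Channel></GetVolume>')
```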
juraj-google-style
def __init__(self, group_type, name, **kwargs): self._utils = TcExUtils() self._name = name self._type = group_type self._group_data = {'name': name, 'type': group_type} for arg, value in kwargs.items(): self.add_key_value(arg, value) if kwargs.get('xid') is None: self._group_data['xid'] = str(uuid.uuid4()) self._attributes = [] self._labels = [] self._file_content = None self._tags = [] self._processed = False
Initialize Class Properties. Args: group_type (str): The ThreatConnect-defined Group type. name (str): The name for this Group. xid (str, kwargs): The external id for this Group.
juraj-google-style
def _skip_parameter_matching(self) -> bool: if self.signature.type_params: return False if self.ctx.options.analyze_annotated: return False return self.signature.has_return_annotation or self.full_name == '__init__'
Check whether we should skip parameter matching. This is used to skip parameter matching for function calls in the context of inference (pyi generation). It optimizes the case where the function has explicit type annotations, meaning that we do not need to infer the type and therefore do not need to match parameters. Returns: True if we should skip parameter matching.
github-repos
def load_config(logdir): config_path = logdir and os.path.join(logdir, 'config.yaml') if not config_path or not tf.gfile.Exists(config_path): message = ( 'Cannot resume an existing run since the logging directory does not ' 'contain a configuration file.') raise IOError(message) with tf.gfile.FastGFile(config_path, 'r') as file_: config = yaml.load(file_, Loader=yaml.Loader) message = 'Resume run and write summaries and checkpoints to {}.' tf.logging.info(message.format(config.logdir)) return config
Load a configuration from the log directory. Args: logdir: The logging directory containing the configuration file. Raises: IOError: The logging directory does not contain a configuration file. Returns: Configuration object.
juraj-google-style
def obtain_capture_by_value_ops(dataset): def capture_by_value(op): return op.outputs[0].dtype in TENSOR_TYPES_ALLOWLIST or op.type in OP_TYPES_ALLOWLIST return _traverse(dataset, capture_by_value)
Given an input dataset, finds all allowlisted ops used for construction. Allowlisted ops are stateful ops which are known to be safe to capture by value. Args: dataset: Dataset to find allowlisted stateful ops for. Returns: A list of variant_tensor producing dataset ops used to construct this dataset.
github-repos
def get_custom_objects(): return GLOBAL_CUSTOM_OBJECTS
Retrieves a live reference to the global dictionary of custom objects. Custom objects set using `custom_object_scope()` are not added to the global dictionary of custom objects, and will not appear in the returned dictionary. Example: ```python get_custom_objects().clear() get_custom_objects()['MyObject'] = MyObject ``` Returns: Global dictionary mapping registered class names to classes.
github-repos
def _assign_method(self, resource_class, method_type): method_name = resource_class.get_method_name( resource_class, method_type) valid_status_codes = getattr( resource_class.Meta, 'valid_status_codes', DEFAULT_VALID_STATUS_CODES ) def get(self, method_type=method_type, method_name=method_name, valid_status_codes=valid_status_codes, resource=resource_class, data=None, uid=None, **kwargs): return self.call_api( method_type, method_name, valid_status_codes, resource, data, uid=uid, **kwargs) def put(self, method_type=method_type, method_name=method_name, valid_status_codes=valid_status_codes, resource=resource_class, data=None, uid=None, **kwargs): return self.call_api( method_type, method_name, valid_status_codes, resource, data, uid=uid, **kwargs) def post(self, method_type=method_type, method_name=method_name, valid_status_codes=valid_status_codes, resource=resource_class, data=None, uid=None, **kwargs): return self.call_api( method_type, method_name, valid_status_codes, resource, data, uid=uid, **kwargs) def patch(self, method_type=method_type, method_name=method_name, valid_status_codes=valid_status_codes, resource=resource_class, data=None, uid=None, **kwargs): return self.call_api( method_type, method_name, valid_status_codes, resource, data, uid=uid, **kwargs) def delete(self, method_type=method_type, method_name=method_name, valid_status_codes=valid_status_codes, resource=resource_class, data=None, uid=None, **kwargs): return self.call_api( method_type, method_name, valid_status_codes, resource, data, uid=uid, **kwargs) method_map = { 'GET': get, 'PUT': put, 'POST': post, 'PATCH': patch, 'DELETE': delete } setattr( self, method_name, types.MethodType(method_map[method_type], self) )
Using reflection, assigns a new method to this class. Args: resource_class: A resource class method_type: The HTTP method type
juraj-google-style
def sendline(self, text): logger.debug("Sending input '{0}' to '{1}'".format(text, self.name)) try: return self._spawn.sendline(text) except pexpect.exceptions.EOF as e: logger.debug('Raising termination exception.') raise TerminationException(instance=self, real_exception=e, output=self.get_output()) except pexpect.exceptions.TIMEOUT as e: logger.debug('Raising timeout exception.') raise TimeoutException(instance=self, real_exception=e, output=self.get_output()) except Exception as e: logger.debug(('Sending input failed: ' + str(e))) raise NestedException(instance=self, real_exception=e, output=self.get_output())
Sends an input line to the running program, including os.linesep. Args: text (str): The input text to be sent. Raises: TerminationException: The program terminated before / while / after sending the input. NestedException: An internal problem occurred while waiting for the output.
codesearchnet
def ModulePath(module_name): module = importlib.import_module(module_name) path = inspect.getfile(module) if compatibility.PY2: path = path.decode('utf-8') if os.path.basename(path).startswith('__init__.'): path = os.path.dirname(path) if path.endswith('.pyc'): path = (path[:(- 4)] + '.py') return path
Computes a path to the specified module. Args: module_name: A name of the module to get the path for. Returns: A path to the specified module. Raises: ImportError: If specified module cannot be imported.
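A quick sketch of what the helper returns; the exact paths depend on the local Python installation, so the printed values are only indicative.

```python
# Illustrative only; actual paths vary by interpreter and platform.
print(ModulePath('os'))    # e.g. '/usr/lib/python3.11/os.py'
print(ModulePath('json'))  # a package: '__init__.py' is stripped, so e.g. '/usr/lib/python3.11/json'
```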
codesearchnet
def next(self): options = {} if self._buffer_size: options['read_buffer_size'] = self._buffer_size if self._account_id: options['_account_id'] = self._account_id while True: filename = self._next_file() if (filename is None): raise StopIteration() if (self._path_filter and (not self._path_filter.accept(self._slice_ctx, filename))): continue try: start_time = time.time() handle = cloudstorage.open(filename, **options) self._slice_ctx.incr(self.COUNTER_IO_READ_MSEC, (int((time.time() - start_time)) * 1000)) self._slice_ctx.incr(self.COUNTER_FILE_READ) return handle except cloudstorage.NotFoundError: logging.warning('File %s may have been removed. Skipping file.', filename) self._slice_ctx.incr(self.COUNTER_FILE_MISSING)
Returns a handle to the next file. Non-existent files will be logged and skipped. The file might have been removed after input splitting. Returns: The next input from this input reader in the form of a cloudstorage ReadBuffer that supports a File-like interface (read, readline, seek, tell, and close). An error may be raised if the file can not be opened. Raises: StopIteration: The list of files has been exhausted.
codesearchnet
def float_value_convert(dictin, dropfailedvalues=False): return key_value_convert(dictin, valuefn=float, dropfailedvalues=dropfailedvalues)
Convert values of dictionary to floats Args: dictin (DictUpperBound): Input dictionary dropfailedvalues (bool): Whether to drop dictionary entries where value conversion fails. Defaults to False. Returns: Dict: Dictionary with values converted to floats
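A minimal sketch of the conversion, assuming `key_value_convert` applies `float` to every value and that `dropfailedvalues=True` silently drops entries whose value cannot be converted.

```python
# Assumed behaviour based on the wrapper above.
float_value_convert({'a': '1.5', 'b': '2'})
# -> {'a': 1.5, 'b': 2.0}

float_value_convert({'a': '1.5', 'b': 'oops'}, dropfailedvalues=True)
# -> {'a': 1.5}   # 'b' dropped because float('oops') raises ValueError
```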
juraj-google-style
def git_ls_remote(self, uri, ref): logger.debug("Invoking git to retrieve commit id for repo %s...", uri) lsremote_output = subprocess.check_output(['git', 'ls-remote', uri, ref]) if b"\t" in lsremote_output: commit_id = lsremote_output.split(b"\t")[0] logger.debug("Matching commit id found: %s", commit_id) return commit_id else: raise ValueError("Ref \"%s\" not found for repo %s." % (ref, uri))
Determine the latest commit id for a given ref. Args: uri (string): git URI ref (string): git ref Returns: str: A commit id
juraj-google-style
def tell(self): self._check_open() return self.position
Tell the file's current offset. Returns: current offset in reading this file. Raises: ``ValueError``: When this stream is closed.
github-repos
def read_stream(self, start_offset=0, byte_count=None): try: return self._api.object_download(self._bucket, self._key, start_offset=start_offset, byte_count=byte_count) except Exception as e: raise e
Reads the content of this object as text. Args: start_offset: the start offset of bytes to read. byte_count: the number of bytes to read. If None, it reads to the end. Returns: The text content within the object. Raises: Exception if there was an error requesting the object's content.
juraj-google-style
def v_cross(u, v): '\n i = u[1]*v[2] - u[2]*v[1]\n j = u[2]*v[0] - u[0]*v[2]\n k = u[0]*v[1] - u[1]*v[0]\n ' i = '(({u1})*({v2}) - ({u2})*({v1}))'.format(u1=u[1], u2=u[2], v1=v[1], v2=v[2]) j = '(({u2})*({v0}) - ({u0})*({v2}))'.format(u0=u[0], u2=u[2], v0=v[0], v2=v[2]) k = '(({u0})*({v1}) - ({u1})*({v0}))'.format(u0=u[0], u1=u[1], v0=v[0], v1=v[1]) return [i, j, k]
muparser cross product function Compute the cross product of two 3x1 vectors Args: u (list or tuple of 3 strings): first vector v (list or tuple of 3 strings): second vector Returns: A list containing a muparser string of the cross product
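Because the function only builds strings, it can be exercised without muparser at all; the vector names below are placeholders.

```python
v_cross(['ux', 'uy', 'uz'], ['vx', 'vy', 'vz'])
# -> ['((uy)*(vz) - (uz)*(vy))',
#     '((uz)*(vx) - (ux)*(vz))',
#     '((ux)*(vy) - (uy)*(vx))']
```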
codesearchnet
def get_tensor_size(self, tensor_name, partial_layout=None, mesh_dimension_to_size=None): return (self.get_tensor_dtype(tensor_name).size * self.get_tensor_num_entries(tensor_name, partial_layout, mesh_dimension_to_size))
The size of a tensor in bytes. If partial_layout is specified, then mesh_dimension_to_size must also be. In this case, the size on a single device is returned. Args: tensor_name: a string, name of a tensor in the graph. partial_layout: an optional {string: string}, from MTF dimension name to mesh dimension name. mesh_dimension_to_size: an optional {string: int}, from mesh dimension name to size. Returns: an integer
juraj-google-style
def calculate_cidr(start_address, end_address): tmp_addrs = [] try: tmp_addrs.extend(summarize_address_range( ip_address(start_address), ip_address(end_address))) except (KeyError, ValueError, TypeError): try: tmp_addrs.extend(summarize_address_range( ip_network(start_address).network_address, ip_network(end_address).network_address)) except AttributeError: tmp_addrs.extend(summarize_address_range( ip_network(start_address).ip, ip_network(end_address).ip)) return [i.__str__() for i in collapse_addresses(tmp_addrs)]
The function to calculate a CIDR range(s) from a start and end IP address. Args: start_address (:obj:`str`): The starting IP address. end_address (:obj:`str`): The ending IP address. Returns: list of str: The calculated CIDR ranges.
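Two representative calls, derived from the behaviour of the standard-library `ipaddress` helpers the function wraps for plain IPv4 string inputs.

```python
calculate_cidr('192.168.0.0', '192.168.0.255')
# -> ['192.168.0.0/24']

calculate_cidr('10.0.0.0', '10.0.1.127')
# -> ['10.0.0.0/24', '10.0.1.0/25']
```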
juraj-google-style
def _deserialize_audience(audience_map): for audience in audience_map.values(): (condition_structure, condition_list) = condition_helper.loads(audience.conditions) audience.__dict__.update({'conditionStructure': condition_structure, 'conditionList': condition_list}) return audience_map
Helper method to de-serialize and populate audience map with the condition list and structure. Args: audience_map: Dict mapping audience ID to audience object. Returns: Dict additionally consisting of condition list and structure on every audience object.
codesearchnet
def get_default_backend_config(appdirs): return {'store': 'sqlalchemy', 'day_start': datetime.time(5, 30, 0), 'fact_min_delta': 1, 'tmpfile_path': os.path.join(appdirs.user_data_dir, '{}.tmp'.format(appdirs.appname)), 'db_engine': 'sqlite', 'db_path': os.path.join(appdirs.user_data_dir, '{}.sqlite'.format(appdirs.appname))}
Return a default config dictionary. Args: appdirs (HamsterAppDirs): ``HamsterAppDirs`` instance encapsulating the apps details. Returns: dict: Dictionary with a default configuration. Note: Those defaults are independent of the particular config-store.
codesearchnet
def __init__(self, curriculum_obj, batch_size, max_len, ops, token_by_char): self._vocab_dict = collections.defaultdict(lambda: 0) self._vocab_dict[self.UNK] = 0 self._inv_vocab_dict = collections.defaultdict(lambda: self.UNK) self.curriculum_obj = curriculum_obj self._max_seq_length = max_len self._ops = ops self._token_by_char = token_by_char self._batch_size = batch_size num_token_digits = 1 if token_by_char else curriculum_obj.max_length token_list = get_tokens(10 ** num_token_digits) self.vocab_size = 1 for token in self.DEFAULT_START_TOKENS + token_list: if token not in self._vocab_dict: self._vocab_dict[token] = self.vocab_size self._inv_vocab_dict[self.vocab_size] = token self.vocab_size += 1
Creates a TokenDataSource instance. Args: curriculum_obj: (LTECurriculum) determines sample complexity. batch_size: (int) Batch size to generate. max_len: (int) This is the maximum size of any given sample sequence. ops: (list(CodeOp)). Task operations that inherit from CodeOp(). token_by_char: (bool) Whether to tokenize by char ("detokenized") or by keyword, literals and numbers.
juraj-google-style
def get_note(self, noteid, version=None): params_version = "" if version is not None: params_version = '/v/' + str(version) params = '/i/%s%s' % (str(noteid), params_version) request = Request(DATA_URL+params) request.add_header(self.header, self.get_token()) try: response = urllib2.urlopen(request) except HTTPError as e: if e.code == 401: raise SimplenoteLoginFailed('Login to Simplenote API failed! Check Token.') else: return e, -1 except IOError as e: return e, -1 note = json.loads(response.read().decode('utf-8')) note = self.__add_simplenote_api_fields(note, noteid, int(response.info().get("X-Simperium-Version"))) if "tags" in note: note["tags"] = sorted(note["tags"]) return note, 0
Method to get a specific note Arguments: - noteid (string): ID of the note to get - version (int): optional version of the note to get Returns: A tuple `(note, status)` - note (dict): note object - status (int): 0 on success and -1 otherwise
juraj-google-style
def update(self): (data_format, data) = RuuviTagSensor.get_data(self._mac, self._bt_device) if (data == self._data): return self._state self._data = data if (self._data is None): self._state = {} else: self._state = get_decoder(data_format).decode_data(self._data) return self._state
Get latest data from the sensor and update own state. Returns: dict: Latest state
codesearchnet
def run(self, dag): cx_runs = dag.collect_runs(["cx"]) for cx_run in cx_runs: partition = [] chunk = [] for i in range(len(cx_run) - 1): chunk.append(cx_run[i]) qargs0 = cx_run[i].qargs qargs1 = cx_run[i + 1].qargs if qargs0 != qargs1: partition.append(chunk) chunk = [] chunk.append(cx_run[-1]) partition.append(chunk) for chunk in partition: if len(chunk) % 2 == 0: for n in chunk: dag.remove_op_node(n) else: for n in chunk[1:]: dag.remove_op_node(n) return dag
Run one pass of cx cancellation on the circuit Args: dag (DAGCircuit): the directed acyclic graph to run on. Returns: DAGCircuit: Transformed DAG.
juraj-google-style
def all(x, axis=None, keepdims=False): x = math_ops.cast(x, dtypes_module.bool) return math_ops.reduce_all(x, axis, keepdims)
Bitwise reduction (logical AND). Args: x: Tensor or variable. axis: axis along which to perform the reduction. keepdims: whether to retain the reduced axes with length 1 (True) or drop them (False). Returns: A uint8 tensor (0s and 1s).
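A short sketch of the backend helper in use; it is assumed to be exposed as `tf.keras.backend.all`, and in practice the reduction yields boolean values.

```python
import tensorflow as tf
from tensorflow.keras import backend as K  # assumed public alias for this helper

x = tf.constant([[1, 0], [1, 1]])
print(K.all(x, axis=1))                  # values [False, True]
print(K.all(x, axis=1, keepdims=True))   # same values, shape (2, 1)
```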
github-repos
def _get_or_create_arg_by_name(state, name, is_kwarg=False): for arg in state.args + state.kwargs: if arg.name == name: return arg arg = Namespace() arg.name = name arg.type.lines = [] arg.description.lines = [] if is_kwarg: state.kwargs.append(arg) else: state.args.append(arg) return arg
Gets or creates a new Arg. These Arg objects (Namespaces) are turned into the ArgInfo namedtuples returned by parse. Each Arg object is used to collect the name, type, and description of a single argument to the docstring's function. Args: state: The state of the parser. name: The name of the arg to create. is_kwarg: A boolean representing whether the argument is a keyword arg. Returns: The new Arg.
github-repos
def _contiguous_groups( length: int, comparator: Callable[[int, int], bool] ) -> List[Tuple[int, int]]: result = [] start = 0 while start < length: past = start + 1 while past < length and comparator(start, past): past += 1 result.append((start, past)) start = past return result
Splits range(length) into approximate equivalence classes. Args: length: The length of the range to split. comparator: Determines if two indices have approximately equal items. Returns: A list of (inclusive_start, exclusive_end) range endpoints. Each corresponds to a run of approximately-equivalent items.
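A worked example; note the comparator is always called against the index that starts the current run, not the previous index.

```python
items = [1, 1, 2, 2, 2, 3]
_contiguous_groups(len(items), lambda i, j: items[i] == items[j])
# -> [(0, 2), (2, 5), (5, 6)]
```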
juraj-google-style
def delete_public_ip(access_token, subscription_id, resource_group, public_ip_name): endpoint = ''.join([get_rm_endpoint(), '/subscriptions/', subscription_id, '/resourceGroups/', resource_group, '/providers/Microsoft.Network/publicIPAddresses/', public_ip_name, '?api-version=', NETWORK_API]) return do_delete(endpoint, access_token)
Delete a public ip addresses associated with a resource group. Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. resource_group (str): Azure resource group name. public_ip_name (str): Name of the public ip address resource. Returns: HTTP response.
juraj-google-style
def _CheckParserCanProcessFileEntry(self, parser, file_entry): for filter_object in parser.FILTERS: if filter_object.Match(file_entry): return True return False
Determines if a parser can process a file entry. Args: file_entry (dfvfs.FileEntry): file entry. parser (BaseParser): parser. Returns: bool: True if the file entry can be processed by the parser object.
codesearchnet
def _pretty_print(key_val, sep=': ', min_col_width=39, text_width=None): if text_width is None: text_width = get_terminal_size().columns if text_width < min_col_width: min_col_width = text_width ncols = (text_width + 1) // (min_col_width + 1) colw = (text_width + 1) // ncols - 1 ncols = min(ncols, len(key_val)) wrapper = TextWrapper(width=colw) lines = [] for key, val in key_val: if len(key) + len(sep) >= colw: wrapper.subsequent_indent = ' ' else: wrapper.subsequent_indent = ' ' * (len(key) + len(sep)) lines.extend(wrapper.wrap('{}{}{}'.format(key, sep, val))) chunks = [] for rem_col in range(ncols, 1, -1): isep = ceil(len(lines) / rem_col) while isep < len(lines) and lines[isep][0] == ' ': isep += 1 chunks.append(lines[:isep]) lines = lines[isep:] chunks.append(lines) lines = zip_longest(*chunks, fillvalue='') fmt = '|'.join(['{{:{}}}'.format(colw)] * (ncols - 1)) fmt += '|{}' if ncols > 1 else '{}' print(*(fmt.format(*line) for line in lines), sep='\n')
Print an iterable of key/values Args: key_val (list of (str, str)): the pairs of section names and text. sep (str): separator between section names and text. min_col_width (int): minimal acceptable column width text_width (int): text width to use. If set to None, will try to infer the size of the terminal.
juraj-google-style
def _ExpectedKeysForEntry(self, entry): return [entry.name]
Generate a list of expected cache keys for this type of map. Args: entry: A GroupMapEntry Returns: A list of strings
github-repos
def forward(self, seq_length=None, position=None): if position is None and seq_length is None: raise ValueError('Either position or seq_length must be provided') if position is None: position = torch.arange(seq_length, dtype=torch.float32, device=self.inv_timescales.device).unsqueeze(0) elif position.ndim != 2: raise ValueError(f'position must be 2-dimensional, got shape {position.shape}') scaled_time = position.view(*position.shape, 1) * self.inv_timescales.view(1, 1, -1) signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], dim=2) signal = F.pad(signal, (0, 0, 0, self.embedding_dims % 2)) return signal
Generates a Tensor of sinusoids with different frequencies. Args: seq_length: an optional Python int defining the output sequence length; may be omitted if the `position` argument is specified. position: [B, seq_length], optional position for each token in the sequence, only required when the sequence is packed. Returns: [B, seqlen, D] if `position` is specified, else [1, seqlen, D]
github-repos
def _HasTable(self, table_name): query = self._HAS_TABLE_QUERY.format(table_name) self._cursor.execute(query) return bool(self._cursor.fetchone())
Determines if a specific table exists. Args: table_name (str): name of the table. Returns: bool: True if the table exists, false otherwise.
juraj-google-style
def handle_simple_responses( self, timeout_ms=None, info_cb=DEFAULT_MESSAGE_CALLBACK): return self._accept_responses('OKAY', info_cb, timeout_ms=timeout_ms)
Accepts normal responses from the device. Args: timeout_ms: Timeout in milliseconds to wait for each response. info_cb: Optional callback for text sent from the bootloader. Returns: OKAY packet's message.
juraj-google-style
def get_spd_dos(self): spd_dos = {} for atom_dos in self.pdos.values(): for (orb, pdos) in atom_dos.items(): orbital_type = _get_orb_type(orb) if (orbital_type not in spd_dos): spd_dos[orbital_type] = pdos else: spd_dos[orbital_type] = add_densities(spd_dos[orbital_type], pdos) return {orb: Dos(self.efermi, self.energies, densities) for (orb, densities) in spd_dos.items()}
Get orbital projected Dos. Returns: dict of {orbital: Dos}, e.g. {"s": Dos object, ...}
codesearchnet
def bbox_transpose(bbox, axis, rows, cols): x_min, y_min, x_max, y_max = bbox if axis != 0 and axis != 1: raise ValueError('Axis must be either 0 or 1.') if axis == 0: bbox = [y_min, x_min, y_max, x_max] if axis == 1: bbox = [1 - y_max, 1 - x_max, 1 - y_min, 1 - x_min] return bbox
Transposes a bounding box along given axis. Args: bbox (tuple): A tuple (x_min, y_min, x_max, y_max). axis (int): 0 - main axis, 1 - secondary axis. rows (int): Image rows. cols (int): Image cols.
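A small sketch with normalised coordinates (the `rows`/`cols` arguments are accepted but unused by the body above).

```python
bbox_transpose((0.1, 0.2, 0.5, 0.6), axis=0, rows=100, cols=100)
# -> [0.2, 0.1, 0.6, 0.5]   # x/y swapped (main-diagonal transpose)

bbox_transpose((0.1, 0.2, 0.5, 0.6), axis=1, rows=100, cols=100)
# -> [0.4, 0.5, 0.8, 0.9]   # reflected about the anti-diagonal
```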
juraj-google-style
def cudnn_compatible_gru(units, n_hidden, n_layers=1, trainable_initial_states=False, seq_lengths=None, input_initial_h=None, name='cudnn_gru', reuse=False): with tf.variable_scope(name, reuse=reuse): if trainable_initial_states: init_h = tf.get_variable('init_h', [n_layers, 1, n_hidden]) init_h = tf.tile(init_h, (1, tf.shape(units)[0], 1)) else: init_h = tf.zeros([n_layers, tf.shape(units)[0], n_hidden]) initial_h = (input_initial_h or init_h) with tf.variable_scope('cudnn_gru', reuse=reuse): def single_cell(): return tf.contrib.cudnn_rnn.CudnnCompatibleGRUCell(n_hidden) cell = tf.nn.rnn_cell.MultiRNNCell([single_cell() for _ in range(n_layers)]) units = tf.transpose(units, (1, 0, 2)) (h, h_last) = tf.nn.dynamic_rnn(cell=cell, inputs=units, time_major=True, initial_state=tuple(tf.unstack(initial_h, axis=0))) h = tf.transpose(h, (1, 0, 2)) h_last = h_last[(- 1)] if (seq_lengths is not None): indices = tf.stack([tf.range(tf.shape(h)[0]), (seq_lengths - 1)], axis=1) h_last = tf.gather_nd(h, indices) return (h, h_last)
CuDNN Compatible GRU implementation. It should be used to load models saved with CudnnGRUCell to run on CPU. Args: units: tf.Tensor with dimensions [B x T x F], where B - batch size T - number of tokens F - features n_hidden: dimensionality of hidden state trainable_initial_states: whether to create a special trainable variable to initialize the hidden states of the network or use just zeros seq_lengths: tensor of sequence lengths with dimension [B] n_layers: number of layers input_initial_h: initial hidden state, tensor name: name of the variable scope to use reuse: whether to reuse already initialized variables Returns: h - all hidden states along T dimension, tf.Tensor with dimensionality [B x T x F] h_last - last hidden state, tf.Tensor with dimensionality [B x H]
codesearchnet
def _get(self, feed_item): return self._api().get(profileId=self.profile_id, id=str(feed_item[self._id_field])).execute()
Fetches an item from CM. Args: feed_item: Feed item from Bulkdozer feed representing the item to fetch from CM.
github-repos
def _build_mask_ds(mask, mask_offset): mask_ds = tf.data.Dataset.from_tensor_slices(mask) mask_ds = mask_ds.repeat() mask_ds = mask_ds.skip(mask_offset) return mask_ds
Build the mask dataset to indicate which element to skip. Args: mask: `tf.Tensor`, binary mask to apply to all following elements. This mask should have a length 100. mask_offset: `tf.Tensor`, Integer specifying from how much the mask should be shifted for the first element. Returns: mask_ds: `tf.data.Dataset`, a dataset returning False for examples to skip and True for examples to keep.
juraj-google-style
def decode_message(self, message_type, encoded_message): encoded_message = six.ensure_str(encoded_message) if (not encoded_message.strip()): return message_type() dictionary = json.loads(encoded_message) message = self.__decode_dictionary(message_type, dictionary) message.check_initialized() return message
Merge JSON structure to Message instance. Args: message_type: Message to decode data to. encoded_message: JSON encoded version of message. Returns: Decoded instance of message_type. Raises: ValueError: If encoded_message is not valid JSON. messages.ValidationError if merged message is not initialized.
codesearchnet
def get_version(): if (not INSTALLED): try: with open('version.txt', 'r') as v_fh: return v_fh.read() except Exception: warnings.warn('Unable to resolve package version until installed', UserWarning) return '0.0.0' return p_version.get_version(HERE)
find current version information Returns: (str): version information
codesearchnet
def _assert_obj_type(pub, name='pub', obj_type=DBPublication): if (not isinstance(pub, obj_type)): raise InvalidType(('`%s` have to be instance of %s, not %s!' % (name, obj_type.__name__, pub.__class__.__name__)))
Make sure, that `pub` is instance of the `obj_type`. Args: pub (obj): Instance which will be checked. name (str): Name of the instance. Used in exception. Default `pub`. obj_type (class): Class of which the `pub` should be instance. Default :class:`.DBPublication`. Raises: InvalidType: When the `pub` is not instance of `obj_type`.
codesearchnet
def GetUserinfo(credentials, http=None): http = http or httplib2.Http() url = _GetUserinfoUrl(credentials) response, content = http.request(url) if response.status == http_client.BAD_REQUEST: credentials.refresh(http) url = _GetUserinfoUrl(credentials) response, content = http.request(url) return json.loads(content or '{}')
Get the userinfo associated with the given credentials. This is dependent on the token having either the userinfo.email or userinfo.profile scope for the given token. Args: credentials: (oauth2client.client.Credentials) incoming credentials http: (httplib2.Http, optional) http instance to use Returns: The email address for this token, or None if the required scopes aren't available.
juraj-google-style
def AddRoute(self, short_name, long_name, route_type, route_id=None): if (route_id is None): route_id = util.FindUniqueId(self.routes) route = self._gtfs_factory.Route(short_name=short_name, long_name=long_name, route_type=route_type, route_id=route_id) route.agency_id = self.GetDefaultAgency().agency_id self.AddRouteObject(route) return route
Add a route to this schedule. Args: short_name: Short name of the route, such as "71L" long_name: Full name of the route, such as "NW 21st Ave/St Helens Rd" route_type: A type such as "Tram", "Subway" or "Bus" route_id: id of the route or None, in which case a unique id is picked Returns: A new Route object
codesearchnet
def read_nanopubs(fn: str) -> Iterable[Mapping[str, Any]]: jsonl_flag, json_flag, yaml_flag = False, False, False if fn == "-" or "jsonl" in fn: jsonl_flag = True elif "json" in fn: json_flag = True elif re.search("ya?ml", fn): yaml_flag = True else: log.error("Do not recognize nanopub file format - neither json nor jsonl format.") return {} try: if re.search("gz$", fn): f = gzip.open(fn, "rt") else: try: f = click.open_file(fn, mode="rt") except Exception as e: log.info(f"Can not open file {fn} Error: {e}") quit() if jsonl_flag: for line in f: yield json.loads(line) elif json_flag: nanopubs = json.load(f) for nanopub in nanopubs: yield nanopub elif yaml_flag: nanopubs = yaml.load(f, Loader=yaml.SafeLoader) for nanopub in nanopubs: yield nanopub except Exception as e: log.error(f"Could not open file: {fn}")
Read file and generate nanopubs If filename has *.gz, will read as a gzip file If filename has *.jsonl*, will be parsed as a JSONLines file If filename has *.json*, will be parsed as a JSON file If filename has *.yaml* or *.yml*, will be parsed as a YAML file Args: fn (str): filename to read nanopubs from Returns: Generator[Mapping[str, Any]]: generator of nanopubs in nanopub_bel JSON Schema format
juraj-google-style
def training_loop_hparams_from_scoped_overrides(scoped_overrides, trial_id): trial_hp_overrides = scoped_overrides.values() loop_hp = create_loop_hparams() model_hp_name = trial_hp_overrides.get('loop.generative_model_params', loop_hp.generative_model_params) model_hp = registry.hparams(model_hp_name).parse(FLAGS.hparams) base_algo_params_name = trial_hp_overrides.get('loop.base_algo_params', loop_hp.base_algo_params) algo_hp = registry.hparams(base_algo_params_name) combined_hp = merge_unscoped_hparams(zip(HP_SCOPES, [loop_hp, model_hp, algo_hp])) combined_hp.override_from_dict(trial_hp_overrides) (loop_hp, model_hp, algo_hp) = split_scoped_hparams(HP_SCOPES, combined_hp) model_hp_name = ('model_hp_%s' % str(trial_id)) dynamic_register_hparams(model_hp_name, model_hp) loop_hp.generative_model_params = model_hp_name algo_hp_name = ('algo_hp_%s' % str(trial_id)) dynamic_register_hparams(algo_hp_name, algo_hp) loop_hp.base_algo_params = algo_hp_name return loop_hp
Create HParams suitable for training loop from scoped HParams. Args: scoped_overrides: HParams, with keys all scoped by one of HP_SCOPES. These parameters are overrides for the base HParams created by create_loop_hparams. trial_id: str, trial identifier. This is used to register unique HParams names for the underlying model and ppo HParams. Returns: HParams suitable for passing to training_loop.
codesearchnet
def __deepcopy__(self, memo): with distribute_lib.enter_or_assert_strategy(self._distribute_strategy): new_values = [] for value in self._values: with ops.device(value.device): new_values.append(copy.deepcopy(value, memo)) copied_variable = type(self)(strategy=self._distribute_strategy, values=new_values, aggregation=self._aggregation, var_policy=copy.deepcopy(self._policy, memo)) memo[id(self)] = copied_variable return copied_variable
Perform a deepcopy of the `DistributedVariable`. Unlike the deepcopy of a regular tf.Variable, this keeps the original strategy and devices of the `DistributedVariable`. To avoid confusion with the behavior of deepcopy on a regular `Variable` (which does copy into new devices), we only allow a deepcopy of a `DistributedVariable` within its originating strategy scope. Args: memo: The memoization object for `deepcopy`. Returns: A deep copy of the current `DistributedVariable`. Raises: RuntimeError: If trying to deepcopy into a different strategy.
github-repos
def create_room(self, alias=None, is_public=False, invitees=None): response = self.api.create_room(alias=alias, is_public=is_public, invitees=invitees) return self._mkroom(response["room_id"])
Create a new room on the homeserver. Args: alias (str): The canonical_alias of the room. is_public (bool): The public/private visibility of the room. invitees (str[]): A set of user ids to invite into the room. Returns: Room Raises: MatrixRequestError
juraj-google-style
def load_feature_lists(self, feature_lists): column_names = [] feature_ranges = [] running_feature_count = 0 for list_id in feature_lists: feature_list_names = load_lines(self.features_dir + 'X_train_{}.names'.format(list_id)) column_names.extend(feature_list_names) start_index = running_feature_count end_index = running_feature_count + len(feature_list_names) - 1 running_feature_count += len(feature_list_names) feature_ranges.append([list_id, start_index, end_index]) X_train = np.hstack([ load(self.features_dir + 'X_train_{}.pickle'.format(list_id)) for list_id in feature_lists ]) X_test = np.hstack([ load(self.features_dir + 'X_test_{}.pickle'.format(list_id)) for list_id in feature_lists ]) df_train = pd.DataFrame(X_train, columns=column_names) df_test = pd.DataFrame(X_test, columns=column_names) return df_train, df_test, feature_ranges
Load pickled features for train and test sets, assuming they are saved in the `features` folder along with their column names. Args: feature_lists: A list containing the names of the feature lists to load. Returns: A tuple containing 3 items: train dataframe, test dataframe, and a list describing the index ranges for the feature lists.
juraj-google-style
def foreach_loop(self, context): logger.debug("starting") foreach = context.get_formatted_iterable(self.foreach_items) foreach_length = len(foreach) logger.info(f"foreach decorator will loop {foreach_length} times.") for i in foreach: logger.info(f"foreach: running step {i}") context['i'] = i self.run_conditional_decorators(context) logger.debug(f"foreach: done step {i}") logger.debug(f"foreach decorator looped {foreach_length} times.") logger.debug("done")
Run step once for each item in foreach_items. On each iteration, the invoked step can use context['i'] to get the current iterator value. Args: context: (pypyr.context.Context) The pypyr context. This arg will mutate.
juraj-google-style
def tryload(self, cfgstr=None, on_error='raise'): cfgstr = self._rectify_cfgstr(cfgstr) if self.enabled: try: if self.verbose > 1: self.log('[cacher] tryload fname={}'.format(self.fname)) return self.load(cfgstr) except IOError: if self.verbose > 0: self.log('[cacher] ... {} cache miss'.format(self.fname)) except Exception: if self.verbose > 0: self.log('[cacher] ... failed to load') if on_error == 'raise': raise elif on_error == 'clear': self.clear(cfgstr) return None else: raise KeyError('Unknown method on_error={}'.format(on_error)) else: if self.verbose > 1: self.log('[cacher] ... cache disabled: fname={}'.format(self.fname)) return None
Like load, but returns None if the load fails due to a cache miss. Args: on_error (str): How to handle non-IO errors. Either 'raise', which re-raises the exception, or 'clear', which deletes the cache and returns None.
juraj-google-style
def GetPasswdMap(self, since=None): return PasswdUpdateGetter(self.conf).GetUpdates(source=self, search_base=self.conf['base'], search_filter=self.conf['filter'], search_scope=self.conf['scope'], since=since)
Return the passwd map from this source. Args: since: Get data only changed since this timestamp (inclusive) or None for all data. Returns: instance of maps.PasswdMap
github-repos
def get_kpoint_degeneracy(self, kpoint, cartesian=False, tol=1e-2): all_kpts = self.get_sym_eq_kpoints(kpoint, cartesian, tol=tol) if all_kpts is not None: return len(all_kpts)
Returns degeneracy of a given k-point based on structure symmetry Args: kpoint (1x3 array): coordinate of the k-point cartesian (bool): kpoint is in cartesian or fractional coordinates tol (float): tolerance below which coordinates are considered equal Returns: (int or None): degeneracy or None if structure is not available
juraj-google-style
def weights(self): return self._dedup_weights(self._undeduplicated_weights)
Returns the list of all layer variables/weights. Note: This will not track the weights of nested `tf.Modules` that are not themselves Keras layers. Returns: A list of variables.
github-repos
def get_features_for_wav(self, wav_filename, model_settings, sess): desired_samples = model_settings['desired_samples'] input_dict = {self.wav_filename_placeholder_: wav_filename, self.time_shift_padding_placeholder_: [[0, 0], [0, 0]], self.time_shift_offset_placeholder_: [0, 0], self.background_data_placeholder_: np.zeros([desired_samples, 1]), self.background_volume_placeholder_: 0, self.foreground_volume_placeholder_: 1} data_tensor = sess.run([self.output_], feed_dict=input_dict) return data_tensor
Applies the feature transformation process to the input_wav. Runs the feature generation process (generally producing a spectrogram from the input samples) on the WAV file. This can be useful for testing and verifying implementations being run on other platforms. Args: wav_filename: The path to the input audio file. model_settings: Information about the current model being trained. sess: TensorFlow session that was active when processor was created. Returns: Numpy data array containing the generated features.
github-repos
def price(self, market: pmd.ProcessedMarketData, name: Optional[str]=None) -> types.FloatTensor: name = name or self._name + '_price' with tf.name_scope(name): discount_curve = cashflow_streams.get_discount_curve(self._discount_curve_type, market, self._mask) reference_curve = cashflow_streams.get_discount_curve(self._reference_curve_type, market, self._reference_mask) daycount_fractions = tf.expand_dims(self._daycount_fractions, axis=-1) fwd_rate = reference_curve.forward_rate(self._accrual_start_date.expand_dims(axis=-1), self._accrual_end_date.expand_dims(axis=-1), day_count_fraction=daycount_fractions) discount_at_settlement = discount_curve.discount_factor(self._accrual_start_date.expand_dims(axis=-1)) discount_at_settlement = tf.where(daycount_fractions > 0.0, discount_at_settlement, tf.zeros_like(discount_at_settlement)) discount_at_settlement = tf.squeeze(discount_at_settlement, axis=-1) fwd_rate = tf.squeeze(fwd_rate, axis=-1) return self._short_position * discount_at_settlement * self._notional_amount * (fwd_rate - self._fixed_rate) * self._daycount_fractions / (1.0 + self._daycount_fractions * fwd_rate)
Returns the present value of the stream on the valuation date. Args: market: An instance of `ProcessedMarketData`. name: Python str. The name to give to the ops created by this function. Default value: `None` which maps to 'price'. Returns: A `Tensor` of shape `batch_shape` containing the modeled price of each FRA contract based on the input market data.
github-repos
def _calc_dir_size(path): dir_size = 0 for (root, dirs, files) in os.walk(path): for fn in files: full_fn = os.path.join(root, fn) dir_size += os.path.getsize(full_fn) return dir_size
Calculate size of all files in `path`. Args: path (str): Path to the directory. Returns: int: Size of the directory in bytes.
codesearchnet
def delete_s3_bucket(client, resource): if dbconfig.get('enable_delete_s3_buckets', NS_AUDITOR_REQUIRED_TAGS, False): client.delete_bucket(Bucket=resource.id) return (ActionStatus.SUCCEED, resource.metrics())
Delete an S3 bucket This function will try to delete an S3 bucket Args: client (:obj:`boto3.session.Session.client`): A boto3 client object resource (:obj:`Resource`): The resource object to terminate Returns: `ActionStatus`
codesearchnet
def _load_stdlib_versions(self): lines = self._store.load_stdlib_versions() versions = {} for line in lines: line2 = line.split('#')[0].strip() if not line2: continue match = re.fullmatch('(.+): (\\d)\\.(\\d+)(?:-(?:(\\d)\\.(\\d+))?)?', line2) assert match module, min_major, min_minor, max_major, max_minor = match.groups() minimum = (int(min_major), int(min_minor)) maximum = (int(max_major), int(max_minor)) if max_major is not None and max_minor is not None else None versions[module] = (minimum, maximum) return versions
Loads the contents of typeshed/stdlib/VERSIONS. VERSIONS lists the stdlib modules with the Python version in which they were first added, in the format `{module}: {min_major}.{min_minor}-` or `{module}: {min_major}.{min_minor}-{max_major}.{max_minor}`. Returns: A mapping from module name to version range in the format {name: ((min_major, min_minor), (max_major, max_minor))} The max tuple can be `None`.
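A standalone sketch of how one VERSIONS line maps onto the version-range tuples described above; the module names and ranges are illustrative.

```python
import re

pattern = r'(.+): (\d)\.(\d+)(?:-(?:(\d)\.(\d+))?)?'   # same pattern as the method above

re.fullmatch(pattern, 'asyncio: 3.4-').groups()
# -> ('asyncio', '3', '4', None, None)      => {'asyncio': ((3, 4), None)}

re.fullmatch(pattern, 'distutils: 2.0-3.11').groups()
# -> ('distutils', '2', '0', '3', '11')     => {'distutils': ((2, 0), (3, 11))}
```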
github-repos
def fetch_task_to_run(self): if all((task.is_completed for task in self)): raise StopIteration('All tasks completed.') for task in self: if task.can_run: return task logger.warning('Possible deadlock in fetch_task_to_run!') return None
Returns the first task that is ready to run or None if no task can be submitted at present. Raises: `StopIteration` if all tasks are done.
codesearchnet
def user_is_sponsor(self, user): sponsors = self.get_true_sponsors() for sponsor in sponsors: sp_user = sponsor.user if (sp_user == user): return True return False
Return whether the given user is a sponsor of the activity. Returns: Boolean
codesearchnet
def _readable_flags(transport): if ('flags' not in transport): return None _flag_list = [] flags = transport['flags'] if (flags & dpkt.tcp.TH_SYN): if (flags & dpkt.tcp.TH_ACK): _flag_list.append('syn_ack') else: _flag_list.append('syn') elif (flags & dpkt.tcp.TH_FIN): if (flags & dpkt.tcp.TH_ACK): _flag_list.append('fin_ack') else: _flag_list.append('fin') elif (flags & dpkt.tcp.TH_RST): _flag_list.append('rst') elif (flags & dpkt.tcp.TH_PUSH): _flag_list.append('psh') return _flag_list
Method that turns bit flags into a human readable list Args: transport (dict): transport info, specifically needs a 'flags' key with bit_flags Returns: list: a list of human readable flags (e.g. ['syn_ack', 'fin', 'rst', ...])
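A minimal sketch using dpkt's TCP flag constants, which is where the bit values checked above come from.

```python
import dpkt

_readable_flags({'flags': dpkt.tcp.TH_SYN | dpkt.tcp.TH_ACK})  # -> ['syn_ack']
_readable_flags({'flags': dpkt.tcp.TH_RST})                     # -> ['rst']
_readable_flags({})                                             # -> None (no 'flags' key)
```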
codesearchnet
def write(self, data, echo=None): if echo or (echo is None and self.echo): sys.stdout.write(data.decode('latin1')) sys.stdout.flush() self.channel.write(data)
Write data to channel. Args: data(bytes): The data to write to the channel. echo(bool): Whether to echo the written data to stdout. Raises: EOFError: If the channel was closed before all data was sent.
juraj-google-style
def _refresh(self, http): if not self.store: self._do_refresh_request(http) else: self.store.acquire_lock() try: new_cred = self.store.locked_get() if (new_cred and not new_cred.invalid and new_cred.access_token != self.access_token and not new_cred.access_token_expired): logger.info('Updated access_token read from Storage') self._updateFromCredential(new_cred) else: self._do_refresh_request(http) finally: self.store.release_lock()
Refreshes the access_token. This method first checks by reading the Storage object if available. If a refresh is still needed, it holds the Storage lock until the refresh is completed. Args: http: an object to be used to make HTTP requests. Raises: HttpAccessTokenRefreshError: When the refresh fails.
juraj-google-style
def _request(self, method, resource_uri, **kwargs): data = kwargs.get('data') response = method(self.API_BASE_URL + resource_uri, json=data, headers=self.headers) response.raise_for_status() return response.json()
Perform a method on a resource. Args: method: requests.`method` resource_uri: resource endpoint Raises: HTTPError Returns: JSON Response
juraj-google-style
def yaml_to_ordered_dict(stream, loader=yaml.SafeLoader): class OrderedUniqueLoader(loader): '\n Subclasses the given pyYAML `loader` class.\n\n Validates all sibling keys to insure no duplicates.\n\n Returns an OrderedDict instead of a Dict.\n ' NO_DUPE_SIBLINGS = ['stacks', 'class_path'] NO_DUPE_CHILDREN = ['stacks'] def _error_mapping_on_dupe(self, node, node_name): 'check mapping node for dupe children keys.' if isinstance(node, MappingNode): mapping = {} for n in node.value: a = n[0] b = mapping.get(a.value, None) if b: msg = '{} mapping cannot have duplicate keys {} {}' raise ConstructorError(msg.format(node_name, b.start_mark, a.start_mark)) mapping[a.value] = a def _validate_mapping(self, node, deep=False): if (not isinstance(node, MappingNode)): raise ConstructorError(None, None, ('expected a mapping node, but found %s' % node.id), node.start_mark) mapping = OrderedDict() for (key_node, value_node) in node.value: key = self.construct_object(key_node, deep=deep) try: hash(key) except TypeError as exc: raise ConstructorError('while constructing a mapping', node.start_mark, ('found unhashable key (%s)' % exc), key_node.start_mark) if ((key in mapping) and (key in self.NO_DUPE_SIBLINGS)): msg = '{} key cannot have duplicate siblings {} {}' raise ConstructorError(msg.format(key, node.start_mark, key_node.start_mark)) if (key in self.NO_DUPE_CHILDREN): self._error_mapping_on_dupe(value_node, key_node.value) value = self.construct_object(value_node, deep=deep) mapping[key] = value return mapping def construct_mapping(self, node, deep=False): 'Override parent method to use OrderedDict.' if isinstance(node, MappingNode): self.flatten_mapping(node) return self._validate_mapping(node, deep=deep) def construct_yaml_map(self, node): data = OrderedDict() (yield data) value = self.construct_mapping(node) data.update(value) OrderedUniqueLoader.add_constructor(u'tag:yaml.org,2002:map', OrderedUniqueLoader.construct_yaml_map) return yaml.load(stream, OrderedUniqueLoader)
Provides yaml.load alternative with preserved dictionary order. Args: stream (string): YAML string to load. loader (:class:`yaml.loader`): PyYAML loader class. Defaults to safe load. Returns: OrderedDict: Parsed YAML.
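A minimal sketch; the YAML keys below are made up, and the point is only that key order is preserved and that duplicate siblings of guarded keys such as `stacks` raise a `ConstructorError`.

```python
config = yaml_to_ordered_dict(
    "namespace: example\n"
    "stacks:\n"
    "  vpc:\n"
    "    class_path: blueprints.VPC\n"
)
list(config.keys())   # -> ['namespace', 'stacks'], as an OrderedDict in file order
```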
codesearchnet
def handle_document(self, item_session: ItemSession, filename: str) -> Actions: self._waiter.reset() action = self.handle_response(item_session) if (action == Actions.NORMAL): self._statistics.increment(item_session.response.body.size()) item_session.set_status(Status.done, filename=filename) return action
Process a successful document response. Returns: A value from :class:`.hook.Actions`.
codesearchnet
def run_build_model(self, num_runs=5, silent=False, force_rerun=False): self.mutation_ddG_avg_outfile = 'Average_{}.fxout'.format(op.splitext(self.repaired_pdb_outfile)[0]) self.mutation_ddG_raw_outfile = 'Raw_{}.fxout'.format(op.splitext(self.repaired_pdb_outfile)[0]) foldx_build_model = 'foldx --command=BuildModel --pdb={} --mutant-file={} --numberOfRuns={}'.format(self.repaired_pdb_outfile, op.basename(self.mutation_infile), num_runs) ssbio.utils.command_runner(shell_command=foldx_build_model, force_rerun_flag=force_rerun, silent=silent, outfile_checker=self.mutation_ddG_avg_outfile, cwd=self.foldx_dir)
Run FoldX BuildModel command with a mutant file input. Original command:: foldx --command=BuildModel --pdb=4bxi_Repair.pdb --mutant-file=individual_list.txt --numberOfRuns=5 Args: num_runs (int): Number of BuildModel runs to execute (the --numberOfRuns argument). silent (bool): If FoldX output should be silenced from printing to the shell. force_rerun (bool): If FoldX BuildModel should be rerun even if the results file exists.
juraj-google-style
def _compute_numeric_jacobian(x, x_shape, x_data, y, y_shape, delta, extra_feed_dict): if x.dtype == dtypes.bfloat16: x = math_ops.cast(x, dtypes.float32) if y.dtype == dtypes.bfloat16: y = math_ops.cast(y, dtypes.float32) if x_data.dtype == dtypes.bfloat16.as_numpy_dtype: x_data = x_data.astype(np.float32) x_size = _product(x_shape) * (2 if x.dtype.is_complex else 1) y_size = _product(y_shape) * (2 if y.dtype.is_complex else 1) x_dtype = x.dtype.real_dtype.as_numpy_dtype y_dtype = y.dtype.real_dtype.as_numpy_dtype x_data = numpy_compat.np_asarray(x_data, dtype=x.dtype.as_numpy_dtype) scale = numpy_compat.np_asarray(2 * delta, dtype=y_dtype)[()] jacobian = np.zeros((x_size, y_size), dtype=x_dtype) for row in range(x_size): x_pos = x_data.copy() x_neg = x_data.copy() x_pos.ravel().view(x_dtype)[row] += delta y_pos = y.eval(feed_dict=_extra_feeds(extra_feed_dict, {x: x_pos})) x_neg.ravel().view(x_dtype)[row] -= delta y_neg = y.eval(feed_dict=_extra_feeds(extra_feed_dict, {x: x_neg})) diff = (y_pos - y_neg) / scale jacobian[row, :] = diff.ravel().view(y_dtype) logging.vlog(1, 'Numeric Jacobian =\n%s', jacobian) return jacobian
Computes the numeric Jacobian for dy/dx. Computes the numeric Jacobian by slightly perturbing the inputs and measuring the differences on the output. Args: x: the tensor "x". x_shape: the dimensions of x as a tuple or an array of ints. x_data: a numpy array as the input data for x y: the tensor "y". y_shape: the dimensions of y as a tuple or an array of ints. delta: the amount of perturbation we give to the input extra_feed_dict: dict that allows fixing specified tensor values during the jacobian calculation. Returns: A 2-d numpy array representing the Jacobian for dy/dx. It has "x_size" rows and "y_size" columns where "x_size" is the number of elements in x and "y_size" is the number of elements in y.
github-repos
def destroy_cloudwatch_event(app='', env='dev', region=''): session = boto3.Session(profile_name=env, region_name=region) cloudwatch_client = session.client('events') event_rules = get_cloudwatch_event_rule(app_name=app, account=env, region=region) for rule in event_rules: cloudwatch_client.remove_targets(Rule=rule, Ids=[app]) return True
Destroy Cloudwatch event subscription. Args: app (str): Spinnaker Application name. env (str): Deployment environment. region (str): AWS region. Returns: bool: True upon successful completion.
codesearchnet
def __init__(self, path, script, optimized=True): self.path = path self.script = script if optimized: library_path = "%s:%s" % ( os.path.join(path, 'build/optimized'), os.path.join(path, 'build/optimized/lib')) self.environment = { 'LD_LIBRARY_PATH': library_path, 'DYLD_LIBRARY_PATH': library_path} else: library_path = "%s:%s" % (os.path.join(path, 'build'), os.path.join(path, 'build/lib')) self.environment = { 'LD_LIBRARY_PATH': os.path.join(path, 'build'), 'DYLD_LIBRARY_PATH': os.path.join(path, 'build')} self.configure_and_build(path, optimized=optimized) if optimized: build_status_path = os.path.join(path, 'build/optimized/build-status.py') else: build_status_path = os.path.join(path, 'build/build-status.py') try: spec = importlib.util.spec_from_file_location('build_status', build_status_path) build_status = importlib.util.module_from_spec(spec) spec.loader.exec_module(build_status) except (AttributeError): import imp build_status = imp.load_source('build_status', build_status_path) matches = [{'name': program, 'path': os.path.abspath(os.path.join(path, program))} for program in build_status.ns3_runnable_programs if self.script in program] if not matches: raise ValueError("Cannot find %s script" % self.script) match_percentages = map(lambda x: {'name': x['name'], 'path': x['path'], 'percentage': len(self.script)/len(x['name'])}, matches) self.script_executable = max(match_percentages, key=lambda x: x['percentage'])['path'] if optimized and "scratch" in self.script_executable: self.script_executable = os.path.abspath( os.path.join(path, "build/optimized/scratch", self.script))
Initialization function. Args: path (str): absolute path to the ns-3 installation this Runner should lock on. script (str): ns-3 script that will be used by this Runner. optimized (bool): whether this Runner should build ns-3 with the optimized profile.
juraj-google-style
def to_string(cls, error_code): if error_code == cls.COMPARE_ERROR: return 'Error comparing flash content to programming data.' elif error_code == cls.PROGRAM_ERASE_ERROR: return 'Error during program/erase phase.' elif error_code == cls.VERIFICATION_ERROR: return 'Error verifying programmed data.' return super(JLinkFlashErrors, cls).to_string(error_code)
Returns the string message for the given ``error_code``. Args: cls (JLinkFlashErrors): the ``JLinkFlashErrors`` class error_code (int): error code to convert Returns: An error string corresponding to the error code. Raises: ValueError: if the error code is invalid.
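A short usage sketch. The import path below assumes the usual pylink package layout and is an assumption rather than something confirmed by this snippet; the error constant is one referenced in the method above.

from pylink.errors import JLinkFlashErrors  # assumed import path

print(JLinkFlashErrors.to_string(JLinkFlashErrors.VERIFICATION_ERROR))
# Expected: 'Error verifying programmed data.'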
juraj-google-style
def stdout(self): if (not self.id): raise WorkflowError('Workflow is not running. Cannot get stdout.') if self.batch_values: raise NotImplementedError('Query Each Workflow Id within the Batch Workflow for stdout.') wf = self.workflow.get(self.id) stdout_list = [] for task in wf['tasks']: stdout_list.append({'id': task['id'], 'taskType': task['taskType'], 'name': task['name'], 'stdout': self.workflow.get_stdout(self.id, task['id'])}) return stdout_list
Get stdout from all the tasks of a workflow. Returns: (list): tasks with their stdout Example: >>> workflow.stdout [ { "id": "4488895771403082552", "taskType": "AOP_Strip_Processor", "name": "Task1", "stdout": "............" } ]
codesearchnet
def to_dict(self, filter=True): result = {} for (k, v) in self: r = _to_dict(v, filter) if r: result[k] = r return result
Returns a dictionary with the values of the model. Note that the values of the leafs are evaluated to python types. Args: filter (bool): If set to ``True``, show only values that have been set. Returns: dict: A dictionary with the values of the model. Example: >>> pretty_print(config.to_dict(filter=True)) >>> { >>> "interfaces": { >>> "interface": { >>> "et1": { >>> "config": { >>> "description": "My description", >>> "mtu": 1500 >>> }, >>> "name": "et1" >>> }, >>> "et2": { >>> "config": { >>> "description": "Another description", >>> "mtu": 9000 >>> }, >>> "name": "et2" >>> } >>> } >>> } >>> }
codesearchnet
def set_input(self, p_name, value): name = self.python_names.get(p_name) if p_name is None or name not in self.get_input_names(): raise ValueError('Invalid input "{}"'.format(p_name)) self.step_inputs[name] = value
Set a Step's input variable to a certain value.

The value comes either from a workflow input or output of a previous step.

Args:
    p_name (str): the name of the Step input.
    value (str): the name of the output variable that provides the value for this input.

Raises:
    ValueError: The name provided is not a valid input name for this Step.
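A minimal wiring sketch, assuming a scriptcwl-style Step constructed from a CWL tool description; the file name, input name, and source variable are hypothetical.

step = Step('tools/echo.cwl')             # hypothetical CWL tool definition
step.set_input('message', 'wf_message')   # feed a workflow input into the step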
juraj-google-style
def set_mac_address(self, mac_address=None, default=False, disable=False):
    base_command = 'ip virtual-router mac-address'
    if not default and not disable:
        if mac_address is not None:
            if not re.match('(?:[a-f0-9]{2}:){5}[a-f0-9]{2}', mac_address):
                raise ValueError('mac_address must be formatted like: '
                                 'aa:bb:cc:dd:ee:ff')
        else:
            raise ValueError('mac_address must be a properly formatted '
                             'address string')
    if default or (disable and not mac_address):
        current_mac = self._parse_mac_address()
        if current_mac['mac_address']:
            base_command = base_command + ' ' + current_mac['mac_address']
    commands = self.command_builder(base_command, value=mac_address,
                                    default=default, disable=disable)
    return self.configure(commands)
Sets the virtual-router mac address This method will set the switch virtual-router mac address. If a virtual-router mac address already exists it will be overwritten. Args: mac_address (string): The mac address that will be assigned as the virtual-router mac address. This should be in the format, aa:bb:cc:dd:ee:ff. default (bool): Sets the virtual-router mac address to the system default (which is to remove the configuration line). disable (bool): Negates the virtual-router mac address using the system no configuration command Returns: True if the set operation succeeds otherwise False.
codesearchnet
def count_up_to(self, limit): raise NotImplementedError
Increments this variable until it reaches `limit`. When that Op is run it tries to increment the variable by `1`. If incrementing the variable would bring it above `limit` then the Op raises the exception `OutOfRangeError`. If no error is raised, the Op outputs the value of the variable before the increment. This is essentially a shortcut for `count_up_to(self, limit)`. Args: limit: value at which incrementing the variable raises an error. Returns: A `Tensor` that will hold the variable value before the increment. If no other Op modifies this variable, the values produced will all be distinct.
github-repos
def db(self, entity, query_filters="size=10"): if self.entity_api_key == "": return {'status': 'failure', 'response': 'No API key found in request'} historic_url = self.base_url + "api/0.1.0/historicData?" + query_filters historic_headers = { "apikey": self.entity_api_key, "Content-Type": "application/json" } historic_query_data = json.dumps({ "query": { "match": { "key": entity } } }) with self.no_ssl_verification(): r = requests.get(historic_url, data=historic_query_data, headers=historic_headers) response = dict() if "No API key" in str(r.content.decode("utf-8")): response["status"] = "failure" else: r = r.content.decode("utf-8") response = r return response
This function allows an entity to access its historic data.

Args:
    entity (string): Name of the device whose stored data is queried
    query_filters (string): Elasticsearch query-string options, for
        example "pretty=true&size=10"
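A usage sketch only; middleware stands for an already-constructed instance of the class that defines db(), with entity_api_key set, and the device name and filter string are placeholders.

records = middleware.db("device1", query_filters="pretty=true&size=5")
print(records)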
juraj-google-style
def WritePathHashHistory(self, client_path, hash_entries): client_path_history = ClientPathHistory() for timestamp, hash_entry in iteritems(hash_entries): client_path_history.AddHashEntry(timestamp, hash_entry) self.MultiWritePathHistory({client_path: client_path_history})
Writes a collection of `Hash` observed for particular path. Args: client_path: A `ClientPath` instance. hash_entries: A dictionary with timestamps as keys and `Hash` instances as values.
juraj-google-style
def parse_arguments(argv):
    parser = argparse.ArgumentParser(
        description='Runs Preprocessing on structured data.')
    parser.add_argument('--output-dir', type=str, required=True,
                        help='Google Cloud Storage location in which to place outputs.')
    parser.add_argument('--schema-file', type=str, required=False,
                        help='BigQuery json schema file')
    parser.add_argument('--input-file-pattern', type=str, required=False,
                        help='Input CSV file names. May contain a file pattern')
    parser.add_argument('--bigquery-table', type=str, required=False,
                        help='project:dataset.table_name')
    args = parser.parse_args(args=argv[1:])

    if not args.output_dir.startswith('gs://'):
        raise ValueError('--output-dir must point to a location on GCS')

    if args.bigquery_table:
        if args.schema_file or args.input_file_pattern:
            raise ValueError('If using --bigquery-table, then --schema-file and '
                             '--input-file-pattern '
                             'are not needed.')
    else:
        if not args.schema_file or not args.input_file_pattern:
            raise ValueError('If not using --bigquery-table, then --schema-file and '
                             '--input-file-pattern '
                             'are required.')
        if not args.input_file_pattern.startswith('gs://'):
            raise ValueError('--input-file-pattern must point to files on GCS')

    return args
Parse command line arguments.

Args:
    argv: list of command line arguments, including program name.

Returns:
    An argparse Namespace object.

Raises:
    ValueError: for bad parameters
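An example invocation of the parser above with a fabricated argv list; the bucket paths are placeholders.

args = parse_arguments([
    'preprocess.py',
    '--output-dir', 'gs://my-bucket/output',
    '--schema-file', 'gs://my-bucket/schema.json',
    '--input-file-pattern', 'gs://my-bucket/data/*.csv',
])
print(args.output_dir)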
juraj-google-style
def IsNotNone(*fields, default=None): when_clauses = [ expressions.When( ~expressions.Q(**{field: None}), then=expressions.F(field) ) for field in reversed(fields) ] return expressions.Case( *when_clauses, default=expressions.Value(default), output_field=CharField() )
Selects whichever field is not None, in the specified order. Arguments: fields: The fields to attempt to get a value from, in order. default: The value to return in case all values are None. Returns: A Case-When expression that tries each field and returns the specified default value when all of them are None.
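An annotation sketch using a hypothetical Django model; the model and field names are assumptions made only to show where the expression plugs in.

from django.db import models

class Person(models.Model):
    nickname = models.CharField(max_length=50, null=True)
    full_name = models.CharField(max_length=100, null=True)

# First non-null of nickname/full_name, falling back to 'anonymous'.
people = Person.objects.annotate(
    shown_as=IsNotNone('nickname', 'full_name', default='anonymous')
)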
juraj-google-style
def _describe_bitmask(bits: int, table: Dict[Any, str], default: str = '0') -> str:
    result = []
    for bit, name in table.items():
        if bit & bits:
            result.append(name)
    if not result:
        return default
    return '|'.join(result)
Returns a bitmask in human readable form. This is a private function, used internally. Args: bits (int): The bitmask to be represented. table (Dict[Any,str]): A reverse lookup table. default (Any): A default return value when bits is 0. Returns: str: A printable version of the bits variable.
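A self-contained check of the helper above, using an illustrative permission table.

PERMS = {0x1: 'READ', 0x2: 'WRITE', 0x4: 'EXEC'}

print(_describe_bitmask(0x5, PERMS))  # -> 'READ|EXEC'
print(_describe_bitmask(0x0, PERMS))  # -> '0' (the default)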
codesearchnet
def to_representation(self, instance): updated_program = copy.deepcopy(instance) enterprise_customer_catalog = self.context['enterprise_customer_catalog'] updated_program['enrollment_url'] = enterprise_customer_catalog.get_program_enrollment_url(updated_program['uuid']) for course in updated_program['courses']: course['enrollment_url'] = enterprise_customer_catalog.get_course_enrollment_url(course['key']) for course_run in course['course_runs']: course_run['enrollment_url'] = enterprise_customer_catalog.get_course_run_enrollment_url(course_run['key']) return updated_program
Return the updated program data dictionary. Arguments: instance (dict): The program data. Returns: dict: The updated program data.
codesearchnet
def plot_power_factor_mu(self, temp=600, output='eig', relaxation_time=1e-14, xlim=None): import matplotlib.pyplot as plt plt.figure(figsize=(9, 7)) pf = self._bz.get_power_factor(relaxation_time=relaxation_time, output=output, doping_levels=False)[ temp] plt.semilogy(self._bz.mu_steps, pf, linewidth=3.0) self._plot_bg_limits() self._plot_doping(temp) if output == 'eig': plt.legend(['PF$_1$', 'PF$_2$', 'PF$_3$']) if xlim is None: plt.xlim(-0.5, self._bz.gap + 0.5) else: plt.xlim(xlim) plt.ylabel("Power factor, ($\\mu$W/(mK$^2$))", fontsize=30.0) plt.xlabel("E-E$_f$ (eV)", fontsize=30.0) plt.xticks(fontsize=25) plt.yticks(fontsize=25) plt.tight_layout() return plt
Plot the power factor as a function of Fermi level. Semi-log plot.

Args:
    temp: the temperature
    output: 'eig' to plot the eigenvalues of the power factor tensor,
        'average' for its average
    relaxation_time: the relaxation time in s (1e-14 by default)
    xlim: a list of min and max Fermi energy; by default (-0.5, band gap + 0.5)

Returns:
    a matplotlib object
juraj-google-style
def bitwise_or(x, y): if any_symbolic_tensors((x, y)): return BitwiseOr().symbolic_call(x, y) return backend.numpy.bitwise_or(x, y)
Compute the bit-wise OR of two arrays element-wise. Computes the bit-wise OR of the underlying binary representation of the integers in the input arrays. This ufunc implements the C/Python operator `|`. Args: x: Input integer tensor. y: Input integer tensor. Returns: Result tensor.
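A quick numeric check, assuming the Keras 3 ops namespace exposes this function as keras.ops.bitwise_or.

from keras import ops

print(ops.bitwise_or([5, 12], [3, 10]))  # 5|3 = 7, 12|10 = 14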
github-repos
def random(cls, components, width=False, colour=None): try: list_of_Decors = [Decor.random(c) for c in [i[0] for i in components.unique if i[0]]] except: try: list_of_Decors = [Decor.random(c) for c in components.copy()] except: list_of_Decors = [Decor.random(components)] if (colour is not None): for d in list_of_Decors: d.colour = colour if width: for (i, d) in enumerate(list_of_Decors): d.width = (i + 1) return cls(list_of_Decors)
Generate a random legend for a given list of components. Args: components (list or Striplog): A list of components. If you pass a Striplog, it will use the primary components. If you pass a component on its own, you will get a random Decor. width (bool): Also generate widths for the components, based on the order in which they are encountered. colour (str): If you want to give the Decors all the same colour, provide a hex string. Returns: Legend or Decor: A legend (or Decor) with random colours. TODO: It might be convenient to have a partial method to generate an 'empty' legend. Might be an easy way for someone to start with a template, since it'll have the components in it already.
codesearchnet
def getLanguage(self, body):
    resourcePath = '/text/detect_language'
    method = 'POST'
    queryParams = {}
    headerParams = {'Accept': 'Application/json', 'Content-Type': 'application/json'}
    postData = body
    response = self.apiClient._callAPI(resourcePath, method, queryParams, postData, headerParams)
    return language_rest.LanguageRest(**response.json())
Detect the language of a text Args: body, str: Your input text (UTF-8) (required) Returns: LanguageRest
juraj-google-style
def wait_for_tx(self, tx, max_seconds=120): tx_hash = None if isinstance(tx, (str, UInt256)): tx_hash = str(tx) elif isinstance(tx, Transaction): tx_hash = tx.Hash.ToString() else: raise AttributeError(("Supplied tx is type '%s', but must be Transaction or UInt256 or str" % type(tx))) wait_event = Event() time_start = time.time() while True: (_tx, height) = Blockchain.Default().GetTransaction(tx_hash) if (height > (- 1)): return True wait_event.wait(3) seconds_passed = (time.time() - time_start) if (seconds_passed > max_seconds): raise TxNotFoundInBlockchainError(('Transaction with hash %s not found after %s seconds' % (tx_hash, int(seconds_passed))))
Wait for tx to show up on blockchain Args: tx (Transaction or UInt256 or str): Transaction or just the hash max_seconds (float): maximum seconds to wait for tx to show up. default: 120 Returns: True: if transaction was found Raises: AttributeError: if supplied tx is not Transaction or UInt256 or str TxNotFoundInBlockchainError: if tx is not found in blockchain after max_seconds
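A hedged usage sketch; api stands for an instance of the class above and tx for a transaction hash that was just relayed, neither of which is shown here.

try:
    api.wait_for_tx(tx, max_seconds=60)
    print('transaction confirmed')
except TxNotFoundInBlockchainError as exc:
    print('gave up waiting:', exc)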
codesearchnet
def fetch_support_file(name, timestamp_tuple): stored_filename = os.path.join(_subpar_package, 'runtime', name) content = pkgutil.get_data(_subpar_package, 'runtime/' + name) if content is None: raise error.Error("Internal error: Can't find runtime support file [%s]" % name) return stored_resource.StoredContent(stored_filename, timestamp_tuple, content)
Read a file from the runtime package Args: name: filename in runtime package's directory timestamp_tuple: Stored timestamp, as ZipInfo tuple Returns: A StoredResource representing the content of that file
github-repos
def add_defaults_to_kwargs(defaults, **kwargs): defaults = dict(defaults) defaults.update(kwargs) return defaults
Updates `kwargs` with dict of `defaults` Args: defaults: A dictionary of keys and values **kwargs: The kwargs to update. Returns: The updated kwargs.
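A self-contained example of the merge order: explicit keyword arguments win over the defaults.

merged = add_defaults_to_kwargs({'retries': 3, 'timeout': 10}, timeout=30)
print(merged)  # {'retries': 3, 'timeout': 30}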
juraj-google-style
def get_pending_servermanager(): vname = 'CurrentRebootAttempts' key = 'SOFTWARE\\Microsoft\\ServerManager' reg_ret = __utils__['reg.read_value']('HKLM', key, vname) if reg_ret['success']: log.debug('Found key: %s', key) try: if (int(reg_ret['vdata']) > 0): return True except ValueError: pass else: log.debug('Unable to access key: %s', key) return False
Determine whether there are pending Server Manager tasks that require a reboot. .. versionadded:: 2016.11.0 Returns: bool: ``True`` if there are pending Server Manager tasks, otherwise ``False`` CLI Example: .. code-block:: bash salt '*' system.get_pending_servermanager
codesearchnet
def __call__(self, kl_fn): if not callable(kl_fn): raise TypeError('kl_fn must be callable, received: %s' % kl_fn) if self._key in _DIVERGENCES: raise ValueError('KL(%s || %s) has already been registered to: %s' % (self._key[0].__name__, self._key[1].__name__, _DIVERGENCES[self._key])) _DIVERGENCES[self._key] = kl_fn return kl_fn
Perform the KL registration. Args: kl_fn: The function to use for the KL divergence. Returns: kl_fn Raises: TypeError: if kl_fn is not a callable. ValueError: if a KL divergence function has already been registered for the given argument classes.
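A registration sketch in the decorator style this __call__ enables; RegisterKL names the enclosing class, MyDist is a hypothetical rate-parameterised distribution, and the closed form shown is only an illustrative placeholder.

import math

@RegisterKL(MyDist, MyDist)
def _kl_mydist_mydist(a, b, name=None):
    # Illustrative closed form for two rate-parameterised distributions.
    return a.rate * (math.log(a.rate) - math.log(b.rate)) + b.rate - a.rate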
github-repos
def sine(w, A=1, phi=0, offset=0): from math import sin def f(i): return ((A * sin(((w * i) + phi))) + offset) return partial(force, sequence=_advance(f))
Return a driver function that can advance a sequence of sine values. .. code-block:: none value = A * sin(w*i + phi) + offset Args: w (float) : a frequency for the sine driver A (float) : an amplitude for the sine driver phi (float) : a phase offset to start the sine driver with offset (float) : a global offset to add to the driver values
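A construction sketch; the numbers are arbitrary, and how the returned partial is consumed depends on the module's force/_advance helpers, so only the driver creation is shown.

wobble = sine(w=0.1, A=2.0, phi=0.5, offset=1.0)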
codesearchnet
def prop(pode, prop): form = pode[0][0] if prop.startswith(form): prop = prop[len(form):] if (prop[0] == ':'): prop = prop[1:] return pode[1]['props'].get(prop)
Return the valu of a given property on the node.

Args:
    pode (tuple): A packed node.
    prop (str): Property to retrieve.

Notes:
    The prop argument may be the full property name (foo:bar:baz), the
    relative property name (:baz), or the unadorned property name (baz).

Returns:
    The property value if present, otherwise None.
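A self-contained example with a hand-built packed node, showing the three accepted property spellings.

pode = (('inet:ipv4', 0x01020304), {'props': {'asn': 12345, 'loc': 'us'}})

print(prop(pode, 'asn'))             # unadorned name -> 12345
print(prop(pode, ':loc'))            # relative name  -> 'us'
print(prop(pode, 'inet:ipv4:asn'))   # full name      -> 12345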
codesearchnet
def zip_cluster(data, k, init=None, max_iters=100):
    genes, cells = data.shape
    init, new_assignments = kmeans_pp(data + eps, k, centers=init)
    centers = np.copy(init)
    M = np.zeros(centers.shape)
    assignments = new_assignments
    for c in range(k):
        centers[:, c], M[:, c] = zip_fit_params_mle(data[:, assignments == c])
    for it in range(max_iters):
        lls = zip_ll(data, centers, M)
        new_assignments = np.argmax(lls, 1)
        if np.equal(assignments, new_assignments).all():
            return assignments, centers, M
        for c in range(k):
            centers[:, c], M[:, c] = zip_fit_params_mle(data[:, assignments == c])
        assignments = new_assignments
    return assignments, centers, M
Performs hard EM clustering using the zero-inflated Poisson distribution.

Args:
    data (array): A 2d array - genes x cells
    k (int): Number of clusters
    init (array, optional): Initial centers - genes x k array. Default: None, use kmeans++
    max_iters (int, optional): Maximum number of iterations. Default: 100

Returns:
    assignments (array): integer assignments of cells to clusters (length cells)
    centers (array): Poisson parameters (genes x k)
    M (array): zero-inflation parameters (genes x k)
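A usage sketch on fabricated count data; the helpers the routine depends on (kmeans_pp, zip_fit_params_mle, zip_ll, eps) are assumed to come from the same module and are not reproduced here.

import numpy as np

rng = np.random.default_rng(0)
data = rng.poisson(2.0, size=(50, 200)).astype(float)     # genes x cells
data[:, :100] = rng.poisson(6.0, size=(50, 100))          # a second "cluster"

assignments, centers, M = zip_cluster(data, k=2)
print(np.bincount(assignments))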
codesearchnet