code: string, lengths 20 to 4.93k
docstring: string, lengths 33 to 1.27k
source: string, 3 classes
def start_test(self, pipeline): global _TEST_MODE, _TEST_ROOT_PIPELINE_KEY self.start(pipeline, return_task=True) _TEST_MODE = True _TEST_ROOT_PIPELINE_KEY = pipeline._pipeline_key try: self.evaluate_test(pipeline, root=True) finally: _TEST_MODE = False
Starts a pipeline in the test mode. Args: pipeline: The Pipeline instance to test.
juraj-google-style
def createAndStartSwarm(client, clientInfo='', clientKey='', params='', minimumWorkers=None, maximumWorkers=None, alreadyRunning=False): if (minimumWorkers is None): minimumWorkers = Configuration.getInt('nupic.hypersearch.minWorkersPerSwarm') if (maximumWorkers is None): maximumWorkers = Configuration.getInt('nupic.hypersearch.maxWorkersPerSwarm') return ClientJobsDAO.get().jobInsert(client=client, cmdLine='$HYPERSEARCH', clientInfo=clientInfo, clientKey=clientKey, alreadyRunning=alreadyRunning, params=params, minimumWorkers=minimumWorkers, maximumWorkers=maximumWorkers, jobType=ClientJobsDAO.JOB_TYPE_HS)
Create and start a swarm job. Args: client - A string identifying the calling client. There is a small limit for the length of the value. See ClientJobsDAO.CLIENT_MAX_LEN. clientInfo - JSON encoded dict of client specific information. clientKey - Foreign key. Limited in length, see ClientJobsDAO._initTables. params - JSON encoded dict of the parameters for the job. This can be fetched out of the database by the worker processes based on the jobID. minimumWorkers - The minimum workers to allocate to the swarm. Set to None to use the default. maximumWorkers - The maximum workers to allocate to the swarm. Set to None to use the swarm default. Set to 0 to use the maximum scheduler value. alreadyRunning - Insert a job record for an already running process. Used for testing.
codesearchnet
def wrap_rich_text_lines(inp, cols): new_line_indices = [] if not isinstance(inp, RichTextLines): raise ValueError('Invalid type of input screen_output') if not isinstance(cols, int): raise ValueError('Invalid type of input cols') out = RichTextLines([]) row_counter = 0 for i, line in enumerate(inp.lines): new_line_indices.append(out.num_lines()) if i in inp.annotations: out.annotations[row_counter] = inp.annotations[i] if len(line) <= cols: out.lines.append(line) if i in inp.font_attr_segs: out.font_attr_segs[row_counter] = inp.font_attr_segs[i] row_counter += 1 else: wlines = [] osegs = [] if i in inp.font_attr_segs: osegs = inp.font_attr_segs[i] idx = 0 while idx < len(line): if idx + cols > len(line): rlim = len(line) else: rlim = idx + cols wlines.append(line[idx:rlim]) for seg in osegs: if seg[0] < rlim and seg[1] >= idx: if seg[0] >= idx: lb = seg[0] - idx else: lb = 0 if seg[1] < rlim: rb = seg[1] - idx else: rb = rlim - idx if rb > lb: wseg = (lb, rb, seg[2]) if row_counter not in out.font_attr_segs: out.font_attr_segs[row_counter] = [wseg] else: out.font_attr_segs[row_counter].append(wseg) idx += cols row_counter += 1 out.lines.extend(wlines) for key in inp.annotations: if not isinstance(key, int): out.annotations[key] = inp.annotations[key] return (out, new_line_indices)
Wrap RichTextLines according to maximum number of columns. Produces a new RichTextLines object with the text lines, font_attr_segs and annotations properly wrapped. This ought to be used sparingly, as in most cases, command handlers producing RichTextLines outputs should know the screen/panel width via the screen_info kwarg and should produce properly length-limited lines in the output accordingly. Args: inp: Input RichTextLines object. cols: Number of columns, as an int. Returns: 1) A new instance of RichTextLines, with line lengths limited to cols. 2) A list of new (wrapped) line index. For example, if the original input consists of three lines and only the second line is wrapped, and it's wrapped into two lines, this return value will be: [0, 1, 3]. Raises: ValueError: If inputs have invalid types.
github-repos
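A minimal sketch of the core slicing that wrap_rich_text_lines applies to each over-long line; the real function additionally re-bases `font_attr_segs` and `annotations` onto the wrapped rows:

```python
line, cols = "0123456789", 4
wrapped = [line[i:i + cols] for i in range(0, len(line), cols)]
print(wrapped)  # ['0123', '4567', '89'] -- one original line becomes three wrapped rows
```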
def obtain_all_bond_lengths(sp1, sp2, default_bl=None): if isinstance(sp1, Element): sp1 = sp1.symbol if isinstance(sp2, Element): sp2 = sp2.symbol syms = tuple(sorted([sp1, sp2])) if syms in bond_lengths: return bond_lengths[syms].copy() elif default_bl is not None: return {1: default_bl} else: raise ValueError("No bond data for elements {} - {}".format(*syms))
Obtain bond lengths for all bond orders from bond length database Args: sp1 (Specie): First specie. sp2 (Specie): Second specie. default_bl: If a particular type of bond does not exist, use this bond length as a default value (bond order = 1). If None, a ValueError will be thrown. Return: A dict mapping bond order to bond length in angstrom
juraj-google-style
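A hedged usage sketch; the import path is an assumption (pymatgen's bonds module ships this function together with its bond_lengths table), and the element pair is chosen only to exercise the `default_bl` fallback visible in the code above:

```python
from pymatgen.core.bonds import obtain_all_bond_lengths  # assumed import path

# A pair with no entry in the bond-length table falls back to {1: default_bl}
# instead of raising ValueError.
print(obtain_all_bond_lengths("H", "He", default_bl=1.5))  # {1: 1.5} if H-He is absent
```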
def _compute_full_path(self, fn_parent_ref, fn_parent_seq): names = [] root_id = 5 (index, seq) = (fn_parent_ref, fn_parent_seq) is_orphan = False while (index != root_id): try: parent_entry = self[index] if (seq != parent_entry.header.seq_number): is_orphan = True break else: parent_fn_attr = parent_entry.get_main_filename_attr() (index, seq) = (parent_fn_attr.content.parent_ref, parent_fn_attr.content.parent_seq) names.append(parent_fn_attr.content.name) except ValueError as e: is_orphan = True break return (is_orphan, '\\'.join(reversed(names)))
Based on the parent reference and sequence, computes the full path. The majority of files in a filesystem have a small number of parent directories; by definition, a filesystem is expected to have far fewer directories than files. As such we use a function with the minimal number of arguments to find a parent, so that we can cache the results easily and speed up the overall code. Args: fn_parent_ref (int): Parent reference number fn_parent_seq (int): Parent sequence number Returns: tuple(bool, str): A tuple where the first element is a boolean that is ``True`` if the file is an orphan and ``False`` if not. The second element is a string with the full path without the file name
codesearchnet
def apply_and_name(self, aggregator): reduced_df = self._apply(aggregator) if (len(self.names) != len(reduced_df.columns)): raise IndexError('ColumnFunction creates more columns than it has names for.') reduced_df.columns = self.names return reduced_df
Fetches the row-aggregated input columns for this ColumnFunction. Args: aggregator (Aggregator) Returns: pd.DataFrame: The dataframe has columns with names self.names that were created by this ColumnFunction, and is indexed by the index that was passed to aggregator.aggregate(index).
codesearchnet
def delete_qubits(self, indices): if not isinstance(indices, list): indices = [indices] self._z = np.delete(self._z, indices) self._x = np.delete(self._x, indices) return self
Delete pauli at the indices. Args: indices(list[int]): the indices of to-be-deleted paulis Returns: Pauli: self
juraj-google-style
def find(self, **kwargs): if len(kwargs) != 1: raise ValueError("One and only one keyword argument accepted") key = list(kwargs.keys())[0] value = list(kwargs.values())[0] ret = None for row in self.values(): if row[key] == value: ret = row break return ret
Finds the row matching a specific field value Args: **kwargs: (**only one argument accepted**) fieldname=value, e.g., formula="OH" Returns: list element or None
juraj-google-style
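The lookup pattern is easy to mirror with plain dicts; a hypothetical stand-in for the table object, iterating `.values()` the same way the method above does:

```python
rows = {0: {"formula": "OH", "charge": -1},
        1: {"formula": "H2O", "charge": 0}}
# Equivalent of table.find(formula="OH"): first row whose field matches, else None.
match = next((row for row in rows.values() if row["formula"] == "OH"), None)
print(match)  # {'formula': 'OH', 'charge': -1}
```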
def _load_from_cache_if_available(self, key): if key in self._cache: entity = self._cache[key] if entity is None or entity._key == key: raise tasklets.Return(entity)
Returns a cached Model instance given the entity key if available. Args: key: Key instance. Returns: A Model instance if the key exists in the cache.
juraj-google-style
def browse(self, max_lines=None, headers=None): if self.path.startswith('gs://'): lines = CsvFile._read_gcs_lines(self.path, max_lines) else: lines = CsvFile._read_local_lines(self.path, max_lines) if len(lines) == 0: return pd.DataFrame(columns=headers) columns_size = len(next(csv.reader([lines[0]], delimiter=self._delimiter))) if headers is None: headers = ['col' + newstr(e) for e in range(columns_size)] if len(headers) != columns_size: raise Exception('Number of columns in CSV do not match number of headers') buf = StringIO() for line in lines: buf.write(line) buf.write('\n') buf.seek(0) df = pd.read_csv(buf, names=headers, delimiter=self._delimiter) for key, col in df.iteritems(): if self._is_probably_categorical(col): df[key] = df[key].astype('category') return df
Try reading specified number of lines from the CSV object. Args: max_lines: max number of lines to read. If None, the whole file is read headers: a list of strings as column names. If None, it will use "col0, col1..." Returns: A pandas DataFrame with the schema inferred from the data. Raises: Exception if the csv object cannot be read or not enough lines to read, or the headers size does not match columns size.
juraj-google-style
def get_diff_coeff(hvec, n=1): hvec = np.array(hvec, dtype=np.float) acc = len(hvec) exp = np.column_stack([np.arange(acc)]*acc) a = np.vstack([hvec] * acc) ** exp b = np.zeros(acc) b[n] = factorial(n) return np.linalg.solve(a, b)
Helper function to find difference coefficients of a derivative on an arbitrary mesh. Args: hvec (1D array-like): sampling stencil n (int): degree of derivative to find
juraj-google-style
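A worked check of the linear system the function solves, re-derived with plain NumPy so it runs standalone; a symmetric three-point stencil reproduces the classic central-difference weights for a first derivative:

```python
import numpy as np
from math import factorial

hvec = np.array([-1.0, 0.0, 1.0])           # symmetric three-point stencil
acc = len(hvec)
a = np.vstack([hvec] * acc) ** np.column_stack([np.arange(acc)] * acc)
b = np.zeros(acc)
b[1] = factorial(1)                         # first derivative
print(np.linalg.solve(a, b))                # [-0.5  0.   0.5]
```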
def _GetDataTypeMap(self, name): data_type_map = self._data_type_maps.get(name, None) if not data_type_map: data_type_map = self._fabric.CreateDataTypeMap(name) self._data_type_maps[name] = data_type_map return data_type_map
Retrieves a data type map defined by the definition file. The data type maps are cached for reuse. Args: name (str): name of the data type as defined by the definition file. Returns: dtfabric.DataTypeMap: data type map which contains a data type definition, such as a structure, that can be mapped onto binary data.
juraj-google-style
def prefix(self: EventSetOrNode, prefix: str) -> EventSetOrNode: from temporian.core.operators.prefix import prefix as _prefix return _prefix(self, prefix=prefix)
Adds a prefix to the names of the features in an [`EventSet`][temporian.EventSet]. Usage example: ```python >>> a = tp.event_set( ... timestamps=[0, 1], ... features={"f1": [0, 2], "f2": [5, 6]} ... ) >>> b = a * 5 >>> # Prefix before glue to avoid duplicated names >>> c = tp.glue(a.prefix("original_"), b.prefix("result_")) >>> c indexes: ... 'original_f1': [0 2] 'original_f2': [5 6] 'result_f1': [ 0 10] 'result_f2': [25 30] ... ``` Args: prefix: Prefix to add in front of the feature names. Returns: Prefixed EventSet.
github-repos
def check_errors(self, is_global=False): errors = self.global_errors if is_global else self.errors if errors: print('dfTimewolf encountered one or more errors:') for error, critical in errors: print('{0:s} {1:s}'.format('CRITICAL: ' if critical else '', error)) if critical: print('Critical error found. Aborting.') sys.exit(-1)
Checks for errors and exits if any of them are critical. Args: is_global: If True, check the global_errors attribute. If False, check the errors attribute.
juraj-google-style
def get_config(model_type: str, feature: str) -> OnnxConfig: return FeaturesManager._SUPPORTED_MODEL_TYPE[model_type][feature]
Gets the OnnxConfig for a model_type and feature combination. Args: model_type (`str`): The model type to retrieve the config for. feature (`str`): The feature to retrieve the config for. Returns: `OnnxConfig`: config for the combination
github-repos
def move_all_files_from_subfolders_to_top(folder_path, delete_subfolders=False, copy=False): for item in os.listdir(folder_path): sub_path = os.path.join(folder_path, item) if os.path.isdir(sub_path): for sub_item in os.listdir(sub_path): src = os.path.join(sub_path, sub_item) target = os.path.join(folder_path, sub_item) if copy: if os.path.isfile(src): shutil.copy(src, target) else: shutil.copytree(src, target) else: shutil.move(src, target) if delete_subfolders: shutil.rmtree(sub_path)
Move all files/folders from all subfolders of `folder_path` up into `folder_path`. Args: folder_path (str): Path of the folder. delete_subfolders (bool): If True, the subfolders are deleted after all items are moved out of them. copy (bool): If True, copies the files instead of moving them. (default False)
juraj-google-style
def __init__(self, zone, environment): self._zone = zone self._environment = environment self._gcs_dag_location = None
Initializes an instance of a Composer object. Args: zone: Zone in which Composer environment has been created. environment: Name of the Composer environment.
juraj-google-style
def to_proto(self, export_scope=None): if export_scope is None: return self.saver_def if not (self.saver_def.filename_tensor_name.startswith(export_scope) and self.saver_def.save_tensor_name.startswith(export_scope) and self.saver_def.restore_op_name.startswith(export_scope)): return None saver_def = saver_pb2.SaverDef() saver_def.CopyFrom(self.saver_def) saver_def.filename_tensor_name = ops.strip_name_scope(saver_def.filename_tensor_name, export_scope) saver_def.save_tensor_name = ops.strip_name_scope(saver_def.save_tensor_name, export_scope) saver_def.restore_op_name = ops.strip_name_scope(saver_def.restore_op_name, export_scope) return saver_def
Converts this `Saver` to a `SaverDef` protocol buffer. Args: export_scope: Optional `string`. Name scope to remove. Returns: A `SaverDef` protocol buffer.
github-repos
def add_keyword(self, keyword, schema=None, source=None): keyword_dict = self._sourced_dict(source, value=keyword) if (schema is not None): keyword_dict['schema'] = schema self._append_to('keywords', keyword_dict)
Add a keyword. Args: keyword(str): keyword to add. schema(str): schema to which the keyword belongs. source(str): source for the keyword.
codesearchnet
def create_group(self, name): self.project_service.set_auth(self._token_project) return self.project_service.create_group(name)
Create a new group. Args: name (string): Name of the group to create. Returns: (bool): True on success. Raises: requests.HTTPError on failure.
juraj-google-style
def is_collection(return_type: FhirPathDataType) -> bool: return return_type and return_type.cardinality == Cardinality.COLLECTION
Indicates if the return type represents a collection. Args: return_type: The data type to describe. Returns: True if `return_type` represents an element with cardinality greater than one. False otherwise.
github-repos
def timestamp_ids(self, time_precision=0.02): return self.convert_tokens_to_ids(['<|%.2f|>' % (i * time_precision) for i in range(1500 + 1)])
Compute the timestamp token ids for a given precision and save to least-recently used (LRU) cache. Args: time_precision (`float`, *optional*, defaults to 0.02): The time ratio to convert from token to time.
github-repos
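The token strings handed to `convert_tokens_to_ids` are easy to inspect on their own; a standalone sketch of the comprehension above (the surrounding tokenizer and the LRU caching are omitted):

```python
time_precision = 0.02
tokens = ['<|%.2f|>' % (i * time_precision) for i in range(1500 + 1)]
print(tokens[:3], tokens[-1])  # ['<|0.00|>', '<|0.02|>', '<|0.04|>'] '<|30.00|>'
```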
def console_map_string_to_font(s: str, fontCharX: int, fontCharY: int) -> None: lib.TCOD_console_map_string_to_font_utf(_unicode(s), fontCharX, fontCharY)
Remap a string of codes to a contiguous set of tiles. Args: s (AnyStr): A string of character codes to map to new values. The null character `'\\x00'` will prematurely end this function. fontCharX (int): The starting X tile coordinate on the loaded tileset. 0 is the leftmost tile. fontCharY (int): The starting Y tile coordinate on the loaded tileset. 0 is the topmost tile.
juraj-google-style
def get(self, *, txid, headers=None): block_list = self.transport.forward_request( method='GET', path=self.path, params={'transaction_id': txid}, headers=headers, ) return block_list[0] if len(block_list) else None
Get the block that contains the given transaction id (``txid``) else return ``None`` Args: txid (str): Transaction id. headers (dict): Optional headers to pass to the request. Returns: :obj:`int`: The height of the block containing the transaction, or ``None`` if no block contains it.
juraj-google-style
def run_inference(self, batch: Sequence[numpy.ndarray], model: BaseEstimator, inference_args: Optional[dict[str, Any]]=None) -> Iterable[PredictionResult]: predictions = self._model_inference_fn(model, batch, inference_args) return utils._convert_to_result(batch, predictions, model_id=self._model_uri)
Runs inferences on a batch of numpy arrays. Args: batch: A sequence of examples as numpy arrays. They should be single examples. model: A numpy model or pipeline. Must implement predict(X). Where the parameter X is a numpy array. inference_args: Any additional arguments for an inference. Returns: An Iterable of type PredictionResult.
github-repos
def Get(self, request, global_params=None): config = self.GetMethodConfig('Get') return self._RunMethod(config, request, global_params=global_params)
Returns information about a specific job. Job information is available for a six month period after creation. Requires that you're the person who ran the job, or have the Is Owner project role. Args: request: (BigqueryJobsGetRequest) input message global_params: (StandardQueryParameters, default: None) global arguments Returns: (Job) The response message.
github-repos
def contact(self, id): try: json = self.skype.conn("POST", "{0}/users/batch/profiles".format(SkypeConnection.API_USER), json={"usernames": [id]}, auth=SkypeConnection.Auth.SkypeToken).json() contact = SkypeContact.fromRaw(self.skype, json[0]) if contact.id not in self.contactIds: self.contactIds.append(contact.id) return self.merge(contact) except SkypeApiException as e: if len(e.args) >= 2 and getattr(e.args[1], "status_code", None) == 403: return None raise
Retrieve all details for a specific contact, including fields such as birthday and mood. Args: id (str): user identifier to lookup Returns: SkypeContact: resulting contact object
juraj-google-style
def get_stored_variation(self, experiment, user_profile): user_id = user_profile.user_id variation_id = user_profile.get_variation_for_experiment(experiment.id) if variation_id: variation = self.config.get_variation_from_id(experiment.key, variation_id) if variation: self.logger.info(('Found a stored decision. User "%s" is in variation "%s" of experiment "%s".' % (user_id, variation.key, experiment.key))) return variation return None
Determine if the user has a stored variation available for the given experiment and return that. Args: experiment: Object representing the experiment for which user is to be bucketed. user_profile: UserProfile object representing the user's profile. Returns: Variation if available. None otherwise.
codesearchnet
def plot_waves(self, ax=None, fontsize=12, **kwargs): (ax, fig, plt) = get_ax_fig_plt(ax) ax.grid(True) ax.set_xlabel('r [Bohr]') ax.set_ylabel('$r\\phi,\\, r\\tilde\\phi\\, [Bohr]^{-\\frac{1}{2}}$') for (state, rfunc) in self.pseudo_partial_waves.items(): ax.plot(rfunc.mesh, (rfunc.mesh * rfunc.values), lw=2, label=('PS-WAVE: ' + state)) for (state, rfunc) in self.ae_partial_waves.items(): ax.plot(rfunc.mesh, (rfunc.mesh * rfunc.values), lw=2, label=('AE-WAVE: ' + state)) ax.legend(loc='best', shadow=True, fontsize=fontsize) return fig
Plot the AE and the pseudo partial waves. Args: ax: matplotlib :class:`Axes` or None if a new figure should be created. fontsize: fontsize for legends and titles Returns: `matplotlib` figure
codesearchnet
def get_all_dataset_names(configuration=None, **kwargs): dataset = Dataset(configuration=configuration) dataset['id'] = 'all dataset names' return dataset._write_to_hdx('list', kwargs, 'id')
Get all dataset names in HDX Args: configuration (Optional[Configuration]): HDX configuration. Defaults to global configuration. **kwargs: See below limit (int): Number of rows to return. Defaults to all dataset names. offset (int): Offset in the complete result for where the set of returned dataset names should begin Returns: List[str]: list of all dataset names in HDX
juraj-google-style
def get(self, group=None, backend=None): from .options import Store, Options keywords = {} groups = (Options._option_groups if (group is None) else [group]) backend = (backend if backend else Store.current_backend) for group in groups: optsobj = Store.lookup_options(backend, self._obj, group) keywords = dict(keywords, **optsobj.kwargs) return Options(**keywords)
Returns the corresponding Options object. Args: group: The options group. Flattens across groups if None. backend: Current backend if None otherwise chosen backend. Returns: Options object associated with the object containing the applied option keywords.
codesearchnet
def line_id(self, lat): if self.grid == 'WAC': line = np.rint(1.0 + self.LINE_PROJECTION_OFFSET - self.A_AXIS_RADIUS * np.pi * lat / (self.MAP_SCALE * 1e-3 * 180)) else: line = np.rint(float(self.LINE_PROJECTION_OFFSET) - float(self.MAP_RESOLUTION) * (lat - float(self.CENTER_LATITUDE))) + 1 return self._control_line(line)
Return the corresponding line. Args: lat (int): latitude in degrees Returns: Corresponding line
juraj-google-style
def pose_inv(pose): pose_inv = np.zeros((4, 4)) pose_inv[:3, :3] = pose[:3, :3].T pose_inv[:3, 3] = -pose_inv[:3, :3].dot(pose[:3, 3]) pose_inv[3, 3] = 1.0 return pose_inv
Computes the inverse of a homogeneous matrix corresponding to the pose of some frame B in frame A. The inverse is the pose of frame A in frame B. Args: pose: numpy array of shape (4,4) for the pose to invert Returns: numpy array of shape (4,4) for the inverse pose
codesearchnet
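A quick numerical check of the inversion using standard NumPy slicing; composing a homogeneous pose with its inverse should give the identity:

```python
import numpy as np

def pose_inv(pose):
    inv = np.zeros((4, 4))
    inv[:3, :3] = pose[:3, :3].T
    inv[:3, 3] = -inv[:3, :3].dot(pose[:3, 3])
    inv[3, 3] = 1.0
    return inv

c, s = np.cos(np.pi / 3), np.sin(np.pi / 3)           # 60-degree rotation about z
pose = np.array([[c, -s, 0.0, 1.0],
                 [s,  c, 0.0, 2.0],
                 [0.0, 0.0, 1.0, 3.0],
                 [0.0, 0.0, 0.0, 1.0]])
print(np.allclose(pose @ pose_inv(pose), np.eye(4)))  # True
```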
def create_position_ids_from_inputs_embeds(self, inputs_embeds): input_shape = inputs_embeds.size()[:-1] sequence_length = input_shape[1] position_ids = torch.arange(self.padding_idx + 1, sequence_length + self.padding_idx + 1, dtype=torch.long, device=inputs_embeds.device) return position_ids.unsqueeze(0).expand(input_shape)
We are provided embeddings directly. We cannot infer which are padded so just generate sequential position ids. Args: inputs_embeds: torch.Tensor Returns: torch.Tensor
github-repos
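A standalone sketch of the position-id computation, assuming a RoBERTa-style `padding_idx` of 1 (that value is an illustration, not taken from the snippet):

```python
import torch

inputs_embeds = torch.zeros(2, 4, 8)      # (batch, seq_len, hidden): toy embeddings
padding_idx = 1                           # assumed value for illustration
sequence_length = inputs_embeds.size(1)
position_ids = torch.arange(padding_idx + 1, sequence_length + padding_idx + 1,
                            dtype=torch.long, device=inputs_embeds.device)
print(position_ids.unsqueeze(0).expand(inputs_embeds.size()[:-1]))
# tensor([[2, 3, 4, 5],
#         [2, 3, 4, 5]])
```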
def CopyFromStringTuple(self, time_elements_tuple): if len(time_elements_tuple) < 7: raise ValueError(( 'Invalid time elements tuple at least 7 elements required,' 'got: {0:d}').format(len(time_elements_tuple))) super(TimeElementsWithFractionOfSecond, self).CopyFromStringTuple( time_elements_tuple) try: fraction_of_second = decimal.Decimal(time_elements_tuple[6]) except (TypeError, ValueError): raise ValueError('Invalid fraction of second value: {0!s}'.format( time_elements_tuple[6])) if fraction_of_second < 0.0 or fraction_of_second >= 1.0: raise ValueError('Fraction of second value: {0:f} out of bounds.'.format( fraction_of_second)) self.fraction_of_second = fraction_of_second
Copies time elements from string-based time elements tuple. Args: time_elements_tuple (Optional[tuple[str, str, str, str, str, str, str]]): time elements, contains year, month, day of month, hours, minutes, seconds and fraction of seconds. Raises: ValueError: if the time elements tuple is invalid.
juraj-google-style
def process_sequence(sequence, rules=None, skip_non_vietnamese=True): result = '' raw = result result_parts = [] if (rules is None): rules = get_telex_definition() accepted_chars = _accepted_chars(rules) for key in sequence: if (key not in accepted_chars): result_parts.append(result) result_parts.append(key) result = '' raw = '' else: (result, raw) = process_key(string=result, key=key, fallback_sequence=raw, rules=rules, skip_non_vietnamese=skip_non_vietnamese) result_parts.append(result) return ''.join(result_parts)
\ Convert a key sequence into a Vietnamese string with diacritical marks. Args: rules (optional): see docstring for process_key(). skip_non_vietnamese (optional): see docstring for process_key(). It even supports continous key sequences connected by separators. i.e. process_sequence('con meof.ddieen') should work.
codesearchnet
def add_mixin(self, mixin): raw = mixin.tokens[0][0].raw() if raw in self._mixins: self._mixins[raw].append(mixin) else: self._mixins[raw] = [mixin]
Add mixin to scope Args: mixin (Mixin): Mixin object
juraj-google-style
def describe(self, **kwargs): description = {'label': self.label, 'details': inspect.cleandoc(self.details), 'type': ('list of {}'.format(self.type) if self.many else self.type), 'spec': self.spec, 'read_only': self.read_only, 'write_only': self.write_only, 'allow_null': self.allow_null} description.update(kwargs) return description
Describe this field instance for purpose of self-documentation. Args: kwargs (dict): dictionary of additional description items for extending default description Returns: dict: dictionary of description items Suggested way for overriding description fields or extending it with additional items is calling super class method with new/overriden fields passed as keyword arguments like following: .. code-block:: python class DummyField(BaseField): def description(self, **kwargs): super().describe(is_dummy=True, **kwargs)
codesearchnet
def create_streaming_endpoint(access_token, name, description='New Streaming Endpoint', scale_units='1'): path = '/StreamingEndpoints' endpoint = ''.join([ams_rest_endpoint, path]) body = (((((('{ \t\t"Id":null, \t\t"Name":"' + name) + '", \t\t"Description":"') + description) + '", \t\t"Created":"0001-01-01T00:00:00", \t\t"LastModified":"0001-01-01T00:00:00", \t\t"State":null, \t\t"HostName":null, \t\t"ScaleUnits":"') + scale_units) + '", \t\t"CrossSiteAccessPolicies":{ \t\t\t"ClientAccessPolicy":"<access-policy><cross-domain-access><policy><allow-from http-request-headers=\\"*\\"><domain uri=\\"http: return do_ams_post(endpoint, path, body, access_token)
Create Media Service Streaming Endpoint. Args: access_token (str): A valid Azure authentication token. name (str): A Media Service Streaming Endpoint Name. description (str): A Media Service Streaming Endpoint Description. scale_units (str): A Media Service Scale Units Number. Returns: HTTP response. JSON body.
codesearchnet
def NewRow(self, value=''): newrow = self.row_class() newrow.row = (self.size + 1) newrow.table = self headers = self._Header() for header in headers: newrow[header] = value return newrow
Fetches a new, empty row, with headers populated. Args: value: Initial value to set each row entry to. Returns: A Row() object.
codesearchnet
def recipe_video(config, auth_read, sheet, tab, project, dataset, table): sheets(config, {'__comment__': 'Copy the tamplate sheet to the users sheet. If it already exists, nothing happens.', 'auth': auth_read, 'template': {'sheet': 'https: video(config, {'__comment__': 'Read video effects and values from sheet and/or bigquery.', 'auth': auth_read, 'sheets': {'sheet': sheet, 'tab': tab}, 'bigquery': {'project': project, 'dataset': dataset, 'table': table}})
Add images, text, and audio to videos. Args: auth_read (authentication) - Credentials used for reading data. sheet (string) - Name or URL of sheet. tab (string) - Name of sheet tab. project (string) - Google Cloud Project Identifier. dataset (string) - Name of dataset. table (string) - Name of table.
github-repos
def encrypt(self, mesg): seqn = next(self._tx_sn) rv = self._tx_tinh.enc(s_msgpack.en((seqn, mesg))) return rv
Wrap a message with a sequence number and encrypt it. Args: mesg: The mesg to encrypt. Returns: bytes: The encrypted message.
codesearchnet
def unpack(self, buff=None, offset=0): band_type = UBInt16(enum_ref=MeterBandType) band_type.unpack(buff, offset) self.__class__ = MeterBandType(band_type.value).find_class() length = UBInt16() length.unpack(buff, offset=offset+2) super().unpack(buff[:offset+length.value], offset)
Unpack *buff* into this object. This method will convert a binary data into a readable value according to the attribute format. Args: buff (bytes): Binary buffer. offset (int): Where to begin unpacking. Raises: :exc:`~.exceptions.UnpackException`: If unpack fails.
juraj-google-style
def list2str(self, l: List, joiner: str) -> str: result = str() for item in l: if isinstance(item, list): result = ((result + self.list2str(item, joiner)) + joiner) elif isinstance(item, dict): result = ((result + self.dict2str(item, joiner)) + joiner) elif item: result = ((result + str(item)) + joiner) return result
Convert list to str as input for tokenizer Args: l (list): list for converting joiner (str): join the elements using this string to separate them. Returns: the value of the list as a string
codesearchnet
def ndtri(p, name="ndtri"): with tf.name_scope(name): p = tf.convert_to_tensor(value=p, name="p") if dtype_util.as_numpy_dtype(p.dtype) not in [np.float32, np.float64]: raise TypeError( "p.dtype=%s is not handled, see docstring for supported types." % p.dtype) return _ndtri(p)
The inverse of the CDF of the Normal distribution function. Returns x such that the area under the pdf from minus infinity to x is equal to p. A piece-wise rational approximation is done for the function. This is a port of the implementation in netlib. Args: p: `Tensor` of type `float32`, `float64`. name: Python string. A name for the operation (default="ndtri"). Returns: x: `Tensor` with `dtype=p.dtype`. Raises: TypeError: if `p` is not floating-type.
juraj-google-style
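As a cross-check on what the function computes, SciPy's normal quantile gives the same inverse CDF; shown here because it runs without the surrounding TensorFlow Probability internals:

```python
from scipy.stats import norm

print(norm.ppf(0.5))    # 0.0 -- the median of the standard normal
print(norm.ppf(0.975))  # ~1.96 -- the familiar 95% two-sided critical value
```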
def __update_cleanup_paths(new_path): cleanup_dirs = settings.CFG['cleanup_paths'].value cleanup_dirs = set(cleanup_dirs) cleanup_dirs.add(new_path) cleanup_dirs = list(cleanup_dirs) settings.CFG['cleanup_paths'] = cleanup_dirs
Add the new path to the list of paths to clean up afterwards. Args: new_path: Path to the directory that needs to be cleaned up.
codesearchnet
def pretty_print_fhir_to_json_string_for_analytics(fhir_proto: message.Message, *, indent_size: int=2) -> str: printer = _json_printer.JsonPrinter.pretty_printer_for_analytics(_PRIMITIVE_HANDLER, indent_size=indent_size) return printer.print(fhir_proto)
Returns an Analytic FHIR JSON representation with spaces and newlines. Args: fhir_proto: The proto to serialize into a "pretty" JSON string. indent_size: An integer denoting the size of space indentation for lexical scoping. Defaults to 2. Returns: An Analytic FHIR JSON representation with spaces and newlines.
github-repos
def split_sequence_columns_v2(feature_columns): sequence_columns = [] non_sequence_columns = [] for column in feature_columns: if not isinstance(column, (_TPUEmbeddingColumnV2, _TPUSharedEmbeddingColumnV2)): raise TypeError(f'column must be a _TPUEmbeddingColumnV2 or _TPUSharedEmbeddingColumnV2 but got {type(column)} instead.') if column.is_sequence_column(): sequence_columns.append(column) else: non_sequence_columns.append(column) return (sequence_columns, non_sequence_columns)
Split a list of _TPUEmbeddingColumn into sequence and non-sequence columns. For use in a TPUEstimator model_fn function. E.g. def model_fn(features): sequence_columns, feature_columns = ( tf.tpu.feature_column.split_sequence_columns(feature_columns)) input = tf.feature_column.input_layer( features=features, feature_columns=feature_columns) sequence_features, sequence_lengths = ( tf.contrib.feature_column.sequence_input_layer( features=features, feature_columns=sequence_columns)) Args: feature_columns: A list of _TPUEmbeddingColumns to split. Returns: Two lists of _TPUEmbeddingColumns, the first is the sequence columns and the second is the non-sequence columns.
github-repos
async def reclaim_task(context, task): while True: log.debug(('waiting %s seconds before reclaiming...' % context.config['reclaim_interval'])) (await asyncio.sleep(context.config['reclaim_interval'])) if (task != context.task): return log.debug('Reclaiming task...') try: context.reclaim_task = (await context.temp_queue.reclaimTask(get_task_id(context.claim_task), get_run_id(context.claim_task))) clean_response = deepcopy(context.reclaim_task) clean_response['credentials'] = '{********}' log.debug('Reclaim task response:\n{}'.format(pprint.pformat(clean_response))) except taskcluster.exceptions.TaskclusterRestFailure as exc: if (exc.status_code == 409): log.debug('409: not reclaiming task.') if (context.proc and (task == context.task)): message = 'Killing task after receiving 409 status in reclaim_task' log.warning(message) (await context.proc.stop()) raise ScriptWorkerTaskException(message, exit_code=context.config['invalid_reclaim_status']) break else: raise
Try to reclaim a task from the queue. This is a keepalive / heartbeat. Without it the job will expire and potentially be re-queued. Since this is run async from the task, the task may complete before we run, in which case we'll get a 409 the next time we reclaim. Args: context (scriptworker.context.Context): the scriptworker context Raises: taskcluster.exceptions.TaskclusterRestFailure: on non-409 status_code from taskcluster.aio.Queue.reclaimTask()
codesearchnet
def __init__(self, cell): self._cell = cell
Creates a new StringGaugeCell. Args: cell: A c pointer of TFE_MonitoringStringGaugeCell.
github-repos
def checkUser(self, user): return not self.conn("POST", "{0}/GetCredentialType.srf".format(SkypeConnection.API_MSACC), json={"username": user}).json().get("IfExistsResult")
Query a username or email address to see if a corresponding Microsoft account exists. Args: user (str): username or email address of an account Returns: bool: whether the account exists
juraj-google-style
def head(self, n=10): r = self.__repr__().split('\n') print('\n'.join(r[:n]), end=' ')
Display the top of the file. Args: n (int): Number of lines to display
codesearchnet
def mesh_split(tensor, device_mesh, tensor_split_dims_mapping, use_sharding_op=False, manual_mesh_dims=None, unspecified_dims=None): sharding = mesh_split_sharding(device_mesh, tensor_split_dims_mapping, manual_mesh_dims) return sharding.apply_to_tensor(tensor, use_sharding_op=use_sharding_op, unspecified_dims=unspecified_dims or [])
Returns a tensor that is split along multiple dimensions in a device mesh. Args: tensor: A tf.Tensor to split. device_mesh: An np.ndarray describing the topology of the device mesh and each element is the ID of the device in the topology. tensor_split_dims_mapping: A list of integers that map each tensor axis to the device mesh axis along which it is sharded. Its length is the tensor rank, and tensor_split_dims_mapping[i] is device mesh axis for tensor dimension i. Use -1 for tensor dimensions that are not sharded. use_sharding_op: If true, adds a sharding op to set the sharding. manual_mesh_dims: An optional list of mesh dims for manual subgroups. unspecified_dims: An optional list of dimensions unspecified. Raises: ValueError: The number of tensor split dimensions is larger than device mesh rank.
github-repos
def categorical(logits, num_samples, dtype=None, seed=None, name=None): with ops.name_scope(name, 'categorical', [logits]): return multinomial_categorical_impl(logits, num_samples, dtype, seed)
Draws samples from a categorical distribution. Example: ```python # samples has shape [1, 5], where each value is either 0 or 1 with equal # probability. samples = tf.random.categorical(tf.math.log([[0.5, 0.5]]), 5) ``` Args: logits: 2-D Tensor with shape `[batch_size, num_classes]`. Each slice `[i, :]` represents the unnormalized log-probabilities for all classes. num_samples: 0-D. Number of independent samples to draw for each row slice. dtype: The integer type of the output: `int32` or `int64`. Defaults to `int64`. seed: A Python integer. Used to create a random seed for the distribution. See `tf.random.set_seed` for behavior. name: Optional name for the operation. Returns: The drawn samples of shape `[batch_size, num_samples]`.
github-repos
def resize(self, image: np.ndarray, size: Dict[str, int], size_divisor: int=0, resample: PILImageResampling=PILImageResampling.BILINEAR, data_format=None, input_data_format: Optional[Union[str, ChannelDimension]]=None, **kwargs) -> np.ndarray: max_size = kwargs.pop('max_size', None) size = get_size_dict(size, max_size=max_size, default_to_square=False) if 'shortest_edge' in size and 'longest_edge' in size: size, max_size = (size['shortest_edge'], size['longest_edge']) elif 'height' in size and 'width' in size: size = (size['height'], size['width']) max_size = None else: raise ValueError(f"Size must contain 'height' and 'width' keys or 'shortest_edge' and 'longest_edge' keys. Got {size.keys()}.") size = get_maskformer_resize_output_image_size(image=image, size=size, max_size=max_size, size_divisor=size_divisor, default_to_square=False, input_data_format=input_data_format) image = resize(image, size=size, resample=resample, data_format=data_format, input_data_format=input_data_format, **kwargs) return image
Resize the image to the given size. Size can be min_size (scalar) or `(height, width)` tuple. If size is an int, smaller edge of the image will be matched to this number. Args: image (`np.ndarray`): Image to resize. size (`Dict[str, int]`): The size of the output image. size_divisor (`int`, *optional*, defaults to 0): If `size_divisor` is given, the output image size will be divisible by the number. resample (`PILImageResampling` resampling filter, *optional*, defaults to `PILImageResampling.BILINEAR`): Resampling filter to use when resizing the image. data_format (`ChannelDimension` or `str`, *optional*): The channel dimension format for the output image. If unset, the channel dimension format of the input image is used. input_data_format (`ChannelDimension` or `str`, *optional*): The channel dimension format of the input image. If not provided, it will be inferred.
github-repos
def AddVSSProcessingOptions(self, argument_group): argument_group.add_argument( '--no_vss', '--no-vss', dest='no_vss', action='store_true', default=False, help=( 'Do not scan for Volume Shadow Snapshots (VSS). This means that ' 'Volume Shadow Snapshots (VSS) are not processed.')) argument_group.add_argument( '--vss_only', '--vss-only', dest='vss_only', action='store_true', default=False, help=( 'Do not process the current volume if Volume Shadow Snapshots ' '(VSS) have been selected.')) argument_group.add_argument( '--vss_stores', '--vss-stores', dest='vss_stores', action='store', type=str, default=None, help=( 'Define Volume Shadow Snapshots (VSS) (or stores that need to be ' 'processed. A range of stores can be defined as: "3..5". ' 'Multiple stores can be defined as: "1,3,5" (a list of comma ' 'separated values). Ranges and lists can also be combined as: ' '"1,3..5". The first store is 1. All stores can be defined as: ' '"all".'))
Adds the VSS processing options to the argument group. Args: argument_group (argparse._ArgumentGroup): argparse argument group.
juraj-google-style
def imrescale(img, scale, return_scale=False, interpolation='bilinear'): (h, w) = img.shape[:2] if isinstance(scale, (float, int)): if (scale <= 0): raise ValueError('Invalid scale {}, must be positive.'.format(scale)) scale_factor = scale elif isinstance(scale, tuple): max_long_edge = max(scale) max_short_edge = min(scale) scale_factor = min((max_long_edge / max(h, w)), (max_short_edge / min(h, w))) else: raise TypeError('Scale must be a number or tuple of int, but got {}'.format(type(scale))) new_size = _scale_size((w, h), scale_factor) rescaled_img = imresize(img, new_size, interpolation=interpolation) if return_scale: return (rescaled_img, scale_factor) else: return rescaled_img
Resize image while keeping the aspect ratio. Args: img (ndarray): The input image. scale (float or tuple[int]): The scaling factor or maximum size. If it is a float number, then the image will be rescaled by this factor, else if it is a tuple of 2 integers, then the image will be rescaled as large as possible within the scale. return_scale (bool): Whether to return the scaling factor besides the rescaled image. interpolation (str): Same as :func:`resize`. Returns: ndarray: The rescaled image.
codesearchnet
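The scale-factor arithmetic for the tuple branch, worked standalone (the 800 and 1333 edge limits are only a common illustration; the real function then calls `imresize` with the scaled size):

```python
h, w = 400, 600                              # input image height, width
max_long_edge, max_short_edge = 1333, 800    # scale given as a tuple of edge limits
scale_factor = min(max_long_edge / max(h, w), max_short_edge / min(h, w))
new_w, new_h = round(w * scale_factor), round(h * scale_factor)
print(scale_factor, (new_w, new_h))          # 2.0 (1200, 800)
```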
def setData(self, index, value, role=DTYPE_CHANGE_ROLE): if ((role != DTYPE_CHANGE_ROLE) or (not index.isValid())): return False if (not self.editable()): return False self.layoutAboutToBeChanged.emit() dtype = SupportedDtypes.dtype(value) currentDtype = np.dtype(index.data(role=DTYPE_ROLE)) if (dtype is not None): if (dtype != currentDtype): columnName = self._dataFrame.columns[index.row()] try: if (dtype == np.dtype('<M8[ns]')): if (currentDtype in SupportedDtypes.boolTypes()): raise Exception("Can't convert a boolean value into a datetime value.") self._dataFrame[columnName] = self._dataFrame[columnName].apply(pandas.to_datetime) else: self._dataFrame[columnName] = self._dataFrame[columnName].astype(dtype) self.dtypeChanged.emit(index.row(), dtype) self.layoutChanged.emit() return True except Exception: message = ('Could not change datatype %s of column %s to datatype %s' % (currentDtype, columnName, dtype)) self.changeFailed.emit(message, index, dtype) raise return False
Updates the datatype of a column. The model must be initated with a dataframe already, since valid indexes are necessary. The `value` is a translated description of the data type. The translations can be found at `qtpandas.translation.DTypeTranslator`. If a datatype can not be converted, e.g. datetime to integer, a `NotImplementedError` will be raised. Args: index (QtCore.QModelIndex): The index of the column to be changed. value (str): The description of the new datatype, e.g. `positive kleine ganze Zahl (16 Bit)`. role (Qt.ItemDataRole, optional): The role, which accesses and changes data. Defaults to `DTYPE_CHANGE_ROLE`. Raises: NotImplementedError: If an error during conversion occured. Returns: bool: `True` if the datatype could be changed, `False` if not or if the new datatype equals the old one.
codesearchnet
def drug_matches_criteria(drug: Drug, **criteria: Dict[str, bool]) -> bool: for attribute, value in criteria.items(): if getattr(drug, attribute) != value: return False return True
Determines whether a drug, passed as an instance of :class:`.Drug`, matches the specified criteria. Args: drug: a :class:`.Drug` instance criteria: ``name=value`` pairs to match against the attributes of the :class:`Drug` class. For example, you can include keyword arguments like ``antidepressant=True``.
juraj-google-style
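A self-contained illustration with a hypothetical dataclass standing in for the library's `Drug` model; the attribute names are assumptions chosen to match the docstring's example, and the matcher is an equivalent one-liner of the function above:

```python
from dataclasses import dataclass

@dataclass
class Drug:                       # hypothetical stand-in, not the library class
    name: str
    antidepressant: bool
    ssri: bool

def drug_matches_criteria(drug, **criteria):
    return all(getattr(drug, attr) == value for attr, value in criteria.items())

d = Drug("fluoxetine", antidepressant=True, ssri=True)
print(drug_matches_criteria(d, antidepressant=True))              # True
print(drug_matches_criteria(d, antidepressant=True, ssri=False))  # False
```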
def verify_cot_signatures(chain): for link in chain.links: unsigned_path = link.get_artifact_full_path('public/chain-of-trust.json') ed25519_signature_path = link.get_artifact_full_path('public/chain-of-trust.json.sig') verify_link_ed25519_cot_signature(chain, link, unsigned_path, ed25519_signature_path)
Verify the signatures of the chain of trust artifacts populated in ``download_cot``. Populate each link.cot with the chain of trust json body. Args: chain (ChainOfTrust): the chain of trust to add to. Raises: CoTError: on failure.
juraj-google-style
def dvds_upcoming(self, **kwargs): path = self._get_path('dvds_upcoming') response = self._GET(path, kwargs) self._set_attrs_to_values(response) return response
Gets the upcoming movies from the API. Args: page_limit (optional): number of movies to show per page, default=16 page (optional): results page number, default=1 country (optional): localized data for selected country, default="us" Returns: A dict representation of the JSON returned from the API.
juraj-google-style
def post_attention(self, token, x): with tf.control_dependencies([ self.previous_segment.assign(token[0]), self.previous_vals.assign(token[1]), self.previous_bias.assign(token[2]), ]): return tf.identity(x)
Called after self-attention. The memory can be updated here. Args: token: Data returned by pre_attention, which can be used to carry over state related to the current memory operation. x: a Tensor of data after self-attention and feed-forward Returns: a (possibly modified) version of the input x
juraj-google-style
def swo_set_host_buffer_size(self, buf_size): buf = ctypes.c_uint32(buf_size) res = self._dll.JLINKARM_SWO_Control(enums.JLinkSWOCommands.SET_BUFFERSIZE_HOST, ctypes.byref(buf)) if res < 0: raise errors.JLinkException(res) return None
Sets the size of the buffer used by the host to collect SWO data. Args: self (JLink): the ``JLink`` instance buf_size (int): the new size of the host buffer Returns: ``None`` Raises: JLinkException: on error
juraj-google-style
def register(self, command: str, handler: Any): if (not command.startswith('/')): command = f'/{command}' LOG.info('Registering %s to %s', command, handler) self._routes[command].append(handler)
Register a new handler for a specific slash command Args: command: Slash command handler: Callback
codesearchnet
def _ProcessMetadataFile(self, mediator, file_entry): self.processing_status = definitions.STATUS_INDICATOR_EXTRACTING self._event_extractor.ParseFileEntryMetadata(mediator, file_entry) for data_stream in file_entry.data_streams: if self._abort: break self.last_activity_timestamp = time.time() self._event_extractor.ParseMetadataFile( mediator, file_entry, data_stream.name)
Processes a metadata file. Args: mediator (ParserMediator): mediates the interactions between parsers and other components, such as storage and abort signals. file_entry (dfvfs.FileEntry): file entry of the metadata file.
juraj-google-style
def render_root_node_with_subs(root_node, subs): def rec(node, acc): if isinstance(node, e_nodes.EndOfStreamNode): pass elif isinstance(node, e_nodes.OpenStartElementNode): acc.append("<") acc.append(node.tag_name()) for child in node.children(): if isinstance(child, e_nodes.AttributeNode): acc.append(" ") acc.append(validate_name(child.attribute_name().string())) acc.append("=\"") rec(child.attribute_value(), acc) acc.append("\"") acc.append(">") for child in node.children(): rec(child, acc) acc.append("</") acc.append(validate_name(node.tag_name())) acc.append(">\n") elif isinstance(node, e_nodes.CloseStartElementNode): pass elif isinstance(node, e_nodes.CloseEmptyElementNode): pass elif isinstance(node, e_nodes.CloseElementNode): pass elif isinstance(node, e_nodes.ValueNode): acc.append(escape_value(node.children()[0].string())) elif isinstance(node, e_nodes.AttributeNode): pass elif isinstance(node, e_nodes.CDataSectionNode): acc.append("<![CDATA[") acc.append(escape_value(node.cdata())) acc.append("]]>") elif isinstance(node, e_nodes.EntityReferenceNode): acc.append(escape_value(node.entity_reference())) elif isinstance(node, e_nodes.ProcessingInstructionTargetNode): acc.append(escape_value(node.processing_instruction_target())) elif isinstance(node, e_nodes.ProcessingInstructionDataNode): acc.append(escape_value(node.string())) elif isinstance(node, e_nodes.TemplateInstanceNode): raise UnexpectedElementException("TemplateInstanceNode") elif isinstance(node, e_nodes.NormalSubstitutionNode): sub = subs[node.index()] if isinstance(sub, e_nodes.BXmlTypeNode): sub = render_root_node(sub.root()) else: sub = escape_value(sub.string()) acc.append(sub) elif isinstance(node, e_nodes.ConditionalSubstitutionNode): sub = subs[node.index()] if isinstance(sub, e_nodes.BXmlTypeNode): sub = render_root_node(sub.root()) else: sub = escape_value(sub.string()) acc.append(sub) elif isinstance(node, e_nodes.StreamStartNode): pass acc = [] for c in root_node.template().children(): rec(c, acc) return "".join(acc)
render the given root node using the given substitutions into XML. Args: root_node (e_nodes.RootNode): the node to render. subs (list[str]): the substitutions that maybe included in the XML. Returns: str: the rendered XML document.
juraj-google-style
def abs(cls, x: 'TensorFluent') -> 'TensorFluent': return cls._unary_op(x, tf.abs, tf.float32)
Returns a TensorFluent for the abs function. Args: x: The input fluent. Returns: A TensorFluent wrapping the abs function.
codesearchnet
def APFSUnlockVolume(fsapfs_volume, path_spec, key_chain): is_locked = fsapfs_volume.is_locked() if is_locked: password = key_chain.GetCredential(path_spec, 'password') if password: fsapfs_volume.set_password(password) recovery_password = key_chain.GetCredential(path_spec, 'recovery_password') if recovery_password: fsapfs_volume.set_recovery_password(recovery_password) is_locked = not fsapfs_volume.unlock() return not is_locked
Unlocks an APFS volume using the path specification. Args: fsapfs_volume (pyapfs.volume): APFS volume. path_spec (PathSpec): path specification. key_chain (KeyChain): key chain. Returns: bool: True if the volume is unlocked, False otherwise.
juraj-google-style
def download(self, streamed=False, action=None, chunk_size=1024, **kwargs): path = ('/projects/%s/export/download' % self.project_id) result = self.manager.gitlab.http_get(path, streamed=streamed, raw=True, **kwargs) return utils.response_content(result, streamed, action, chunk_size)
Download the archive of a project export. Args: streamed (bool): If True the data will be processed by chunks of `chunk_size` and each chunk is passed to `action` for treatment action (callable): Callable responsible for dealing with each chunk of data chunk_size (int): Size of each chunk **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabGetError: If the server failed to perform the request Returns: str: The blob content if streamed is False, None otherwise
codesearchnet
def committed(self, partition): assert (self.config['api_version'] >= (0, 8, 1)), 'Requires >= Kafka 0.8.1' assert (self.config['group_id'] is not None), 'Requires group_id' if (not isinstance(partition, TopicPartition)): raise TypeError('partition must be a TopicPartition namedtuple') if self._subscription.is_assigned(partition): committed = self._subscription.assignment[partition].committed if (committed is None): self._coordinator.refresh_committed_offsets_if_needed() committed = self._subscription.assignment[partition].committed else: commit_map = self._coordinator.fetch_committed_offsets([partition]) if (partition in commit_map): committed = commit_map[partition].offset else: committed = None return committed
Get the last committed offset for the given partition. This offset will be used as the position for the consumer in the event of a failure. This call may block to do a remote call if the partition in question isn't assigned to this consumer or if the consumer hasn't yet initialized its cache of committed offsets. Arguments: partition (TopicPartition): The partition to check. Returns: The last committed offset, or None if there was no prior commit.
codesearchnet
def consume_socket_output(frames, demux=False): if (demux is False): return six.binary_type().join(frames) out = [None, None] for frame in frames: assert (frame != (None, None)) if (frame[0] is not None): if (out[0] is None): out[0] = frame[0] else: out[0] += frame[0] elif (out[1] is None): out[1] = frame[1] else: out[1] += frame[1] return tuple(out)
Iterate through frames read from the socket and return the result. Args: demux (bool): If False, stdout and stderr are multiplexed, and the result is the concatenation of all the frames. If True, the streams are demultiplexed, and the result is a 2-tuple where each item is the concatenation of frames belonging to the same stream.
codesearchnet
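With `demux=True` the frames are (stdout, stderr) pairs and each stream is concatenated separately; a worked example with byte literals, where the import path is an assumption (docker-py's socket helpers):

```python
from docker.utils.socket import consume_socket_output  # assumed import path

frames = [(b"out-1 ", None), (None, b"err-1 "), (b"out-2", None)]
print(consume_socket_output(frames, demux=True))          # (b'out-1 out-2', b'err-1 ')
print(consume_socket_output([b"a", b"b"], demux=False))   # b'ab'
```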
def hide_tool(self, context_name, tool_name): data = self._context(context_name) hidden_tools = data["hidden_tools"] if tool_name not in hidden_tools: self._validate_tool(context_name, tool_name) hidden_tools.add(tool_name) self._flush_tools()
Hide a tool so that it is not exposed in the suite. Args: context_name (str): Context containing the tool. tool_name (str): Name of tool to hide.
juraj-google-style
def peek_record(self, model_class, record_id): if self._cache: return self._cache.get_record(model_class.__name__, record_id) else: return None
Return an instance of the model_class from the cache if it is present. Args: model_class (:class:`cinder_data.model.CinderModel`): A subclass of :class:`cinder_data.model.CinderModel` of your chosen model. record_id (int): The id of the record requested. Returns: :class:`cinder_data.model.CinderModel`: An instance of model_class or None.
juraj-google-style
def get_help_commands(server_prefix): datapacks = [] _dir = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__))) for module_name in os.listdir('{}/../'.format(_dir)): if ((not module_name.startswith('_')) and (not module_name.startswith('!'))): help_command = '`{}help {}`'.format(server_prefix, module_name) datapacks.append((module_name, help_command, True)) return datapacks
Get the help commands for all modules Args: server_prefix: The server command prefix Returns: datapacks (list): A list of datapacks for the help commands for all the modules
codesearchnet
def _GetNextLogCountPerToken(token): global _log_counter_per_token _log_counter_per_token[token] = 1 + _log_counter_per_token.get(token, -1) return _log_counter_per_token[token]
Wrapper for _log_counter_per_token. Args: token: The token for which to look up the count. Returns: The number of times this function has been called with *token* as an argument (starting at 0)
juraj-google-style
def _prepare_grid(self, times, grid_step): grid = tf.range(0.0, times[-1], grid_step, dtype=self._dtype) all_times = tf.concat([grid, times], axis=0) mask = tf.concat([tf.zeros_like(grid, dtype=tf.bool), tf.ones_like(times, dtype=tf.bool)], axis=0) perm = tf.argsort(all_times, stable=True) all_times = tf.gather(all_times, perm) mask = tf.gather(mask, perm) return (all_times, mask)
Prepares grid of times for path generation. Args: times: Rank 1 `Tensor` of increasing positive real values. The times at which the path points are to be evaluated. grid_step: Rank 0 real `Tensor`. Maximal distance between points in the resulting grid. Returns: Tuple `(all_times, mask)`. `all_times` is a 1-D real `Tensor` containing all points from `times` and whose intervals are at most `grid_step`. `mask` is a boolean 1-D tensor of the same shape as `all_times`, showing which elements of `all_times` correspond to values from `times`. Guarantees that all_times[0] = 0 and mask[0] = False. `all_times` is sorted ascending and may contain duplicates.
github-repos
def update_score_summary(sender, **kwargs): score = kwargs['instance'] try: score_summary = ScoreSummary.objects.get(student_item=score.student_item) score_summary.latest = score if score.reset: score_summary.highest = score elif (score.to_float() > score_summary.highest.to_float()): score_summary.highest = score score_summary.save() except ScoreSummary.DoesNotExist: ScoreSummary.objects.create(student_item=score.student_item, highest=score, latest=score) except DatabaseError as err: logger.exception(u'Error while updating score summary for student item {}'.format(score.student_item))
Listen for new Scores and update the relevant ScoreSummary. Args: sender: not used Kwargs: instance (Score): The score model whose save triggered this receiver.
codesearchnet
def set_size(self, width, height): if (width is not None): try: width = to_pix(int(width)) except ValueError: pass self.style['width'] = width if (height is not None): try: height = to_pix(int(height)) except ValueError: pass self.style['height'] = height
Set the widget size. Args: width (int or str): An optional width for the widget (e.g. width=10 or width='10px' or width='10%'). height (int or str): An optional height for the widget (e.g. height=10 or height='10px' or height='10%').
codesearchnet
def __init__(self, params=None, connection_string=None): if params is None and connection_string is None: raise RuntimeError("Please provide either 'params' or 'connection_string'") if params is not None and connection_string is not None: raise RuntimeError("Please provide only one of 'params' or 'connection_string'") if params is not None: connection_string_no_pw = self.get_connection_string(params=params, hide_password=True) config.logger.info("Client connecting to: " + connection_string_no_pw) connection_string = self.get_connection_string(params=params, hide_password=False) else: config.logger.info("Client connecting to: " + connection_string) self.engine = sa.create_engine(connection_string) if connection_string.startswith('sqlite://'): def on_connect(conn, _): conn.execute('pragma foreign_keys=ON') from sqlalchemy import event event.listen(self.engine, 'connect', on_connect) self.session_maker = orm.sessionmaker(bind=self.get_engine())
Instantiate a client object A client can be configured either from a parameters dictionary ``params`` or directly from an :mod:`sqlalchemy` connection string ``connection_string``. Exactly one of the two must be provided. Args: params (dict): database configuration, as defined in :mod:`ozelot.config` connection_string (str): :mod:`sqlalchemy` connection string
juraj-google-style
def create(cls, tx_signers, recipients, metadata=None, asset=None): (inputs, outputs) = cls.validate_create(tx_signers, recipients, asset, metadata) return cls(cls.CREATE, {'data': asset}, inputs, outputs, metadata)
A simple way to generate a `CREATE` transaction. Note: This method currently supports the following Cryptoconditions use cases: - Ed25519 - ThresholdSha256 Additionally, it provides support for the following BigchainDB use cases: - Multiple inputs and outputs. Args: tx_signers (:obj:`list` of :obj:`str`): A list of keys that represent the signers of the CREATE Transaction. recipients (:obj:`list` of :obj:`tuple`): A list of ([keys],amount) that represent the recipients of this Transaction. metadata (dict): The metadata to be stored along with the Transaction. asset (dict): The metadata associated with the asset that will be created in this Transaction. Returns: :class:`~bigchaindb.common.transaction.Transaction`
codesearchnet
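A hedged usage sketch following the usual BigchainDB pattern; the key-pair helper import is an assumption, and the asset/metadata payloads are illustrative:
from bigchaindb.common.crypto import generate_key_pair   # assumed helper

alice = generate_key_pair()
tx = Transaction.create(
    tx_signers=[alice.public_key],
    recipients=[([alice.public_key], 1)],
    asset={'serial_number': 'ABC123'},        # illustrative asset payload
    metadata={'note': 'created for testing'},
)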
def list_tags():
    codes = _AutoCodes()
    grouped = set([(k, '/{0}'.format(k), codes[k], codes['/{0}'.format(k)]) for k in codes if not k.startswith('/')])
    found = [c for r in grouped for c in r[:2]]
    missing = set([('', r[0], None, r[1]) if r[0].startswith('/') else (r[0], '', r[1], None)
                   for r in _AutoCodes().items() if r[0] not in found])
    grouped |= missing
    payload = sorted([i for i in grouped if i[2] is None], key=lambda x: x[3])
    grouped -= set(payload)
    payload.extend(sorted([i for i in grouped if i[2] < 10], key=lambda x: x[2]))
    grouped -= set(payload)
    payload.extend(sorted([i for i in grouped if i[0].startswith('auto')], key=lambda x: x[2]))
    grouped -= set(payload)
    payload.extend(sorted([i for i in grouped if not i[0].startswith('hi')], key=lambda x: x[2]))
    grouped -= set(payload)
    payload.extend(sorted(grouped, key=lambda x: x[2]))
    return tuple(payload)
Lists the available tags. Returns: Tuple of tuples. Child tuples are four items: ('opening tag', 'closing tag', main ansi value, closing ansi value).
codesearchnet
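A small usage sketch that prints the tag overview; the output format is illustrative:
for opening, closing, open_code, close_code in list_tags():
    print('{%s} ... {%s}  (codes: %s / %s)' % (opening, closing, open_code, close_code))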
def join_pretty_tensors(tensors, output, join_function=None, name='join'):
    if not tensors:
        raise ValueError('pretty_tensors must be a non-empty sequence.')
    with output.g.name_scope(name):
        if join_function is None:
            last_dim = len(tensors[0].shape) - 1
            return output.with_tensor(tf.concat(tensors, last_dim))
        else:
            return output.with_tensor(join_function(tensors))
Joins the list of pretty_tensors and sets head of output_pretty_tensor. Args: tensors: A sequence of Layers or SequentialLayerBuilders to join. output: A pretty_tensor to set the head with the result. join_function: A function to join the tensors, defaults to concat on the last dimension. name: A name that is used for the name_scope Returns: The result of calling with_tensor on output Raises: ValueError: if pretty_tensors is None or empty.
codesearchnet
def setup(docker_mount=None, force=False):
    if not is_ubuntu() and not is_boot2docker():
        raise Exception('Head In The Clouds Docker is only supported on Ubuntu')
    if os.path.exists('dot_dockercfg') and not fabric.contrib.files.exists('~/.dockercfg'):
        put('dot_dockercfg', '~/.dockercfg')
    if not fabric.contrib.files.exists('~/.ssh/id_rsa'):
        fab.run('ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa')
    if docker_is_installed() and not force:
        return
    for attempt in range(3):
        sudo('wget -qO- https://get.docker.io/gpg | apt-key add -')
        sudo('sh -c "echo deb http://get.docker.io/ubuntu docker main > /etc/apt/sources.list.d/docker.list"')
        with settings(warn_only=True):
            sudo('apt-get update')
            failed = sudo('apt-get install -y lxc-docker sshpass curl').failed
        if not failed:
            break
    if docker_mount:
        create_docker_mount(docker_mount)
Prepare a vanilla server by installing docker, curl, and sshpass. If a file called ``dot_dockercfg`` exists in the current working directory, it is uploaded as ``~/.dockercfg``. Args: * docker_mount=None: Partition that will be mounted as /var/lib/docker * force=False: Reinstall docker even if it already appears to be installed
juraj-google-style
def label_durations(self, label_list_ids=None):
    duration = collections.defaultdict(int)
    for utterance in self.utterances.values():
        for label_value, utt_count in utterance.label_total_duration(label_list_ids=label_list_ids).items():
            duration[label_value] += utt_count
    return duration
Return a dictionary mapping every label-value occurring in this corpus to its total duration. Args: label_list_ids (list): If not None, only labels from label-lists with an id contained in this list are considered. Returns: dict: A dictionary containing the total duration, keyed by label-value.
juraj-google-style
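A usage sketch; `corpus` is assumed to be an already-loaded audiomate-style corpus object exposing this method:
for label_value, seconds in sorted(corpus.label_durations().items()):
    print('%-20s %8.2f s' % (label_value, seconds))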
def ReadClientFullInfo(self, client_id):
    result = self.MultiReadClientFullInfo([client_id])
    try:
        return result[client_id]
    except KeyError:
        raise UnknownClientError(client_id)
Reads full client information for a single client. Args: client_id: A GRR client id string, e.g. "C.ea3b2b71840d6fa7". Returns: A `ClientFullInfo` instance for given client. Raises: UnknownClientError: if no client with such id was found.
juraj-google-style
def convert_x_www_form_urlencoded_to_dict(post_data):
    if isinstance(post_data, str):
        converted_dict = {}
        for k_v in post_data.split('&'):
            try:
                (key, value) = k_v.split('=')
            except ValueError:
                raise Exception('Invalid x_www_form_urlencoded data format: {}'.format(post_data))
            converted_dict[key] = unquote(value)
        return converted_dict
    else:
        return post_data
Convert x_www_form_urlencoded data to a dict. If `post_data` is not a string, it is returned unchanged. Args: post_data (str): e.g. "a=1&b=2" Returns: dict: {"a": "1", "b": "2"} (values are URL-decoded strings)
codesearchnet
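For example (note that values stay strings after URL-decoding):
convert_x_www_form_urlencoded_to_dict('a=1&b=hello%20world')
# -> {'a': '1', 'b': 'hello world'}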
def _read_config(filename):
    parser = configparser.RawConfigParser()
    if filename and not parser.read(filename):
        sys.stderr.write("Unable to open configuration file %s. Use --config='' to disable this warning.\n" % filename)
    config = {}
    for (section, defaults) in BASE_CONFIG.items():
        if section == 'patterns':
            continue
        for (name, descr) in defaults.items():
            (kind, default) = descr
            if section in parser.sections() and name in parser.options(section):
                if kind == 'int':
                    value = parser.getint(section, name)
                elif kind == 'float':
                    value = parser.getfloat(section, name)
                elif kind == 'bool':
                    value = parser.getboolean(section, name)
                else:
                    value = parser.get(section, name)
            else:
                value = default
            config[name] = value
    if 'patterns' in parser.sections():
        patterns = [parser.get('patterns', opt) for opt in parser.options('patterns')]
    else:
        patterns = DEFAULT_PATTERNS
    config['patterns'] = patterns
    return config
Read configuration from the given file. Parsing is performed through the configparser library. Returns: dict: a flattened dict of (option_name, value), using defaults.
codesearchnet
def remove_site(self):
    params = dict(oxd_id=self.oxd_id)
    logger.debug('Sending command `remove_site` with params %s', params)
    response = self.msgr.request('remove_site', **params)
    logger.debug('Received response: %s', response)
    if response['status'] == 'error':
        raise OxdServerError(response['data'])
    return response['data']['oxd_id']
Cleans up the data for the site. Returns: oxd_id if the process was completed without error Raises: OxdServerError if there was an issue with the operation
codesearchnet
def plot_brillouin_zone_from_kpath(kpath, ax=None, **kwargs):
    lines = [[kpath.kpath['kpoints'][k] for k in p] for p in kpath.kpath['path']]
    return plot_brillouin_zone(bz_lattice=kpath.prim_rec, lines=lines, ax=ax,
                               labels=kpath.kpath['kpoints'], **kwargs)
Gives the plot (as a matplotlib object) of the symmetry line path in the Brillouin Zone. Args: kpath (HighSymmKpath): a HighSymmKPath object ax: matplotlib :class:`Axes` or None if a new figure should be created. **kwargs: provided by add_fig_kwargs decorator Returns: matplotlib figure
juraj-google-style
def get_model(servoid):
    data = []
    data.append(9)
    data.append(servoid)
    data.append(EEP_READ_REQ)
    data.append(MODEL_NO1_EEP)
    data.append(BYTE1)
    send_data(data)
    rxdata = []
    try:
        rxdata = SERPORT.read(12)
        return ord(rxdata[9]) & 255
    except:
        raise HerkulexError('could not communicate with motors')
Get the servo model This function gets the model of the herkules servo, provided its id Args: servoid(int): the id of the servo Returns: int: an integer corresponding to the model number 0x06 for DRS-602 0x04 for DRS-402 0x02 for DRS-202
codesearchnet
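A usage sketch; it assumes the serial port (SERPORT) is already open and that a servo with id 1 is attached (the id is an assumption):
model = get_model(1)
if model == 0x06:
    print('DRS-602 detected')
elif model == 0x04:
    print('DRS-402 detected')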
def Delete(self, request, global_params=None):
    config = self.GetMethodConfig('Delete')
    return self._RunMethod(config, request, global_params=global_params)
Deletes a `WorkerPool`. Args: request: (CloudbuildProjectsLocationsWorkerPoolsDeleteRequest) input message global_params: (StandardQueryParameters, default: None) global arguments Returns: (Operation) The response message.
github-repos
def delete_vmss(access_token, subscription_id, resource_group, vmss_name):
    endpoint = ''.join([get_rm_endpoint(),
                        '/subscriptions/', subscription_id,
                        '/resourceGroups/', resource_group,
                        '/providers/Microsoft.Compute/virtualMachineScaleSets/', vmss_name,
                        '?api-version=', COMP_API])
    return do_delete(endpoint, access_token)
Delete a virtual machine scale set. Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. resource_group (str): Azure resource group name. vmss_name (str): Name of the virtual machine scale set. Returns: HTTP response.
codesearchnet
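A usage sketch; the token would normally come from the library's authentication helpers, and the names below are placeholders:
response = delete_vmss(access_token, 'my-subscription-id', 'my-resource-group', 'my-vmss')
print(response.status_code)   # assumes a requests-style HTTP response object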
def contains(self, value, equality_comparer=operator.eq):
    if self.closed():
        raise ValueError('Attempt to call contains() on a closed Queryable.')
    if not is_callable(equality_comparer):
        raise TypeError('contains() parameter equality_comparer={0} is not callable'.format(repr(equality_comparer)))
    if equality_comparer is operator.eq:
        return value in self._iterable
    for item in self:
        if equality_comparer(value, item):
            return True
    return False
Determines whether the sequence contains a particular value. Execution is immediate. Depending on the type of the sequence, all or none of the sequence may be consumed by this operation. Note: This method uses immediate execution. Args: value: The value to test for membership of the sequence equality_comparer: An optional binary predicate used to compare value against each item; defaults to operator.eq. Returns: True if value is in the sequence, otherwise False. Raises: ValueError: If the Queryable has been closed. TypeError: If equality_comparer is not callable.
codesearchnet
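A usage sketch with a custom comparer; the asq-style `query()` initiator is an assumption:
from asq.initiators import query   # assumed entry point

names = query(['Alice', 'Bob', 'Carol'])
names.contains('bob', equality_comparer=lambda a, b: a.lower() == b.lower())   # True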
def sg_summary_gradient(tensor, gradient, prefix=None, name=None):
    prefix = '' if prefix is None else prefix + '/'
    name = (prefix + _pretty_name(tensor)) if name is None else (prefix + name)
    _scalar(name + '/grad', tf.reduce_mean(tf.abs(gradient)))
    _histogram(name + '/grad-h', tf.abs(gradient))
Register `tensor` to summary report as `gradient` Args: tensor: A `Tensor` to log as gradient gradient: A 0-D `Tensor`. A gradient to log prefix: A `string`. A prefix to display in the tensor board web UI. name: A `string`. A name to display in the tensor board web UI. Returns: None
codesearchnet
def multi_label_train_test_split(y, test_size=0.2):
    if test_size <= 0 or test_size >= 1:
        raise ValueError("`test_size` should be between 0 and 1")
    frac = Fraction(test_size).limit_denominator()
    test_folds, total_folds = frac.numerator, frac.denominator
    logger.warn('Inferring test_size as {}/{}. Generating {} folds. The algorithm might fail if denominator is large.'
                .format(test_folds, total_folds, total_folds))
    folds = equal_distribution_folds(y, folds=total_folds)
    test_indices = np.concatenate(folds[:test_folds])
    train_indices = np.concatenate(folds[test_folds:])
    return train_indices, test_indices
Creates a test split with roughly the same multi-label distribution in `y`. Args: y: The multi-label outputs. test_size: The test size, strictly between 0 and 1. Returns: The train and test indices.
juraj-google-style
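A usage sketch with a toy multi-label matrix; `X` is assumed to hold the matching inputs:
import numpy as np

y = np.random.randint(0, 2, size=(100, 4))   # 100 samples, 4 binary labels
train_idx, test_idx = multi_label_train_test_split(y, test_size=0.2)
X_train, X_test = X[train_idx], X[test_idx]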
def set_icon_file(self, filename, rel='icon'):
    (mimetype, encoding) = mimetypes.guess_type(filename)
    self.add_child('favicon', '<link rel="%s" href="%s" type="%s" />' % (rel, filename, mimetype))
Allows defining an icon for the App. Args: filename (str): the resource file name (e.g. "/res:myicon.png") rel (str): leave it unchanged (standard "icon")
codesearchnet
def get_policy(observations, hparams, action_space):
    if not isinstance(action_space, gym.spaces.Discrete):
        raise ValueError('Expecting discrete action space.')
    obs_shape = common_layers.shape_list(observations)
    (frame_height, frame_width) = obs_shape[2:4]
    if hparams.policy_problem_name == 'dummy_policy_problem_ttt':
        tf.logging.info('Using DummyPolicyProblemTTT for the policy.')
        policy_problem = tic_tac_toe_env.DummyPolicyProblemTTT()
    else:
        tf.logging.info('Using DummyPolicyProblem for the policy.')
        policy_problem = DummyPolicyProblem(action_space, frame_height, frame_width)
    trainer_lib.add_problem_hparams(hparams, policy_problem)
    hparams.force_full_predict = True
    model = registry.model(hparams.policy_network)(hparams, tf.estimator.ModeKeys.TRAIN)
    try:
        num_target_frames = hparams.video_num_target_frames
    except AttributeError:
        num_target_frames = 1
    features = {
        'inputs': observations,
        'input_action': tf.zeros(obs_shape[:2] + [1], dtype=tf.int32),
        'input_reward': tf.zeros(obs_shape[:2] + [1], dtype=tf.int32),
        'targets': tf.zeros(obs_shape[:1] + [num_target_frames] + obs_shape[2:]),
        'target_action': tf.zeros(obs_shape[:1] + [num_target_frames, 1], dtype=tf.int32),
        'target_reward': tf.zeros(obs_shape[:1] + [num_target_frames, 1], dtype=tf.int32),
        'target_policy': tf.zeros(obs_shape[:1] + [num_target_frames] + [action_space.n]),
        'target_value': tf.zeros(obs_shape[:1] + [num_target_frames]),
    }
    with tf.variable_scope(tf.get_variable_scope(), reuse=tf.AUTO_REUSE):
        t2t_model.create_dummy_vars()
        (targets, _) = model(features)
    return (targets['target_policy'][:, 0, :], targets['target_value'][:, 0])
Get a policy network. Args: observations: observations hparams: parameters action_space: action space Returns: Tuple (action logits, value).
codesearchnet
def __init__(self, api_login, api_key):
    self.login = api_login
    self.key = api_key
    self.api_url = self.api_base_url.format(api_version=self.api_version)
Initializes OpenLoad instance with given parameters and formats api base url. Args: api_login (str): API Login found in openload.co api_key (str): API Key found in openload.co Returns: None
juraj-google-style
def match_any(patterns, name):
    if not patterns:
        return True
    return any(match(pattern, name) for pattern in patterns)
Test if a name matches any of a list of patterns. Will return `True` if ``patterns`` is an empty list. Arguments: patterns (list): A list of wildcard pattern, e.g ``["*.py", "*.pyc"]`` name (str): A filename. Returns: bool: `True` if the name matches at least one of the patterns.
codesearchnet
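For example, these results follow directly from the code above:
match_any(['*.py', '*.pyc'], 'module.py')    # True
match_any(['*.py', '*.pyc'], 'README.md')    # False
match_any([], 'anything')                    # True: an empty pattern list matches everything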
def cas(self, key, value, cas, expire=0, noreply=False):
    return self._store_cmd(b'cas', {key: value}, expire, noreply, cas)[key]
The memcached "cas" command. Args: key: str, see class docs for details. value: str, see class docs for details. cas: int or str that only contains the characters '0'-'9'. expire: optional int, number of seconds until the item is expired from the cache, or zero for no expiry (the default). noreply: optional bool, False to wait for the reply (the default). Returns: If noreply is True, always returns True. Otherwise returns None if the key didn't exist, False if it existed but had a different cas value and True if it existed and was changed.
codesearchnet
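A sketch of the usual gets/cas retry loop; it assumes a pymemcache-style client that also exposes `gets()` and that the key already exists:
while True:
    value, token = client.gets('counter')
    if client.cas('counter', str(int(value) + 1), token):
        break   # stored; False means another writer changed the key first, so retry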