Dataset columns: code (string, 20–4.93k chars) · docstring (string, 33–1.27k chars) · source (string, 3 classes)
@classmethod
def from_dict(cls, metadata):
    hyperparameters = metadata.get('hyperparameters')
    tunable = metadata.get('tunable_hyperparameters')
    pipeline = cls(
        metadata['primitives'],
        metadata.get('init_params'),
        metadata.get('input_names'),
        metadata.get('output_names'),
    )
    if hyperparameters:
        pipeline.set_hyperparameters(hyperparameters)
    if tunable is not None:
        pipeline._tunable_hyperparameters = tunable
    return pipeline
Create a new MLPipeline from a dict specification.

The dict structure is the same as the one created by the `to_dict` method.

Args:
    metadata (dict): Dictionary containing the pipeline specification.

Returns:
    MLPipeline: A new MLPipeline instance with the details found in the
        given specification dictionary.
juraj-google-style
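A hypothetical round trip for the classmethod above, assuming the surrounding MLPipeline class (as in mlblocks) whose `to_dict` emits the same structure; the spec keys and primitive name here are illustrative:

spec = {
    'primitives': ['sklearn.preprocessing.StandardScaler'],  # hypothetical primitive
    'hyperparameters': {},
}
pipeline = MLPipeline.from_dict(spec)
# round trip: the restored pipeline should describe the same primitives
assert pipeline.to_dict()['primitives'] == spec['primitives']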
def parse(self, sentence, params=None, headers=None):
    if params is None:
        params = {}
    params['input'] = sentence
    hdrs = {'Accept': 'application/json'}
    if headers is not None:
        hdrs.update(headers)
    url = urljoin(self.server, 'parse')
    r = requests.get(url, params=params, headers=hdrs)
    if r.status_code == 200:
        return _RestResponse(r.json())
    else:
        r.raise_for_status()
Request a parse of *sentence* and return the response.

Args:
    sentence (str): sentence to be parsed
    params (dict): a dictionary of request parameters
    headers (dict): a dictionary of additional request headers

Returns:
    A ParseResponse containing the results, if the request was successful.

Raises:
    requests.HTTPError: if the status code was not 200
juraj-google-style
def search_groups(self, group):
    group_url = '%s/%s/%s' % (self.url, 'group', group)
    response = self.jss.get(group_url)
    return LDAPGroupsResults(self.jss, response)
Search for LDAP groups.

Args:
    group: Group to search for. It is not entirely clear how the JSS
        determines the results -- are regexes allowed, or globbing?

Returns:
    LDAPGroupsResults object.

Raises:
    JSSGetError if no results are found.
codesearchnet
def _ExtractWindowingInfo(pcoll, fields: Optional[Union[Mapping[str, str], Iterable[str]]] = None):
    if fields is None:
        fields = ['timestamp', 'window_start', 'window_end']
    if not isinstance(fields, Mapping):
        if isinstance(fields, Iterable) and not isinstance(fields, str):
            fields = {fld: fld for fld in fields}
        else:
            raise TypeError(f'Fields must be a mapping or iterable of strings, got {fields}')
    existing_fields = named_fields_from_element_type(pcoll.element_type)
    new_fields = []
    for field, value in fields.items():
        if value not in _WINDOWING_INFO_TYPES:
            raise ValueError(
                f'{value} is not a valid windowing parameter; '
                f'must be one of {list(_WINDOWING_INFO_TYPES.keys())}')
        elif field in existing_fields:
            raise ValueError(f'Input schema already has a field named {field}.')
        else:
            new_fields.append((field, _WINDOWING_INFO_TYPES[value]))

    def augment_row(row,
                    timestamp=beam.DoFn.TimestampParam,
                    window=beam.DoFn.WindowParam,
                    pane_info=beam.DoFn.PaneInfoParam):
        as_dict = row._asdict()
        for field, value in fields.items():
            as_dict[field] = _WINDOWING_INFO_EXTRACTORS[value](locals())
        return beam.Row(**as_dict)

    return pcoll | beam.Map(augment_row).with_output_types(
        row_type.RowTypeConstraint.from_fields(existing_fields + new_fields))
Extracts the implicit windowing information from an element and makes it
explicit as field(s) in the element itself.

The following windowing parameter values are supported:

* `timestamp`: The event timestamp of the current element.
* `window_start`: The start of the window iff it is an interval window.
* `window_end`: The (exclusive) end of the window.
* `window_string`: The string representation of the window.
* `window_type`: The type of the window as a string.
* `window_object`: The actual window object itself, as a Java or Python object.
* `pane_info`: A schema'd representation of the current pane info, including
  its index, whether it was the last firing, etc.

As a convenience, a list rather than a mapping of fields may be provided,
in which case the fields will be named according to the requested values.

Args:
    fields: A mapping of new field names to various windowing parameters,
        as documented above. If omitted, defaults to
        `[timestamp, window_start, window_end]`.
github-repos
def activate(self, user):
    org_user = self.organization.add_user(user, **self.activation_kwargs())
    self.invitee = user
    self.save()
    return org_user
Updates the `invitee` value and saves the instance.

Provided as a way of extending the behavior.

Args:
    user: the newly created user

Returns:
    the linking organization user
codesearchnet
def _set_graph_parents(self, graph_parents):
    graph_parents = [] if graph_parents is None else graph_parents
    for i, t in enumerate(graph_parents):
        if t is None or not (linear_operator_util.is_ref(t) or tensor_util.is_tf_type(t)):
            raise ValueError('Graph parent item %d is not a Tensor; %s.' % (i, t))
    self._graph_parents = graph_parents
Set self._graph_parents. Called during derived class init.

This method allows derived classes to set graph_parents, without triggering
a deprecation warning (which is invoked if `graph_parents` is passed during
`__init__`).

Args:
    graph_parents: Iterable over Tensors.
github-repos
def severity_level(self, value):
    if value == self._defaults['severityLevel'] and 'severityLevel' in self._values:
        del self._values['severityLevel']
    else:
        self._values['severityLevel'] = value
The severity_level property.

Args:
    value (int): The property value.
juraj-google-style
def post_process(self, outputs, target_sizes):
    warnings.warn(
        '`post_process` is deprecated and will be removed in v5 of Transformers, please use'
        ' `post_process_object_detection` instead, with `threshold=0.` for equivalent results.',
        FutureWarning,
    )
    logits, boxes = outputs.logits, outputs.pred_boxes
    if len(logits) != len(target_sizes):
        raise ValueError('Make sure that you pass in as many target sizes as the batch dimension of the logits')
    if target_sizes.shape[1] != 2:
        raise ValueError('Each element of target_sizes must contain the size (h, w) of each image of the batch')
    probs = torch.max(logits, dim=-1)
    scores = torch.sigmoid(probs.values)
    labels = probs.indices
    boxes = center_to_corners_format(boxes)
    img_h, img_w = target_sizes.unbind(1)
    scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1).to(boxes.device)
    boxes = boxes * scale_fct[:, None, :]
    results = [{'scores': s, 'labels': l, 'boxes': b} for s, l, b in zip(scores, labels, boxes)]
    return results
Converts the raw output of [`OwlViTForObjectDetection`] into final bounding
boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format.

Args:
    outputs ([`OwlViTObjectDetectionOutput`]):
        Raw outputs of the model.
    target_sizes (`torch.Tensor` of shape `(batch_size, 2)`):
        Tensor containing the size (h, w) of each image of the batch. For
        evaluation, this must be the original image size (before any data
        augmentation). For visualization, this should be the image size
        after data augmentation, but before padding.

Returns:
    `List[Dict]`: A list of dictionaries, each dictionary containing the
    scores, labels and boxes for an image in the batch as predicted by
    the model.
github-repos
def registration_backend(backend=None, namespace=None):
    backend = backend or ORGS_REGISTRATION_BACKEND
    class_module, class_name = backend.rsplit('.', 1)
    mod = import_module(class_module)
    return getattr(mod, class_name)(namespace=namespace)
Returns a specified registration backend.

Args:
    backend: dotted path to the registration backend class
    namespace: URL namespace to use

Returns:
    an instance of a RegistrationBackend
codesearchnet
def ratio_split(amount, ratios):
    ratio_total = sum(ratios)
    divided_value = amount / ratio_total
    values = []
    for ratio in ratios:
        value = divided_value * ratio
        values.append(value)
    rounded = [v.quantize(Decimal('0.01')) for v in values]
    remainders = [v - rounded[i] for i, v in enumerate(values)]
    remainder = sum(remainders)
    rounded[-1] = (rounded[-1] + remainder).quantize(Decimal('0.01'))
    assert sum(rounded) == amount
    return rounded
Split `amount` according to the ratios specified in `ratios`.

This is special in that it ensures the returned values always sum to
`amount` (i.e. we avoid losses or gains due to rounding errors). As a
result, this method returns a list of `Decimal` values with length equal
to that of `ratios`.

Examples:

    .. code-block:: python

        >>> from hordak.utilities.money import ratio_split
        >>> from decimal import Decimal
        >>> ratio_split(Decimal('10'), [Decimal('1'), Decimal('2')])
        [Decimal('3.33'), Decimal('6.67')]

    Note the returned values sum to the original input of ``10``. If we
    were to do this calculation in a naive fashion then the returned values
    would likely be ``3.33`` and ``6.66``, which would sum to ``9.99``,
    thereby losing ``0.01``.

Args:
    amount (Decimal): The amount to be split
    ratios (list[Decimal]): The ratios that will determine the split

Returns:
    list(Decimal)
codesearchnet
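A quick check of the remainder-correction behaviour described above; this is pure Decimal arithmetic with no extra dependencies:

from decimal import Decimal

parts = ratio_split(Decimal('100.00'), [Decimal('1'), Decimal('1'), Decimal('1')])
print(parts)                             # [Decimal('33.33'), Decimal('33.33'), Decimal('33.34')]
assert sum(parts) == Decimal('100.00')   # the rounding remainder is folded into the last part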
def _check_boolean(parameter_name, value, parameter_config):
    if parameter_config.get('type') != 'boolean':
        return
    if value.lower() not in ('1', 'true', '0', 'false'):
        raise errors.BasicTypeParameterError(parameter_name, value, 'boolean')
Checks if a boolean value is valid.

This is called by the transform_parameter_value function and shouldn't be
called directly.

This checks that the string value passed in can be converted to a valid
boolean value.

Args:
    parameter_name: A string containing the name of the parameter, which
        is either just a variable name or the name with the index appended.
        For example 'var' or 'var[2]'.
    value: A string containing the value passed in for the parameter.
    parameter_config: The dictionary containing information specific to the
        parameter in question. This is retrieved from request.parameters in
        the method config.

Raises:
    BasicTypeParameterError: If the given value is not a valid boolean value.
codesearchnet
def make_choice_type_function(choices: list) -> Callable[[str], Any]:
    str_to_choice = {str(choice): choice for choice in choices}
    return lambda arg: str_to_choice.get(arg, arg)
Creates a mapping function from each choice's string representation to the
actual value. Used to support multiple value types for a single argument.

Args:
    choices (list): List of choices.

Returns:
    Callable[[str], Any]: Mapping function from string representation to
    actual value for each choice.
github-repos
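A usage sketch for the mapping above; note that unknown strings fall through unchanged because of the `.get(arg, arg)` default:

parse_level = make_choice_type_function([1, 2, 3])
assert parse_level('2') == 2      # mapped back to the int choice
assert parse_level('7') == '7'    # unknown value passes through as the raw string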
def find_vulnerabilities( cfg_list, blackbox_mapping_file, sources_and_sinks_file, interactive=False, nosec_lines=defaultdict(set) ): vulnerabilities = list() definitions = parse(sources_and_sinks_file) with open(blackbox_mapping_file) as infile: blackbox_mapping = json.load(infile) for cfg in cfg_list: find_vulnerabilities_in_cfg( cfg, definitions, Lattice(cfg.nodes), blackbox_mapping, vulnerabilities, interactive, nosec_lines ) if interactive: with open(blackbox_mapping_file, 'w') as outfile: json.dump(blackbox_mapping, outfile, indent=4) return vulnerabilities
Find vulnerabilities in a list of CFGs, using the given source and sink definitions.

Args:
    cfg_list(list[CFG]): the list of CFGs to scan.
    blackbox_mapping_file(str): path to the blackbox mapping file.
    sources_and_sinks_file(str): path to the sources and sinks definitions.
    interactive(bool): determines if we ask the user about blackbox
        functions not in the mapping file.

Returns:
    A list of vulnerabilities.
juraj-google-style
def _get_nadir_pixel(earth_mask, sector):
    if sector == FULL_DISC:
        logger.debug('Computing nadir pixel')
        rmin, rmax, cmin, cmax = bbox(earth_mask)
        # Nadir is taken to be the centre of the earth disc's bounding box.
        nadir_row = rmin + (rmax - rmin) // 2
        nadir_col = cmin + (cmax - cmin) // 2
        return nadir_row, nadir_col
    return None, None
Find the nadir pixel.

Args:
    earth_mask: Mask identifying earth and space pixels
    sector: Specifies the scanned sector

Returns:
    nadir row, nadir column
juraj-google-style
def export(self, path, variables_saver=None):
    proto = saved_model_pb2.SavedModel()
    proto.CopyFrom(self._proto)
    assets_map = _make_assets_key_collection(proto, path)
    self._save_all_assets(path, assets_map)
    self._save_variables(path, variables_saver)
    self._save_proto(path, proto)
Exports to SavedModel directory.

Args:
    path: path where to export the SavedModel to.
    variables_saver: lambda that receives a directory path where to
        export checkpoints of variables.
juraj-google-style
def _shape_tuple(self) -> NoReturn:
    raise NotImplementedError()
The shape of this Tensor, as a tuple.

This is more performant than tuple(shape().as_list()) as it avoids two
list and one object creation. Marked private for now as from an API
perspective, it would be better to have a single performant way of getting
a shape rather than exposing shape() and shape_tuple() (and heaven forbid,
shape_list() etc. as well!). Punting on that for now, but ideally one
would work things out and remove the need for this method.

Returns:
    tuple with the shape.
github-repos
def cast(self, value, cast_context) -> Any:
    del cast_context
    assert value == self.placeholder_value(PlaceholderContext()), \
        f'Can not cast {value!r} to type {self!r}'
    return value
Cast value to this type.

Args:
    value: An input value belonging to this TraceType.
    cast_context: A context reserved for internal/future usage.

Returns:
    The value casted to this TraceType.

Raises:
    AssertionError: When _cast is not overloaded in subclass, the value is
        returned directly, and it should be the same to
        self.placeholder_value().
github-repos
def cuts_connections(self, a, b):
    n = max(self.indices) + 1
    return self.cut_matrix(n)[np.ix_(a, b)].any()
Check if this cut severs any connections from ``a`` to ``b``.

Args:
    a (tuple[int]): A set of nodes.
    b (tuple[int]): A set of nodes.

Returns:
    bool: Whether any connection from ``a`` to ``b`` is severed.
codesearchnet
def __init__(self, get_media_files_func, media_cls, extra_files):
    self._get_media_files_func = get_media_files_func
    self._media_cls = media_cls
    self._extra_files = extra_files
Initialize the property.

Args:
    get_media_files_func (callable): The function to call to generate the
        media files.
    media_cls (type): The Media class owning the property.
    extra_files (object): Files listed in the original ``css`` or ``js``
        attribute on the Media class.
juraj-google-style
def ensure_dir(path):
    os.makedirs(os.path.abspath(os.path.dirname(path)), exist_ok=True)
Create all parent directories of path if they don't exist.

Args:
    path: Path-like object. Create parent dirs to this path.

Returns:
    None.
codesearchnet
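For example (the path here is illustrative), only the parent directories are created; the file itself is untouched:

ensure_dir('/tmp/project/output/results.csv')
# /tmp/project/output/ now exists; results.csv is not created.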
def get_street_from_xy(self, **kwargs):
    params = {
        'coordinateX': kwargs.get('longitude'),
        'coordinateY': kwargs.get('latitude'),
        'Radius': kwargs.get('radius'),
        'cultureInfo': util.language_code(kwargs.get('lang')),
    }
    result = self.make_request('geo', 'get_street_from_xy', **params)
    if not util.check_result(result, 'site'):
        return False, 'UNKNOWN ERROR'
    values = util.response_list(result, 'site')
    return True, [emtype.Street(**a) for a in values]
Obtain a list of streets around the specified point.

Args:
    latitude (double): Latitude in decimal degrees.
    longitude (double): Longitude in decimal degrees.
    radius (int): Radius (in meters) of the search.
    lang (str): Language code (*es* or *en*).

Returns:
    Status boolean and parsed response (list[Street]), or message string
    in case of error.
codesearchnet
def run(argv=None, save_main_session=True, pipeline=None) -> PipelineResult:
    known_args, pipeline_args = parse_known_args(argv)
    pipeline_options = PipelineOptions(pipeline_args)
    pipeline_options.view_as(SetupOptions).save_main_session = save_main_session
    saved_model_spec = model_spec_pb2.SavedModelSpec(model_path=known_args.model_path)
    inference_spec_type = model_spec_pb2.InferenceSpecType(saved_model_spec=saved_model_spec)
    model_handler = CreateModelHandler(inference_spec_type)
    keyed_model_handler = KeyedModelHandler(model_handler)
    if not pipeline:
        pipeline = beam.Pipeline(options=pipeline_options)
    filename_value_pair = (
        pipeline
        | 'ReadImageNames' >> beam.io.ReadFromText(known_args.input)
        | 'FilterEmptyLines' >> beam.ParDo(filter_empty_lines)
        | 'ProcessImageData' >> beam.Map(
            lambda image_name: read_and_process_image(
                image_file_name=image_name, path_to_dir=known_args.images_dir)))
    predictions = (
        filename_value_pair
        | 'ConvertToExampleProto' >> beam.Map(
            lambda x: (x[0], convert_image_to_example_proto(x[1])))
        | 'TFXRunInference' >> RunInference(keyed_model_handler)
        | 'PostProcess' >> beam.ParDo(ProcessInferenceToString()))
    _ = predictions | 'WriteOutputToGCS' >> beam.io.WriteToText(
        known_args.output, shard_name_template='', append_trailing_newlines=True)
    result = pipeline.run()
    result.wait_until_finish()
    return result
Args:
    argv: Command line arguments defined for this example.
    save_main_session: Used for internal testing.
    pipeline: Used for internal testing.

Returns:
    The PipelineResult of the completed run.
github-repos
def import_object_from_path(path, object):
    with open(path) as f:
        return import_object_from_string_code(f.read(), object)
Used to import an object from an absolute path.

This function takes an absolute path and imports it as a Python module.
It then returns the object with name `object` from the imported module.

Args:
    path (string): Absolute file path of .py file to import
    object (string): Name of object to extract from imported module
juraj-google-style
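A hypothetical call; the path and class name below are illustrative, and `import_object_from_string_code` (from the same module) is assumed to execute the file's source and return the attribute named by `object`:

MyModel = import_object_from_path('/abs/path/model.py', 'MyModel')  # hypothetical path/name
model = MyModel()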
def analyze(fqdn, result, argl, argd):
    package = fqdn.split('.')[0]
    if package not in _methods:
        _load_methods(package)
    if _methods[package] is not None and fqdn in _methods[package]:
        return _methods[package][fqdn](fqdn, result, *argl, **argd)
Analyzes the result from calling the method with the specified FQDN.

Args:
    fqdn (str): fully-qualified name of the method that was called.
    result: result of calling the method with `fqdn`.
    argl (tuple): positional arguments passed to the method call.
    argd (dict): keyword arguments passed to the method call.
juraj-google-style
def Options(items, name):
    options = {}
    option_re = re.compile(r'^%s_(.+)' % name)
    for item in items:
        match = option_re.match(item[0])
        if match:
            options[match.group(1)] = FixValue(item[1])
    return options
Returns a dict of options specific to an implementation.

This is used to retrieve a dict of options for a given implementation.
We look for configuration options in the form of name_option and ignore
the rest.

Args:
    items: [('key1', 'value1'), ('key2', 'value2'), ...]
    name: 'foo'

Returns:
    dictionary of option:value pairs
github-repos
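For instance, with a hypothetical 'ldap' implementation the prefix is stripped and other keys are ignored (`FixValue` is the module's own value-coercion helper):

items = [('ldap_server', 'ldap.example.com'), ('ldap_timeout', '5'), ('nss_base', 'dc=example')]
print(Options(items, 'ldap'))
# {'server': FixValue('ldap.example.com'), 'timeout': FixValue('5')}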
def __call__(self, *args, **kwargs):
    self.kwargs.update(kwargs)
    if self.data_flow_kernel is None:
        dfk = DataFlowKernelLoader.dfk()
    else:
        dfk = self.data_flow_kernel
    app_fut = dfk.submit(wrap_error(remote_side_bash_executor), self.func, *args,
                         executors=self.executors,
                         fn_hash=self.func_hash,
                         cache=self.cache,
                         **self.kwargs)
    out_futs = [DataFuture(app_fut, o, tid=app_fut.tid)
                for o in kwargs.get('outputs', [])]
    app_fut._outputs = out_futs
    return app_fut
Handle the call to a Bash app.

Args:
    - Arbitrary

Kwargs:
    - Arbitrary

Returns:
    If outputs=[...] was a kwarg then:
        App_fut, [Data_Futures...]
    else:
        App_fut
juraj-google-style
def _get_credentials(self):
    site = self.data[self.hdx_site]
    username = site.get('username')
    if username:
        return (b64decode(username).decode('utf-8'),
                b64decode(site['password']).decode('utf-8'))
    else:
        return None
Return HDX site username and password.

Returns:
    Optional[Tuple[str, str]]: HDX site username and password or None
codesearchnet
async def refresh_token(self):
    url, headers, body = self._setup_token_request()
    request_id = uuid.uuid4()
    logging.debug(_utils.REQ_LOG_FMT.format(
        request_id=request_id, method='POST', url=url, kwargs=None))
    async with self._session.post(url, headers=headers, data=body) as resp:
        log_kw = {
            'request_id': request_id,
            'method': 'POST',
            'url': resp.url,
            'status': resp.status,
            'reason': resp.reason,
        }
        logging.debug(_utils.RESP_LOG_FMT.format(**log_kw))
        try:
            resp.raise_for_status()
        except aiohttp.ClientResponseError as e:
            msg = f'[{request_id}] Issue connecting to {resp.url}: {e}'
            logging.error(msg, exc_info=e)
            raise exceptions.GCPHTTPResponseError(msg, resp.status)
        response = await resp.json()
    try:
        self.token = response['access_token']
    except KeyError:
        msg = f'[{request_id}] No access token in response.'
        logging.error(msg)
        raise exceptions.GCPAuthError(msg)
    self.expiry = _client._parse_expiry(response)
Refresh oauth access token attached to this HTTP session.

Raises:
    :exc:`.GCPAuthError`: if no token was found in the response.
    :exc:`.GCPHTTPError`: if any exception occurred, specifically a
        :exc:`.GCPHTTPResponseError`, if the exception is associated
        with a response status code.
codesearchnet
def get_metadata(self, path, include_entities=False, **kwargs):
    f = self.get_file(path)
    self.metadata_index.index_file(f.path)
    if include_entities:
        results = f.entities
    else:
        results = {}
    results.update(self.metadata_index.file_index[path])
    return results
Return metadata found in JSON sidecars for the specified file.

Args:
    path (str): Path to the file to get metadata for.
    include_entities (bool): If True, all available entities extracted from
        the filename (rather than JSON sidecars) are included in the
        returned metadata dictionary.
    kwargs (dict): Optional keyword arguments to pass onto get_nearest().

Returns:
    A dictionary of key/value pairs extracted from all of the target
    file's associated JSON sidecars.

Notes:
    A dictionary containing metadata extracted from all matching .json
    files is returned. In cases where the same key is found in multiple
    files, the values in files closer to the input filename will take
    precedence, per the inheritance rules in the BIDS specification.
codesearchnet
def load(fh, single=False):
    if isinstance(fh, stringtypes):
        s = open(fh, 'r').read()
    else:
        s = fh.read()
    return loads(s, single=single)
Deserialize :class:`Eds` from a file (handle or filename).

Args:
    fh (str, file): input filename or file object
    single (bool): if `True`, only return the first Xmrs object

Returns:
    a generator of :class:`Eds` objects (unless the *single* option is
    `True`)
juraj-google-style
def bestfit_func(self, bestfit_x):
    bestfit_x = np.array(bestfit_x)
    if not self.done_bestfit:
        raise KeyError("Do do_bestfit first")
    bestfit_y = 0
    for idx, val in enumerate(self.fit_args):
        bestfit_y += val * (bestfit_x ** (self.args.get("degree", 1) - idx))
    return bestfit_y
Returns bestfit_y value.

Args:
    bestfit_x: scalar, array_like x value

Returns:
    scalar, array_like bestfit y value
juraj-google-style
def matrix_product(mat1, mat2):
    return np.dot(mat2.T, mat1.T).T
Compute the product of two Fortran contiguous matrices.

This is to avoid the overhead of NumPy converting to C-contiguous before
computing a matrix product.

Does so via ``A B = (B^T A^T)^T`` since ``B^T`` and ``A^T`` will be
C-contiguous without a copy, then the product ``P = B^T A^T`` will be
C-contiguous and we can return the view ``P^T`` without a copy.

Args:
    mat1 (numpy.ndarray): The left-hand side matrix.
    mat2 (numpy.ndarray): The right-hand side matrix.

Returns:
    numpy.ndarray: The product of the two matrices.
codesearchnet
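A small sanity check of the layout claim above: the result stays Fortran-ordered and matches the plain product, without forcing a C-contiguous copy of the inputs:

import numpy as np

a = np.asfortranarray(np.random.rand(3, 4))
b = np.asfortranarray(np.random.rand(4, 2))
p = matrix_product(a, b)
assert p.flags.f_contiguous        # view of the C-contiguous B^T A^T product
assert np.allclose(p, a.dot(b))    # numerically identical to the naive product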
def migrate(belstr: str) -> str:
    bo.ast = bel.lang.partialparse.get_ast_obj(belstr, "2.0.0")
    return migrate_ast(bo.ast).to_string()
Migrate BEL 1 to 2.0.0.

Args:
    belstr: BEL 1 statement

Returns:
    str: BEL 2.0.0 statement
juraj-google-style
def get_nested_plot_frame(obj, key_map, cached=False):
    clone = obj.map(lambda x: x)
    for it1, it2 in zip(obj.traverse(lambda x: x), clone.traverse(lambda x: x)):
        if isinstance(it1, DynamicMap):
            with disable_constant(it2.callback):
                it2.callback.inputs = it1.callback.inputs
    with item_check(False):
        return clone.map(lambda x: get_plot_frame(x, key_map, cached=cached),
                         [DynamicMap, HoloMap], clone=False)
Extracts a single frame from a nested object.

Replaces any HoloMap or DynamicMap in the nested data structure,
with the item corresponding to the supplied key.

Args:
    obj: Nested Dimensioned object
    key_map: Dictionary mapping between dimensions and key value
    cached: Whether to allow looking up key in cache

Returns:
    Nested data structure where maps are replaced with single frames
codesearchnet
def area_difference(item_a, time_a, item_b, time_b, max_value):
    size_a = item_a.size(time_a)
    size_b = item_b.size(time_b)
    diff = np.sqrt((size_a - size_b) ** 2)
    return np.minimum(diff, max_value) / float(max_value)
RMS difference in object areas.

Args:
    item_a: STObject from the first set in ObjectMatcher
    time_a: Time integer being evaluated
    item_b: STObject from the second set in ObjectMatcher
    time_b: Time integer being evaluated
    max_value: Maximum distance value used as scaling value and upper
        constraint.

Returns:
    Distance value between 0 and 1.
codesearchnet
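A worked call with hypothetical stand-ins for STObject (only the size(time) method the function actually touches is provided):

import numpy as np

class _Obj:  # hypothetical stand-in for STObject
    def __init__(self, sizes):
        self._sizes = sizes
    def size(self, t):
        return self._sizes[t]

a, b = _Obj({0: 120.0}), _Obj({0: 90.0})
print(area_difference(a, 0, b, 0, max_value=100))   # |120 - 90| / 100 = 0.3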
def parse_objective_coefficient(entry):
    for parameter in entry.kinetic_law_reaction_parameters:
        pid, name, value, units = parameter
        if pid == 'OBJECTIVE_COEFFICIENT' or name == 'OBJECTIVE_COEFFICIENT':
            return value
    return None
Return objective value for reaction entry.

Detect objectives that are specified using the non-standardized kinetic
law parameters which are used by many pre-FBC SBML models. The objective
coefficient is returned for the given reaction, or None if undefined.

Args:
    entry: :class:`SBMLReactionEntry`.
codesearchnet
@classmethod
def __add_kickoff_task(cls, job_config, mapreduce_spec):
    params = {'mapreduce_id': job_config.job_id}
    kickoff_task = taskqueue.Task(
        url=job_config._base_path + '/kickoffjob_callback/' + job_config.job_id,
        headers=util._get_task_headers(job_config.job_id),
        params=params)
    if job_config._hooks_cls:
        hooks = job_config._hooks_cls(mapreduce_spec)
        try:
            hooks.enqueue_kickoff_task(kickoff_task, job_config.queue_name)
            return
        except NotImplementedError:
            pass
    kickoff_task.add(job_config.queue_name, transactional=True)
Add kickoff task to taskqueue.

Args:
    job_config: map_job.JobConfig.
    mapreduce_spec: model.MapreduceSpec.
codesearchnet
def _preload_simple_restoration(self, name):
    deferred_dependencies_list = self._deferred_dependencies.get(name, ())
    if not deferred_dependencies_list:
        return
    for checkpoint_position in deferred_dependencies_list:
        if not checkpoint_position.is_simple_variable():
            return None
    checkpoint_position = max(
        deferred_dependencies_list,
        key=lambda restore: restore.checkpoint.restore_uid)
    return CheckpointInitialValueCallable(checkpoint_position=checkpoint_position)
Return a dependency's value for restore-on-create.

Note the restoration is not deleted; if for some reason preload is called
and then not assigned to the variable (for example because a custom getter
overrides the initializer), the assignment will still happen once the
variable is tracked (determined based on checkpoint.restore_uid).

Args:
    name: The object-local name of the dependency holding the variable's
        value.

Returns:
    A callable for use as a variable's initializer/initial_value, or None
    if one should not be set (either because there was no variable with
    this name in the checkpoint or because it needs more complex
    deserialization). Any non-trivial deserialization will happen when the
    variable object is tracked.
github-repos
def predict_on_batch(self, x):
    self._check_call_args('predict_on_batch')
    _disallow_inside_tf_function('predict_on_batch')
    with self.distribute_strategy.scope():
        iterator = data_adapter.single_batch_iterator(self.distribute_strategy, x)
        self.predict_function = self.make_predict_function()
        outputs = self.predict_function(iterator)
    return tf_utils.sync_to_numpy_or_python_type(outputs)
Returns predictions for a single batch of samples.

Args:
    x: Input data. It could be:
        - A Numpy array (or array-like), or a list of arrays (in case the
          model has multiple inputs).
        - A TensorFlow tensor, or a list of tensors (in case the model has
          multiple inputs).

Returns:
    Numpy array(s) of predictions.

Raises:
    RuntimeError: If `model.predict_on_batch` is wrapped in `tf.function`.
    ValueError: In case of mismatch between given number of inputs and
        expectations of the model.
github-repos
def send(self, config, log, obs_id, beam_id):
    log.info('Starting Pulsar Data Transfer...')
    socket = self._ftp.transfercmd('STOR {0}_{1}'.format(obs_id, beam_id))
    socket.send(json.dumps(config).encode())
    socket.send(bytearray(1000 * 1000))
    config['metadata']['name'] = 'candidate_two'
    socket.send(json.dumps(config).encode())
    socket.send(bytearray(1000 * 1000))
    socket.close()
    log.info('Pulsar Data Transfer Completed...')
Send the pulsar data to the ftp server.

Args:
    config (dict): Dictionary of settings
    log (logging.Logger): Python logging object
    obs_id: observation id
    beam_id: beam id
juraj-google-style
def load_orthologs(fo: IO, metadata: dict):
    version = metadata["metadata"]["version"]

    with timy.Timer("Load Orthologs") as timer:
        arango_client = arangodb.get_client()
        belns_db = arangodb.get_belns_handle(arango_client)
        arangodb.batch_load_docs(
            belns_db, orthologs_iterator(fo, version), on_duplicate="update"
        )

        log.info(
            "Load orthologs",
            elapsed=timer.elapsed,
            source=metadata["metadata"]["source"],
        )

        # The AQL query bodies were elided in this extract; left as empty
        # f-strings so the surrounding logic stays syntactically valid.
        remove_old_ortholog_edges = f""
        remove_old_ortholog_nodes = f""
        arangodb.aql_query(belns_db, remove_old_ortholog_edges)
        arangodb.aql_query(belns_db, remove_old_ortholog_nodes)

    metadata["_key"] = f"Orthologs_{metadata['metadata']['source']}"
    try:
        belns_db.collection(arangodb.belns_metadata_name).insert(metadata)
    except ArangoError:
        belns_db.collection(arangodb.belns_metadata_name).replace(metadata)
Load orthologs into ArangoDB.

Args:
    fo: file obj - orthologs file
    metadata: dict containing the metadata for orthologs
juraj-google-style
def set_hash_value(self, key, field, value, pipeline=False):
    if pipeline:
        self._pipeline.hset(key, field, str(value))
    else:
        self._db.hset(key, field, str(value))
Set the value of field in a hash stored at key.

Args:
    key (str): key (name) of the hash
    field (str): Field within the hash to set
    value: Value to set
    pipeline (bool): True, start a transaction block. Default false.
juraj-google-style
def __init__(self, processor_configuration): transformer_config = processor_configuration["transformer"] FLAGS.output_dir = transformer_config["model_dir"] usr_dir.import_usr_dir(FLAGS.t2t_usr_dir) data_dir = os.path.expanduser(transformer_config["data_dir"]) self.hparams = trainer_lib.create_hparams( transformer_config["hparams_set"], transformer_config["hparams"], data_dir=data_dir, problem_name=transformer_config["problem"]) decode_hp = decoding.decode_hparams() decode_hp.add_hparam("shards", 1) decode_hp.add_hparam("shard_id", 0) self.estimator = trainer_lib.create_estimator( transformer_config["model"], self.hparams, t2t_trainer.create_run_config(self.hparams), decode_hparams=decode_hp, use_tpu=False) self.source_vocab = self.hparams.problem_hparams.vocabulary["inputs"] self.targets_vocab = self.hparams.problem_hparams.vocabulary["targets"] self.const_array_size = 10000 run_dirs = sorted(glob.glob(os.path.join("/tmp/t2t_server_dump", "run_*"))) for run_dir in run_dirs: shutil.rmtree(run_dir)
Creates the Transformer estimator.

Args:
    processor_configuration: A ProcessorConfiguration protobuffer with
        the transformer fields populated.
juraj-google-style
def get_dispatcher_event(self, name):
    e = self.__property_events.get(name)
    if e is None:
        e = self.__events[name]
    return e
Retrieves an Event object by name.

Args:
    name (str): The name of the :class:`Event` or
        :class:`~pydispatch.properties.Property` object to retrieve

Returns:
    The :class:`Event` instance for the event or property definition

.. versionadded:: 0.1.0
juraj-google-style
def _get_longest(value_lst: List) -> List: value_lst.sort() result = [] pivot = value_lst[0] start, end = pivot[0], pivot[1] pivot_e = end pivot_s = start for idx, (s, e, v, rule_id, _) in enumerate(value_lst): if s == pivot_s and pivot_e < e: pivot_e = e pivot = value_lst[idx] elif s != pivot_s and pivot_e < e: result.append(pivot) pivot = value_lst[idx] pivot_e = e pivot_s = s result.append(pivot) return result
Get the longest match among overlapping spans.

Args:
    value_lst: List of (start, end, value, rule_id, _) tuples.

Returns:
    List of the longest non-overlapping matches.
juraj-google-style
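For example, with hypothetical (start, end, value, rule_id, _) spans, the shorter overlapping match is dropped in favour of the longest one at the same start:

spans = [(0, 5, 'New', 1, None), (0, 8, 'New York', 2, None), (10, 14, 'City', 3, None)]
print(_get_longest(spans))
# [(0, 8, 'New York', 2, None), (10, 14, 'City', 3, None)]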
def _unittest_template(config):
    output = "def test_parsers():\n"

    links = dict(map(lambda x: (x["link"], x["vars"]), config))

    for link in links.keys():
        # A statement emitting a per-link preamble was truncated in this
        # extract; the fetch/parse scaffolding below is intact.
        output += IND + "html = handle_encodnig(\n"
        output += IND + IND + "_get_source(%s)\n" % repr(link)
        output += IND + ")\n"
        output += IND + "dom = dhtmlparser.parseString(html)\n"
        output += IND + "dhtmlparser.makeDoubleLinked(dom)\n\n"

        for var in links[link]:
            content = links[link][var]["data"].strip()

            output += IND + "%s = %s(dom)\n" % (var, _get_parser_name(var))

            if "\n" in content:
                output += IND
                output += "assert %s.getContent().strip().split() == %s" % (
                    var,
                    repr(content.split())
                )
            else:
                output += IND + "assert %s.getContent().strip() == %s" % (
                    var,
                    repr(content)
                )

            output += "\n\n"

    return output + "\n"
Generate unittests for all of the generated code.

Args:
    config (dict): Original configuration dictionary. See
        :mod:`~harvester.autoparser.conf_reader` for details.

Returns:
    str: Python code.
juraj-google-style
def get_meta_graph_def(saved_model_dir, tag_set):
    return saved_model_utils.get_meta_graph_def(saved_model_dir, tag_set)
DEPRECATED: Use saved_model_utils.get_meta_graph_def instead.

Gets MetaGraphDef from SavedModel.

Returns the MetaGraphDef for the given tag-set and SavedModel directory.

Args:
    saved_model_dir: Directory containing the SavedModel to inspect or
        execute.
    tag_set: Group of tag(s) of the MetaGraphDef to load, in string format,
        separated by ','. If the tag-set contains multiple tags, all tags
        must be passed in.

Raises:
    RuntimeError: An error when the given tag-set does not exist in the
        SavedModel.

Returns:
    A MetaGraphDef corresponding to the tag-set.
github-repos
def get_organizations(self, permission='read'):
    success, result = self._read_from_hdx(
        'user', self.data['name'], 'id',
        self.actions()['listorgs'], permission=permission)
    organizations = list()
    if success:
        for organizationdict in result:
            organization = hdx.data.organization.Organization.read_from_hdx(organizationdict['id'])
            organizations.append(organization)
    return organizations
Get organizations in HDX that this user is a member of.

Args:
    permission (str): Permission to check for. Defaults to 'read'.

Returns:
    List[Organization]: List of organizations in HDX that this user is a
    member of
juraj-google-style
def sil(msg, version):
    tc = typecode(msg)
    if tc not in [29, 31]:
        raise RuntimeError(
            '%s: Not a target state and status message, or operation status '
            'message, expecting TC = 29 or 31' % msg)
    msgbin = common.hex2bin(msg)
    if tc == 29:
        SIL = common.bin2int(msgbin[76:78])
    elif tc == 31:
        SIL = common.bin2int(msgbin[82:84])
    try:
        PE_RCu = uncertainty.SIL[SIL]['PE_RCu']
        PE_VPL = uncertainty.SIL[SIL]['PE_VPL']
    except KeyError:
        PE_RCu, PE_VPL = uncertainty.NA, uncertainty.NA
    base = 'unknown'
    if version == 2:
        if tc == 29:
            SIL_SUP = common.bin2int(msgbin[39])
        elif tc == 31:
            SIL_SUP = common.bin2int(msgbin[86])
        if SIL_SUP == 0:
            base = 'hour'
        elif SIL_SUP == 1:
            base = 'sample'
    return PE_RCu, PE_VPL, base
Calculate SIL, Surveillance Integrity Level.

Args:
    msg (string): 28 bytes hexadecimal message string with TC = 29, 31
    version (int): ADS-B version (the SIL supplement is only defined for
        version 2)

Returns:
    int or string: Probability of exceeding Horizontal Radius of
        Containment RCu
    int or string: Probability of exceeding Vertical Integrity Containment
        Region VPL
    string: SIL supplement based on per "hour" or "sample", or 'unknown'
codesearchnet
def callEventGetAllRpc(self, callback_id, event_name):
    # Body intentionally left to subclasses; raising keeps the stub valid.
    raise NotImplementedError()
Calls snippet lib's RPC to get all existing snippet events.

Override this method to use this class with various snippet lib
implementations.

This function gets all existing events in the server with the specified
identifier without waiting.

Args:
    callback_id: str, the callback identifier.
    event_name: str, the callback name.

Returns:
    A list of event dictionaries.
github-repos
def compute_fov(self, x, y, fov='PERMISSIVE', radius=None,
                light_walls=True, sphere=True, cumulative=False):
    if radius is None:
        radius = 0
    if cumulative:
        fov_copy = self.fov.copy()
    lib.TCOD_map_compute_fov(self.map_c, x, y, radius, light_walls, _get_fov_type(fov))
    if cumulative:
        self.fov[:] |= fov_copy
    return zip(*np.where(self.fov))
Compute the field-of-view of this Map and return an iterator of the
points touched.

Args:
    x (int): Point of view, x-coordinate.
    y (int): Point of view, y-coordinate.
    fov (Text): The type of field-of-view to be used.

        Available types are:
        'BASIC', 'DIAMOND', 'SHADOW', 'RESTRICTIVE', 'PERMISSIVE',
        'PERMISSIVE0', 'PERMISSIVE1', ..., 'PERMISSIVE8'
    radius (Optional[int]): Maximum view distance from the point of view.
        A value of 0 will give an infinite distance.
    light_walls (bool): Light up walls, or only the floor.
    sphere (bool): If True the lit area will be round instead of square.
    cumulative (bool): If True the lit cells will accumulate instead of
        being cleared before the computation.

Returns:
    Iterator[Tuple[int, int]]: An iterator of (x, y) points of tiles
    touched by the field-of-view.
codesearchnet
def getall(self):
    vrrps = dict()
    interfaces = re.findall(r'^interface\s(\S+)', self.config, re.M)
    for interface in interfaces:
        vrrp = self.get(interface)
        if vrrp:
            vrrps.update({interface: vrrp})
    return vrrps
Get the vrrp configurations for all interfaces on a node.

Returns:
    A dictionary containing the vrrp configurations on the node,
    keyed by interface.
codesearchnet
def get_headline(self, name):
    return self._loop.run_coroutine(self._client.get_headline(name))
Get stored messages for a service.

Args:
    name (string): The name of the service to get messages from.

Returns:
    ServiceMessage: the headline or None if no headline has been set
juraj-google-style
def parse_device_list(device_list_str, key):
    clean_lines = new_str(device_list_str, 'utf-8').strip().split('\n')
    results = []
    for line in clean_lines:
        tokens = line.strip().split('\t')
        if len(tokens) == 2 and tokens[1] == key:
            results.append(tokens[0])
    return results
Parses a byte string representing a list of devices.

The string is generated by calling either adb or fastboot. The tokens in
each line are tab-separated.

Args:
    device_list_str: Output of adb or fastboot.
    key: The token that signifies a device in device_list_str.

Returns:
    A list of android device serial numbers.
codesearchnet
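For example, with raw `adb devices` output (assuming `new_str` decodes bytes with the given encoding, as in Mobly's py2/3 compat layer; the serials are illustrative):

adb_output = b'List of devices attached\nHT7A1A000000\tdevice\nemulator-5554\toffline\n'
print(parse_device_list(adb_output, 'device'))   # ['HT7A1A000000']
# the header line has no tab and the offline entry fails the key check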
def ParseOptions(cls, options, configuration_object): if not isinstance(configuration_object, tools.CLITool): raise errors.BadConfigObject( 'Configuration object is not an instance of CLITool') filter_expression = cls._ParseStringOption(options, 'filter') filter_object = None if filter_expression: filter_object = event_filter.EventObjectFilter() try: filter_object.CompileFilter(filter_expression) except errors.ParseError as exception: raise errors.BadConfigOption(( 'Unable to compile filter expression with error: ' '{0!s}').format(exception)) time_slice_event_time_string = getattr(options, 'slice', None) time_slice_duration = getattr(options, 'slice_size', 5) use_time_slicer = getattr(options, 'slicer', False) if time_slice_event_time_string and use_time_slicer: raise errors.BadConfigOption( 'Time slice and slicer cannot be used at the same time.') time_slice_event_timestamp = None if time_slice_event_time_string: preferred_time_zone = getattr( configuration_object, '_preferred_time_zone', None) or 'UTC' timezone = pytz.timezone(preferred_time_zone) time_slice_event_timestamp = timelib.Timestamp.FromTimeString( time_slice_event_time_string, timezone=timezone) if time_slice_event_timestamp is None: raise errors.BadConfigOption( 'Unsupported time slice event date and time: {0:s}'.format( time_slice_event_time_string)) setattr(configuration_object, '_event_filter_expression', filter_expression) if filter_object: setattr(configuration_object, '_event_filter', filter_object) setattr(configuration_object, '_use_time_slicer', use_time_slicer) if time_slice_event_timestamp is not None or use_time_slicer: time_slice = time_slices.TimeSlice( time_slice_event_timestamp, duration=time_slice_duration) setattr(configuration_object, '_time_slice', time_slice)
Parses and validates options.

Args:
    options (argparse.Namespace): parser options.
    configuration_object (CLITool): object to be configured by the
        argument helper.

Raises:
    BadConfigObject: when the configuration object is of the wrong type.
    BadConfigOption: when a configuration parameter fails validation.
juraj-google-style
def from_networkx(graph, layout_function, **kwargs):
    from ..models.renderers import GraphRenderer
    from ..models.graphs import StaticLayoutProvider

    node_dict = dict()
    node_attr_keys = [attr_key for node in list(graph.nodes(data=True))
                      for attr_key in node[1].keys()]
    node_attr_keys = list(set(node_attr_keys))

    for attr_key in node_attr_keys:
        values = [node_attr[attr_key] if attr_key in node_attr.keys() else None
                  for _, node_attr in graph.nodes(data=True)]
        values = _handle_sublists(values)
        node_dict[attr_key] = values

    if 'index' in node_attr_keys:
        from warnings import warn
        warn("Converting node attributes labeled 'index' are skipped. "
             "If you want to convert these attributes, please re-label with other names.")

    node_dict['index'] = list(graph.nodes())

    edge_dict = dict()
    edge_attr_keys = [attr_key for edge in graph.edges(data=True)
                      for attr_key in edge[2].keys()]
    edge_attr_keys = list(set(edge_attr_keys))

    for attr_key in edge_attr_keys:
        values = [edge_attr[attr_key] if attr_key in edge_attr.keys() else None
                  for _, _, edge_attr in graph.edges(data=True)]
        values = _handle_sublists(values)
        edge_dict[attr_key] = values

    if 'start' in edge_attr_keys or 'end' in edge_attr_keys:
        from warnings import warn
        warn("Converting edge attributes labeled 'start' or 'end' are skipped. "
             "If you want to convert these attributes, please re-label them with other names.")

    edge_dict['start'] = [x[0] for x in graph.edges()]
    edge_dict['end'] = [x[1] for x in graph.edges()]

    node_source = ColumnDataSource(data=node_dict)
    edge_source = ColumnDataSource(data=edge_dict)

    graph_renderer = GraphRenderer()
    graph_renderer.node_renderer.data_source.data = node_source.data
    graph_renderer.edge_renderer.data_source.data = edge_source.data

    if callable(layout_function):
        graph_layout = layout_function(graph, **kwargs)
    else:
        graph_layout = layout_function

        # The key check only applies when an explicit mapping was given.
        node_keys = graph_renderer.node_renderer.data_source.data['index']
        if set(node_keys) != set(layout_function.keys()):
            from warnings import warn
            warn("Node keys in 'layout_function' don't match node keys in the graph. "
                 "These nodes may not be displayed correctly.")

    graph_renderer.layout_provider = StaticLayoutProvider(graph_layout=graph_layout)

    return graph_renderer
Generate a ``GraphRenderer`` from a ``networkx.Graph`` object and networkx
layout function. Any keyword arguments will be passed to the layout
function.

Only two dimensional layouts are supported.

Args:
    graph (networkx.Graph) : a networkx graph to render
    layout_function (function or dict) : a networkx layout function or
        mapping of node keys to positions. The position is a two element
        sequence containing the x and y coordinate.

Returns:
    instance (GraphRenderer)

.. note::
    Node and edge attributes may be lists or tuples. However, a given
    attribute must either have *all* lists or tuple values, or *all*
    scalar values, for nodes or edges it is defined on.

.. warning::
    Node attributes labeled 'index' and edge attributes labeled 'start' or
    'end' are ignored. If you want to convert these attributes, please
    re-label them to other names.

Raises:
    ValueError
codesearchnet
def showRemoveColumnDialog(self, triggered):
    if triggered:
        model = self.tableView.model()
        if model is not None:
            columns = model.dataFrameColumns()
            dialog = RemoveAttributesDialog(columns, self)
            dialog.accepted.connect(self.removeColumns)
            dialog.rejected.connect(self.uncheckButton)
            dialog.show()
Display the dialog to remove column(s) from the model.

This method is also a slot.

Args:
    triggered (bool): If the corresponding button was activated, the
        dialog will be created and shown.
juraj-google-style
def _prepare_variables(self):
    self._moving_averager = tf.train.ExponentialMovingAverage(
        decay=self._beta, zero_debias=self._zero_debias)
    prepare_variables_op = []
    self._grad_squared = []
    self._grad_norm_squared = []
    for v, g in zip(self._vars, self._grad):
        if g is None:
            continue
        with tf.colocate_with(v):
            self._grad_squared.append(tf.square(g))
    self._grad_norm_squared = [tf.reduce_sum(g_sq) for g_sq in self._grad_squared]
    if self._sparsity_debias:
        avg_op_sparsity = self._grad_sparsity()
        prepare_variables_op.append(avg_op_sparsity)
    avg_op = self._moving_averager.apply(self._grad_norm_squared)
    with tf.control_dependencies([avg_op]):
        self._grad_norm_squared_avg = [self._moving_averager.average(val)
                                       for val in self._grad_norm_squared]
        self._grad_norm_squared = tf.add_n(self._grad_norm_squared)
        self._grad_norm_squared_avg = tf.add_n(self._grad_norm_squared_avg)
    prepare_variables_op.append(avg_op)
    return tf.group(*prepare_variables_op)
Prepare Variables for YellowFin.

Returns:
    Grad**2, Norm, Norm**2, Mean(Norm**2) ops
codesearchnet
def groups_replies(self, *, channel: str, thread_ts: str, **kwargs) -> SlackResponse:
    self._validate_xoxp_token()
    kwargs.update({"channel": channel, "thread_ts": thread_ts})
    return self.api_call("groups.replies", http_verb="GET", params=kwargs)
Retrieve a thread of messages posted to a private channel.

Args:
    channel (str): The channel id. e.g. 'C1234567890'
    thread_ts (str): The timestamp of an existing message with 0 or more
        replies. e.g. '1234567890.123456'

Returns:
    SlackResponse: The server's response to the API call.
juraj-google-style
def compute_fans(shape):
    shape = tuple(shape)
    if len(shape) < 1:
        fan_in = fan_out = 1
    elif len(shape) == 1:
        fan_in = fan_out = shape[0]
    elif len(shape) == 2:
        fan_in = shape[0]
        fan_out = shape[1]
    else:
        receptive_field_size = 1
        for dim in shape[:-2]:
            receptive_field_size *= dim
        fan_in = shape[-2] * receptive_field_size
        fan_out = shape[-1] * receptive_field_size
    return (int(fan_in), int(fan_out))
Computes the number of input and output units for a weight shape.

Args:
    shape: Integer shape tuple.

Returns:
    A tuple of integer scalars: `(fan_in, fan_out)`.
github-repos
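A few concrete shapes, following the rules above; for convolution kernels the receptive-field size multiplies both fans:

print(compute_fans((64,)))            # (64, 64): vector shape
print(compute_fans((128, 64)))        # (128, 64): dense kernel, in x out
print(compute_fans((3, 3, 16, 32)))   # (144, 288): 3x3 conv, 16 -> 32 channels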
def _best_effort_input_batch_size(flat_input):
    for input_ in flat_input:
        shape = input_.shape
        if shape.rank is None:
            continue
        if shape.rank < 2:
            raise ValueError(
                f'Input tensor should have rank >= 2. Received input={input_} '
                f'of rank {shape.rank}')
        batch_size = shape.dims[1].value
        if batch_size is not None:
            return batch_size
    return array_ops.shape(flat_input[0])[1]
Get static input batch size if available, with fallback to the dynamic one.

Args:
    flat_input: An iterable of time major input Tensors of shape
        `[max_time, batch_size, ...]`. All inputs should have compatible
        batch sizes.

Returns:
    The batch size in Python integer if available, or a scalar Tensor
    otherwise.

Raises:
    ValueError: if there is any input with an invalid shape.
github-repos
def _parse_redistribution(self, config):
    redistributions = list()
    regexp = r'redistribute .*'
    matches = re.findall(regexp, config)
    for line in matches:
        ospf_redist = line.split()
        if len(ospf_redist) == 2:
            protocol = ospf_redist[1]
            redistributions.append(dict(protocol=protocol))
        if len(ospf_redist) == 4:
            protocol = ospf_redist[1]
            route_map_name = ospf_redist[3]
            redistributions.append(dict(protocol=protocol, route_map=route_map_name))
    return dict(redistributions=redistributions)
Parses config for OSPF redistribution settings.

Args:
    config (str): Running configuration

Returns:
    list: dict:
        keys: protocol (str)
              route-map (optional) (str)
codesearchnet
def _update_example(self, request): if request.method != 'POST': return http_util.Respond(request, {'error': 'invalid non-POST request'}, 'application/json', code=405) example_json = request.form['example'] index = int(request.form['index']) if index >= len(self.examples): return http_util.Respond(request, {'error': 'invalid index provided'}, 'application/json', code=400) new_example = self.example_class() json_format.Parse(example_json, new_example) self.examples[index] = new_example self.updated_example_indices.add(index) self.generate_sprite([ex.SerializeToString() for ex in self.examples]) return http_util.Respond(request, {}, 'application/json')
Updates the specified example.

Args:
    request: A request that should contain 'index' and 'example'.

Returns:
    An empty response.
juraj-google-style
def reset(self):
    self._reset_ptr[0] = True
    self._commands.clear()
    for _ in range(self._pre_start_steps + 1):
        self.tick()
    return self._default_state_fn()
Resets the environment, and returns the state.

If it is a single agent environment, it returns that state for that
agent. Otherwise, it returns a dict from agent name to state.

Returns:
    tuple or dict: For single agent environment, returns the same as
    `step`. For multi-agent environment, returns the same as `tick`.
codesearchnet
def isset(name):
    def wrapped(func):
        @functools.wraps(func)
        def _decorator(*args, **kwargs):
            if core.isset(name):
                return func(*args, **kwargs)
        return _decorator
    return wrapped
Only execute the function if the variable is set.

Args:
    name: The name of the environment variable

Returns:
    The function return value or `None` if the function was skipped.
juraj-google-style
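A usage sketch for the decorator above, assuming `core.isset` checks the process environment (os.environ):

import os

os.environ['DEBUG_MODE'] = '1'   # hypothetical variable name

@isset('DEBUG_MODE')
def dump_state():
    print('debug info')

dump_state()   # runs; without DEBUG_MODE set it would silently return None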
def decode(self, decoder_input_ids, encoder_outputs,
           encoder_attention_mask: Optional[jnp.ndarray] = None,
           decoder_attention_mask: Optional[jnp.ndarray] = None,
           decoder_position_ids: Optional[jnp.ndarray] = None,
           past_key_values: Optional[dict] = None,
           output_attentions: Optional[bool] = None,
           output_hidden_states: Optional[bool] = None,
           return_dict: Optional[bool] = None,
           train: bool = False,
           params: Optional[dict] = None,
           dropout_rng: PRNGKey = None):
    output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
    output_hidden_states = output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
    return_dict = return_dict if return_dict is not None else self.config.return_dict
    encoder_hidden_states = encoder_outputs[0]
    if encoder_attention_mask is None:
        batch_size, sequence_length = encoder_hidden_states.shape[:2]
        encoder_attention_mask = jnp.ones((batch_size, sequence_length))
    batch_size, sequence_length = decoder_input_ids.shape
    if decoder_attention_mask is None:
        decoder_attention_mask = jnp.ones((batch_size, sequence_length))
    if decoder_position_ids is None:
        if past_key_values is not None:
            raise ValueError('Make sure to provide `decoder_position_ids` when passing `past_key_values`.')
        decoder_position_ids = jnp.broadcast_to(
            jnp.arange(sequence_length)[None, :], (batch_size, sequence_length))
    rngs = {}
    if dropout_rng is not None:
        rngs['dropout'] = dropout_rng
    inputs = {'params': params or self.params}
    if past_key_values:
        inputs['cache'] = past_key_values
        mutable = ['cache']
    else:
        mutable = False

    def _decoder_forward(module, decoder_input_ids, decoder_attention_mask, decoder_position_ids, **kwargs):
        decoder_module = module._get_decoder_module()
        outputs = decoder_module(decoder_input_ids, decoder_attention_mask, decoder_position_ids, **kwargs)
        hidden_states = outputs[0]
        if self.config.tie_word_embeddings:
            shared_embedding = module.model.variables['params']['shared']['embedding']
            lm_logits = module.lm_head.apply({'params': {'kernel': shared_embedding.T}}, hidden_states)
        else:
            lm_logits = module.lm_head(hidden_states)
        lm_logits += module.final_logits_bias.astype(self.dtype)
        return lm_logits, outputs

    outputs = self.module.apply(
        inputs,
        decoder_input_ids=jnp.array(decoder_input_ids, dtype='i4'),
        decoder_attention_mask=jnp.array(decoder_attention_mask, dtype='i4'),
        decoder_position_ids=jnp.array(decoder_position_ids, dtype='i4'),
        encoder_hidden_states=encoder_hidden_states,
        encoder_attention_mask=jnp.array(encoder_attention_mask, dtype='i4'),
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
        deterministic=not train,
        rngs=rngs,
        mutable=mutable,
        method=_decoder_forward)
    if past_key_values is None:
        lm_logits, decoder_outputs = outputs
    else:
        (lm_logits, decoder_outputs), past = outputs
    if return_dict:
        outputs = FlaxCausalLMOutputWithCrossAttentions(
            logits=lm_logits,
            hidden_states=decoder_outputs.hidden_states,
            attentions=decoder_outputs.attentions,
            cross_attentions=decoder_outputs.cross_attentions)
    else:
        outputs = (lm_logits,) + decoder_outputs[1:]
    if past_key_values is not None and return_dict:
        outputs['past_key_values'] = unfreeze(past['cache'])
        return outputs
    elif past_key_values is not None and not return_dict:
        outputs = outputs[:1] + (unfreeze(past['cache']),) + outputs[1:]
    return outputs
Returns:

Example:

```python
>>> import jax.numpy as jnp
>>> from transformers import AutoTokenizer, FlaxBartForConditionalGeneration

>>> model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")

>>> text = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer(text, max_length=1024, return_tensors="jax")
>>> encoder_outputs = model.encode(**inputs)

>>> decoder_start_token_id = model.config.decoder_start_token_id
>>> decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id

>>> outputs = model.decode(decoder_input_ids, encoder_outputs)
>>> logits = outputs.logits
```
github-repos
def allDecisions(self, result, **values):
    data = self.__getDecision(result, multiple=True, **values)
    data = [data[value] for value in result]
    if len(data) == 1:
        return data[0]
    else:
        return data
Just like self.decision, but for multiple found values.

Returns:
    Array of arrays of found elements, or, if only one match is found,
    an array of strings.
codesearchnet
def check(schema, data, trace=False):
    if trace == True:
        trace = 1
    else:
        trace = None
    return _check(schema, data, trace=trace)
Verify some json.

Args:
    schema - the description of a general-case 'valid' json object.
    data - the json data to verify.

Returns:
    bool: True if data matches the schema, False otherwise.

Raises:
    TypeError: If the schema is of an unknown data type.
    ValueError: If the schema contains a string with an invalid value.
        If the schema attempts to reference a non-existent named schema.
juraj-google-style
def rewrite_grad_indexed_slices(grads, body_grad_graph, loop_vars, forward_inputs):
    inputs_with_grads = [t for g, t in zip(grads, forward_inputs) if g is not None]
    structured_outputs = body_grad_graph.structured_outputs[3:]
    for forward_input, output in zip(inputs_with_grads, structured_outputs):
        if not isinstance(output, indexed_slices.IndexedSlices):
            continue
        if forward_input.dtype == dtypes.resource:
            loop_vars = _rewrite_input_as_indexed_slices(
                body_grad_graph, output, forward_input, loop_vars)
        else:
            _rewrite_output_as_tensor(body_grad_graph, output)
    return loop_vars
Handles special case of IndexedSlices returned from while gradient.

Some gradient functions return IndexedSlices instead of a Tensor (e.g. the
gradient of Gather ops). When this happens in the gradient of a while body,
the resulting gradient body function will have mismatched inputs and
outputs, since the input is a single Tensor, but the IndexedSlices gets
unnested into three output Tensors.

This function fixes this by rewriting the gradient body to have three
inputs to match the three outputs, i.e., it effectively converts the input
Tensor into an input IndexedSlices. It also returns new `loop_vars` to
reflect the new inputs.

Args:
    grads: the input gradient Tensors to the while gradient computation.
    body_grad_graph: _WhileBodyGradFuncGraph.
    loop_vars: list of Tensors. The inputs to body_grad_graph.
    forward_inputs: list of Tensors. The (flat) inputs to the forward-pass
        While op.

Returns:
    The new loop_vars to pass to body_grad_graph.
github-repos
def rollaxis(vari, axis, start=0):
    if isinstance(vari, Poly):
        core_old = vari.A.copy()
        core_new = {}
        for key in vari.keys:
            core_new[key] = rollaxis(core_old[key], axis, start)
        return Poly(core_new, vari.dim, None, vari.dtype)
    return numpy.rollaxis(vari, axis, start)
Roll the specified axis backwards, until it lies in a given position.

Args:
    vari (chaospy.poly.base.Poly, numpy.ndarray): Input array or
        polynomial.
    axis (int): The axis to roll backwards. The positions of the other
        axes do not change relative to one another.
    start (int): The axis is rolled until it lies before this position.

Returns:
    Polynomial or array with the axis rolled to the requested position.
codesearchnet
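On a plain ndarray the call simply defers to numpy.rollaxis, e.g.:

import numpy

x = numpy.ones((3, 4, 5))
print(rollaxis(x, 2).shape)           # (5, 3, 4): axis 2 rolled to the front
print(rollaxis(x, 2, start=1).shape)  # (3, 5, 4): rolled until before position 1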
async def register(*address_list, cluster=None, loop=None):
    loop = loop or asyncio.get_event_loop()
    for address in address_list:
        host, port = address.rsplit(':', 1)
        node = Node(address=(host, int(port)), loop=loop)
        await node.start()
        for address in cluster:
            host, port = address.rsplit(':', 1)
            port = int(port)
            if (host, port) != (node.host, node.port):
                node.update_cluster((host, port))
Start Raft node (server).

Args:
    address_list: 127.0.0.1:8000 [, 127.0.0.1:8001 ...]
    cluster: [127.0.0.1:8001, 127.0.0.1:8002, ...]
juraj-google-style
def parse_tddft(self): start_tag = 'Convergence criterion met' end_tag = 'Excited state energy' singlet_tag = 'singlet excited' triplet_tag = 'triplet excited' state = 'singlet' inside = False lines = self.raw.split('\n') roots = {'singlet': [], 'triplet': []} while lines: line = lines.pop(0).strip() if (start_tag in line): inside = True elif (end_tag in line): inside = False elif (singlet_tag in line): state = 'singlet' elif (triplet_tag in line): state = 'triplet' elif (inside and ('Root' in line) and ('eV' in line)): toks = line.split() roots[state].append({'energy': float(toks[(- 2)])}) elif (inside and ('Dipole Oscillator Strength' in line)): osc = float(line.split()[(- 1)]) roots[state][(- 1)]['osc_strength'] = osc return roots
Parses TDDFT roots. Adapted from nw_spectrum.py script.

Returns:
    {
        "singlet": [
            {
                "energy": float,
                "osc_strength": float
            }
        ],
        "triplet": [
            {
                "energy": float
            }
        ]
    }
codesearchnet
def install(self, package: str, option: str = '-r') -> None: if not os.path.isfile(package): raise FileNotFoundError(f'{package!r} does not exist.') for i in option: if i not in '-lrtsdg': raise ValueError(f'There is no option named: {option!r}.') self._execute('-s', self.device_sn, 'install', option, package)
Push package to the device and install it.

Args:
    package: path to the package (APK) file to install.
    option:
        -l: forward lock application
        -r: replace existing application
        -t: allow test packages
        -s: install application on sdcard
        -d: allow version code downgrade (debuggable packages only)
        -g: grant all runtime permissions
juraj-google-style
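A hypothetical call, assuming `device` is an instance of the class exposing this method; the validation loop accepts any combination of the flag characters:

# '-rg' combines replace-existing with grant-all-runtime-permissions.
device.install('/tmp/app-debug.apk', option='-rg')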
def __init__(self, details):
    if not isinstance(details, dict):
        raise ValueError('details')
    if '__array__' not in details:
        raise KeyError('__array__')
    if not isinstance(details['__array__'], dict):
        details['__array__'] = {
            "type": details['__array__']
        }
    if 'type' not in details['__array__']:
        self._type = 'unique'
    elif details['__array__']['type'] not in self._VALID_ARRAY:
        self._type = 'unique'
        sys.stderr.write('"' + str(details['__array__']['type']) + '" is not a valid type for __array__, assuming "unique"')
    else:
        self._type = details['__array__']['type']
    self._minimum = None
    self._maximum = None
    if 'minimum' in details['__array__'] \
        or 'maximum' in details['__array__']:
        self.minmax(
            ('minimum' in details['__array__'] and details['__array__']['minimum'] or None),
            ('maximum' in details['__array__'] and details['__array__']['maximum'] or None)
        )
    if '__optional__' in details:
        bOptional = details['__optional__']
        del details['__optional__']
    elif 'optional' in details['__array__']:
        bOptional = details['__array__']['optional']
    else:
        bOptional = None
    del details['__array__']
    self._node = _child(details)
    if bOptional:
        details['__optional__'] = bOptional
    super(ArrayNode, self).__init__(details, 'ArrayNode')
Constructor Initialises the instance Arguments: details {dict} -- Details describing the type of values allowed for the node Raises: KeyError ValueError Returns: ArrayNode
juraj-google-style
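A sketch of the expected `details` shape; the child definition key and value are hypothetical placeholders:

details = {
    '__array__': {'type': 'unique', 'minimum': 1, 'maximum': 10},
    '__type__': 'uint',  # hypothetical child definition consumed by _child()
}
node = ArrayNode(details)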
def _dms_formatter(latitude, longitude, mode, unistr=False): if unistr: chars = ('°', '′', '″') else: chars = ('°', "'", '"') latitude_dms = tuple(map(abs, utils.to_dms(latitude, mode))) longitude_dms = tuple(map(abs, utils.to_dms(longitude, mode))) text = [] if mode == 'dms': text.append('%%02i%s%%02i%s%%02i%s' % chars % latitude_dms) else: text.append('%%02i%s%%05.2f%s' % chars[:2] % latitude_dms) text.append('S' if latitude < 0 else 'N') if mode == 'dms': text.append(', %%03i%s%%02i%s%%02i%s' % chars % longitude_dms) else: text.append(', %%03i%s%%05.2f%s' % chars[:2] % longitude_dms) text.append('W' if longitude < 0 else 'E') return text
Generate a human readable DM/DMS location string. Args: latitude (float): Location's latitude longitude (float): Location's longitude mode (str): Coordinate formatting system to use unistr (bool): Whether to use extended character set
juraj-google-style
def exit(self, code=None, msg=None): if code is None: code = self.tcex.exit_code if code == 3: self.tcex.log.info(u'Changing exit code from 3 to 0.') code = 0 elif code not in [0, 1]: code = 1 self.tcex.exit(code, msg)
Playbook wrapper on TcEx exit method

Playbooks do not support partial failures so we change the exit code
from 3 to 0 and call it a partial success instead.

Args:
    code (Optional [integer]): The exit code value for the app.
juraj-google-style
def random_get_int(rnd: Optional[tcod.random.Random], mi: int, ma: int) -> int: return int( lib.TCOD_random_get_int(rnd.random_c if rnd else ffi.NULL, mi, ma) )
Return a random integer in the range: ``mi`` <= n <= ``ma``.

The result is affected by calls to :any:`random_set_distribution`.

Args:
    rnd (Optional[Random]): A Random instance, or None to use the default.
    mi (int): The lower bound of the random range, inclusive.
    ma (int): The upper bound of the random range, inclusive.

Returns:
    int: A random integer in the range ``mi`` <= n <= ``ma``.
juraj-google-style
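For example, passing None for the generator uses the library default:

roll = random_get_int(None, 1, 6)  # a die roll: 1 <= roll <= 6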
def configuration_from_paths(paths, strict=True): for path in paths: cfg = configfile_from_path(path, strict=strict).config return cfg
Get a Configuration object based on multiple file paths. Args: paths (iter of str): An iterable of file paths which identify config files on the system. strict (bool): Whether or not to parse the files in strict mode. Returns: confpy.core.config.Configuration: The loaded configuration object. Raises: NamespaceNotRegistered: If a file contains a namespace which is not defined. OptionNotRegistered: If a file contains an option which is not defined but resides under a valid namespace. UnrecognizedFileExtension: If there is no loader for a path.
codesearchnet
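A hypothetical call (paths are placeholders). The loop appears to rely on each load merging into a shared Configuration object, so the value returned after the last file reflects all of them:

cfg = configuration_from_paths(
    ['/etc/myapp/base.json', '/etc/myapp/override.json'])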
def cache_json(filename): def cache_decorator(cacheable_function): @wraps(cacheable_function) def cache_wrapper(*args, **kwargs): path = (CACHE_DIRECTORY + filename) check_create_folder(path) if os.path.exists(path): with open(path) as infile: return json.load(infile) else: function_output = cacheable_function(*args, **kwargs) with open(path, 'w') as outfile: json.dump(function_output, outfile) return function_output return cache_wrapper return cache_decorator
Caches the JSON-serializable output of the function to a given file Args: filename (str) The filename (sans directory) to store the output Returns: decorator, applicable to a function that produces JSON-serializable output
codesearchnet
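Typical decorator usage might look like this (the filename is a placeholder):

@cache_json('expensive_result.json')
def compute_stats():
    return {'answer': 42}  # stands in for expensive work

compute_stats()  # first call computes and writes the cache file
compute_stats()  # subsequent calls load the cached JSON instead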
def get_tf_dtype(self, allowed_set=None): if allowed_set: index = self.get_int(0, len(allowed_set) - 1) if allowed_set[index] not in _TF_DTYPES: raise tf.errors.InvalidArgumentError(None, None, 'Given dtype {} is not accepted.'.format(allowed_set[index])) return allowed_set[index] else: index = self.get_int(0, len(_TF_DTYPES) - 1) return _TF_DTYPES[index]
Return a random tensorflow dtype. Args: allowed_set: An allowlisted set of dtypes to choose from instead of all of them. Returns: A random type from the list containing all TensorFlow types.
github-repos
def deploy_ray_func(func, partition, kwargs): try: result = func(partition, **kwargs) except Exception: result = func(partition.to_pandas(), **kwargs) if isinstance(result, pandas.Series): result = pandas.DataFrame(result).T if isinstance(result, pandas.DataFrame): return pyarrow.Table.from_pandas(result) return result
Deploy a function to a partition in Ray. Args: func: The function to apply. partition: The partition to apply the function to. kwargs: A dictionary of keyword arguments for the function. Returns: The result of the function.
juraj-google-style
def _flush(self, buffer, start, end): buffer_size = len(buffer) if (not buffer_size): return with self._size_lock: if (end > self._size): with _handle_azure_exception(): self._resize(content_length=end, **self._client_kwargs) self._reset_head() if (buffer_size > self.MAX_FLUSH_SIZE): futures = [] for part_start in range(0, buffer_size, self.MAX_FLUSH_SIZE): buffer_part = buffer[part_start:(part_start + self.MAX_FLUSH_SIZE)] if (not len(buffer_part)): break start_range = (start + part_start) futures.append(self._workers.submit(self._update_range, data=buffer_part.tobytes(), start_range=start_range, end_range=((start_range + len(buffer_part)) - 1), **self._client_kwargs)) with _handle_azure_exception(): for future in _as_completed(futures): future.result() else: with _handle_azure_exception(): self._update_range(data=buffer.tobytes(), start_range=start, end_range=(end - 1), **self._client_kwargs)
Flush the write buffer of the stream if applicable. Args: buffer (memoryview): Buffer content. start (int): Start of buffer position to flush. Supported only with page blobs. end (int): End of buffer position to flush. Supported only with page blobs.
codesearchnet
def ConsumeRange(self, start, end): old = self.CurrentRange() if (old is None): return if (old.start > start): if (old.start < end): raise RuntimeError('Block end too high.') return if (old.start < start): raise RuntimeError('Block start too high.') if (old.end == end): del self.ranges[0] elif (old.end > end): self.ranges[0] = Range(end, old.end) else: raise RuntimeError('Block length exceeds range.')
Consumes an entire range, or part thereof.

If the finger has no ranges left, or the current range start is
higher than the end of the consumed block, nothing happens. Otherwise,
the current range is adjusted for the consumed block, or removed, if
the entire block is consumed. For things to work, the start of the
consumed range must equal the start of the current range, and the
length of the consumed range may not exceed the length of the current
range.

Args:
    start: Beginning of range to be consumed.
    end: First offset after the consumed range (end + 1).

Raises:
    RuntimeError: if the start position of the consumed range is
        higher than the start of the current range in the finger, or if
        the consumed range cuts across block boundaries.
codesearchnet
def get_ssh_client(ip_addr, ssh_key=None, host_name=None, ssh_tries=None, propagate_fail=True, username='root', password='123456'): host_name = (host_name or ip_addr) with LogTask(('Get ssh client for %s' % host_name), level='debug', propagate_fail=propagate_fail): ssh_timeout = int(config.get('ssh_timeout')) if (ssh_tries is None): ssh_tries = int(config.get('ssh_tries', 10)) start_time = time.time() client = paramiko.SSHClient() client.set_missing_host_key_policy(paramiko.AutoAddPolicy()) while (ssh_tries > 0): try: client.connect(ip_addr, username=username, password=password, key_filename=ssh_key, timeout=ssh_timeout) break except (socket.error, socket.timeout) as err: LOGGER.debug('Socket error connecting to %s: %s', host_name, err) except paramiko.ssh_exception.SSHException as err: LOGGER.debug('SSH error connecting to %s: %s', host_name, err) except EOFError as err: LOGGER.debug('EOFError connecting to %s: %s', host_name, err) ssh_tries -= 1 LOGGER.debug('Still got %d tries for %s', ssh_tries, host_name) time.sleep(1) else: end_time = time.time() raise LagoSSHTimeoutException(('Timed out (in %d s) trying to ssh to %s' % ((end_time - start_time), host_name))) return client
Get a connected SSH client

Args:
    ip_addr(str): IP address of the endpoint
    ssh_key(str or list of str): Path to a file which contains the private key
    host_name(str): The hostname of the endpoint
    ssh_tries(int): The number of attempts to connect to the endpoint
    propagate_fail(bool): If set to true, this event will be in the log
        and fail the outer stage. Otherwise, it will be discarded.
    username(str): The username to authenticate with
    password(str): Used for password authentication
        or for private key decryption

Returns:
    paramiko.SSHClient: a connected SSH client

Raises:
    :exc:`~LagoSSHTimeoutException`: If the client failed to connect
        after "ssh_tries"
codesearchnet
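A hypothetical session (the address and key path are placeholders); the returned object is a regular paramiko client:

client = get_ssh_client('192.168.122.10', ssh_key='/path/to/id_rsa')
_, stdout, _ = client.exec_command('uptime')
print(stdout.read().decode())
client.close()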
def run(argv=None, save_main_session=True, test_pipeline=None) -> PipelineResult: known_args, pipeline_args = parse_known_args(argv) pipeline_options = PipelineOptions(pipeline_args) pipeline_options.view_as(SetupOptions).save_main_session = save_main_session model_handler = GeminiModelHandler(model_name='gemini-2.0-flash-001', request_fn=generate_from_string, api_key=known_args.api_key, project=known_args.project, location=known_args.location) pipeline = test_pipeline if not test_pipeline: pipeline = beam.Pipeline(options=pipeline_options) prompts = ['What is 5+2?', 'Who is the protagonist of Lord of the Rings?', 'What is the air-speed velocity of a laden swallow?'] read_prompts = pipeline | 'Get prompt' >> beam.Create(prompts) predictions = read_prompts | 'RunInference' >> RunInference(model_handler) processed = predictions | 'PostProcess' >> beam.ParDo(PostProcessor()) _ = processed | 'PrintOutput' >> beam.Map(print) _ = processed | 'WriteOutput' >> beam.io.WriteToText(known_args.output, shard_name_template='', append_trailing_newlines=True) result = pipeline.run() result.wait_until_finish() return result
Args: argv: Command line arguments defined for this example. save_main_session: Used for internal testing. test_pipeline: Used for internal testing.
github-repos
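A hypothetical invocation; the flag names mirror the known_args fields read in the body, and the key, project, and output path are placeholders:

run(argv=[
    '--api_key', 'YOUR_API_KEY',       # placeholder
    '--project', 'my-gcp-project',     # placeholder
    '--location', 'us-central1',
    '--output', '/tmp/gemini_results.txt',
])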
def ExamineEvent(self, mediator, event): self._EnsureRequesterStarted() path_spec = event.pathspec event_identifiers = self._event_identifiers_by_pathspec[path_spec] event_identifier = event.GetIdentifier() event_identifiers.append(event_identifier) if ((event.data_type not in self.DATA_TYPES) or (not self._analyzer.lookup_hash)): return lookup_hash = '{0:s}_hash'.format(self._analyzer.lookup_hash) lookup_hash = getattr(event, lookup_hash, None) if (not lookup_hash): display_name = mediator.GetDisplayNameForPathSpec(path_spec) logger.warning('Lookup hash attribute: {0:s}_hash missing from event that originated from: {1:s}.'.format(self._analyzer.lookup_hash, display_name)) return path_specs = self._hash_pathspecs[lookup_hash] path_specs.append(path_spec) if (len(path_specs) == 1): self.hash_queue.put(lookup_hash)
Evaluates whether an event contains the right data for a hash lookup. Args: mediator (AnalysisMediator): mediates interactions between analysis plugins and other components, such as storage and dfvfs. event (EventObject): event.
codesearchnet
def all_tokens(self, delimiter=' ', label_list_ids=None): tokens = set() for utterance in self.utterances.values(): tokens = tokens.union(utterance.all_tokens(delimiter=delimiter, label_list_ids=label_list_ids)) return tokens
Return a list of all tokens occurring in one of the labels in the corpus. Args: delimiter (str): The delimiter used to split labels into tokens (see :meth:`audiomate.annotations.Label.tokenized`). label_list_ids (list): If not None, only labels from label-lists with an idx contained in this list are considered. Returns: :class:`set`: A set of distinct tokens.
codesearchnet
def node(self, name, label=None, _attributes=None, **attrs): name = self._quote(name) attr_list = self._attr_list(label, attrs, _attributes) line = self._node % (name, attr_list) self.body.append(line)
Create a node. Args: name: Unique identifier for the node inside the source. label: Caption to be displayed (defaults to the node ``name``). attrs: Any additional node attributes (must be strings).
juraj-google-style
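A hypothetical call, assuming `dot` is an instance of this graph class; extra keyword arguments pass through **attrs as node attributes:

dot.node('start', label='Start here', shape='box')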
def begin_run_group(project): from benchbuild.utils.db import create_run_group from datetime import datetime group, session = create_run_group(project) group.begin = datetime.now() group.status = 'running' session.commit() return group, session
Begin a run_group in the database. A run_group groups a set of runs for a given project. This models a series of runs that form a complete binary runtime test. Args: project: The project we begin a new run_group for. Returns: ``(group, session)`` where group is the created group in the database and session is the database session this group lives in.
juraj-google-style
def seek_to_beginning(self, *partitions): if not all([isinstance(p, TopicPartition) for p in partitions]): raise TypeError('partitions must be TopicPartition namedtuples') if not partitions: partitions = self._subscription.assigned_partitions() assert partitions, 'No partitions are currently assigned' else: for p in partitions: assert p in self._subscription.assigned_partitions(), 'Unassigned partition' for tp in partitions: log.debug("Seeking to beginning of partition %s", tp) self._subscription.need_offset_reset(tp, OffsetResetStrategy.EARLIEST)
Seek to the oldest available offset for partitions. Arguments: *partitions: Optionally provide specific TopicPartitions, otherwise default to all assigned partitions. Raises: AssertionError: If any partition is not currently assigned, or if no partitions are assigned.
juraj-google-style
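Typical usage on a consumer with assigned partitions might look like this (the topic name is a placeholder):

from kafka import TopicPartition

consumer.seek_to_beginning()                             # rewind every assigned partition
consumer.seek_to_beginning(TopicPartition('events', 0))  # or just one partition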
def filter_framework_files(files: List[Union[str, os.PathLike]], frameworks: Optional[List[str]]=None) -> List[Union[str, os.PathLike]]: if frameworks is None: frameworks = get_default_frameworks() framework_to_file = {} others = [] for f in files: parts = Path(f).name.split('_') if 'modeling' not in parts: others.append(f) continue if 'tf' in parts: framework_to_file['tf'] = f elif 'flax' in parts: framework_to_file['flax'] = f else: framework_to_file['pt'] = f return [framework_to_file[f] for f in frameworks if f in framework_to_file] + others
Filter a list of files to only keep the ones corresponding to a list of frameworks. Args: files (`List[Union[str, os.PathLike]]`): The list of files to filter. frameworks (`List[str]`, *optional*): The list of allowed frameworks. Returns: `List[Union[str, os.PathLike]]`: The list of filtered files.
github-repos
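A worked example with hypothetical file names:

files = ['modeling_bert.py', 'modeling_tf_bert.py',
         'modeling_flax_bert.py', 'configuration_bert.py']
filter_framework_files(files, frameworks=['pt'])
# -> ['modeling_bert.py', 'configuration_bert.py']
# non-modeling files are always kept; tf/flax variants are filtered out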
def _CropAndResizeGrad(op: ops.Operation, grad): image = op.inputs[0] if image.get_shape().is_fully_defined(): image_shape = image.get_shape().as_list() else: image_shape = array_ops.shape(image) allowed_types = [dtypes.float16, dtypes.float32, dtypes.float64] if op.inputs[0].dtype in allowed_types: grad0 = gen_image_ops.crop_and_resize_grad_image(grad, op.inputs[1], op.inputs[2], image_shape, T=op.get_attr('T'), method=op.get_attr('method')) else: grad0 = None grad1 = gen_image_ops.crop_and_resize_grad_boxes(grad, op.inputs[0], op.inputs[1], op.inputs[2]) return [grad0, grad1, None, None]
The derivatives for crop_and_resize. We back-propagate to the image only when the input image tensor has floating point dtype but we always back-propagate to the input boxes tensor. Args: op: The CropAndResize op. grad: The tensor representing the gradient w.r.t. the output. Returns: The gradients w.r.t. the input image, boxes, as well as the always-None gradients w.r.t. box_ind and crop_size.
github-repos
def is_polar(self, tol_dipole_per_unit_area=0.001): dip_per_unit_area = (self.dipole / self.surface_area) return (np.linalg.norm(dip_per_unit_area) > tol_dipole_per_unit_area)
Checks whether the surface is polar by computing the dipole per unit area. Note that the Slab must be oxidation state-decorated for this to work properly. Otherwise, the Slab will always be non-polar. Args: tol_dipole_per_unit_area (float): A tolerance. If the dipole magnitude per unit area is less than this value, the Slab is considered non-polar. Defaults to 1e-3, which is usually pretty good. Normalized dipole per unit area is used as it is more reliable than using the total, which tends to be larger for slabs with larger surface areas.
codesearchnet
def correct_absolute_refs(self, construction_table): c_table = construction_table.copy() abs_refs = constants.absolute_refs problem_index = self.check_absolute_refs(c_table) for i in problem_index: order_of_refs = iter(permutations(abs_refs.keys())) finished = False while not finished: if self._has_valid_abs_ref(i, c_table): finished = True else: row = c_table.index.get_loc(i) c_table.iloc[row, row:] = next(order_of_refs)[row:3] return c_table
Reindex construction_table if a linear reference is present in the first three rows.

Uses :meth:`~Cartesian.check_absolute_refs` to obtain the problematic indices.

Args:
    construction_table (pd.DataFrame):

Returns:
    pd.DataFrame: Appropriately renamed construction table.
juraj-google-style
def _get_gcc_major_version(path_to_gcc: str) -> int: logging.info('Running echo __GNUC__ | %s -E -P -', path_to_gcc) gcc_version_proc = subprocess.run([path_to_gcc, '-E', '-P', '-'], input='__GNUC__', check=True, capture_output=True, text=True) major_version = int(gcc_version_proc.stdout) logging.info('%s reports major version %s.', path_to_gcc, major_version) return major_version
Gets the major version of the gcc at `path_to_gcc`. Args: path_to_gcc: Path to a gcc executable Returns: The major version.
github-repos
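The trick here is that the preprocessor expands __GNUC__ to the compiler's major version, so the captured stdout is a bare integer. For instance (the path is a placeholder):

major = _get_gcc_major_version('/usr/bin/gcc')  # e.g. 12 on a GCC 12 toolchain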
def search(pattern): def match(napp): username = napp.get('username', napp.get('author')) strings = ['{}/{}'.format(username, napp.get('name')), napp.get('description')] + napp.get('tags') return any(pattern.match(string) for string in strings) napps = NAppsClient().get_napps() return [napp for napp in napps if match(napp)]
Search all server NApps matching pattern. Args: pattern (str): Python regular expression.
juraj-google-style
def get_push_pop(): push = copy.deepcopy(PUSH) pop = copy.deepcopy(POP) anno.setanno(push, 'pop', pop) anno.setanno(push, 'gen_push', True) anno.setanno(pop, 'push', push) op_id = _generate_op_id() return (push, pop, op_id)
Create pop and push nodes that are linked. Returns: A push and pop node which have `push_func` and `pop_func` annotations respectively, identifying them as such. They also have a `pop` and `push` annotation respectively, which links the push node to the pop node and vice versa.
codesearchnet
def process_file(self, in_filename, out_filename, no_change_to_outfile_on_error=False): with open(in_filename, 'r') as in_file, tempfile.NamedTemporaryFile('w', delete=False) as temp_file: ret = self.process_opened_file(in_filename, in_file, out_filename, temp_file) if no_change_to_outfile_on_error and ret[0] == 0: os.remove(temp_file.name) else: shutil.move(temp_file.name, out_filename) return ret
Process the given python file for incompatible changes.

Args:
    in_filename: filename to parse
    out_filename: output file to write to
    no_change_to_outfile_on_error: if True, do not modify the output file on errors

Returns:
    A tuple representing number of files processed, log of actions, errors
github-repos
def add_messages(self, validation): if not isinstance(validation, Validation): raise TypeError("Argument must be of type Validation") self.messages.extend(validation.messages)
Adds all the messages in the specified `Validation` object to this instance's messages array. Args: validation (Validation): An object containing the messages to add to this instance's messages.
juraj-google-style
def cache(self, domain, data_type, ttl_minutes=None, mapping=None): from .tcex_cache import TcExCache return TcExCache(self, domain, data_type, ttl_minutes, mapping)
Get instance of the Cache module. Args: domain (str): The domain can be either "system", "organization", or "local". When using "organization" the data store can be accessed by any Application in the entire org, while "local" access is restricted to the App writing the data. The "system" option should not be used in almost all cases. data_type (str): The data type descriptor (e.g., tc:whois:cache). ttl_minutes (int): The number of minutes the cache is valid. Returns: object: An instance of the Cache Class.
codesearchnet