Columns: code (string, 20 to 4.93k chars) · docstring (string, 33 to 1.27k chars) · source (string, 3 classes)
def compute_output_signature(self, input_signature): def check_type_return_shape(s): if not isinstance(s, tensor_lib.TensorSpec): raise TypeError('Only TensorSpec signature types are supported, but saw signature entry: {}.'.format(s)) return s.shape input_shape = nest.map_structure(check_type_return_shape, input_signature) output_shape = self.compute_output_shape(input_shape) dtype = self._compute_dtype if dtype is None: input_dtypes = [s.dtype for s in nest.flatten(input_signature)] dtype = input_dtypes[0] return nest.map_structure(lambda s: tensor_lib.TensorSpec(dtype=dtype, shape=s), output_shape)
Compute the output tensor signature of the layer based on the inputs. Unlike a TensorShape object, a TensorSpec object contains both shape and dtype information for a tensor. This method allows layers to provide output dtype information if it is different from the input dtype. For any layer that doesn't implement this function, the framework will fall back to use `compute_output_shape`, and will assume that the output dtype matches the input dtype. Args: input_signature: Single TensorSpec or nested structure of TensorSpec objects, describing a candidate input for the layer. Returns: Single TensorSpec or nested structure of TensorSpec objects, describing how the layer would transform the provided input. Raises: TypeError: If input_signature contains a non-TensorSpec object.
github-repos
def link_to_storage(self, sensor_log): if (self.walker is not None): self._sensor_log.destroy_walker(self.walker) self.walker = None self.walker = sensor_log.create_walker(self.selector) self._sensor_log = sensor_log
Attach this DataStreamer to an underlying SensorLog. Calling this method is required if you want to use this DataStreamer to generate reports from the underlying data in the SensorLog. You can call it multiple times and it will unlink itself from any previous SensorLog each time. Args: sensor_log (SensorLog): The SensorLog to attach to. A StreamWalker is created on it for this streamer's selector so that we can check whether the streamer has been triggered.
codesearchnet
def publish(self, data): if self.entity_api_key == "": return {'status': 'failure', 'response': 'No API key found in request'} publish_url = self.base_url + "api/0.1.0/publish" publish_headers = {"apikey": self.entity_api_key} publish_data = { "exchange": "amq.topic", "key": str(self.entity_id), "body": str(data) } with self.no_ssl_verification(): r = requests.post(publish_url, json.dumps(publish_data), headers=publish_headers) response = dict() if "No API key" in str(r.content.decode("utf-8")): response["status"] = "failure" r = json.loads(r.content.decode("utf-8"))['message'] elif 'publish message ok' in str(r.content.decode("utf-8")): response["status"] = "success" r = r.content.decode("utf-8") else: response["status"] = "failure" r = r.content.decode("utf-8") response["response"] = str(r) return response
This function allows an entity to publish data to the middleware. Args: data (string): contents to be published by this entity.
juraj-google-style
def description(self): for e in self: if isinstance(e, Description): return e.value raise NoSuchAnnotation
Obtain the description associated with the element. Raises: :class:`NoSuchAnnotation` if there is no associated description.
codesearchnet
def __init__(self, name, requires, at_least_one, optional): self.name = name self.requires = requires self.at_least_one = at_least_one self.optional = optional
Create Intent object Args: name(str): Name for Intent requires(list): Entities that are required at_least_one(list): One of these Entities are required optional(list): Optional Entities used by the intent
juraj-google-style
def _required_idiom(tag_name, index, notfoundmsg): cond = '' if (index > 0): cond = (' or len(el) - 1 < %d' % index) tag_name = str(tag_name) output = (IND + ('if not el%s:\n' % cond)) output += ((IND + IND) + 'raise UserWarning(\n') output += (((IND + IND) + IND) + ('%s +\n' % repr((notfoundmsg.strip() + '\n')))) output += ((((IND + IND) + IND) + repr(('Tag name: ' + tag_name))) + " + '\\n' +\n") output += (((IND + IND) + IND) + "'El:' + str(el) + '\\n' +\n") output += (((IND + IND) + IND) + "'Dom:' + str(dom)\n") output += ((IND + IND) + ')\n\n') return ((output + IND) + ('el = el[%d]\n\n' % index))
Generate code which makes sure that `tag_name` has enough items. Args: tag_name (str): Name of the container. index (int): Index of the item you want to obtain from container. notfoundmsg (str): Raise :class:`.UserWarning` with debug data and following message. Returns: str: Python code.
codesearchnet
def RecursiveDownload(dir_obj, target_dir, max_depth=10, depth=1, overwrite=False, max_threads=10): if not isinstance(dir_obj, aff4.AFF4Volume): return thread_pool = threadpool.ThreadPool.Factory("Downloader", max_threads) thread_pool.Start() for sub_file_entry in dir_obj.OpenChildren(): path_elements = [target_dir] sub_target_dir = u"/".join(path_elements) try: if isinstance(sub_file_entry, aff4.AFF4Stream): args = (sub_file_entry.urn, sub_target_dir, sub_file_entry.token, overwrite) thread_pool.AddTask( target=CopyAFF4ToLocal, args=args, name="Downloader") elif "Container" in sub_file_entry.behaviours: if depth >= max_depth: continue try: os.makedirs(sub_target_dir) except OSError: pass RecursiveDownload( sub_file_entry, sub_target_dir, overwrite=overwrite, depth=depth + 1) except IOError: logging.exception("Unable to download %s", sub_file_entry.urn) finally: sub_file_entry.Close() if depth <= 1: thread_pool.Stop(join_timeout=THREADPOOL_JOIN_TIMEOUT)
Recursively downloads a file entry to the target path. Args: dir_obj: An aff4 object that contains children. target_dir: Full path of the directory to write to. max_depth: Depth to download to. 1 means just the directory itself. depth: Current depth of recursion. overwrite: Should we overwrite files that exist. max_threads: Use this many threads to do the downloads.
juraj-google-style
def shape(x): if any_symbolic_tensors((x,)): return x.shape return backend.core.shape(x)
Gets the shape of the tensor input. Note: On the TensorFlow backend, when `x` is a `tf.Tensor` with dynamic shape, dimensions which are dynamic in the context of a compiled function will have a `tf.Tensor` value instead of a static integer value. Args: x: A tensor. This function will try to access the `shape` attribute of the input tensor. Returns: A tuple of integers or None values, indicating the shape of the input tensor. Example: >>> x = keras.ops.zeros((8, 12)) >>> keras.ops.shape(x) (8, 12)
github-repos
def combine(self, *rnf_profiles): for rnf_profile in rnf_profiles: self.prefix_width = max(self.prefix_width, rnf_profile.prefix_width) self.read_tuple_id_width = max(self.read_tuple_id_width, rnf_profile.read_tuple_id_width) self.genome_id_width = max(self.genome_id_width, rnf_profile.genome_id_width) self.chr_id_width = max(self.chr_id_width, rnf_profile.chr_id_width) self.coor_width = max(self.coor_width, rnf_profile.coor_width)
Combine more profiles and set their maximal values. Args: *rnf_profiles (rnftools.rnfformat.RnfProfile): RNF profile.
juraj-google-style
def get_shortest_distance(self, other): coords = ['x', 'y', 'z'] pos1 = self.loc[:, coords].values pos2 = other.loc[:, coords].values D = self._jit_pairwise_distances(pos1, pos2) i, j = np.unravel_index(D.argmin(), D.shape) d = D[i, j] i, j = dict(enumerate(self.index))[i], dict(enumerate(other.index))[j] return i, j, d
Calculate the shortest distance between self and other. Args: other (Cartesian): The other Cartesian instance. Returns: tuple: Returns a tuple ``i, j, d`` with the following meaning: ``i``: The index on self that minimises the pairwise distance. ``j``: The index on other that minimises the pairwise distance. ``d``: The distance between self and other. (float)
juraj-google-style
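The entry above relies on a compiled `_jit_pairwise_distances` helper that is not shown. As a hedged, self-contained sketch of the same idea, the snippet below finds the closest pair between two coordinate arrays with plain NumPy (plain array inputs instead of the Cartesian objects above are an assumption for illustration).

```python
import numpy as np

def shortest_distance(pos1: np.ndarray, pos2: np.ndarray):
    """Return (i, j, d): indices into pos1/pos2 with the smallest pairwise distance."""
    # Broadcast to an (n, m) matrix of Euclidean distances.
    diff = pos1[:, None, :] - pos2[None, :, :]
    D = np.sqrt((diff ** 2).sum(axis=-1))
    i, j = np.unravel_index(D.argmin(), D.shape)
    return i, j, D[i, j]

a = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
b = np.array([[5.0, 5.0, 5.0], [1.0, 1.0, 0.0]])
print(shortest_distance(a, b))  # indices 1 and 1, distance 1.0
```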
def _block_orth(self, p1, p2, p3): p1_shape = p1.shape.as_list() if p1_shape != p2.shape.as_list() or p1_shape != p3.shape.as_list(): raise ValueError(f'The dimension of the matrices must be the same. Received p1.shape={p1.shape}, p2.shape={p2.shape} and p3.shape={p3.shape}.') n = p1_shape[0] eye = linalg_ops_impl.eye(n, dtype=self.dtype) kernel2x2x2 = {} def matmul(p1, p2, p3): return math_ops.matmul(math_ops.matmul(p1, p2), p3) def cast(i, p): return i * p + (1 - i) * (eye - p) for i in [0, 1]: for j in [0, 1]: for k in [0, 1]: kernel2x2x2[i, j, k] = matmul(cast(i, p1), cast(j, p2), cast(k, p3)) return kernel2x2x2
Construct a 2 x 2 x 2 kernel. Used to construct an orthogonal kernel. Args: p1: A symmetric projection matrix. p2: A symmetric projection matrix. p3: A symmetric projection matrix. Returns: A 2 x 2 x 2 kernel. Raises: ValueError: If the dimensions of p1, p2 and p3 are different.
github-repos
def _build_frange_part(start, stop, stride, zfill=0): if stop is None: return '' pad_start = pad(start, zfill) pad_stop = pad(stop, zfill) if stride is None or start == stop: return '{0}'.format(pad_start) elif abs(stride) == 1: return '{0}-{1}'.format(pad_start, pad_stop) else: return '{0}-{1}x{2}'.format(pad_start, pad_stop, stride)
Private method: builds a proper and padded frame range string. Args: start (int): first frame stop (int or None): last frame stride (int or None): increment zfill (int): width for zero padding Returns: str: the padded frame range string
juraj-google-style
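The `pad` helper used above is not shown; the following standalone sketch approximates it with `str.zfill`, purely to illustrate the three output forms ('start', 'start-stop', 'start-stopxstride').

```python
def build_frange_part(start, stop, stride=None, zfill=0):
    """Format a frame range as 'start', 'start-stop' or 'start-stopxstride'."""
    if stop is None:
        return ''
    pad_start, pad_stop = str(start).zfill(zfill), str(stop).zfill(zfill)
    if stride is None or start == stop:
        return pad_start
    if abs(stride) == 1:
        return '{0}-{1}'.format(pad_start, pad_stop)
    return '{0}-{1}x{2}'.format(pad_start, pad_stop, stride)

print(build_frange_part(1, 1, zfill=3))              # 001
print(build_frange_part(1, 10, stride=1))            # 1-10
print(build_frange_part(1, 100, stride=2, zfill=4))  # 0001-0100x2
```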
def __init__(self, output_mediator): super(MySQL4n6TimeOutputModule, self).__init__(output_mediator) self._connection = None self._count = None self._cursor = None self._dbname = 'log2timeline' self._host = 'localhost' self._password = 'forensic' self._port = None self._user = 'root'
Initializes the output module object. Args: output_mediator (OutputMediator): mediates interactions between output modules and other components, such as storage and dfvfs.
juraj-google-style
def __init__(self, success, uid, *, payload=None): self.success = success self.uid = uid self.payload = payload if payload is not None else {}
Initialise the response object. Args: success (bool): True if the request was successful. uid (str): Unique response id. payload (dict): A dictionary with the response data.
juraj-google-style
def __init__(self, tpu_cluster_resolver=None, device_assignment=None): logging.warning('`tf.distribute.experimental.TPUStrategy` is deprecated, please use the non-experimental symbol `tf.distribute.TPUStrategy` instead.') super().__init__(TPUExtended(self, tpu_cluster_resolver, device_assignment=device_assignment)) distribute_lib.distribution_strategy_gauge.get_cell('V2').set('TPUStrategy') distribute_lib.distribution_strategy_replica_gauge.get_cell('num_workers').set(self.extended.num_hosts) distribute_lib.distribution_strategy_replica_gauge.get_cell('num_replicas_per_worker').set(self.extended.num_replicas_per_host) self._enable_packed_variable_in_eager_mode = True
Synchronous training in TPU donuts or Pods. Args: tpu_cluster_resolver: A tf.distribute.cluster_resolver.TPUClusterResolver, which provides information about the TPU cluster. device_assignment: Optional `tf.tpu.experimental.DeviceAssignment` to specify the placement of replicas on the TPU cluster.
github-repos
def authenticated_request(self, endpoint, method='GET', params=None, data=None): headers = { 'X-Access-Token' : self.access_token, 'X-Client-ID' : self.client_id } return self.api.request(endpoint, method=method, headers=headers, params=params, data=data)
Send a request to the given Wunderlist API with 'X-Access-Token' and 'X-Client-ID' headers and ensure the response code is as expected given the request type Params: endpoint -- API endpoint to send request to Keyword Args: method -- GET, PUT, PATCH, DELETE, etc. params -- parameters to encode in the request data -- data to send with the request
juraj-google-style
def parse(cls, version_string, partial=False, coerce=False): if not version_string: raise ValueError('Invalid empty version string: %r' % version_string) if partial: version_re = cls.partial_version_re else: version_re = cls.version_re match = version_re.match(version_string) if not match: raise ValueError('Invalid version string: %r' % version_string) major, minor, patch, prerelease, build = match.groups() if _has_leading_zero(major): raise ValueError("Invalid leading zero in major: %r" % version_string) if _has_leading_zero(minor): raise ValueError("Invalid leading zero in minor: %r" % version_string) if _has_leading_zero(patch): raise ValueError("Invalid leading zero in patch: %r" % version_string) major = int(major) minor = cls._coerce(minor, partial) patch = cls._coerce(patch, partial) if prerelease is None: if partial and (build is None): return (major, minor, patch, None, None) else: prerelease = () elif prerelease == '': prerelease = () else: prerelease = tuple(prerelease.split('.')) cls._validate_identifiers(prerelease, allow_leading_zeroes=False) if build is None: if partial: build = None else: build = () elif build == '': build = () else: build = tuple(build.split('.')) cls._validate_identifiers(build, allow_leading_zeroes=True) return (major, minor, patch, prerelease, build)
Parse a version string into a Version() object. Args: version_string (str), the version string to parse partial (bool), whether to accept incomplete input coerce (bool), whether to try to map the passed in string into a valid Version.
juraj-google-style
def post_process_semantic_segmentation(self, outputs, target_sizes: Optional[List[Tuple]]=None): logits = outputs.logits if target_sizes is not None: if len(logits) != len(target_sizes): raise ValueError('Make sure that you pass in as many target sizes as the batch dimension of the logits') if is_torch_tensor(target_sizes): target_sizes = target_sizes.numpy() semantic_segmentation = [] for idx in range(len(logits)): resized_logits = torch.nn.functional.interpolate(logits[idx].unsqueeze(dim=0), size=target_sizes[idx], mode='bilinear', align_corners=False) semantic_map = resized_logits[0].argmax(dim=0) semantic_segmentation.append(semantic_map) else: semantic_segmentation = logits.argmax(dim=1) semantic_segmentation = [semantic_segmentation[i] for i in range(semantic_segmentation.shape[0])] return semantic_segmentation
Converts the output of [`BeitForSemanticSegmentation`] into semantic segmentation maps. Only supports PyTorch. Args: outputs ([`BeitForSemanticSegmentation`]): Raw outputs of the model. target_sizes (`List[Tuple]` of length `batch_size`, *optional*): List of tuples corresponding to the requested final size (height, width) of each prediction. If unset, predictions will not be resized. Returns: semantic_segmentation: `List[torch.Tensor]` of length `batch_size`, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if `target_sizes` is specified). Each entry of each `torch.Tensor` correspond to a semantic class id.
github-repos
def save_as_json(total: list, name='data.json', sort_by: str=None, no_duplicate=False, order='asc'): if sort_by: reverse = (order == 'desc') total = sorted(total, key=itemgetter(sort_by), reverse=reverse) if no_duplicate: total = [key for (key, _) in groupby(total)] data = json.dumps(total, ensure_ascii=False) Path(name).write_text(data, encoding='utf-8')
Save what you crawled as a json file. Args: total (list): Total of data you crawled. name (str, optional): Defaults to 'data.json'. The name of the file. sort_by (str, optional): Defaults to None. Sort items by a specific key. no_duplicate (bool, optional): Defaults to False. If True, it will remove duplicated data. order (str, optional): Defaults to 'asc'. The opposite option is 'desc'.
codesearchnet
def request_via_socket(sock, search_target): msgparts = dict(HOST=MCAST_IP_PORT, MAN='"ssdp:discover"', MX='3', ST=search_target) msg = encode_request('M-SEARCH * HTTP/1.1', **msgparts) sock.sendto(msg, (MCAST_IP, MCAST_PORT))
Send an SSDP search request via the provided socket. Args: sock: A socket suitable for use to send a broadcast message - preferably one created by :py:func:`make_socket`. search_target (string): A :term:`resource type` target to search for.
juraj-google-style
def AddTrip(self, schedule=None, headsign=None, service_period=None, trip_id=None): if schedule is None: assert self._schedule is not None schedule = self._schedule if trip_id is None: trip_id = util.FindUniqueId(schedule.trips) if service_period is None: service_period = schedule.GetDefaultServicePeriod() trip_class = self.GetGtfsFactory().Trip trip_obj = trip_class(route=self, headsign=headsign, service_period=service_period, trip_id=trip_id) schedule.AddTripObject(trip_obj) return trip_obj
Add a trip to this route. Args: schedule: a Schedule object which will hold the new trip or None to use the schedule of this route. headsign: headsign of the trip as a string service_period: a ServicePeriod object or None to use schedule.GetDefaultServicePeriod() trip_id: optional trip_id for the new trip Returns: a new Trip object
juraj-google-style
def _write_to_hdx(self, action, data, id_field_name, file_to_upload=None): file = None try: if file_to_upload: file = open(file_to_upload, 'rb') files = [('upload', file)] else: files = None return self.configuration.call_remoteckan(self.actions()[action], data, files=files) except Exception as e: raisefrom(HDXError, 'Failed when trying to %s %s! (POST)' % (action, data[id_field_name]), e) finally: if file_to_upload and file: file.close()
Creates or updates an HDX object in HDX and returns the HDX object metadata dict Args: action (str): Action to perform eg. 'create', 'update' data (Dict): Data to write to HDX id_field_name (str): Name of field containing HDX object identifier or None file_to_upload (Optional[str]): File to upload to HDX Returns: Dict: HDX object metadata
juraj-google-style
def _update_old_module(old_module: types.ModuleType, new_module: types.ModuleType) -> None: old_module.__dict__.clear() old_module.__dict__.update(new_module.__dict__)
Mutate the old module version with the new dict. This also tries to update the classes, functions, ... from the old module (so instances are updated in-place). Args: old_module: Old module to update new_module: New module
github-repos
def proba2onehot(proba: [list, np.ndarray], confident_threshold: float, classes: [list, np.ndarray]) -> np.ndarray: return labels2onehot(proba2labels(proba, confident_threshold, classes), classes)
Convert vectors of probabilities to one-hot representations using a confidence threshold Args: proba: samples where each sample is a vector of probabilities of belonging to the given classes confident_threshold: probability threshold above which a sample is considered to belong to a class classes: array of classes' names Returns: 2d array with one-hot representation of given samples
juraj-google-style
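Neither `proba2labels` nor `labels2onehot` appears in this section, so here is a minimal NumPy sketch of the thresholding idea. The fallback to the argmax when no class clears the threshold is an assumption for illustration, not necessarily the library's exact behaviour.

```python
import numpy as np

def proba_to_onehot(proba, threshold):
    """Multi-hot encode every class whose probability exceeds `threshold`;
    if no class passes, fall back to the argmax (assumed behaviour)."""
    proba = np.asarray(proba)
    onehot = (proba > threshold).astype(int)
    for row, hot in zip(proba, onehot):
        if hot.sum() == 0:
            hot[row.argmax()] = 1  # rows of `onehot` are views, so this edits it in place
    return onehot

proba = [[0.1, 0.7, 0.2],
         [0.3, 0.3, 0.4]]
print(proba_to_onehot(proba, threshold=0.5))
# [[0 1 0]
#  [0 0 1]]
```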
def handle_result(self, completed_bundle: '_Bundle', completed_timers, result: 'TransformResult'): with self._lock: committed_bundles, unprocessed_bundles = self._commit_bundles(result.uncommitted_output_bundles, result.unprocessed_bundles) self._metrics.commit_logical(completed_bundle, result.logical_metric_updates) self._update_side_inputs_container(committed_bundles, result) tasks = self._watermark_manager.update_watermarks(completed_bundle, result.transform, completed_timers, committed_bundles, unprocessed_bundles, result.keyed_watermark_holds, self._side_inputs_container) self._pending_unblocked_tasks.extend(tasks) if result.counters: for counter in result.counters: merged_counter = self._counter_factory.get_counter(counter.name, counter.combine_fn) merged_counter.accumulator.merge([counter.accumulator]) existing_keyed_state = self._transform_keyed_states[result.transform] for k, v in result.partial_keyed_state.items(): existing_keyed_state[k] = v return committed_bundles
Handle the provided result produced after evaluating the input bundle. Handle the provided TransformResult, produced after evaluating the provided committed bundle (potentially None, if the result of a root PTransform). The result is the output of running the transform contained in the TransformResult on the contents of the provided bundle. Args: completed_bundle: the bundle that was processed to produce the result. completed_timers: the timers that were delivered to produce the completed_bundle. result: the ``TransformResult`` of evaluating the input bundle Returns: the committed bundles contained within the handled result.
github-repos
def get_auth(self, key, is_list=False, is_optional=False, is_secret=False, is_local=False, default=None, options=None): if is_list: return self._get_typed_list_value(key=key, target_type=AuthSpec, type_convert=self.parse_auth_spec, is_optional=is_optional, is_secret=is_secret, is_local=is_local, default=default, options=options) return self._get_typed_value(key=key, target_type=AuthSpec, type_convert=self.parse_auth_spec, is_optional=is_optional, is_secret=is_secret, is_local=is_local, default=default, options=options)
Get the value corresponding to the key and convert it to `AuthSpec`. Args: key: the dict key. is_list: If this is one element or a list of elements. is_optional: To raise an error if key was not found. is_secret: If the key is a secret. is_local: If the key is local to this service. default: default value if is_optional is True. options: list/tuple if provided, the value must be one of these values. Returns: `AuthSpec`: value corresponding to the key.
codesearchnet
def _CreateOutputModule(self, options): formatter_mediator = formatters_mediator.FormatterMediator( data_location=self._data_location) try: formatter_mediator.SetPreferredLanguageIdentifier( self._preferred_language) except (KeyError, TypeError) as exception: raise RuntimeError(exception) mediator = output_mediator.OutputMediator( self._knowledge_base, formatter_mediator, preferred_encoding=self.preferred_encoding) mediator.SetTimezone(self._preferred_time_zone) try: output_module = output_manager.OutputManager.NewOutputModule( self._output_format, mediator) except (KeyError, ValueError) as exception: raise RuntimeError( 'Unable to create output module with error: {0!s}'.format( exception)) if output_manager.OutputManager.IsLinearOutputModule(self._output_format): output_file_object = open(self._output_filename, 'wb') output_writer = tools.FileObjectOutputWriter(output_file_object) output_module.SetOutputWriter(output_writer) helpers_manager.ArgumentHelperManager.ParseOptions(options, output_module) missing_parameters = output_module.GetMissingArguments() while missing_parameters: for parameter in missing_parameters: value = self._PromptUserForInput( 'Missing parameter {0:s} for output module'.format(parameter)) if value is None: logger.warning( 'Unable to set the missing parameter for: {0:s}'.format( parameter)) continue setattr(options, parameter, value) helpers_manager.ArgumentHelperManager.ParseOptions( options, output_module) missing_parameters = output_module.GetMissingArguments() return output_module
Creates the output module. Args: options (argparse.Namespace): command line arguments. Returns: OutputModule: output module. Raises: RuntimeError: if the output module cannot be created.
juraj-google-style
def setup(self, keywords=None): self._keywords = keywords self._output_path = tempfile.mkdtemp()
Sets up the _keywords attribute. Args: keywords: pipe-separated list of keywords to search
codesearchnet
def Relay(self, inventory): inventory = InvPayload(type=inventory.InventoryType, hashes=[inventory.Hash.ToBytes()]) m = Message("inv", inventory) self.SendSerializedMessage(m) return True
Wrap the inventory in an InvPayload object and send it over the wire to the remote node. Args: inventory: Returns: bool: True (fixed)
juraj-google-style
def _ParsePerformanceOptions(self, options): self._buffer_size = getattr(options, 'buffer_size', 0) if self._buffer_size: try: if self._buffer_size[-1].lower() == 'm': self._buffer_size = int(self._buffer_size[:-1], 10) self._buffer_size *= self._BYTES_IN_A_MIB else: self._buffer_size = int(self._buffer_size, 10) except ValueError: raise errors.BadConfigOption( 'Invalid buffer size: {0!s}.'.format(self._buffer_size)) self._queue_size = self.ParseNumericOption(options, 'queue_size')
Parses the performance options. Args: options (argparse.Namespace): command line arguments. Raises: BadConfigOption: if the options are invalid.
juraj-google-style
def attention_mask_ignore_padding(inputs, dtype=tf.float32): inputs = rename_length_to_memory_length(inputs) return mtf.cast(mtf.equal(inputs, 0), dtype) * -1e9
Bias for encoder-decoder attention. Args: inputs: a mtf.Tensor with shape [..., length_dim] dtype: a tf.dtype Returns: a mtf.Tensor with shape [..., memory_length_dim]
juraj-google-style
def iplot_state_paulivec(rho, figsize=None, slider=False, show_legend=False): html_template = Template('\n <p>\n <div id="paulivec_$divNumber"></div>\n </p>\n ') javascript_template = Template('\n <script>\n requirejs.config({\n paths: {\n qVisualization: "https: rho = _validate_input_state(rho) if (figsize is None): figsize = (7, 5) options = {'width': figsize[0], 'height': figsize[1], 'slider': int(slider), 'show_legend': int(show_legend)} div_number = str(time.time()) div_number = re.sub('[.]', '', div_number) data_to_plot = [] rho_data = process_data(rho) data_to_plot.append(dict(data=rho_data)) html = html_template.substitute({'divNumber': div_number}) javascript = javascript_template.substitute({'divNumber': div_number, 'executions': data_to_plot, 'options': options}) display(HTML((html + javascript)))
Create a paulivec representation. Graphical representation of the input array. Args: rho (array): State vector or density matrix. figsize (tuple): Figure size in pixels. slider (bool): activate slider show_legend (bool): show legend of graph content
codesearchnet
def plot_time_series(self, f_start=None, f_stop=None, if_id=0, logged=True, orientation='h', MJD_time=False, **kwargs): ax = plt.gca() (plot_f, plot_data) = self.grab_data(f_start, f_stop, if_id) if (logged and (self.header[b'nbits'] >= 8)): plot_data = db(plot_data) if (len(plot_data.shape) > 1): plot_data = plot_data.mean(axis=1) else: plot_data = plot_data.mean() extent = self._calc_extent(plot_f=plot_f, plot_t=self.timestamps, MJD_time=MJD_time) plot_t = np.linspace(extent[2], extent[3], len(self.timestamps)) if MJD_time: tlabel = 'Time [MJD]' else: tlabel = 'Time [s]' if logged: plabel = 'Power [dB]' else: plabel = 'Power [counts]' if ('v' in orientation): plt.plot(plot_data, plot_t, **kwargs) plt.xlabel(plabel) else: plt.plot(plot_t, plot_data, **kwargs) plt.xlabel(tlabel) plt.ylabel(plabel) ax.autoscale(axis='both', tight=True)
Plot the time series. Args: f_start (float): start frequency, in MHz f_stop (float): stop frequency, in MHz logged (bool): Plot in linear (False) or dB units (True), kwargs: keyword args to be passed to matplotlib imshow()
codesearchnet
def encode_field(self, field, value): if isinstance(field, messages.BytesField): if field.repeated: value = [base64.b64encode(byte) for byte in value] else: value = base64.b64encode(value) elif isinstance(field, message_types.DateTimeField): if field.repeated: value = [i.isoformat() for i in value] else: value = value.isoformat() return value
Encode a python field value to a JSON value. Args: field: A ProtoRPC field instance. value: A python value supported by field. Returns: A JSON serializable value appropriate for field.
juraj-google-style
def safe_datetime_cast(self, col): casted_dates = pd.to_datetime(col[self.col_name], format=self.date_format, errors='coerce') if len(casted_dates[casted_dates.isnull()]): slice_ = casted_dates.isnull() & ~col[self.col_name].isnull() col[slice_][self.col_name].apply(self.strptime_format) return casted_dates
Parses string values into datetime. Args: col(pandas.DataFrame): Data to transform. Returns: pandas.Series
juraj-google-style
def strip_path_prefix(ipath, prefix): if prefix is None: return ipath return ipath[len(prefix):] if ipath.startswith(prefix) else ipath
Strip prefix from path. Args: ipath: input path prefix: the prefix to remove, if it is found in :ipath: Examples: >>> strip_path_prefix("/foo/bar", "/bar") '/foo/bar' >>> strip_path_prefix("/foo/bar", "/") 'foo/bar' >>> strip_path_prefix("/foo/bar", "/foo") '/bar' >>> strip_path_prefix("/foo/bar", "None") '/foo/bar'
juraj-google-style
def _convert_as_saved_model(self): temp_dir = tempfile.mkdtemp() try: graph_def, input_tensors, output_tensors = self._convert_keras_to_saved_model(temp_dir) if self.saved_model_dir: return super(TFLiteKerasModelConverterV2, self).convert(graph_def, input_tensors, output_tensors) finally: shutil.rmtree(temp_dir, True)
Converts a Keras model as a saved model. Returns: The converted data in serialized format.
github-repos
def process(self, element: tuple[str, prediction_log_pb2.PredictionLog]) -> Iterable[str]: filename, predict_log = (element[0], element[1].predict_log) output_value = predict_log.response.outputs output_tensor = tf.io.decode_raw(output_value['output_0'].tensor_content, out_type=tf.float32) max_index_output_tensor = tf.math.argmax(output_tensor, axis=0) yield (filename + ',' + str(tf.get_static_value(max_index_output_tensor)))
Args: element: Tuple of str, and PredictionLog. Inference can be parsed from prediction_log returns: str of filename and inference.
github-repos
def __init__(self, app): self.app = app flask_secret_key = app.config.get('SECRET_KEY', None) if not flask_secret_key: raise ConfigError('Config setting SECRET_KEY is missing.') key = flask_secret_key.encode() if len(key)<32: print('WARNING: Flask-User TokenManager: SECRET_KEY is shorter than 32 bytes.') key = key + b' '*32 key32 = key[:32] base64_key32 = base64.urlsafe_b64encode(key32) from cryptography.fernet import Fernet self.fernet = Fernet(base64_key32)
Check config settings and initialize the Fernet encryption cypher. Fernet is basically AES128 in CBC mode, with a timestamp and a signature. Args: app(Flask): The Flask application instance.
juraj-google-style
def _exists(self, path): return self._hdfs_client.status(path, strict=False) is not None
Returns True if path exists as a file or directory in HDFS. Args: path: String in the form /...
github-repos
def format_message(self, evr_hist_data): size_formatter_info = {'s': (- 1), 'c': 1, 'i': 4, 'd': 4, 'u': 4, 'x': 4, 'hh': 1, 'h': 2, 'l': 4, 'll': 8, 'f': 8, 'g': 8, 'e': 8} type_formatter_info = {'c': 'U{}', 'i': 'MSB_I{}', 'd': 'MSB_I{}', 'u': 'MSB_U{}', 'f': 'MSB_D{}', 'e': 'MSB_D{}', 'g': 'MSB_D{}', 'x': 'MSB_U{}'} formatters = re.findall('%(?:\\d+\\$)?([cdieEfgGosuxXhlL]+)', self._message) cur_byte_index = 0 data_chunks = [] for f in formatters: f_size_char = f_type = f[(- 1)] if (len(f) > 1): f_size_char = f[:(- 1)] fsize = size_formatter_info[f_size_char.lower()] try: if (f_type != 's'): end_index = (cur_byte_index + fsize) fstr = type_formatter_info[f_type.lower()].format((fsize * 8)) if ((fsize == 1) and ('MSB_' in fstr)): fstr = fstr[4:] d = dtype.PrimitiveType(fstr).decode(evr_hist_data[cur_byte_index:end_index]) else: end_index = str(evr_hist_data).index('\x00', cur_byte_index) d = str(evr_hist_data[cur_byte_index:end_index]) data_chunks.append(d) except: msg = 'Unable to format EVR Message with data {}'.format(evr_hist_data) log.error(msg) raise ValueError(msg) cur_byte_index = end_index if (f == 's'): cur_byte_index += 1 if (len(formatters) == 0): return self._message else: msg = self._message for f in formatters: if (len(f) > 1): msg = msg.replace('%{}'.format(f), '%{}'.format(f[(- 1)])) return (msg % tuple(data_chunks))
Format EVR message with EVR data Given a byte array of EVR data, format the EVR's message attribute printf format strings and split the byte array into appropriately sized chunks. Supports most format strings containing length and type fields. Args: evr_hist_data: A bytearray of EVR data. Bytes are expected to be in MSB ordering. Example formatting:: # This is the character '!', string 'Foo', and int '4279317316' bytearray([0x21, 0x46, 0x6f, 0x6f, 0x00, 0xff, 0x11, 0x33, 0x44]) Returns: The EVR's message string formatted with the EVR data or the unformatted EVR message string if there are no valid format strings present in it. Raises: ValueError: When the bytearray cannot be fully processed with the specified format strings. This is usually a result of the expected data length and the byte array length not matching.
codesearchnet
def _remove_overlap_sub(self, also_remove_contiguous: bool) -> bool: for i in range(len(self.intervals)): for j in range(i + 1, len(self.intervals)): first = self.intervals[i] second = self.intervals[j] if also_remove_contiguous: test = first.contiguous(second) else: test = first.overlaps(second) if test: newint = first.union(second) self.intervals.pop(j) self.intervals.pop(i) self.intervals.append(newint) return True return False
Called by :meth:`remove_overlap`. Removes the first overlap found. Args: also_remove_contiguous: treat contiguous (as well as overlapping) intervals as worthy of merging? Returns: bool: ``True`` if an overlap was removed; ``False`` otherwise
juraj-google-style
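The `Interval` class used above is not shown. As an illustration of the same merge logic, here is a self-contained sketch that merges overlapping, and optionally contiguous, integer intervals represented as (start, end) tuples.

```python
def merge_intervals(intervals, also_merge_contiguous=False):
    """Merge overlapping (start, end) pairs; contiguous ones too if requested."""
    gap = 1 if also_merge_contiguous else 0
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1] + gap:
            # Overlaps (or touches) the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

print(merge_intervals([(1, 3), (2, 6), (8, 9), (10, 12)]))
# [(1, 6), (8, 9), (10, 12)]
print(merge_intervals([(1, 3), (2, 6), (8, 9), (10, 12)], also_merge_contiguous=True))
# [(1, 6), (8, 12)]
```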
def merge_classes(self, instances): classes = {v.cls for v in instances if v.cls != self.empty} return self.merge_values(sorted(classes, key=lambda cls: cls.full_name))
Merge the classes of the given instances. Args: instances: An iterable of instances. Returns: An abstract.BaseValue created by merging the instances' classes.
github-repos
def wait_for_contract(self, contract_address_hex, timeout=None):
    contract_address = decode_hex(contract_address_hex)
    start_time = time.time()
    result = self._raiden.chain.client.web3.eth.getCode(
        to_checksum_address(contract_address),
    )
    current_time = time.time()
    while not result:
        # Give up only once the timeout has actually elapsed.
        if timeout and start_time + timeout < current_time:
            return False
        result = self._raiden.chain.client.web3.eth.getCode(
            to_checksum_address(contract_address),
        )
        gevent.sleep(0.5)
        current_time = time.time()
    return len(result) > 0
Wait until a contract is mined Args: contract_address_hex (string): hex encoded address of the contract timeout (int): time to wait for the contract to get mined Returns: True if the contract got mined, false otherwise
juraj-google-style
def __init__(self, layer, trainable): self._trainable = trainable self._layer = layer if self._layer is not None and (not hasattr(self._layer, '_resources')): self._layer._resources = data_structures.Mapping() self._cols_to_vars_map = collections.defaultdict(lambda: {}) self._cols_to_resources_map = collections.defaultdict(lambda: {})
Creates an _StateManagerImpl object. Args: layer: The input layer this state manager is associated with. trainable: Whether by default, variables created are trainable or not.
github-repos
def get_request_feature(self, name): if '[]' in name: return self.request.query_params.getlist( name) if name in self.features else None elif '{}' in name: return self._extract_object_params( name) if name in self.features else {} else: return self.request.query_params.get( name) if name in self.features else None
Parses the request for a particular feature. Arguments: name: A feature name. Returns: A feature parsed from the URL if the feature is supported, or None.
juraj-google-style
def _SimpleDecoder(wire_type, decode_value): def SpecificDecoder(field_number, is_repeated, is_packed, key, new_default): if is_packed: local_DecodeVarint = _DecodeVarint def DecodePackedField(buffer, pos, end, message, field_dict): value = field_dict.get(key) if value is None: value = field_dict.setdefault(key, new_default(message)) (endpoint, pos) = local_DecodeVarint(buffer, pos) endpoint += pos if endpoint > end: raise _DecodeError('Truncated message.') while pos < endpoint: (element, pos) = decode_value(buffer, pos) value.append(element) if pos > endpoint: del value[-1] raise _DecodeError('Packed element was truncated.') return pos return DecodePackedField elif is_repeated: tag_bytes = encoder.TagBytes(field_number, wire_type) tag_len = len(tag_bytes) def DecodeRepeatedField(buffer, pos, end, message, field_dict): value = field_dict.get(key) if value is None: value = field_dict.setdefault(key, new_default(message)) while 1: (element, new_pos) = decode_value(buffer, pos) value.append(element) pos = new_pos + tag_len if buffer[new_pos:pos] != tag_bytes or new_pos >= end: if new_pos > end: raise _DecodeError('Truncated message.') return new_pos return DecodeRepeatedField else: def DecodeField(buffer, pos, end, message, field_dict): (field_dict[key], pos) = decode_value(buffer, pos) if pos > end: del field_dict[key] raise _DecodeError('Truncated message.') return pos return DecodeField return SpecificDecoder
Return a constructor for a decoder for fields of a particular type. Args: wire_type: The field's wire type. decode_value: A function which decodes an individual value, e.g. _DecodeVarint()
juraj-google-style
def compress(self, counts_limit): if self.payload: varint_len = counts_limit * (self.word_size + 1) encode_buf = (c_byte * (payload_header_size + varint_len))() varint_len = encode(addressof(self.counts), counts_limit, self.word_size, addressof(encode_buf) + payload_header_size, varint_len) self.payload.payload_len = varint_len ctypes.memmove(addressof(encode_buf), addressof(self.payload), payload_header_size) cdata = zlib.compress(ctypes.string_at(encode_buf, payload_header_size + varint_len)) return cdata raise RuntimeError('No payload to compress')
Compress this payload instance Args: counts_limit how many counters should be encoded starting from index 0 (can be 0), Return: the compressed payload (python string)
juraj-google-style
def reflection(n1, n2): r = (abs(((n1 - n2) / (n1 + n2))) ** 2) return r
Calculate the power reflection at the interface of two refractive index materials. Args: n1 (float): Refractive index of material 1. n2 (float): Refractive index of material 2. Returns: float: The fraction of reflected power (at normal incidence).
codesearchnet
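A quick worked example of the formula above at an air/glass interface (n1 ≈ 1.0, n2 ≈ 1.5): r = ((1.0 - 1.5) / (1.0 + 1.5))^2 = (0.5 / 2.5)^2 = 0.04, i.e. about 4% of the power is reflected. The snippet below restates the function so the check runs on its own.

```python
def reflection(n1, n2):
    """Fraction of power reflected at normal incidence (Fresnel)."""
    return abs((n1 - n2) / (n1 + n2)) ** 2

print(reflection(1.0, 1.5))  # ~0.04 (air -> glass)
print(reflection(1.0, 1.0))  # 0.0   (index-matched, no reflection)
```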
def get_soft_device_placement(): return context.context().soft_device_placement
Return status of soft device placement flag. If enabled, ops can be placed on different devices than the device explicitly assigned by the user. This potentially has a large performance cost due to an increase in data communication between devices. Some cases where soft_device_placement would modify device assignment are: 1. no GPU/TPU implementation for the OP 2. no GPU devices are known or registered 3. need to co-locate with reftype input(s) which are from CPU 4. an OP can not be compiled by XLA. Common for TPU which always requires the XLA compiler. For TPUs, if this option is true, a feature called automatic outside compilation is enabled. Automatic outside compilation will move uncompilable ops within a TPU program to instead run on the host. This can be used when encountering compilation failures due to unsupported ops. Returns: A boolean indicating if soft placement is enabled.
github-repos
def add_metric(self, labels, value, timestamp=None): self.samples.append(Sample( self.name + '_info', dict(dict(zip(self._labelnames, labels)), **value), 1, timestamp, ))
Add a metric to the metric family. Args: labels: A list of label values value: A dict of labels
juraj-google-style
def _parallel_part_processors(part_processors: Sequence[PartProcessorWithMatchFn]) -> PartProcessorFn: async def part_processor(content: ProcessorPart) -> AsyncIterable[ProcessorPart]: output_queue = asyncio.Queue() processors = [] match_fns = [] passthrough_fallback = False passthrough_always = False for p in part_processors: if p is PASSTHROUGH_FALLBACK: passthrough_fallback = True continue if p is PASSTHROUGH_ALWAYS: passthrough_always = True continue processors.append(_CaptureReservedSubstreams(output_queue, p)) match_fns.append(p.match) parallel_processor = _CaptureReservedSubstreams(output_queue, map_processor.parallel_part_functions(processors, match_fns, with_default_output=passthrough_fallback, with_always_output=passthrough_always)) content = parallel_processor(content) create_task(_enqueue_content(content, output_queue)) while (part := (await output_queue.get())) is not None: yield part output_queue.task_done() return part_processor
Combine **part processors** in parallel. Adds debug and status streams to the output. NOTE: Substreams debug and status are yielded immediately instead of passing them to the next processor. Args: part_processors: sequence of part processors to compute concurrently. Returns: Part processor that computes the output of the provided sequence of part processors concurrently.
github-repos
def _merge_hdx_update(self, object_type, id_field_name, file_to_upload=None, **kwargs): merge_two_dictionaries(self.data, self.old_data) if ('batch_mode' in kwargs): self.data['batch_mode'] = kwargs['batch_mode'] if ('skip_validation' in kwargs): self.data['skip_validation'] = kwargs['skip_validation'] ignore_field = self.configuration[('%s' % object_type)].get('ignore_on_update') self.check_required_fields(ignore_fields=[ignore_field]) operation = kwargs.get('operation', 'update') self._save_to_hdx(operation, id_field_name, file_to_upload)
Helper method to check if HDX object exists and update it Args: object_type (str): Description of HDX object type (for messages) id_field_name (str): Name of field containing HDX object identifier file_to_upload (Optional[str]): File to upload to HDX **kwargs: See below operation (string): Operation to perform eg. patch. Defaults to update. Returns: None
codesearchnet
def __init__(self, mean, volatility, dtype=None, name=None): self._name = name or 'geometric_brownian_motion' with tf.name_scope(self._name): self._mean, self._mean_is_constant = pw.convert_to_tensor_or_func(mean, dtype=dtype, name='mean') self._dtype = dtype or self._mean.dtype self._volatility, self._volatility_is_constant = pw.convert_to_tensor_or_func(volatility, dtype=self._dtype, name='volatility') self._volatility_squared = self._volatility_squared_from_volatility(self._volatility, self._volatility_is_constant, dtype=self._dtype, name='volatility_squared') self._dim = 1
Initializes the Geometric Brownian Motion. Args: mean: A real `Tensor` broadcastable to `batch_shape + [1]` or an instance of left-continuous `PiecewiseConstantFunc` with `batch_shape + [1]` dimensions. Here `batch_shape` represents a batch of independent GBMs. Corresponds to the mean drift of the Ito process. volatility: A real `Tensor` broadcastable to `batch_shape + [1]` or an instance of left-continuous `PiecewiseConstantFunc` of the same `dtype` and `batch_shape` as set by `mean`. Corresponds to the volatility of the process and should be positive. dtype: The default dtype to use when converting values to `Tensor`s. Default value: `None` which means that default dtypes inferred from `mean` is used. name: Python string. The name to give to the ops created by this class. Default value: `None` which maps to the default name 'geometric_brownian_motion'.
github-repos
def make_pose(translation, rotation): pose = np.zeros((4, 4)) pose[:3, :3] = rotation pose[:3, 3] = translation pose[3, 3] = 1.0 return pose
Makes a homogeneous pose matrix from a translation vector and a rotation matrix. Args: translation: a 3-dim iterable rotation: a 3x3 matrix Returns: pose: a 4x4 homogeneous matrix
juraj-google-style
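A brief usage sketch of the pose helper above (restated here with NumPy so it runs on its own): an identity rotation with a pure translation gives a 4x4 homogeneous transform whose last column carries the offset.

```python
import numpy as np

def make_pose(translation, rotation):
    pose = np.zeros((4, 4))
    pose[:3, :3] = rotation
    pose[:3, 3] = translation
    pose[3, 3] = 1.0
    return pose

# Pure translation: identity rotation, offset (1, 2, 3).
T = make_pose([1.0, 2.0, 3.0], np.eye(3))
point = np.array([0.0, 0.0, 0.0, 1.0])  # homogeneous point at the origin
print(T @ point)                        # [1. 2. 3. 1.]
```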
def _parse_hparams(hparams): prefixes = ['agent_', 'optimizer_', 'runner_', 'replay_buffer_'] ret = [] for prefix in prefixes: ret_dict = {} for key in hparams.values(): if (prefix in key): par_name = key[len(prefix):] ret_dict[par_name] = hparams.get(key) ret.append(ret_dict) return ret
Split hparams, based on key prefixes. Args: hparams: hyperparameters Returns: Tuple of hparams for, respectively: agent, optimizer, runner, replay_buffer.
codesearchnet
def CollectFromWindowsRegistry( cls, artifacts_registry, knowledge_base, searcher): for preprocess_plugin in cls._windows_registry_plugins.values(): artifact_definition = artifacts_registry.GetDefinitionByName( preprocess_plugin.ARTIFACT_DEFINITION_NAME) if not artifact_definition: logger.warning('Missing artifact definition: {0:s}'.format( preprocess_plugin.ARTIFACT_DEFINITION_NAME)) continue logger.debug('Running Windows Registry preprocessor plugin: {0:s}'.format( preprocess_plugin.ARTIFACT_DEFINITION_NAME)) try: preprocess_plugin.Collect(knowledge_base, artifact_definition, searcher) except (IOError, errors.PreProcessFail) as exception: logger.warning(( 'Unable to collect value from artifact definition: {0:s} ' 'with error: {1!s}').format( preprocess_plugin.ARTIFACT_DEFINITION_NAME, exception))
Collects values from Windows Registry values. Args: artifacts_registry (artifacts.ArtifactDefinitionsRegistry): artifacts definitions registry. knowledge_base (KnowledgeBase): to fill with preprocessing information. searcher (dfwinreg.WinRegistrySearcher): Windows Registry searcher to preprocess the Windows Registry.
juraj-google-style
def __init__(self, *args, **kwargs): if "widget" not in kwargs: kwargs["widget"] = PasswordStrengthInput(render_value=False) super(PasswordField, self).__init__(*args, **kwargs)
Init method. Args: *args (): Django's args for a form field. **kwargs (): Django's kwargs for a form field.
juraj-google-style
def retrieve_data_from_config(msg, cfg): msg_type = msg.__class__.__name__.lower() for attr in msg: if ((getattr(msg, attr) is None) and (attr in cfg.data[msg.profile][msg_type])): setattr(msg, attr, cfg.data[msg.profile][msg_type][attr])
Update msg attrs with values from the profile configuration if the msg.attr=None, else leave it alone. Args: :msg: (Message class) an instance of a message class. :cfg: (jsonconfig.Config) config instance.
codesearchnet
def _CheckForOutOfOrderStepAndMaybePurge(self, event): if event.step < self.most_recent_step and event.HasField('summary'): self._Purge(event, by_tags=True)
Check for out-of-order event.step and discard expired events for tags. Check if the event is out of order relative to the global most recent step. If it is, purge outdated summaries for tags that the event contains. Args: event: The event to use as reference. If the event is out-of-order, all events with the same tags, but with a greater event.step will be purged.
juraj-google-style
def remove_container(self, container, v=False, link=False, force=False): params = {'v': v, 'link': link, 'force': force} res = self._delete(self._url('/containers/{0}', container), params=params) self._raise_for_status(res)
Remove a container. Similar to the ``docker rm`` command. Args: container (str): The container to remove v (bool): Remove the volumes associated with the container link (bool): Remove the specified link and not the underlying container force (bool): Force the removal of a running container (uses ``SIGKILL``) Raises: :py:class:`docker.errors.APIError` If the server returns an error.
codesearchnet
def transpose(self, name=None): if name is None: name = self.module_name + "_transpose" return AddBias(output_shape=lambda: self._input_shape, bias_dims=self._bias_dims, initializers=self._initializers, regularizers=self._regularizers, name=name)
Returns transposed `AddBias` module. Args: name: Optional string assigning name of transpose module. The default name is constructed by appending "_transpose" to `self.module_name`. Returns: Transposed `AddBias` module.
juraj-google-style
def download(url): filepath = get_file(fname='tmp.zip', origin=url, extract=True) base_dir = os.path.dirname(filepath) weights_file = os.path.join(base_dir, 'weights.h5') params_file = os.path.join(base_dir, 'params.json') preprocessor_file = os.path.join(base_dir, 'preprocessor.pickle') return (weights_file, params_file, preprocessor_file)
Download a trained weights, config and preprocessor. Args: url (str): target url.
codesearchnet
def make_quadratic(poly, strength, vartype=None, bqm=None): if (bqm is None): if (vartype is None): raise ValueError('one of vartype and bqm must be provided') bqm = BinaryQuadraticModel.empty(vartype) else: if (not isinstance(bqm, BinaryQuadraticModel)): raise TypeError('create_using must be a BinaryQuadraticModel') if ((vartype is not None) and (vartype is not bqm.vartype)): raise ValueError('one of vartype and create_using must be provided') bqm.info['reduction'] = {} new_poly = {} for (term, bias) in iteritems(poly): if (len(term) == 0): bqm.add_offset(bias) elif (len(term) == 1): (v,) = term bqm.add_variable(v, bias) else: new_poly[term] = bias return _reduce_degree(bqm, new_poly, vartype, strength)
Create a binary quadratic model from a higher order polynomial. Args: poly (dict): Polynomial as a dict of form {term: bias, ...}, where `term` is a tuple of variables and `bias` the associated bias. strength (float): Strength of the reduction constraint. Insufficient strength can result in the binary quadratic model not having the same minimizations as the polynomial. vartype (:class:`.Vartype`, optional): Vartype of the polynomial. If `bqm` is provided, vartype is not required. bqm (:class:`.BinaryQuadraticModel`, optional): The terms of the reduced polynomial are added to this binary quadratic model. If not provided, a new binary quadratic model is created. Returns: :class:`.BinaryQuadraticModel` Examples: >>> poly = {(0,): -1, (1,): 1, (2,): 1.5, (0, 1): -1, (0, 1, 2): -2} >>> bqm = dimod.make_quadratic(poly, 5.0, dimod.SPIN)
codesearchnet
def get_newest(blocks, layout_blocks): layout_temp = list(layout_blocks) for i in range(0, len(layout_temp)): for k in range(0, len(layout_blocks)): if blocks[layout_temp[i]].ec_hdr.image_seq != blocks[layout_blocks[k]].ec_hdr.image_seq: continue if blocks[layout_temp[i]].leb_num != blocks[layout_blocks[k]].leb_num: continue if blocks[layout_temp[i]].vid_hdr.sqnum > blocks[layout_blocks[k]].vid_hdr.sqnum: del layout_blocks[k] break return layout_blocks
Filter out old layout blocks from list Arguments: List:blocks -- List of block objects List:layout_blocks -- List of layout block indexes Returns: List -- Newest layout blocks in list
juraj-google-style
class Chunk: content: Content id: str = field(default_factory=lambda: str(uuid.uuid4())) index: int = 0 metadata: Dict[str, Any] = field(default_factory=dict) embedding: Optional[Embedding] = None
Represents a chunk of embeddable content with metadata. Args: content: The actual content of the chunk id: Unique identifier for the chunk index: Index of this chunk within the original document metadata: Additional metadata about the chunk (e.g., document source) embedding: Vector embeddings of the content
github-repos
def get_excel_workbook(api_data, result_info_key, identifier_keys): cleaned_data = [] for item_data in api_data: result_info = item_data.pop(result_info_key, {}) cleaned_item_data = {} if ('meta' in item_data): meta = item_data.pop('meta') cleaned_item_data['meta'] = meta for key in item_data: cleaned_item_data[key] = item_data[key]['result'] cleaned_item_data[result_info_key] = result_info cleaned_data.append(cleaned_item_data) data_list = copy.deepcopy(cleaned_data) workbook = openpyxl.Workbook() write_worksheets(workbook, data_list, result_info_key, identifier_keys) return workbook
Generates an Excel workbook object given api_data returned by the Analytics API Args: api_data: Analytics API data as a list of dicts (one per identifier) result_info_key: the key in api_data dicts that contains the data results identifier_keys: the list of keys used as requested identifiers (address, zipcode, block_id, etc) Returns: raw excel file data
codesearchnet
def cleanup(self): with LogTask('Stop prefix'): self.stop() with LogTask('Tag prefix as uninitialized'): os.unlink(self.paths.prefix_lagofile())
Stops any running entities in the prefix and uninitializes it. Usually you want to do this if you are going to remove the prefix afterwards. Returns: None
codesearchnet
def publish(self, topic, dct): get_logger().info("Publishing message {} on routing key " "{}...".format(dct, topic)) self._channel.basic_publish( exchange=self.exchange, routing_key=topic, body=json.dumps(dct) )
Send a dict with internal routing key to the exchange. Args: topic: topic to publish the message to dct: dict object to send
juraj-google-style
def auto_convert_cell(flagable, cell, position, worksheet, flags, units, parens_as_neg=True): conversion = cell if isinstance(cell, (int, float)): pass elif isinstance(cell, basestring): if (not cell): conversion = None else: conversion = auto_convert_string_cell(flagable, cell, position, worksheet, flags, units, parens_as_neg=parens_as_neg) elif (cell != None): flagable.flag_change(flags, 'warning', position, worksheet, flagable.FLAGS['unknown-to-string']) conversion = str(cell) if (not conversion): conversion = None else: pass return conversion
Performs a first-step conversion of the cell to check its type or try to convert it if a valid conversion exists. Args: parens_as_neg: Converts numerics surrounded by parens to negative values
codesearchnet
def from_file(cls, filename, *, strict=True): config = cls() config.load_from_file(filename, strict=strict) return config
Create a new Config object from a configuration file. Args: filename (str): The location and name of the configuration file. strict (bool): If true raises a ConfigLoadError when the configuration cannot be found. Returns: An instance of the Config class. Raises: ConfigLoadError: If the configuration cannot be found.
juraj-google-style
def create_assembly_instance(self, assembly_uri, part_uri, configuration): payload = {'documentId': part_uri['did'], 'elementId': part_uri['eid'], 'versionId': part_uri['wvm'], 'isAssembly': False, 'isWholePartStudio': True, 'configuration': self.encode_configuration(part_uri['did'], part_uri['eid'], configuration)} return self._api.request('post', (((((((('/api/assemblies/d/' + assembly_uri['did']) + '/') + assembly_uri['wvm_type']) + '/') + assembly_uri['wvm']) + '/e/') + assembly_uri['eid']) + '/instances'), body=payload)
Insert a configurable part into an assembly. Args: - assembly_uri (dict): did, wvm_type, wvm, and eid of the assembly into which the part will be inserted - part_uri (dict): did, eid, and wvm of the configurable part - configuration (dict): the configuration Returns: - requests.Response: Onshape response data
codesearchnet
def get_realtime_urls(admin_view_func=lambda x: x): from .widgets import REALTIME_WIDGETS return [url(w.url_regex, admin_view_func(w.as_view()), name=w.url_name) for w in REALTIME_WIDGETS]
Get the URL for real-time widgets. Args: admin_view_func (callable): an admin_view method from an AdminSite instance. By default: identity. Returns: list: the list of the real-time URLs as django's ``url()``.
juraj-google-style
class JSON_To_BigQuery(json.JSONEncoder): def default(self, obj): if isinstance(obj, bytes): return base64.standard_b64encode(obj).decode('ascii') elif isinstance(obj, datetime.datetime): return obj.strftime('%s %s' % (self.BIGQUERY_DATE_FORMAT, self.BIGQUERY_TIME_FORMAT)) elif isinstance(obj, datetime.date): return obj.strftime(self.BIGQUERY_DATE_FORMAT) elif isinstance(obj, datetime.time): return obj.strftime(self.BIGQUERY_TIME_FORMAT) elif isinstance(obj, map): return list(obj) else: return super(JSON_To_BigQuery, self).default(obj)
Translate complex Python objects into BigQuery formats where json does not have defaults. Usage: json.dumps(..., cls=JSON_To_BigQuery) Currently translates: bytes -> base64 datetime -> str date -> str time -> str Args: obj - any json dumps parameter without a default handler Returns: Always a string version of the object passed in.
github-repos
def get_help(func): help_text = '' if isinstance(func, dict): name = context_name(func) help_text = (('\n' + name) + '\n\n') doc = inspect.getdoc(func) if (doc is not None): doc = inspect.cleandoc(doc) help_text += (doc + '\n') return help_text sig = func.metadata.signature() doc = inspect.getdoc(func) if (doc is not None): doc = inspect.cleandoc(doc) help_text += (('\n' + sig) + '\n\n') if (doc is not None): help_text += (doc + '\n') if inspect.isclass(func): func = func.__init__ if func.metadata.load_from_doc: return help_text help_text += '\nArguments:\n' for (key, info) in func.metadata.annotated_params.items(): type_name = info.type_name desc = '' if (info.desc is not None): desc = info.desc help_text += (' - %s (%s): %s\n' % (key, type_name, desc)) return help_text
Return usage information about a context or function. For contexts, just return the context name and its docstring For functions, return the function signature as well as its argument types. Args: func (callable): An annotated callable function Returns: str: The formatted help text
codesearchnet
class ScoreAggregation(AggregationFn, _AggModelIdMixin, _SourcePredictionMixin): def __init__(self, agg_func: Callable[[Iterable[float]], float], agg_model_id: Optional[str]=None, include_source_predictions: bool=False): self._agg = agg_func _AggModelIdMixin.__init__(self, agg_model_id) _SourcePredictionMixin.__init__(self, include_source_predictions) def apply(self, predictions: Iterable[AnomalyPrediction]) -> AnomalyPrediction: result_dict: dict[str, Any] = {} _AggModelIdMixin.add_model_id(self, result_dict) _SourcePredictionMixin.add_source_predictions(self, result_dict, predictions) scores = [prediction.score for prediction in predictions if prediction.score is not None and (not math.isnan(prediction.score))] if len(scores) > 0: result_dict['score'] = self._agg(scores) elif all(map(lambda x: x.score is None, predictions)): result_dict['score'] = None else: result_dict['score'] = float('NaN') return AnomalyPrediction(**result_dict)
Aggregates anomaly predictions based on their scores. This is an abstract base class for `AggregationFn`s that combine multiple `AnomalyPrediction` objects into a single `AnomalyPrediction` based on the scores of the input predictions. Args: agg_func (Callable[[Iterable[float]], float]): A function that aggregates a collection of anomaly scores (floats) into a single score. agg_model_id (Optional[str]): The model id used in aggregated predictions. Defaults to None. include_source_predictions (bool): If True, include the input predictions in the `source_predictions` of the output. Defaults to False.
github-repos
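A hypothetical concrete aggregator built on ScoreAggregation above; the AnomalyPrediction field names follow the snippet and are otherwise assumptions.

class MaxScore(ScoreAggregation):
    """Keeps the largest score among the source predictions."""

    def __init__(self, **kwargs):
        super().__init__(agg_func=max, **kwargs)


agg = MaxScore(agg_model_id='ensemble', include_source_predictions=True)
combined = agg.apply([
    AnomalyPrediction(model_id='iforest', score=0.2),
    AnomalyPrediction(model_id='zscore', score=0.9),
])
# combined.score == 0.9; the two inputs are carried along in source_predictions.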
def get_speaker_info(self, refresh=False, timeout=None):
    if self.speaker_info and refresh is False:
        return self.speaker_info
    else:
        # URL reconstructed: the request target was truncated in the source; this is
        # the standard UPnP device description endpoint on port 1400.
        response = requests.get(
            'http://' + self.ip_address + ':1400/xml/device_description.xml',
            timeout=timeout)
        dom = XML.fromstring(response.content)
        device = dom.find('{urn:schemas-upnp-org:device-1-0}device')
        if device is not None:
            self.speaker_info['zone_name'] = device.findtext('{urn:schemas-upnp-org:device-1-0}roomName')
            self.speaker_info['player_icon'] = device.findtext('{urn:schemas-upnp-org:device-1-0}iconList/{urn:schemas-upnp-org:device-1-0}icon/{urn:schemas-upnp-org:device-1-0}url')
            self.speaker_info['uid'] = self.uid
            self.speaker_info['serial_number'] = device.findtext('{urn:schemas-upnp-org:device-1-0}serialNum')
            self.speaker_info['software_version'] = device.findtext('{urn:schemas-upnp-org:device-1-0}softwareVersion')
            self.speaker_info['hardware_version'] = device.findtext('{urn:schemas-upnp-org:device-1-0}hardwareVersion')
            self.speaker_info['model_number'] = device.findtext('{urn:schemas-upnp-org:device-1-0}modelNumber')
            self.speaker_info['model_name'] = device.findtext('{urn:schemas-upnp-org:device-1-0}modelName')
            self.speaker_info['display_version'] = device.findtext('{urn:schemas-upnp-org:device-1-0}displayVersion')
            mac = self.speaker_info['serial_number'].split(':')[0]
            self.speaker_info['mac_address'] = mac
            return self.speaker_info
        return None
Get information about the Sonos speaker. Arguments: refresh(bool): Refresh the speaker info cache. timeout: How long to wait for the server to send data before giving up, as a float, or a `(connect timeout, read timeout)` tuple e.g. (3, 5). Default is no timeout. Returns: dict: Information about the Sonos speaker, such as the UID, MAC Address, and Zone Name.
codesearchnet
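A brief usage sketch for get_speaker_info, assuming `speaker` is an already-discovered device object exposing this method.

info = speaker.get_speaker_info(refresh=True, timeout=(3, 5))
print(info['zone_name'], info['model_name'], info['mac_address'])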
def _IDW(self, latitude, longitude, radius=1): tile = self.get_file(latitude, longitude) if (tile is None): return None return tile._InverseDistanceWeighted(latitude, longitude, radius)
Return the interpolated elevation at a point. Load the correct tile for latitude and longitude given. If the tile doesn't exist, return None. Otherwise, call the tile's Inverse Distance Weighted function and return the elevation. Args: latitude: float with the latitude in decimal degrees longitude: float with the longitude in decimal degrees radius: int of 1 or 2 indicating the approximate radius of adjacent cells to include Returns: a float of the interpolated elevation with the same unit as the .hgt file (meters)
codesearchnet
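A hedged call sketch for _IDW; `elevation_data` stands in for whatever object owns _IDW and get_file, and the coordinates are arbitrary.

height = elevation_data._IDW(47.6062, -122.3321, radius=1)
if height is None:
    print('no tile covers this point')
else:
    # The unit matches the underlying .hgt file (meters).
    print('interpolated elevation: %.1f' % height)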
def to_dataframe(self, bqstorage_client=None, dtypes=None, progress_bar_type=None): if pandas is None: raise ValueError(_NO_PANDAS_ERROR) return pandas.DataFrame()
Create an empty dataframe. Args: bqstorage_client (Any): Ignored. Added for compatibility with RowIterator. dtypes (Any): Ignored. Added for compatibility with RowIterator. progress_bar_type (Any): Ignored. Added for compatibility with RowIterator. Returns: pandas.DataFrame: An empty :class:`~pandas.DataFrame`.
juraj-google-style
def plot_soma3d(ax, soma, color=None, alpha=_ALPHA): color = _get_color(color, tree_type=NeuriteType.soma) if isinstance(soma, SomaCylinders): for (start, end) in zip(soma.points, soma.points[1:]): common.plot_cylinder(ax, start=start[COLS.XYZ], end=end[COLS.XYZ], start_radius=start[COLS.R], end_radius=end[COLS.R], color=color, alpha=alpha) else: common.plot_sphere(ax, center=soma.center[COLS.XYZ], radius=soma.radius, color=color, alpha=alpha) _update_3d_datalim(ax, soma)
Generates a 3d figure of the soma. Args: ax(matplotlib axes): on what to plot soma(neurom.core.Soma): plotted soma color(str or None): Color of plotted values, None corresponds to default choice alpha(float): Transparency of plotted values
codesearchnet
def ansible_inventory(self, keys=['vm-type', 'groups', 'vm-provider']): lansible = LagoAnsible(self._prefix) return lansible.get_inventory_str(keys=keys)
Get an Ansible inventory as a string. ``keys`` should be a list of keys by which to group the hosts; you can use any key defined in the LagoInitFile. Examples of possible `keys`: `keys=['disks/0/metadata/arch']` would group the hosts by architecture. `keys=['/disks/0/metadata/distro', 'disks/0/metadata/arch']` would create groups by architecture and also by distro. `keys=['groups']` would group hosts by the groups defined for each VM in the LagoInitFile, i.e.: domains: vm-01: ... groups: web-server .. vm-02: .. groups: db-server Args: keys (list of str): Paths to the keys that will be used to create groups. Returns: str: INI-like Ansible inventory
codesearchnet
def _gather_beams(tensor: torch.Tensor, beam_indices: torch.Tensor) -> torch.Tensor: while len(beam_indices.shape) < len(tensor.shape): beam_indices = beam_indices.unsqueeze(-1) gathered_tensor = torch.take_along_dim(input=tensor, indices=beam_indices, dim=1) return gathered_tensor
Gathers the beam slices indexed by beam_indices into new beam array. Args: tensor (`torch.Tensor`): A tensor containing data to be gathered. The tensor is a 2D or a 3D tensor with the two first dimensions depicting the batch and the beam dimensions. beam_indices (`torch.Tensor` of shape `(batch_size, num_beams_to_select)`): The indices of the beams to select. Returns: A tensor with the selected beams.
github-repos
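A small shape check illustrating the gather performed by _gather_beams; the values are arbitrary.

import torch

scores = torch.arange(2 * 3 * 4, dtype=torch.float).reshape(2, 3, 4)  # (batch=2, beams=3, vocab=4)
beam_indices = torch.tensor([[2, 0], [1, 1]])                         # keep 2 beams per batch item
selected = _gather_beams(scores, beam_indices)
# selected.shape == (2, 2, 4); selected[0, 0] equals scores[0, 2].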
def _relative_position_to_absolute_position_unmasked(x):
    x_shape = common_layers.shape_list(x)
    batch = x_shape[0]
    heads = x_shape[1]
    length = x_shape[2]
    col_pad = tf.zeros((batch, heads, length, 1))
    x = tf.concat([x, col_pad], axis=3)
    flat_x = tf.reshape(x, [batch, heads, length * 2 * length])
    flat_pad = tf.zeros((batch, heads, length - 1))
    flat_x_padded = tf.concat([flat_x, flat_pad], axis=2)
    final_x = tf.reshape(flat_x_padded, [batch, heads, length + 1, 2 * length - 1])
    final_x = final_x[:, :, :, length - 1:]
    final_x = final_x[:, :, :length, :]
    return final_x
Converts a tensor from relative to absolute indexing for local attention. Args: x: a Tensor of shape [batch (or batch*num_blocks), heads, length, 2 * length - 1] Returns: A Tensor of shape [batch (or batch*num_blocks), heads, length, length]
codesearchnet
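A shape-only sanity check for the conversion above; TensorFlow is assumed to be imported as `tf` in the hosting module, as the snippet itself implies.

import tensorflow as tf

length = 5
x = tf.zeros([1, 8, length, 2 * length - 1])   # relative-position logits
y = _relative_position_to_absolute_position_unmasked(x)
# y has shape [1, 8, 5, 5]: one row of absolute positions per query position.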
def infer_typehints_schema(data): column_data = OrderedDict() for row in data: for key, value in row.items(): column_data.setdefault(key, []).append(value) column_types = OrderedDict([(key, infer_element_type(values)) for key, values in column_data.items()]) return column_types
For internal use only; no backwards-compatibility guarantees. Infer Beam types for tabular data. Args: data (List[dict]): A list of dictionaries representing rows in a table. Returns: An OrderedDict mapping column names to Beam types.
github-repos
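A small illustration of infer_typehints_schema; the exact Beam typehint objects returned depend on infer_element_type, so the comment states the intent rather than the literal reprs.

rows = [
    {'name': 'alice', 'qty': 1},
    {'name': 'bob', 'qty': 2},
]
schema = infer_typehints_schema(rows)
# OrderedDict mapping 'name' -> a str-like typehint and 'qty' -> an int-like
# typehint, in first-seen column order.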
def has_result(state, incorrect_msg="Your query did not return a result."): has_no_error(state) if not state.solution_result: raise NameError( "You are using has_result() to verify that the student query generated an error, but the solution query did not return a result either!" ) if not state.student_result: state.do_test(incorrect_msg) return state
Checks if the student's query returned a result. Args: incorrect_msg: If specified, this overrides the automatically generated feedback message in case the student's query did not return a result.
juraj-google-style
def iri(uri_string): uri_string = str(uri_string) if (uri_string[:1] == '?'): return uri_string if (uri_string[:1] == '['): return uri_string if (uri_string[:1] != '<'): uri_string = '<{}'.format(uri_string.strip()) if (uri_string[(len(uri_string) - 1):] != '>'): uri_string = '{}>'.format(uri_string.strip()) return uri_string
Converts a string to an IRI, or returns it unchanged if it is already formatted. Args: uri_string: URI in string format Returns: formatted URI wrapped in <>
codesearchnet
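Quick examples of the wrapping behaviour of iri:

iri('http://example.org/item/1')    # -> '<http://example.org/item/1>'
iri('<http://example.org/item/1>')  # already wrapped, returned as-is
iri('?subject')                     # SPARQL-style variables pass through untouched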
def from_tensors(self, tensors: Iterator[core.Tensor]) -> Any: del tensors return self.placeholder_value(PlaceholderContext())
Generates a value of this type from Tensors. Must use the same fixed amount of tensors as `to_tensors`. Args: tensors: An iterator from which the tensors can be pulled. Returns: A value of this type.
github-repos
def generate_batch(self, inputs: List[List[int]], generation_config: Optional[GenerationConfig]=None, progress_bar: bool=True, **kwargs) -> List[List[int]]: if not inputs: return [] manager = self.init_continuous_batching(generation_config=generation_config) manager.start() results = {} num_requests = len(inputs) try: from tqdm.contrib.logging import logging_redirect_tqdm with logging_redirect_tqdm([logger]): with tqdm(total=num_requests, disable=not progress_bar, desc=f'Solving {num_requests} requests', unit='request') as pbar: manager.add_requests(inputs, **kwargs) finished_count = 0 while finished_count < num_requests: result = manager.get_result(timeout=1) if result: req_id = result.request_id if result.status == RequestStatus.FINISHED: results[req_id] = result finished_count += 1 pbar.update(1) elif not manager.is_running(): logger.error('Generation thread terminated unexpectedly.') break except Exception as e: logger.error(f'Error during batch generation: {e}', exc_info=True) finally: manager.stop(block=True, timeout=5.0) return results
Generate sequences for a batch of prompts using continuous batching. Args: inputs: List of input token sequences (prompts) generation_config: Optional generation configuration **kwargs: Additional generation parameters Returns: `List[List[int]]`: A list containing the generated sequences (including prompt tokens if not handled otherwise) for each input prompt, in the same order. Returns an empty list `[]` for requests that failed.
github-repos
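A hedged driver sketch for generate_batch; `tokenizer`, `model`, and the result field name are assumptions. Note that, despite the return annotation, the snippet above returns a dict keyed by request id rather than a plain list.

prompts = [tokenizer.encode(p) for p in ['Hello there.', 'What is 2 + 2?']]
results = model.generate_batch(
    prompts,
    generation_config=GenerationConfig(max_new_tokens=32),
    progress_bar=False,
)
for request_id, result in results.items():
    # The attribute holding the output tokens is assumed here.
    print(request_id, tokenizer.decode(result.generated_tokens))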
def client(self): if (self._client is None): self._client = Client_(self.servers) return self._client
Get the native memcache client. Returns: `memcache.Client` instance.
codesearchnet
def parse_str_to_expression(fiql_str): nesting_lvl = 0 last_element = None expression = Expression() for (preamble, selector, comparison, argument) in iter_parse(fiql_str): if preamble: for char in preamble: if char == '(': if isinstance(last_element, BaseExpression): raise FiqlFormatException( "%s can not be followed by %s" % ( last_element.__class__, Expression)) expression = expression.create_nested_expression() nesting_lvl += 1 elif char == ')': expression = expression.get_parent() last_element = expression nesting_lvl -= 1 else: if not expression.has_constraint(): raise FiqlFormatException( "%s proceeding initial %s" % ( Operator, Constraint)) if isinstance(last_element, Operator): raise FiqlFormatException( "%s can not be followed by %s" % ( Operator, Operator)) last_element = Operator(char) expression = expression.add_operator(last_element) if selector: if isinstance(last_element, BaseExpression): raise FiqlFormatException("%s can not be followed by %s" % ( last_element.__class__, Constraint)) last_element = Constraint(selector, comparison, argument) expression.add_element(last_element) if nesting_lvl != 0: raise FiqlFormatException( "At least one nested expression was not correctly closed") if not expression.has_constraint(): raise FiqlFormatException( "Parsed string '%s' contained no constraint" % fiql_str) return expression
Parse a FIQL formatted string into an ``Expression``. Args: fiql_str (string): The FIQL formatted string we want to parse. Returns: Expression: An ``Expression`` object representing the parsed FIQL string. Raises: FiqlFormatException: Unable to parse string due to incorrect formatting. Example: >>> expression = parse_str_to_expression( ... "name==bar,dob=gt=1990-01-01")
juraj-google-style
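Extending the docstring's example for parse_str_to_expression with a nested expression; rendering an Expression back to FIQL via str() follows the fiql_parser convention and is assumed here.

expr = parse_str_to_expression('last_name==foo*;(age=lt=65,age=gt=18)')
str(expr)  # round-trips to the original FIQL string (assumed __str__ behaviour)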
def parse_file(filename): poscar_read = False poscar_string = [] dataset = [] all_dataset = [] all_dataset_aug = {} dim = None dimline = None read_dataset = False ngrid_pts = 0 data_count = 0 poscar = None with zopen(filename, "rt") as f: for line in f: original_line = line line = line.strip() if read_dataset: toks = line.split() for tok in toks: if data_count < ngrid_pts: x = data_count % dim[0] y = int(math.floor(data_count / dim[0])) % dim[1] z = int(math.floor(data_count / dim[0] / dim[1])) dataset[x, y, z] = float(tok) data_count += 1 if data_count >= ngrid_pts: read_dataset = False data_count = 0 all_dataset.append(dataset) elif not poscar_read: if line != "" or len(poscar_string) == 0: poscar_string.append(line) elif line == "": poscar = Poscar.from_string("\n".join(poscar_string)) poscar_read = True elif not dim: dim = [int(i) for i in line.split()] ngrid_pts = dim[0] * dim[1] * dim[2] dimline = line read_dataset = True dataset = np.zeros(dim) elif line == dimline: read_dataset = True dataset = np.zeros(dim) else: key = len(all_dataset) - 1 if key not in all_dataset_aug: all_dataset_aug[key] = [] all_dataset_aug[key].append(original_line) if len(all_dataset) == 4: data = {"total": all_dataset[0], "diff_x": all_dataset[1], "diff_y": all_dataset[2], "diff_z": all_dataset[3]} data_aug = {"total": all_dataset_aug.get(0, None), "diff_x": all_dataset_aug.get(1, None), "diff_y": all_dataset_aug.get(2, None), "diff_z": all_dataset_aug.get(3, None)} diff_xyz = np.array([data["diff_x"], data["diff_y"], data["diff_z"]]) diff_xyz = diff_xyz.reshape((3, dim[0] * dim[1] * dim[2])) ref_direction = np.array([1.01, 1.02, 1.03]) ref_sign = np.sign(np.dot(ref_direction, diff_xyz)) diff = np.multiply(np.linalg.norm(diff_xyz, axis=0), ref_sign) data["diff"] = diff.reshape((dim[0], dim[1], dim[2])) elif len(all_dataset) == 2: data = {"total": all_dataset[0], "diff": all_dataset[1]} data_aug = {"total": all_dataset_aug.get(0, None), "diff": all_dataset_aug.get(1, None)} else: data = {"total": all_dataset[0]} data_aug = {"total": all_dataset_aug.get(0, None)} return poscar, data, data_aug
Convenience method to parse a generic volumetric data file in the VASP-like format. Used by subclasses for parsing files. Args: filename (str): Path of file to parse Returns: (poscar, data, data_aug)
juraj-google-style
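A hedged usage sketch for parse_file, assuming a CHGCAR-format file is available at the given path:

poscar, data, data_aug = parse_file('CHGCAR')
total = data['total']            # ndarray shaped like the FFT grid (NGXF x NGYF x NGZF)
print(poscar.structure.formula, total.shape)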
def flags(cls): assert (cls.__bases__ == (object,)) d = dict(cls.__dict__) new_type = type(cls.__name__, (int,), d) new_type.__module__ = cls.__module__ map_ = {} for (key, value) in iteritems(d): if ((key.upper() == key) and isinstance(value, integer_types)): value_instance = new_type(value) setattr(new_type, key, value_instance) map_[value] = key def str_(self): value = int(self) matches = [] for (k, v) in map_.items(): if (value & k): matches.append(('%s.%s' % (type(self).__name__, v))) value &= (~ k) if ((value != 0) or (not matches)): matches.append(text_type(value)) return ' | '.join(matches) def repr_(self): return ('<%s: %d>' % (str(self), int(self))) setattr(new_type, '__repr__', repr_) setattr(new_type, '__str__', str_) return new_type
A decorator for creating an int flags class. Makes the values a subclass of the type and implements repr/str. The new class will be a subclass of int. Args: cls (type): The class to convert to a flags class Returns: type: A new class :: @flags class Foo(object): FOO = 1 BAR = 2
codesearchnet
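Usage following the docstring's own example for flags; the rendered string order depends on dict iteration and is not guaranteed.

@flags
class Permission(object):
    READ = 1
    WRITE = 2
    EXEC = 4

combined = Permission(Permission.READ | Permission.WRITE)
str(combined)   # e.g. 'Permission.READ | Permission.WRITE'
repr(combined)  # e.g. '<Permission.READ | Permission.WRITE: 3>'
int(combined)   # 3 -- instances remain plain ints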
def dvds_new_releases(self, **kwargs): path = self._get_path('dvds_new_releases') response = self._GET(path, kwargs) self._set_attrs_to_values(response) return response
Gets the new release DVDs from the API. Args: page_limit (optional): number of movies to show per page, default=16 page (optional): results page number, default=1 country (optional): localized data for selected country, default="us" Returns: A dict representation of the JSON returned from the API.
juraj-google-style
def _is_ready(self, as_of): if self.is_one_off(): return self.initial_billing_cycle.date_range.lower <= as_of else: return True
Is the RecurringCost ready to be enacted as of the date `as_of`? This determines whether `as_of` precedes the start of `initial_billing_cycle`. If so, we should not be enacting this RecurringCost yet. Args: as_of (Date): the date against which readiness is checked
juraj-google-style
def create_batch(cls, size, **kwargs): return [cls.create(**kwargs) for _ in range(size)]
Create a batch of instances of the given class, with overridden attrs. Args: size (int): the number of instances to create Returns: object list: the created instances
codesearchnet
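A short usage sketch for create_batch; `UserFactory` is a hypothetical factory class exposing the classmethod above, and its `create` keyword handling is assumed.

users = UserFactory.create_batch(3, is_active=True)
assert len(users) == 3  # every instance was created with is_active=True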
def create_backup(self, resource, timeout=(- 1)): return self._client.create(resource, uri=self.BACKUPS_PATH, timeout=timeout)
Creates a backup bundle with all the artifacts present on the appliance. At any given point only one backup bundle will exist on the appliance. Args: resource (dict): Deployment Group to create the backup. timeout: Timeout in seconds. Waits for task completion by default. The timeout does not abort the operation in OneView, it just stops waiting for its completion. Returns: dict: A Deployment Group associated with the Artifact Bundle backup.
codesearchnet
class ParameterListState(object): def __init__(self, opening_bracket, newline, opening_column): self.opening_bracket = opening_bracket self.has_split_before_first_param = newline self.opening_column = opening_column self.parameters = opening_bracket.parameters self.split_before_closing_bracket = False @property def closing_bracket(self): return self.opening_bracket.matching_bracket @property def has_typed_return(self): return self.closing_bracket.next_token.value == '->' @property @lru_cache() def has_default_values(self): return any((param.has_default_value for param in self.parameters)) @property @lru_cache() def ends_in_comma(self): if not self.parameters: return False return self.parameters[-1].last_token.next_token.value == ',' @property @lru_cache() def last_token(self): token = self.opening_bracket.matching_bracket while not token.is_comment and token.next_token: token = token.next_token return token @lru_cache() def LastParamFitsOnLine(self, indent): if not self.has_typed_return: return False if not self.parameters: return True total_length = self.last_token.total_length last_param = self.parameters[-1].first_token total_length -= last_param.total_length - len(last_param.value) return total_length + indent <= style.Get('COLUMN_LIMIT') @lru_cache() def SplitBeforeClosingBracket(self, indent): if style.Get('DEDENT_CLOSING_BRACKETS'): return True if self.ends_in_comma: return True if not self.parameters: return False total_length = self.last_token.total_length last_param = self.parameters[-1].first_token total_length -= last_param.total_length - len(last_param.value) return total_length + indent > style.Get('COLUMN_LIMIT') def Clone(self): clone = ParameterListState(self.opening_bracket, self.has_split_before_first_param, self.opening_column) clone.split_before_closing_bracket = self.split_before_closing_bracket clone.parameters = [param.Clone() for param in self.parameters] return clone def __repr__(self): return '[opening_bracket::%s, has_split_before_first_param::%s, opening_column::%d]' % (self.opening_bracket, self.has_split_before_first_param, self.opening_column) def __eq__(self, other): return hash(self) == hash(other) def __ne__(self, other): return not self == other def __hash__(self, *args, **kwargs): return hash((self.opening_bracket, self.has_split_before_first_param, self.opening_column, (hash(param) for param in self.parameters)))
Maintains the state of function parameter list formatting decisions. Attributes: opening_bracket: The opening bracket of the parameter list. closing_bracket: The closing bracket of the parameter list. has_typed_return: True if the function definition has a typed return. ends_in_comma: True if the parameter list ends in a comma. last_token: Returns the last token of the function declaration. has_default_values: True if the parameters have default values. has_split_before_first_param: Whether there is a newline before the first parameter. opening_column: The position of the opening parameter before a newline. parameters: A list of parameter objects (Parameter). split_before_closing_bracket: Split before the closing bracket. Sometimes needed if the indentation would collide.
github-repos
def Print(self, x, data, message, **kwargs): tf.logging.info("PlacementMeshImpl::Print") new_slices = x.tensor_list[:] with tf.device(self._devices[0]): new_slices[0] = tf.Print( new_slices[0], [t for d in data for t in d.tensor_list], message, **kwargs) return self.LaidOutTensor(new_slices)
Call tf.Print. Args: x: a LaidOutTensor data: a list of LaidOutTensor message: a string **kwargs: keyword arguments to tf.Print Returns: a LaidOutTensor
juraj-google-style
def solve_fba(self, objective): self._prob.set_objective(self._v_wt[objective]) return self._solve(lp.ObjectiveSense.Maximize)
Solve the wild type problem using FBA. Args: objective: The objective reaction to be maximized. Returns: The LP Result object for the solved FBA problem.
juraj-google-style
def check_type(o, acceptable_types, may_be_none=True): if not isinstance(acceptable_types, tuple): acceptable_types = (acceptable_types,) if may_be_none and o is None: pass elif isinstance(o, acceptable_types): pass else: error_message = ( "We were expecting to receive an instance of one of the following " "types: {types}{none}; but instead we received {o} which is a " "{o_type}.".format( types=", ".join([repr(t.__name__) for t in acceptable_types]), none="or 'None'" if may_be_none else "", o=o, o_type=repr(type(o).__name__) ) ) raise TypeError(error_message)
Object is an instance of one of the acceptable types or None. Args: o: The object to be inspected. acceptable_types: A type or tuple of acceptable types. may_be_none(bool): Whether or not the object may be None. Raises: TypeError: If the object is None and may_be_none=False, or if the object is not an instance of one of the acceptable types.
juraj-google-style
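Quick examples of the accepted and rejected cases for check_type:

check_type(5, int)                        # ok
check_type(None, (int, float))            # ok, may_be_none defaults to True
check_type('5', int, may_be_none=False)   # raises TypeError with the descriptive message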