Dataset columns: code (string, 20 to 4.93k chars), docstring (string, 33 to 1.27k chars), source (string, 3 classes: codesearchnet, github-repos, juraj-google-style).
def __get_distribution_tags(self, client, arn): return {t['Key']: t['Value'] for t in client.list_tags_for_resource(Resource=arn)['Tags']['Items']}
Returns a dict containing the tags for a CloudFront distribution Args: client (botocore.client.CloudFront): Boto3 CloudFront client object arn (str): ARN of the distribution to get tags for Returns: `dict`
codesearchnet
def to_channel_dimension_format(image: np.ndarray, channel_dim: Union[ChannelDimension, str], input_channel_dim: Optional[Union[ChannelDimension, str]]=None) -> np.ndarray: if not isinstance(image, np.ndarray): raise TypeError(f'Input image must be of type np.ndarray, got {type(image)}') if input_channel_dim is None: input_channel_dim = infer_channel_dimension_format(image) target_channel_dim = ChannelDimension(channel_dim) if input_channel_dim == target_channel_dim: return image if target_channel_dim == ChannelDimension.FIRST: axes = list(range(image.ndim - 3)) + [image.ndim - 1, image.ndim - 3, image.ndim - 2] image = image.transpose(axes) elif target_channel_dim == ChannelDimension.LAST: axes = list(range(image.ndim - 3)) + [image.ndim - 2, image.ndim - 1, image.ndim - 3] image = image.transpose(axes) else: raise ValueError(f'Unsupported channel dimension format: {channel_dim}') return image
Converts `image` to the channel dimension format specified by `channel_dim`. The input can have an arbitrary number of leading dimensions; only the last three dimensions are permuted to format the `image`. Args: image (`numpy.ndarray`): The image to have its channel dimension set. channel_dim (`ChannelDimension`): The channel dimension format to use. input_channel_dim (`ChannelDimension`, *optional*): The channel dimension format of the input image. If not provided, it will be inferred from the input image. Returns: `np.ndarray`: The image with the channel dimension set to `channel_dim`.
github-repos
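A minimal numpy sketch of the permutation the `ChannelDimension.FIRST` branch above applies; no transformers import is needed and the shapes are chosen purely for illustration.

import numpy as np

image = np.zeros((224, 224, 3))  # H, W, C (channels last)
# Same axis order the FIRST branch computes: keep leading dims, move channels to the front.
axes = list(range(image.ndim - 3)) + [image.ndim - 1, image.ndim - 3, image.ndim - 2]
chw = image.transpose(axes)
print(chw.shape)                 # (3, 224, 224)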
def cache_bottlenecks(sess, image_lists, image_dir, bottleneck_dir, jpeg_data_tensor, decoded_image_tensor, resized_input_tensor, bottleneck_tensor, module_name): how_many_bottlenecks = 0 ensure_dir_exists(bottleneck_dir) for (label_name, label_lists) in image_lists.items(): for category in ['training', 'testing', 'validation']: category_list = label_lists[category] for (index, unused_base_name) in enumerate(category_list): get_or_create_bottleneck(sess, image_lists, label_name, index, image_dir, category, bottleneck_dir, jpeg_data_tensor, decoded_image_tensor, resized_input_tensor, bottleneck_tensor, module_name) how_many_bottlenecks += 1 if ((how_many_bottlenecks % 100) == 0): tf.logging.info((str(how_many_bottlenecks) + ' bottleneck files created.'))
Ensures all the training, testing, and validation bottlenecks are cached. Because we're likely to read the same image multiple times (if there are no distortions applied during training) it can speed things up a lot if we calculate the bottleneck layer values once for each image during preprocessing, and then just read those cached values repeatedly during training. Here we go through all the images we've found, calculate those values, and save them off. Args: sess: The current active TensorFlow Session. image_lists: OrderedDict of training images for each label. image_dir: Root folder string of the subfolders containing the training images. bottleneck_dir: Folder string holding cached files of bottleneck values. jpeg_data_tensor: Input tensor for jpeg data from file. decoded_image_tensor: The output of decoding and resizing the image. resized_input_tensor: The input node of the recognition graph. bottleneck_tensor: The penultimate output layer of the graph. module_name: The name of the image module being used. Returns: Nothing.
codesearchnet
def _new_named_tuple(self, class_name: str, fields: list[tuple[str, Any]]) -> pytd.Class: class_base = pytd.NamedType('typing.NamedTuple') class_constants = tuple((pytd.Constant(n, t) for n, t in fields)) return pytd.Class(name=class_name, keywords=(), bases=(class_base,), methods=(), constants=class_constants, decorators=(), classes=(), slots=None, template=())
Generates a pytd class for a named tuple. Args: class_name: The name of the generated class fields: A list of (name, type) tuples. Returns: A generated class that describes the named tuple.
github-repos
def restore_app_connection(self, port=None): self.host_port = (port or utils.get_available_host_port()) self._retry_connect() self.ed = self._start_event_client()
Restores the sl4a connection after the device got disconnected. Instead of creating a new instance of the client: - Uses the given port (or finds a new available host_port if none is given). - Tries to connect to the remote server with the selected port. Args: port: If given, this is the host port from which to connect to the remote device port. If not provided, a new available port is found and used as the host port. Raises: AppRestoreConnectionError: When the app was not able to be started.
codesearchnet
def decode_list(cls, obj, element_type): if (not isinstance(obj, list)): raise Exception('expected a python list') return list(map((lambda x: cls.do_decode(x, element_type)), obj))
Decodes json into a list, handling conversion of the elements. Args: obj: the json object to decode element_type: a class object which is the conjure type of the elements in this list. Returns: A python list where the elements are instances of type element_type.
codesearchnet
def mean(self): chunk_iter = chunks(self.times, self.bestof) times = list(map(min, chunk_iter)) mean = (sum(times) / len(times)) return mean
The mean of the best results of each trial. Returns: float: mean of measured seconds Note: This is typically less informative than simply looking at the min. It is recommended to use min as the expectation value rather than mean in most cases. Example: >>> import math >>> self = Timerit(num=10, verbose=0) >>> self.call(math.factorial, 50) >>> assert self.mean() > 0
codesearchnet
def __init__(self, src_file, sync_dst_file, *async_dst_files): self._origin_stack = '\n'.join(traceback.format_stack()) self.tee_file = None self._src_file = src_file self._sync_dst_file = sync_dst_file self._async_dst_files = list(async_dst_files) self._write_queues = [] self._write_threads = [] for f in async_dst_files: q = queue.Queue() t = spawn_reader_writer(q.get, functools.partial(self._write, f)) self._write_queues.append(q) self._write_threads.append(t) src_fd = self._src_file.fileno() def read(): try: return os.read(src_fd, 1024) except OSError: return six.b('') self._read_thread = spawn_reader_writer(read, self._write_to_all)
Constructor. Args: src_file: file to read from. sync_dst_file: file to write to synchronously when `self.write()` is called. async_dst_files: files to write to asynchronously
juraj-google-style
def __x_google_quota_definitions_descriptor(self, limit_definitions): if not limit_definitions: return None definitions_list = [{ 'name': ld.metric_name, 'metric': ld.metric_name, 'unit': '1/min/{project}', 'values': {'STANDARD': ld.default_limit}, 'displayName': ld.display_name, } for ld in limit_definitions] metrics = [{ 'name': ld.metric_name, 'valueType': 'INT64', 'metricKind': 'GAUGE', } for ld in limit_definitions] return { 'quota': {'limits': definitions_list}, 'metrics': metrics, }
Describes the quota limit definitions for an API. Args: limit_definitions: List of endpoints.LimitDefinition tuples Returns: A dict descriptor of the API's quota limit definitions.
juraj-google-style
def _reshape(self, fused_qkv: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: batch_size, seq_length, three_times_hidden_size = fused_qkv.shape fused_qkv = fused_qkv.view(batch_size, seq_length, self.num_heads, 3, self.head_dim) query_layer = fused_qkv[..., 0, :].transpose(1, 2) key_layer = fused_qkv[..., 1, :].transpose(1, 2) value_layer = fused_qkv[..., 2, :].transpose(1, 2) return (query_layer, key_layer, value_layer)
Split the last dimension into (num_heads, head_dim) and reshapes to (bs, heads, len, dim) shape without making any copies, results share same memory storage as `fused_qkv` Args: fused_qkv (`torch.tensor`): [batch_size, seq_length, num_heads * 3 * head_dim] Returns: query: [batch_size, num_heads, seq_length, head_dim] key: [batch_size, num_heads, seq_length, head_dim] value: [batch_size, num_heads, seq_length, head_dim]
github-repos
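A hedged, self-contained sketch of the fused-QKV split shown above, using toy sizes (batch=2, seq=4, num_heads=3, head_dim=8); the three resulting tensors are views that share storage with the fused tensor.

import torch

batch, seq, heads, dim = 2, 4, 3, 8
fused_qkv = torch.randn(batch, seq, heads * 3 * dim)
fused_qkv = fused_qkv.view(batch, seq, heads, 3, dim)
q = fused_qkv[..., 0, :].transpose(1, 2)   # (batch, heads, seq, dim)
k = fused_qkv[..., 1, :].transpose(1, 2)
v = fused_qkv[..., 2, :].transpose(1, 2)
print(q.shape, k.shape, v.shape)           # torch.Size([2, 3, 4, 8]) for each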
def generate_code(meta, prefix=None, node=False, min=False): if isinstance(meta, dict): (url_prefix, auth_header, resources) = parse_meta(meta) else: (url_prefix, auth_header, resources) = meta if (prefix is not None): url_prefix = prefix core = render_core(url_prefix, auth_header, resources) if min: filename = 'res.web.min.js' else: filename = 'res.web.js' if node: filename = 'res.node.js' base = read_file(filename) return base.replace('"
Generate res.js Args: meta: tuple(url_prefix, auth_header, resources) or metadata of API Returns: res.js source code
codesearchnet
def draw_ID(ID, idx_array, drawID_raster): for i in range(idx_array.shape[0]): x = idx_array[i, 0] y = idx_array[i, 1] drawID_raster[x, y] = ID return drawID_raster
Draw every pixel's ID. After computing the connectivity of all pixels with the given value, every pixel has an ID; these IDs then need to be drawn onto the not-yet-drawn raster file. Args: ID: given ID value idx_array: set of pixel positions that have the given ID value drawID_raster: raster file before drawing Returns: drawID_raster: raster file after drawing the ID
juraj-google-style
def output_compressed_dinf(dinfflowang, compdinffile, weightfile): dinf_r = RasterUtilClass.read_raster(dinfflowang) data = dinf_r.data xsize = dinf_r.nCols ysize = dinf_r.nRows nodata_value = dinf_r.noDataValue cal_dir_code = frompyfunc(DinfUtil.compress_dinf, 2, 3) updated_angle, dir_code, weight = cal_dir_code(data, nodata_value) RasterUtilClass.write_gtiff_file(dinfflowang, ysize, xsize, updated_angle, dinf_r.geotrans, dinf_r.srs, DEFAULT_NODATA, GDT_Float32) RasterUtilClass.write_gtiff_file(compdinffile, ysize, xsize, dir_code, dinf_r.geotrans, dinf_r.srs, DEFAULT_NODATA, GDT_Int16) RasterUtilClass.write_gtiff_file(weightfile, ysize, xsize, weight, dinf_r.geotrans, dinf_r.srs, DEFAULT_NODATA, GDT_Float32)
Output compressed Dinf flow direction and weight to raster files. Args: dinfflowang: Dinf flow direction raster file compdinffile: compressed D8 flow code raster file weightfile: the corresponding weight raster file
juraj-google-style
def __init__(self, channel): self.CreateJob = channel.unary_unary( "/google.cloud.talent.v4beta1.JobService/CreateJob", request_serializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_job__service__pb2.CreateJobRequest.SerializeToString, response_deserializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_job__pb2.Job.FromString, ) self.GetJob = channel.unary_unary( "/google.cloud.talent.v4beta1.JobService/GetJob", request_serializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_job__service__pb2.GetJobRequest.SerializeToString, response_deserializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_job__pb2.Job.FromString, ) self.UpdateJob = channel.unary_unary( "/google.cloud.talent.v4beta1.JobService/UpdateJob", request_serializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_job__service__pb2.UpdateJobRequest.SerializeToString, response_deserializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_job__pb2.Job.FromString, ) self.DeleteJob = channel.unary_unary( "/google.cloud.talent.v4beta1.JobService/DeleteJob", request_serializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_job__service__pb2.DeleteJobRequest.SerializeToString, response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString, ) self.ListJobs = channel.unary_unary( "/google.cloud.talent.v4beta1.JobService/ListJobs", request_serializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_job__service__pb2.ListJobsRequest.SerializeToString, response_deserializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_job__service__pb2.ListJobsResponse.FromString, ) self.BatchDeleteJobs = channel.unary_unary( "/google.cloud.talent.v4beta1.JobService/BatchDeleteJobs", request_serializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_job__service__pb2.BatchDeleteJobsRequest.SerializeToString, response_deserializer=google_dot_protobuf_dot_empty__pb2.Empty.FromString, ) self.SearchJobs = channel.unary_unary( "/google.cloud.talent.v4beta1.JobService/SearchJobs", request_serializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_job__service__pb2.SearchJobsRequest.SerializeToString, response_deserializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_job__service__pb2.SearchJobsResponse.FromString, ) self.SearchJobsForAlert = channel.unary_unary( "/google.cloud.talent.v4beta1.JobService/SearchJobsForAlert", request_serializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_job__service__pb2.SearchJobsRequest.SerializeToString, response_deserializer=google_dot_cloud_dot_talent__v4beta1_dot_proto_dot_job__service__pb2.SearchJobsResponse.FromString, )
Constructor. Args: channel: A grpc.Channel.
juraj-google-style
def load_identity_signer(key_dir, key_name): key_path = os.path.join(key_dir, '{}.priv'.format(key_name)) if (not os.path.exists(key_path)): raise LocalConfigurationError('No such signing key file: {}'.format(key_path)) if (not os.access(key_path, os.R_OK)): raise LocalConfigurationError('Key file is not readable: {}'.format(key_path)) LOGGER.info('Loading signing key: %s', key_path) try: with open(key_path, 'r') as key_file: private_key_str = key_file.read().strip() except IOError as e: raise LocalConfigurationError('Could not load key file: {}'.format(str(e))) try: private_key = Secp256k1PrivateKey.from_hex(private_key_str) except signing.ParseError as e: raise LocalConfigurationError('Invalid key in file {}: {}'.format(key_path, str(e))) context = signing.create_context('secp256k1') crypto_factory = CryptoFactory(context) return crypto_factory.new_signer(private_key)
Loads a private key from the key directory, based on a validator's identity. Args: key_dir (str): The path to the key directory. key_name (str): The name of the key to load. Returns: Signer: the cryptographic signer for the key
codesearchnet
def linear(x): return x
Linear activation function (pass-through). A "linear" activation is an identity function: it returns the input, unmodified. Args: x: Input tensor.
github-repos
def register_model(cls, model): rest_name = model.rest_name resource_name = model.resource_name if (rest_name not in cls._model_rest_name_registry): cls._model_rest_name_registry[rest_name] = [model] cls._model_resource_name_registry[resource_name] = [model] elif (model not in cls._model_rest_name_registry[rest_name]): cls._model_rest_name_registry[rest_name].append(model) cls._model_resource_name_registry[resource_name].append(model)
Register a model class according to its remote name Args: model: the model to register
codesearchnet
def assert_text(self, *args, **kwargs): query = TextQuery(*args, **kwargs) @self.synchronize(wait=query.wait) def assert_text(): count = query.resolve_for(self) if (not (matches_count(count, query.options) and ((count > 0) or expects_none(query.options)))): raise ExpectationNotMet(query.failure_message) return True return assert_text()
Asserts that the page or current node has the given text content, ignoring any HTML tags. Args: *args: Variable length argument list for :class:`TextQuery`. **kwargs: Arbitrary keyword arguments for :class:`TextQuery`. Returns: True Raises: ExpectationNotMet: If the assertion hasn't succeeded during the wait time.
codesearchnet
def _ragged_stack_concat_axis_0(rt_inputs, stack_values): flat_values = [rt.flat_values for rt in rt_inputs] concatenated_flat_values = array_ops.concat(flat_values, axis=0) nested_splits = [rt.nested_row_splits for rt in rt_inputs] ragged_rank = rt_inputs[0].ragged_rank concatenated_nested_splits = [_concat_ragged_splits([ns[dim] for ns in nested_splits]) for dim in range(ragged_rank)] if stack_values: stack_lengths = array_ops_stack.stack([rt.nrows() for rt in rt_inputs]) stack_splits = ragged_util.lengths_to_splits(stack_lengths) concatenated_nested_splits.insert(0, stack_splits) return ragged_tensor.RaggedTensor.from_nested_row_splits(concatenated_flat_values, concatenated_nested_splits, validate=False)
Helper function to concatenate or stack ragged tensors along axis 0. Args: rt_inputs: A list of RaggedTensors, all with the same rank and ragged_rank. stack_values: Boolean. If true, then stack values; otherwise, concatenate them. Returns: A RaggedTensor.
github-repos
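A hedged sketch using the public ragged-tensor ops, which (in recent TensorFlow versions) dispatch to internal helpers like the one above when the axis is 0.

import tensorflow as tf

rt1 = tf.ragged.constant([[1, 2], [3]])
rt2 = tf.ragged.constant([[4, 5, 6]])
print(tf.concat([rt1, rt2], axis=0))   # [[1, 2], [3], [4, 5, 6]]
print(tf.stack([rt1, rt2], axis=0))    # [[[1, 2], [3]], [[4, 5, 6]]]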
def add_argument(self, parser, bootstrap=False): if self.cli_expose: args = self._get_argparse_names(parser.prefix_chars) kwargs = self._get_argparse_kwargs(bootstrap) parser.add_argument(*args, **kwargs)
Add this item as an argument to the given parser. Args: parser (argparse.ArgumentParser): The parser to add this item to. bootstrap: Flag to indicate whether you only want to mark this item as required or not
codesearchnet
def setup(self, hosts, artifacts, extra_artifacts, use_tsk, reason, grr_server_url, grr_username, grr_password, approvers=None, verify=True): super(GRRArtifactCollector, self).setup(reason, grr_server_url, grr_username, grr_password, approvers=approvers, verify=verify) if (artifacts is not None): self.artifacts = [item.strip() for item in artifacts.strip().split(',')] if (extra_artifacts is not None): self.extra_artifacts = [item.strip() for item in extra_artifacts.strip().split(',')] self.hostnames = [item.strip() for item in hosts.strip().split(',')] self.use_tsk = use_tsk
Initializes a GRR artifact collector. Args: hosts: Comma-separated list of hostnames to launch the flow on. artifacts: list of GRR-defined artifacts. extra_artifacts: list of GRR-defined artifacts to append. use_tsk: toggle for use_tsk flag on GRR flow. reason: justification for GRR access. grr_server_url: GRR server URL. grr_username: GRR username. grr_password: GRR password. approvers: list of GRR approval recipients. verify: boolean, whether to verify the GRR server's x509 certificate.
codesearchnet
def fasta_format_check(fasta_path, logger): header_count = 0 line_count = 1 nt_count = 0 with open(fasta_path) as f: for l in f: l = l.strip() if (l == ''): continue if (l[0] == '>'): header_count += 1 continue if ((header_count == 0) and (l[0] != '>')): error_msg = 'First non-blank line (L:{line_count}) does not contain FASTA header. Line beginning with ">" expected.'.format(line_count=line_count) logger.error(error_msg) raise Exception(error_msg) non_nucleotide_chars_in_line = (set(l) - VALID_NUCLEOTIDES) if (len(non_nucleotide_chars_in_line) > 0): error_msg = 'Line {line} contains the following non-nucleotide characters: {non_nt_chars}'.format(line=line_count, non_nt_chars=', '.join([x for x in non_nucleotide_chars_in_line])) logger.error(error_msg) raise Exception(error_msg) nt_count += len(l) line_count += 1 if (nt_count == 0): error_msg = 'File "{}" does not contain any nucleotide sequence.'.format(fasta_path) logger.error(error_msg) raise Exception(error_msg) logger.info('Valid FASTA format "{}" ({} bp)'.format(fasta_path, nt_count))
Check that a file is valid FASTA format. - First non-blank line needs to begin with a '>' header character. - Sequence can only contain valid IUPAC nucleotide characters Args: fasta_path (str): path to the FASTA file logger (logging.Logger): logger used to report format errors Raises: Exception: If invalid FASTA format
codesearchnet
def install_app(app, target='/Applications/'): if (target[(- 4):] != '.app'): if (app[(- 1):] == '/'): base_app = os.path.basename(app[:(- 1)]) else: base_app = os.path.basename(app) target = os.path.join(target, base_app) if (not (app[(- 1)] == '/')): app += '/' cmd = 'rsync -a --delete "{0}" "{1}"'.format(app, target) return __salt__['cmd.run'](cmd)
Install an app file by moving it into the specified Applications directory Args: app (str): The location of the .app file target (str): The target in which to install the package to Default is ''/Applications/'' Returns: str: The results of the rsync command CLI Example: .. code-block:: bash salt '*' macpackage.install_app /tmp/tmp.app /Applications/
codesearchnet
def Push(self, source_file, device_filename, mtime='0', timeout_ms=None, progress_callback=None, st_mode=None): if isinstance(source_file, str): if os.path.isdir(source_file): self.Shell(('mkdir ' + device_filename)) for f in os.listdir(source_file): self.Push(os.path.join(source_file, f), ((device_filename + '/') + f), progress_callback=progress_callback) return source_file = open(source_file, 'rb') with source_file: connection = self.protocol_handler.Open(self._handle, destination=b'sync:', timeout_ms=timeout_ms) kwargs = {} if (st_mode is not None): kwargs['st_mode'] = st_mode self.filesync_handler.Push(connection, source_file, device_filename, mtime=int(mtime), progress_callback=progress_callback, **kwargs) connection.Close()
Push a file or directory to the device. Args: source_file: Either a filename, a directory or file-like object to push to the device. device_filename: Destination on the device to write to. mtime: Optional, modification time to set on the file. timeout_ms: Expected timeout for any part of the push. st_mode: stat mode for filename progress_callback: callback method that accepts filename, bytes_written and total_bytes, total_bytes will be -1 for file-like objects
codesearchnet
async def snap(self, user=None, view=None): if view is None: view = self.view if user is None: user = self.auth.getUserByName('root') snap = await view.snap(user) return snap
Return a transaction object for the default view. Args: user: the user to run the snap as (defaults to the root user). view: the view to snap (defaults to the default view). Returns: (synapse.lib.snap.Snap) NOTE: This must be used in a with block.
juraj-google-style
async def evaluate_trained_model(state): return (await evaluate_model(state.train_model_path, state.best_model_path, os.path.join(fsdb.eval_dir(), state.train_model_name), state.seed))
Evaluate the most recently trained model against the current best model. Args: state: the RL loop State instance.
codesearchnet
def DocumentVersionsRow( self, parser_mediator, query, row, **unused_kwargs): query_hash = hash(query) version_path = self._GetRowValue(query_hash, row, 'version_path') path = self._GetRowValue(query_hash, row, 'path') paths = version_path.split('/') if len(paths) < 2 or not paths[1].isdigit(): user_sid = '' else: user_sid = paths[1] version_path = self.ROOT_VERSION_PATH + version_path path, _, _ = path.rpartition('/') event_data = MacDocumentVersionsEventData() event_data.last_time = self._GetRowValue(query_hash, row, 'last_time') event_data.name = self._GetRowValue(query_hash, row, 'name') event_data.path = path event_data.query = query event_data.user_sid = '{0!s}'.format(user_sid) event_data.version_path = version_path timestamp = self._GetRowValue(query_hash, row, 'version_time') date_time = dfdatetime_posix_time.PosixTime(timestamp=timestamp) event = time_events.DateTimeValuesEvent( date_time, definitions.TIME_DESCRIPTION_CREATION) parser_mediator.ProduceEventWithEventData(event, event_data)
Parses a document versions row. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. query (str): query that created the row. row (sqlite3.Row): row.
juraj-google-style
def get_dimension(self, dimension, default=None, strict=False): if ((dimension is not None) and (not isinstance(dimension, (int, basestring, Dimension)))): raise TypeError(('Dimension lookup supports int, string, and Dimension instances, cannot lookup Dimensions using %s type.' % type(dimension).__name__)) all_dims = self.dimensions() if isinstance(dimension, int): if (0 <= dimension < len(all_dims)): return all_dims[dimension] elif strict: raise KeyError(('Dimension %r not found' % dimension)) else: return default dimension = dimension_name(dimension) name_map = {dim.name: dim for dim in all_dims} name_map.update({dim.label: dim for dim in all_dims}) name_map.update({util.dimension_sanitizer(dim.name): dim for dim in all_dims}) if (strict and (dimension not in name_map)): raise KeyError(('Dimension %r not found.' % dimension)) else: return name_map.get(dimension, default)
Get a Dimension object by name or index. Args: dimension: Dimension to look up by name or integer index default (optional): Value returned if Dimension not found strict (bool, optional): Raise a KeyError if not found Returns: Dimension object for the requested dimension or default
codesearchnet
def add_redistribution(self, protocol, route_map_name=None): protocols = ['bgp', 'rip', 'static', 'connected'] if (protocol not in protocols): raise ValueError('redistributed protocol must be bgp, connected, rip or static') if (route_map_name is None): cmd = 'redistribute {}'.format(protocol) else: cmd = 'redistribute {} route-map {}'.format(protocol, route_map_name) return self.configure_ospf(cmd)
Adds a protocol redistribution to OSPF Args: protocol (str): protocol to redistribute route_map_name (str): route-map to be used to filter the protocols Returns: bool: True if the command completes successfully Exception: ValueError: This will be raised if the protocol pass is not one of the following: [rip, bgp, static, connected]
codesearchnet
def send(self, content_type='HTML'): payload = self.api_representation(content_type) endpoint = 'https: self._make_api_call('post', endpoint=endpoint, data=json.dumps(payload))
Takes the recipients, body, and attachments of the Message and sends. Args: content_type: Can either be 'HTML' or 'Text', defaults to HTML.
juraj-google-style
def linear_interpolate_rank(tensor1, tensor2, coeffs, rank=1): (_, _, _, num_channels) = common_layers.shape_list(tensor1) diff_sq_sum = tf.reduce_sum(((tensor1 - tensor2) ** 2), axis=(0, 1, 2)) (_, feature_ranks) = tf.math.top_k(diff_sq_sum, k=rank) feature_rank = feature_ranks[(- 1)] channel_inds = tf.range(num_channels, dtype=tf.int32) channel_mask = tf.equal(channel_inds, feature_rank) ones_t = tf.ones(num_channels, dtype=tf.float32) zeros_t = tf.zeros(num_channels, dtype=tf.float32) interp_tensors = [] for coeff in coeffs: curr_coeff = tf.where(channel_mask, (coeff * ones_t), zeros_t) interp_tensor = (tensor1 + (curr_coeff * (tensor2 - tensor1))) interp_tensors.append(interp_tensor) return tf.concat(interp_tensors, axis=0)
Linearly interpolate channel at "rank" between two tensors. The channels are ranked according to their L2 norm between tensor1[channel] and tensor2[channel]. Args: tensor1: 4-D Tensor, NHWC tensor2: 4-D Tensor, NHWC coeffs: list of floats. rank: integer. Returns: interp_latents: list of interpolated 4-D Tensors, shape=(NHWC)
codesearchnet
def get_roles(client): done = False marker = None roles = [] while not done: if marker: response = client.list_roles(Marker=marker) else: response = client.list_roles() roles += response['Roles'] if response['IsTruncated']: marker = response['Marker'] else: done = True return roles
Returns a list containing all the roles for the account. Args: client (:obj:`boto3.session.Session`): A boto3 Session object Returns: :obj:`list` of `dict`
juraj-google-style
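As an alternative sketch, boto3's built-in paginator expresses the same Marker/IsTruncated loop shown above in a few lines; this assumes IAM credentials are configured in the environment.

import boto3

client = boto3.client('iam')
roles = []
for page in client.get_paginator('list_roles').paginate():
    roles.extend(page['Roles'])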
def tag_versions(repo_path): repo = dulwich.repo.Repo(repo_path) tags = get_tags(repo) maj_version = 0 feat_version = 0 fix_version = 0 last_maj_version = 0 last_feat_version = 0 result = [] for commit_sha, children in reversed( get_children_per_first_parent(repo_path).items() ): commit = get_repo_object(repo, commit_sha) maj_version, feat_version, fix_version = get_version( commit=commit, tags=tags, maj_version=maj_version, feat_version=feat_version, fix_version=fix_version, children=children, ) if ( last_maj_version != maj_version or last_feat_version != feat_version ): last_maj_version = maj_version last_feat_version = feat_version tag_name = 'refs/tags/v%d.%d' % (maj_version, feat_version) if ON_PYTHON3: repo[str.encode(tag_name)] = commit else: repo[tag_name] = commit result.append( 'v%d.%d -> %s' % (maj_version, feat_version, commit_sha) ) return '\n'.join(result)
Given a repo will add a tag for each major version. Args: repo_path(str): path to the git repository to tag.
juraj-google-style
def sholl_crossings(neurites, center, radii): def _count_crossings(neurite, radius): 'count_crossings of segments in neurite with radius' r2 = (radius ** 2) count = 0 for (start, end) in iter_segments(neurite): (start_dist2, end_dist2) = (morphmath.point_dist2(center, start), morphmath.point_dist2(center, end)) count += int(((start_dist2 <= r2 <= end_dist2) or (end_dist2 <= r2 <= start_dist2))) return count return np.array([sum((_count_crossings(neurite, r) for neurite in iter_neurites(neurites))) for r in radii])
Calculate crossings of neurites. Args: neurites(morph): morphology or neurites on which to perform Sholl analysis center(point): center point from which radial distances are measured radii(iterable of floats): radii for which crossings will be counted Returns: Array of same length as radii, with a count of the number of crossings for the respective radius
codesearchnet
def get_sparse_tensors(self, transformation_cache, state_manager): sparse_tensors = self.categorical_column.get_sparse_tensors(transformation_cache, state_manager) return self._get_sparse_tensors_helper(sparse_tensors)
Returns an IdWeightPair. `IdWeightPair` is a pair of `SparseTensor`s which represents ids and weights. `IdWeightPair.id_tensor` is typically a `batch_size` x `num_buckets` `SparseTensor` of `int64`. `IdWeightPair.weight_tensor` is either a `SparseTensor` of `float` or `None` to indicate all weights should be taken to be 1. If specified, `weight_tensor` must have exactly the same shape and indices as `sp_ids`. Expected `SparseTensor` is same as parsing output of a `VarLenFeature` which is a ragged matrix. Args: transformation_cache: A `FeatureTransformationCache` object to access features. state_manager: A `StateManager` to create / access resources such as lookup tables.
github-repos
def _is_valid_netmask(self, netmask): mask = netmask.split('.') if (len(mask) == 4): try: for x in mask: if (int(x) not in self._valid_mask_octets): return False except ValueError: return False for (idx, y) in enumerate(mask): if ((idx > 0) and (y > mask[(idx - 1)])): return False return True try: netmask = int(netmask) except ValueError: return False return (0 <= netmask <= self._max_prefixlen)
Verify that the netmask is valid. Args: netmask: A string, either a prefix or dotted decimal netmask. Returns: A boolean, True if the prefix represents a valid IPv4 netmask.
codesearchnet
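For comparison, a hedged sketch of how the standard-library ipaddress module performs an equivalent netmask validation when a network is built from a dotted mask or prefix.

import ipaddress

ipaddress.IPv4Network('192.0.2.0/255.255.255.0')    # accepted: contiguous dotted mask
ipaddress.IPv4Network('192.0.2.0/24')               # accepted: prefix form
try:
    ipaddress.IPv4Network('192.0.2.0/255.0.255.0')  # rejected: non-contiguous mask
except ValueError as err:
    print(err)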
def normalize_url(base_url, rel_url): if (not rel_url): return None if (not is_absolute_url(rel_url)): rel_url = rel_url.replace('../', '/') if ((not base_url.endswith('/')) and (not rel_url.startswith('/'))): return ((base_url + '/') + rel_url.replace('../', '/')) return (base_url + rel_url.replace('../', '/')) return rel_url
Normalize the `url` - from relative, create absolute URL. Args: base_url (str): Domain with ``protocol://`` string rel_url (str): Relative or absolute url. Returns: str/None: Normalized URL or None if `url` is blank.
codesearchnet
def assign_add(self, delta, use_locking=False, name=None, read_value=True): assign = state_ops.assign_add(self._variable, delta, use_locking=use_locking, name=name) if read_value: return assign return assign.op
Adds a value to this variable. This is essentially a shortcut for `assign_add(self, delta)`. Args: delta: A `Tensor`. The value to add to this variable. use_locking: If `True`, use locking during the operation. name: The name of the operation to be created read_value: if True, will return something which evaluates to the new value of the variable; if False will return the assign op. Returns: A `Tensor` that will hold the new value of this variable after the addition has completed.
github-repos
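A minimal sketch of the public entry point in eager mode; the internal method above is what backs this call for reference variables.

import tensorflow as tf

v = tf.Variable(1.0)
v.assign_add(2.0)      # adds 2.0 in place; returns the updated value by default
print(v.numpy())       # 3.0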
def create_config(sections, section_contents): sections_length, section_contents_length = len(sections), len(section_contents) if sections_length != section_contents_length: raise ValueError("Mismatch between argument lengths.\n" "len(sections) = {}\n" "len(section_contents) = {}" .format(sections_length, section_contents_length)) config = configparser.ConfigParser() for section, section_content in zip(sections, section_contents): config[section] = section_content return config
Create a config from the provided sections and key value pairs. Args: sections (List[str]): A list of section keys. section_contents (List[Dict[str, str]]): A list of dictionaries. Must be as long as the list of sections. That is to say, if there are two sections, there should be two dicts. Returns: configparser.ConfigParser: A ConfigParser. Raises: ValueError: If the argument lengths do not match.
juraj-google-style
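A hedged usage sketch, assuming create_config is importable from its module: two sections, two matching content dicts, then the resulting ConfigParser is written to disk. The file name is illustrative.

config = create_config(
    sections=['server', 'client'],
    section_contents=[{'host': 'localhost', 'port': '8080'}, {'retries': '3'}],
)
with open('settings.cfg', 'w') as fp:   # standard configparser write
    config.write(fp)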
def connect_to(name): kwargs = config_for(name) if (not kwargs): raise AttributeError('connection profile not found in config') node = connect(return_node=True, **kwargs) return node
Creates a node instance based on an entry from the config This function will retrieve the settings for the specified connection from the config and return a Node instance. The configuration must be loaded prior to calling this function. Args: name (str): The name of the connection to load from the config. The name argument should be the connection name (everything right of the colon from the INI file) Returns: This function will return an instance of Node with the settings from the config instance. Raises: AttributeError: raised if the specified configuration name is not found in the loaded configuration
codesearchnet
def fileToMD5(filename, block_size=256*128, binary=False): md5 = hashlib.md5() with open(filename,'rb') as f: for chunk in iter(lambda: f.read(block_size), b''): md5.update(chunk) if not binary: return md5.hexdigest() return md5.digest()
A function that calculates the MD5 hash of a file. Args: ----- filename: Path to the file. block_size: Chunks of suitable size. Block size directly depends on the block size of your filesystem to avoid performances issues. Blocks of 4096 octets (Default NTFS). binary: A boolean representing whether the returned info is in binary format or not. Returns: -------- string: The MD5 hash of the file.
juraj-google-style
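A hedged usage sketch, assuming fileToMD5 is importable from its module; the path is illustrative only.

digest = fileToMD5('/etc/hostname')             # hex string by default
raw = fileToMD5('/etc/hostname', binary=True)   # 16-byte raw digest
print(digest, len(raw))                         # e.g. '...32 hex chars...' 16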
def _sort_records_map(records): ctx = context.get() l = len(records) key_records = [None] * l logging.debug("Parsing") for i in range(l): proto = kv_pb.KeyValue() proto.ParseFromString(records[i]) key_records[i] = (proto.key(), records[i]) logging.debug("Sorting") key_records.sort(cmp=_compare_keys) logging.debug("Writing") mapper_spec = ctx.mapreduce_spec.mapper params = input_readers._get_params(mapper_spec) bucket_name = params.get("bucket_name") filename = (ctx.mapreduce_spec.name + "/" + ctx.mapreduce_id + "/output-" + ctx.shard_id + "-" + str(int(time.time()))) full_filename = "/%s/%s" % (bucket_name, filename) filehandle = cloudstorage.open(full_filename, mode="w") with output_writers.GCSRecordsPool(filehandle, ctx=ctx) as pool: for key_record in key_records: pool.append(key_record[1]) logging.debug("Finalizing") filehandle.close() entity = _OutputFile(key_name=full_filename, parent=_OutputFile.get_root_key(ctx.mapreduce_id)) entity.put()
Map function sorting records. Converts records to KeyValue protos, sorts them by key and writes them into new GCS file. Creates _OutputFile entity to record resulting file name. Args: records: list of records which are serialized KeyValue protos.
juraj-google-style
def DEFINE_boolean(name, default, help, flag_values=_flagvalues.FLAGS, module_name=None, **args): DEFINE_flag(_flag.BooleanFlag(name, default, help, **args), flag_values, module_name)
Registers a boolean flag. Such a boolean flag does not take an argument. If a user wants to specify a false value explicitly, the long option beginning with 'no' must be used: i.e. --noflag This flag will have a value of None, True or False. None is possible if default=None and the user does not specify the flag on the command line. Args: name: str, the flag name. default: bool|str|None, the default value of the flag. help: str, the help message. flag_values: FlagValues, the FlagValues instance with which the flag will be registered. This should almost never need to be overridden. module_name: str, the name of the Python module declaring this flag. If not provided, it will be computed using the stack trace of this call. **args: dict, the extra keyword args that are passed to Flag __init__.
codesearchnet
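A short sketch of the typical absl.flags call site for a boolean flag like the one defined above; on the command line it is toggled with --dry_run or --nodry_run.

from absl import app, flags

flags.DEFINE_boolean('dry_run', False, 'Print actions without executing them.')
FLAGS = flags.FLAGS

def main(argv):
    print('dry_run =', FLAGS.dry_run)

if __name__ == '__main__':
    app.run(main)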
def case(pred_fn_pairs, default=None, exclusive=False, name='smart_case'): return control_flow_ops._case_helper(cond, pred_fn_pairs, default, exclusive, name, allow_python_preds=True)
Like tf.case, except attempts to statically evaluate predicates. If any predicate in `pred_fn_pairs` is a bool or has a constant value, the associated callable will be called or omitted depending on its value. Otherwise this functions like tf.case. Args: pred_fn_pairs: Dict or list of pairs of a boolean scalar tensor and a callable which returns a list of tensors. default: Optional callable that returns a list of tensors. exclusive: True iff at most one predicate is allowed to evaluate to `True`. name: A name for this operation (optional). Returns: The tensors returned by the first pair whose predicate evaluated to True, or those returned by `default` if none does. Raises: TypeError: If `pred_fn_pairs` is not a list/dictionary. TypeError: If `pred_fn_pairs` is a list but does not contain 2-tuples. TypeError: If `fns[i]` is not callable for any i, or `default` is not callable.
codesearchnet
def mesh_axis_to_tensor_axis(self, mesh_ndims): ta2ma = self._tensor_axis_to_mesh_axis return tuple( [ta2ma.index(mesh_axis) if mesh_axis in ta2ma else None for mesh_axis in xrange(mesh_ndims)])
For each mesh axis, which Tensor axis maps to it. Args: mesh_ndims: int. Returns: Tuple of optional integers, with length mesh_ndims.
juraj-google-style
def FromString(cls, range_string): disjuncts = None range_string = range_string.strip() if (len(range_string) == 0): raise ArgumentError('You must pass a finite string to SemanticVersionRange.FromString', range_string=range_string) if ((len(range_string) == 1) and (range_string[0] == '*')): conj = (None, None, True, True) disjuncts = [[conj]] elif (range_string[0] == '^'): ver = range_string[1:] try: ver = SemanticVersion.FromString(ver) except DataError as err: raise ArgumentError('Could not parse ^X.Y.Z version', parse_error=str(err), range_string=range_string) lower = ver upper = ver.inc_first_nonzero() conj = (lower, upper, True, False) disjuncts = [[conj]] elif (range_string[0] == '='): ver = range_string[1:] try: ver = SemanticVersion.FromString(ver) except DataError as err: raise ArgumentError('Could not parse =X.Y.Z version', parse_error=str(err), range_string=range_string) conj = (ver, ver, True, True) disjuncts = [[conj]] if (disjuncts is None): raise ArgumentError('Invalid range specification that could not be parsed', range_string=range_string) return SemanticVersionRange(disjuncts)
Parse a version range string into a SemanticVersionRange Currently, the only possible range strings are: ^X.Y.Z - matches all versions with the same leading nonzero digit greater than or equal the given range. * - matches everything =X.Y.Z - matches only the exact version given Args: range_string (string): A string specifying the version range Returns: SemanticVersionRange: The resulting version range object Raises: ArgumentError: if the range string does not define a valid range.
codesearchnet
def ValidateFeedStartAndExpirationDates(self, problems, first_date, last_date, first_date_origin, last_date_origin, today): warning_cutoff = today + datetime.timedelta(days=60) if last_date < warning_cutoff: problems.ExpirationDate(time.mktime(last_date.timetuple()), last_date_origin) if first_date > today: problems.FutureService(time.mktime(first_date.timetuple()), first_date_origin)
Validate the start and expiration dates of the feed. Issue a warning if it only starts in the future, or if it expires within 60 days. Args: problems: The problem reporter object first_date: A date object representing the first day the feed is active last_date: A date object representing the last day the feed is active today: A date object representing the date the validation is being run on Returns: None
juraj-google-style
def update_variant(self, variant_obj): LOG.debug('Updating variant %s', variant_obj.get('simple_id')) new_variant = self.variant_collection.find_one_and_replace( {'_id': variant_obj['_id']}, variant_obj, return_document=pymongo.ReturnDocument.AFTER ) return new_variant
Update one variant document in the database. This means that the variant in the database will be replaced by variant_obj. Args: variant_obj(dict) Returns: new_variant(dict)
juraj-google-style
def available_credit(context): notes = commerce.CreditNote.unclaimed().filter(invoice__user=user_for_context(context)) ret = (notes.values('amount').aggregate(Sum('amount'))['amount__sum'] or 0) return (0 - ret)
Calculates the sum of unclaimed credit from this user's credit notes. Returns: Decimal: the sum of the values of unclaimed credit notes for the current user.
codesearchnet
def get_megatron_sharded_states(args, tp_size, pp_size, pp_rank): tp_state_dicts = [] for i in range(tp_size): sub_dir_name = f'mp_rank_{i:02d}' if pp_size == 1 else f'mp_rank_{i:02d}_{pp_rank:03d}' for checkpoint_name in ['model_optim_rng.pt', 'model_rng.pt']: checkpoint_path = os.path.join(args.load_path, sub_dir_name, checkpoint_name) if os.path.isfile(checkpoint_path): break check_torch_load_is_safe() state_dict = torch.load(checkpoint_path, map_location='cpu', weights_only=True) tp_state_dicts.append(state_dict) return tp_state_dicts
Get sharded checkpoints from NVIDIA Megatron-LM checkpoint based on the provided tensor parallel size, pipeline parallel size and pipeline parallel rank. Args: args (argparse.Namespace): the arguments to the script tp_size (int): the tensor parallel size pp_size (int): the pipeline parallel size pp_rank (int): the pipeline parallel rank
github-repos
def __getattr__(self, name): return lambda *args, **kwargs: self._Execute(name, *args, **kwargs)
Handles transparent proxying to gdb subprocess. This returns a lambda which, when called, sends an RPC request to gdb Args: name: The method to call within GdbService Returns: The result of the RPC.
juraj-google-style
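A hedged, self-contained sketch of the same transparent-proxy pattern: unknown attribute lookups become deferred calls to a single dispatch method (here a stand-in _execute rather than a real gdb RPC).

class Proxy:
    def _execute(self, name, *args, **kwargs):
        # Stand-in for the RPC dispatch; real code would forward to a subprocess.
        return f'RPC {name} called with {args} {kwargs}'

    def __getattr__(self, name):
        # Only reached when normal attribute lookup fails.
        return lambda *args, **kwargs: self._execute(name, *args, **kwargs)

proxy = Proxy()
print(proxy.read_memory(0x1000, length=4))  # dispatched through _execute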
def get_object(tree): if isinstance(tree, Tree): if tree.label() == 'DT' or tree.label() == 'POS': return '' words = [] for child in tree: words.append(get_object(child)) return ' '.join([_f for _f in words if _f]) else: return tree
Get the object in the tree object. Method should remove unnecessary letters and words:: the a/an 's Args: tree (Tree): Parsed tree structure Returns: Resulting string of tree ``(Ex: "red car")``
juraj-google-style
def _object_url(self, objtype, objid): return "{base_url}/api/{api_version}/{controller}/{obj_id}".format( base_url=self._base_url(), api_version=self.api_version, controller=self._controller_name(objtype), obj_id=objid )
Generate the URL for the specified object Args: objtype (str): The object's type objid (int): The objects ID Returns: A string containing the URL of the object
juraj-google-style
def htmlcolor_to_rgb(str_color): if (not (str_color.startswith('#'))): raise ValueError("Bad html color format. Expected: '#RRGGBB'") result = [((1.0 * int(n, 16)) / 255) for n in (str_color[1:3], str_color[3:5], str_color[5:])] return result
Function to convert an HTML-style color string to RGB values Args: str_color: Color in HTML format (e.g. '#RRGGBB') Returns: list of three RGB color components
codesearchnet
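A tiny usage sketch, assuming the (reconstructed) function above is importable; values are shown rounded.

print(htmlcolor_to_rgb('#ff8000'))   # [1.0, 0.50196..., 0.0]
htmlcolor_to_rgb('ff8000')           # raises ValueError: missing leading '#'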
def evaluate_hourly_forecasts(self): score_columns = ['Run_Date', 'Forecast_Hour', 'Ensemble Name', 'Model_Name', 'Forecast_Variable', 'Neighbor_Radius', 'Smoothing_Radius', 'Size_Threshold', 'ROC', 'Reliability'] all_scores = pd.DataFrame(columns=score_columns) for (h, hour) in enumerate(range(self.start_hour, (self.end_hour + 1))): for neighbor_radius in self.neighbor_radii: n_filter = disk(neighbor_radius) for (s, size_threshold) in enumerate(self.size_thresholds): print('Eval hourly forecast {0:02d} {1} {2} {3} {4:d} {5:d}'.format(hour, self.model_name, self.forecast_variable, self.run_date, neighbor_radius, size_threshold)) hour_obs = fftconvolve((self.raw_obs[self.mrms_variable][h] >= self.obs_thresholds[s]), n_filter, mode='same') hour_obs[(hour_obs > 1)] = 1 hour_obs[(hour_obs < 1)] = 0 if self.obs_mask: hour_obs = hour_obs[(self.raw_obs[self.mask_variable][h] > 0)] for smoothing_radius in self.smoothing_radii: hour_var = 'neighbor_prob_r_{0:d}_s_{1:d}_{2}_{3:0.2f}'.format(neighbor_radius, smoothing_radius, self.forecast_variable, size_threshold) if self.obs_mask: hour_forecast = self.hourly_forecasts[hour_var][h][(self.raw_obs[self.mask_variable][h] > 0)] else: hour_forecast = self.hourly_forecasts[hour_var][h] roc = DistributedROC(thresholds=self.probability_levels, obs_threshold=0.5) roc.update(hour_forecast, hour_obs) rel = DistributedReliability(thresholds=self.probability_levels, obs_threshold=0.5) rel.update(hour_forecast, hour_obs) row = [self.run_date, hour, self.ensemble_name, self.model_name, self.forecast_variable, neighbor_radius, smoothing_radius, size_threshold, roc, rel] all_scores.loc[(hour_var + '_{0:d}'.format(hour))] = row return all_scores
Calculates ROC curves and Reliability scores for each forecast hour. Returns: A pandas DataFrame containing forecast metadata as well as DistributedROC and Reliability objects.
codesearchnet
def refresh_state(self, id_or_uri, configuration, timeout=-1): uri = self._client.build_uri(id_or_uri) + self.REFRESH_STATE_PATH return self._client.update(resource=configuration, uri=uri, timeout=timeout)
Refreshes a drive enclosure. Args: id_or_uri: Can be either the resource ID or the resource URI. configuration: Configuration timeout: Timeout in seconds. Wait for task completion by default. The timeout does not abort the operation in OneView; it just stops waiting for its completion. Returns: dict: Drive Enclosure
juraj-google-style
def has_implicit_access_to_enrollment_api(user, obj): request = get_request_or_stub() decoded_jwt = get_decoded_jwt_from_request(request) return request_user_has_implicit_access_via_jwt(decoded_jwt, ENTERPRISE_ENROLLMENT_API_ADMIN_ROLE, obj)
Check that if request user has implicit access to `ENTERPRISE_ENROLLMENT_API_ADMIN_ROLE` feature role. Returns: boolean: whether the request user has access or not
codesearchnet
def __eq__(self, other): if super().__eq__(other) and \ (self._samples == other._samples).all(): return True return False
Two SamplePulses are the same if they are of the same type and have the same name and samples. Args: other (SamplePulse): other SamplePulse Returns: bool: are self and other equal.
juraj-google-style
def ListAssets(logdir, plugin_name): plugin_dir = PluginDirectory(logdir, plugin_name) try: return [x.rstrip('/') for x in tf.io.gfile.listdir(plugin_dir)] except tf.errors.NotFoundError: return []
List all the assets that are available for given plugin in a logdir. Args: logdir: A directory that was created by a TensorFlow summary.FileWriter. plugin_name: A string name of a plugin to list assets for. Returns: A string list of available plugin assets. If the plugin subdirectory does not exist (either because the logdir doesn't exist, or because the plugin didn't register) an empty list is returned.
codesearchnet
def parse(file_contents, file_name): try: yaml.load(file_contents) except Exception: _, exc_value, _ = sys.exc_info() return("Cannot Parse: {file_name}: \n {exc_value}" .format(file_name=file_name, exc_value=exc_value))
Tries to parse the given file contents as YAML, returning an error message if there are any parsing issues. Args: file_contents (str): Contents of a yml file file_name (str): Name of the file, used in the returned error message Raises: yaml.parser.ParserError: Raises an error if the file contents cannot be parsed and interpreted as yaml
juraj-google-style
def trace_flush(self): cmd = enums.JLinkTraceCommand.FLUSH res = self._dll.JLINKARM_TRACE_Control(cmd, 0) if (res == 1): raise errors.JLinkException('Failed to flush the trace buffer.') return None
Flushes the trace buffer. After this method is called, the trace buffer is empty. This method is best called when the device is reset. Args: self (JLink): the ``JLink`` instance. Returns: ``None``
juraj-google-style
def validate_file(fn, options=None): file_results = FileValidationResults(filepath=fn) output.info(('Performing JSON schema validation on %s' % fn)) if (not options): options = ValidationOptions(files=fn) try: with open(fn) as instance_file: file_results.object_results = validate(instance_file, options) except Exception as ex: if ('Expecting value' in str(ex)): line_no = str(ex).split()[3] file_results.fatal = ValidationErrorResults(('Invalid JSON input on line %s' % line_no)) else: file_results.fatal = ValidationErrorResults(ex) msg = "Unexpected error occurred with file '{fn}'. No further validation will be performed: {error}" output.info(msg.format(fn=fn, error=str(ex))) file_results.is_valid = (all((object_result.is_valid for object_result in file_results.object_results)) and (not file_results.fatal)) return file_results
Validate the input document `fn` according to the options passed in. If any exceptions are raised during validation, no further validation will take place. Args: fn: The filename of the JSON file to be validated. options: An instance of ``ValidationOptions``. Returns: An instance of FileValidationResults.
codesearchnet
def get_all_apps(): LOG.info('Retrieving list of all Spinnaker applications') url = '{}/applications'.format(API_URL) response = requests.get(url, verify=GATE_CA_BUNDLE, cert=GATE_CLIENT_CERT) assert response.ok, 'Could not retrieve application list' pipelines = response.json() LOG.debug('All Applications:\n%s', pipelines) return pipelines
Get a list of all applications in Spinnaker. Returns: requests.models.Response: Response from Gate containing list of all apps.
codesearchnet
def parse(cls, args): try: (options, args) = cls.optparser.parse_args(args) if options.mode not in ["1", "2"]: raise ParseError("mode must be either '1' or '2'", cls.optparser.format_help()) if (options.dbtap_id is None) or (options.db_table is None): raise ParseError("dbtap_id and db_table are required", cls.optparser.format_help()) if options.mode is "1": if options.hive_table is None: raise ParseError("hive_table is required for mode 1", cls.optparser.format_help()) elif options.export_dir is None: raise ParseError("export_dir is required for mode 2", cls.optparser.format_help()) if options.db_update_mode is not None: if options.db_update_mode not in ["allowinsert", "updateonly"]: raise ParseError("db_update_mode should either be left blank for append " "mode or be 'updateonly' or 'allowinsert'", cls.optparser.format_help()) if options.db_update_mode is "updateonly": if options.db_update_keys is None: raise ParseError("db_update_keys is required when db_update_mode " "is 'updateonly'", cls.optparser.format_help()) elif options.db_update_keys is not None: raise ParseError("db_update_keys is used only when db_update_mode " "is 'updateonly'", cls.optparser.format_help()) except OptionParsingError as e: raise ParseError(e.msg, cls.optparser.format_help()) except OptionParsingExit as e: return None v = vars(options) v["command_type"] = "DbExportCommand" return v
Parse command line arguments to construct a dictionary of command parameters that can be used to create a command Args: `args`: sequence of arguments Returns: Dictionary that can be used in create method Raises: ParseError: when the arguments are not correct
juraj-google-style
async def await_rpc(self, address, rpc_id, *args, **kwargs): self.verify_calling_thread(True, 'await_rpc must be called from **inside** the event loop') if isinstance(rpc_id, RPCDeclaration): arg_format = rpc_id.arg_format resp_format = rpc_id.resp_format rpc_id = rpc_id.rpc_id else: arg_format = kwargs.get('arg_format', None) resp_format = kwargs.get('resp_format', None) arg_payload = b'' if (arg_format is not None): arg_payload = pack_rpc_payload(arg_format, args) self._logger.debug('Sending rpc to %d:%04X, payload=%s', address, rpc_id, args) response = AwaitableResponse() self._rpc_queue.put_rpc(address, rpc_id, arg_payload, response) try: resp_payload = (await response.wait(1.0)) except RPCRuntimeError as err: resp_payload = err.binary_error if (resp_format is None): return [] resp = unpack_rpc_payload(resp_format, resp_payload) return resp
Send an RPC from inside the EmulationLoop. This is the primary method by which tasks running inside the EmulationLoop dispatch RPCs. The RPC is added to the queue of waiting RPCs to be drained by the RPC dispatch task and this coroutine will block until it finishes. **This method must only be called from inside the EmulationLoop** Args: address (int): The address of the tile that has the RPC. rpc_id (int): The 16-bit id of the rpc we want to call *args: Any required arguments for the RPC as python objects. **kwargs: Only two keyword arguments are supported: - arg_format: A format specifier for the argument list - resp_format: A format specifier for the result Returns: list: A list of the decoded response members from the RPC.
codesearchnet
def __init__(self, *args, pubdate=None, excerpt=None, tags=None, allow_comments=True, **kwargs): super().__init__(*args, **kwargs) self.excerpt = excerpt or _get_excerpt(self.body) self.pubdate = pubdate self.tags = tags or [] self.allow_comments = allow_comments
Constructor. Also see Entry.__init__. Args: pubdate (datetime): When the post was published. excerpt (str): An excerpt of the post body. tags (list): A list of Tag objects associated with the post. allow_comments (bool): Whether to allow comments. Default True.
juraj-google-style
def get_file(self, filename, scope='all'): filename = os.path.abspath(os.path.join(self.root, filename)) layouts = self._get_layouts_in_scope(scope) for ly in layouts: if (filename in ly.files): return ly.files[filename] return None
Returns the BIDSFile object with the specified path. Args: filename (str): The path of the file to retrieve. Must be either an absolute path, or relative to the root of this BIDSLayout. scope (str, list): Scope of the search space. If passed, only BIDSLayouts that match the specified scope will be searched. See BIDSLayout docstring for valid values. Returns: A BIDSFile, or None if no match was found.
codesearchnet
def uncheck(self, locator=None, allow_label_click=None, **kwargs): self._check_with_label( "checkbox", False, locator=locator, allow_label_click=allow_label_click, **kwargs)
Find a check box and uncheck it. The check box can be found via name, id, or label text. :: page.uncheck("German") Args: locator (str, optional): Which check box to uncheck. allow_label_click (bool, optional): Attempt to click the label to toggle state if element is non-visible. Defaults to :data:`capybara.automatic_label_click`. **kwargs: Arbitrary keyword arguments for :class:`SelectorQuery`.
juraj-google-style
def has_request(self, request): queue_item = QueueItem(request, Response(request.url)) key = queue_item.get_hash() for status in QueueItem.STATUSES: if key in self.__get_var("items_" + status).keys(): return True return False
Check if the given request already exists in the queue. Args: request (:class:`nyawc.http.Request`): The request to check. Returns: bool: True if already exists, False otherwise.
juraj-google-style
def find_module_defining_flag(self, flagname, default=None): registered_flag = self._flags().get(flagname) if registered_flag is None: return default for module, flags in six.iteritems(self.flags_by_module_dict()): for flag in flags: if (flag.name == registered_flag.name and flag.short_name == registered_flag.short_name): return module return default
Return the name of the module defining this flag, or default. Args: flagname: str, name of the flag to lookup. default: Value to return if flagname is not defined. Defaults to None. Returns: The name of the module which registered the flag with this name. If no such module exists (i.e. no flag with this name exists), we return default.
juraj-google-style
def delete(self, reference, option=None): write_pb = _helpers.pb_for_delete(reference._document_path, option) self._add_write_pbs([write_pb])
Add a "change" to delete a document. See :meth:`~.firestore_v1beta1.document.DocumentReference.delete` for more information on how ``option`` determines how the change is applied. Args: reference (~.firestore_v1beta1.document.DocumentReference): A document reference that will be deleted in this batch. option (Optional[~.firestore_v1beta1.client.WriteOption]): A write option to make assertions / preconditions on the server state of the document before applying changes.
codesearchnet
def get_output_info_dict(self, signature=None): return self._spec.get_output_info_dict(signature=signature, tags=self._tags)
Describes the outputs provided by a signature. Args: signature: A string with the signature to get outputs information for. If None, the default signature is used if defined. Returns: The result of ModuleSpec.get_output_info_dict() for the given signature, and the graph variant selected by `tags` when this Module was initialized. Raises: KeyError: if there is no such signature.
juraj-google-style
def _update_state_from_shard_states(self, state, shard_states, control): (state.active_shards, state.aborted_shards, state.failed_shards) = (0, 0, 0) total_shards = 0 processed_counts = [] processed_status = [] state.counters_map.clear() for s in shard_states: total_shards += 1 status = 'unknown' if s.active: state.active_shards += 1 status = 'running' if (s.result_status == model.ShardState.RESULT_SUCCESS): status = 'success' elif (s.result_status == model.ShardState.RESULT_ABORTED): state.aborted_shards += 1 status = 'aborted' elif (s.result_status == model.ShardState.RESULT_FAILED): state.failed_shards += 1 status = 'failed' state.counters_map.add_map(s.counters_map) processed_counts.append(s.counters_map.get(context.COUNTER_MAPPER_CALLS)) processed_status.append(status) state.set_processed_counts(processed_counts, processed_status) state.last_poll_time = datetime.datetime.utcfromtimestamp(self._time()) spec = state.mapreduce_spec if (total_shards != spec.mapper.shard_count): logging.error("Found %d shard states. Expect %d. Issuing abort command to job '%s'", total_shards, spec.mapper.shard_count, spec.mapreduce_id) model.MapreduceControl.abort(spec.mapreduce_id) state.active = bool(state.active_shards) if ((not control) and (state.failed_shards or state.aborted_shards)): model.MapreduceControl.abort(spec.mapreduce_id) if (not state.active): if (state.failed_shards or (not total_shards)): state.result_status = model.MapreduceState.RESULT_FAILED elif state.aborted_shards: state.result_status = model.MapreduceState.RESULT_ABORTED else: state.result_status = model.MapreduceState.RESULT_SUCCESS self._finalize_outputs(spec, state) self._finalize_job(spec, state) else: @db.transactional(retries=5) def _put_state(): 'The helper for storing the state.' fresh_state = model.MapreduceState.get_by_job_id(spec.mapreduce_id) if (not fresh_state.active): logging.warning('Job %s is not active. Looks like spurious task execution. Dropping controller task.', spec.mapreduce_id) return config = util.create_datastore_write_config(spec) state.put(config=config) _put_state()
Update the mapreduce state by examining shard states. Args: state: current mapreduce state as MapreduceState. shard_states: an iterator over shard states. control: model.MapreduceControl entity.
codesearchnet
def update(self, media_blob: genai_types.Blob): if self.generation_start_sec is not None and self.ttft_sec is None: self.time_audio_start = time.perf_counter() self.ttft_sec = self.time_audio_start - self.generation_start_sec self.audio_duration += audio_duration_sec(media_blob.data, RECEIVE_SAMPLE_RATE)
Updates the generation request with the new media data. Args: media_blob: The new media data.
github-repos
def _group(self, group_data): if isinstance(group_data, dict): xid = group_data.get('xid') else: xid = group_data.xid if (self.groups.get(xid) is not None): group_data = self.groups.get(xid) elif (self.groups_shelf.get(xid) is not None): group_data = self.groups_shelf.get(xid) else: self.groups[xid] = group_data return group_data
Return a previously stored group or a new group. Args: group_data (dict|obj): A Group dict or an instance of a Group object. Returns: dict|obj: The new Group dict/object or the previously stored dict/object.
codesearchnet
def remove_line(self, section, line): try: s = self._get_section(section, create=False) except KeyError: return 0 return s.remove(line)
Remove all instances of a line from a section. Args: section: Name of the section to remove the line from. line: The line to remove. Returns: int: the number of lines removed.
codesearchnet
def list_key_values(input: t.Dict[str, str]) -> None: for cmd, desc in input.items(): print(f'{cmd} => {desc}')
Display key-value pairs from a dictionary. Args: input (Dict[str, str]): The dictionary containing key-value pairs.
github-repos
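A quick usage sketch of the helper above with illustrative command descriptions.

commands = {
    "init": "initialise a new project",
    "run": "execute the configured pipeline",
}
list_key_values(commands)
# init => initialise a new project
# run => execute the configured pipeline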
def peek_step(self, val: ArrayValue, sn: 'DataNode') -> Tuple[(ObjectValue, 'DataNode')]: keys = self.parse_keys(sn) for en in val: flag = True try: for k in keys: if (en[k] != keys[k]): flag = False break except KeyError: continue if flag: return (en, sn) return (None, sn)
Return the entry addressed by the receiver, together with its schema node. Args: val: Current value (array). sn: Current schema node. Returns: A tuple of the matching entry (or None if no entry matches) and the schema node.
codesearchnet
def _initialize_pvariables(self, pvariables: Dict[(str, PVariable)], ordering: List[str], initializer: Optional[InitializerList]=None) -> List[Tuple[(str, TensorFluent)]]: if (initializer is not None): init = dict() for ((name, args), value) in initializer: arity = (len(args) if (args is not None) else 0) name = '{}/{}'.format(name, arity) init[name] = init.get(name, []) init[name].append((args, value)) fluents = [] for name in ordering: pvar = pvariables[name] shape = self.rddl._param_types_to_shape(pvar.param_types) dtype = utils.range_type_to_dtype(pvar.range) fluent = np.full(shape, pvar.default) if (initializer is not None): for (args, val) in init.get(name, []): if (args is not None): idx = [] for (ptype, arg) in zip(pvar.param_types, args): idx.append(self.rddl.object_table[ptype]['idx'][arg]) idx = tuple(idx) fluent[idx] = val else: fluent = val with self.graph.as_default(): t = tf.constant(fluent, dtype=dtype, name=utils.identifier(name)) scope = ([None] * len(t.shape)) fluent = TensorFluent(t, scope, batch=False) fluent_pair = (name, fluent) fluents.append(fluent_pair) return fluents
Instantiates `pvariables` given an initialization list and returns a list of TensorFluents in the given `ordering`. Returns: List[Tuple[str, TensorFluent]]: A list of pairs of fluent name and fluent tensor.
codesearchnet
def parse(self, body): if isinstance(body, six.string_types): body = json.loads(body) version = body['version'] self.version = version session = body['session'] self.session.new = session['new'] self.session.session_id = session['sessionId'] application_id = session['application']['applicationId'] self.session.application.application_id = application_id if 'attributes' in session and session['attributes']: self.session.attributes = session.get('attributes', {}) else: self.session.attributes = {} self.session.user.user_id = session['user']['userId'] self.session.user.access_token = session['user'].get('accessToken', 0) request = body['request'] if request['type'] == 'LaunchRequest': self.request = LaunchRequest() elif request['type'] == 'IntentRequest': self.request = IntentRequest() self.request.intent = Intent() intent = request['intent'] self.request.intent.name = intent['name'] if 'slots' in intent and intent['slots']: for name, slot in six.iteritems(intent['slots']): self.request.intent.slots[name] = Slot() self.request.intent.slots[name].name = slot['name'] self.request.intent.slots[name].value = slot.get('value') elif request['type'] == 'SessionEndedRequest': self.request = SessionEndedRequest() self.request.reason = request['reason'] self.request.type = request['type'] self.request.request_id = request['requestId'] self.request.timestamp = request['timestamp'] return self
Parse JSON request, storing content in object attributes. Args: body: str. HTTP request body. Returns: self
juraj-google-style
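A hedged sketch of feeding parse() a minimal LaunchRequest body; `alexa_request` stands in for an instance of the surrounding request class, and all IDs are placeholders.

body = {
    "version": "1.0",
    "session": {
        "new": True,
        "sessionId": "sid-123",
        "application": {"applicationId": "amzn1.ask.skill.placeholder"},
        "user": {"userId": "amzn1.ask.account.placeholder"},
    },
    "request": {
        "type": "LaunchRequest",
        "requestId": "req-123",
        "timestamp": "2015-05-13T12:34:56Z",
    },
}
parsed = alexa_request.parse(body)   # hypothetical instance of the class defining parse()
print(parsed.request.type)           # 'LaunchRequest'
print(parsed.session.session_id)     # 'sid-123'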
def exists(self, pattern, **match_kwargs): ret = self.match(pattern, **match_kwargs) if (ret is None): return None if (not ret.matched): return None return ret
Check if the image exists on the screen. Returns: The FindPoint if a match exists, or None if there is no match or result.confidence < self.image_match_threshold.
codesearchnet
def register(name): if not isinstance(name, str): raise TypeError('Expected `name` to be a string; got %r' % (name,)) if not _REGISTERED_NAME_RE.match(name): raise ValueError("Registered name must have the form '{project_name}.{type_name}' (e.g. 'my_project.MyTypeSpec'); got %r." % name) def decorator_fn(cls): if not (isinstance(cls, type) and issubclass(cls, internal.TypeSpec)): raise TypeError('Expected `cls` to be a TypeSpec; got %r' % (cls,)) if cls in _TYPE_SPEC_TO_NAME: raise ValueError('Class %s.%s has already been registered with name %s.' % (cls.__module__, cls.__name__, _TYPE_SPEC_TO_NAME[cls])) if name in _NAME_TO_TYPE_SPEC: raise ValueError('Name %s has already been registered for class %s.%s.' % (name, _NAME_TO_TYPE_SPEC[name].__module__, _NAME_TO_TYPE_SPEC[name].__name__)) _TYPE_SPEC_TO_NAME[cls] = name _NAME_TO_TYPE_SPEC[name] = cls return cls return decorator_fn
Decorator used to register a globally unique name for a TypeSpec subclass. Args: name: The name of the type spec. Must be globally unique. Must have the form `"{project_name}.{type_name}"`. E.g. `"my_project.MyTypeSpec"`. Returns: A class decorator that registers the decorated class with the given name.
github-repos
def reload_config(self, dockercfg_path=None): self._auth_configs = auth.load_config( dockercfg_path, credstore_env=self.credstore_env )
Force a reload of the auth configuration. Args: dockercfg_path (str): Use a custom path for the Docker config file (default ``$HOME/.docker/config.json`` if present, otherwise ``$HOME/.dockercfg``). Returns: None
juraj-google-style
def load_ner_model(lang='en', version='2'): src_dir = 'ner{}'.format(version) p = locate_resource(src_dir, lang) fh = _open(p) try: return pickle.load(fh) except UnicodeDecodeError: fh.seek(0) return pickle.load(fh, encoding='latin1')
Return named entity extractor parameters for `lang` and version `version`. Args: lang (string): language code. version (string): version of the parameters to be used.
codesearchnet
def group_modes(modes): if len(modes) > 0: previous = modes[0] grouped = [] for changep in modes[1:]: if changep['label'] != previous['label']: previous['to'] = changep['from'] grouped.append(previous) previous = changep previous['to'] = modes[-1]['to'] grouped.append(previous) return grouped else: return modes
Groups consecutive transportation modes with the same label into one. Args: modes (:obj:`list` of :obj:`dict`) Returns: :obj:`list` of :obj:`dict`
juraj-google-style
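A small worked example of the grouping logic above; the segments are made up, and note that the helper mutates the input dicts in place.

modes = [
    {"label": "walk", "from": 0,  "to": 10},
    {"label": "walk", "from": 10, "to": 20},
    {"label": "bus",  "from": 20, "to": 45},
]
print(group_modes(modes))
# [{'label': 'walk', 'from': 0, 'to': 20}, {'label': 'bus', 'from': 20, 'to': 45}]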
def _astimezone_ts(self, timezone): if (self.created.tzinfo is timezone): return self else: nw_obj = Timestamps(((None,) * 4)) nw_obj.created = self.created.astimezone(timezone) nw_obj.changed = self.changed.astimezone(timezone) nw_obj.mft_changed = self.mft_changed.astimezone(timezone) nw_obj.accessed = self.accessed.astimezone(timezone) return nw_obj
Changes the time zones of all timestamps. Receives a new time zone and applies it to all timestamps, if necessary. Args: timezone (:obj:`tzinfo`): Time zone to be applied. Returns: A new ``Timestamps`` object if the time zone changes, otherwise returns ``self``.
codesearchnet
def last_updated(self, path): raise NotImplementedError
Get UNIX Epoch time in seconds on the FileSystem. Args: path: string path of file. Returns: float UNIX Epoch time Raises: ``BeamIOError``: if path doesn't exist.
github-repos
def get_metadata(self, handle): response = self.open_url(url=handle, suffix='.metadata') try: return json.load(response) finally: response.close()
Returns the associated metadata info for the given handle, the metadata file must exist (``handle + '.metadata'``). If the given handle has an ``.xz`` extension, it will get removed when calculating the handle metadata path Args: handle (str): Path to the template to get the metadata from Returns: dict: Metadata for the given handle
juraj-google-style
def merge_dictionaries(dicts, merge_lists=False): dict1 = dicts[0] for other_dict in dicts[1:]: merge_two_dictionaries(dict1, other_dict, merge_lists=merge_lists) return dict1
Merges all dictionaries in dicts into a single dictionary and returns the result. Args: dicts (List[DictUpperBound]): Dictionaries to merge into the first one in the list. merge_lists (bool): Whether to merge lists (True) or replace lists (False). Defaults to False. Returns: DictUpperBound: Merged dictionary.
juraj-google-style
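A usage sketch under the assumption that the underlying merge_two_dictionaries helper merges nested keys and concatenates lists when merge_lists=True; note that the first dictionary in the list is modified in place and returned.

base  = {"name": "survey", "tags": ["health"]}
extra = {"year": 2019, "tags": ["nutrition"]}
merged = merge_dictionaries([base, extra], merge_lists=True)
print(merged)
# expected under the assumption above:
# {'name': 'survey', 'tags': ['health', 'nutrition'], 'year': 2019}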
def appendDirectory(self, directory, projectFilePath): lines = [] with open(projectFilePath, 'r') as original: for l in original: lines.append(l) with open(projectFilePath, 'w') as new: for line in lines: card = {} try: card = self._extractCard(line) except: card = self._extractDirectoryCard(line) numSpaces = max(2, (25 - len(card['name']))) if (card['value'] is None): rewriteLine = ('%s\n' % card['name']) elif (card['name'] == 'WMS'): rewriteLine = ('%s %s\n' % (card['name'], card['value'])) elif (card['name'] == 'PROJECT_PATH'): filePath = ('"%s"' % os.path.normpath(directory)) rewriteLine = ('%s%s%s\n' % (card['name'], (' ' * numSpaces), filePath)) elif ('"' in card['value']): filename = card['value'].strip('"') filePath = ('"%s"' % os.path.join(directory, filename)) rewriteLine = ('%s%s%s\n' % (card['name'], (' ' * numSpaces), filePath)) else: rewriteLine = ('%s%s%s\n' % (card['name'], (' ' * numSpaces), card['value'])) new.write(rewriteLine)
Append directory to relative paths in project file. By default, the project file paths are read and written as relative paths. Use this method to prepend a directory to all the paths in the project file. Args: directory (str): Directory path to prepend to file paths in project file. projectFilePath (str): Path to project file that will be modified.
codesearchnet
def _process_kwargs_parameters(sig, func, parent_class, model_name_lowercase, documented_kwargs, indent_level, undocumented_parameters): docstring = '' source_args_dict = source_args_doc(ImageProcessorArgs) unroll_kwargs = func.__name__ in UNROLL_KWARGS_METHODS if not unroll_kwargs and parent_class is not None: unroll_kwargs = any((unroll_kwargs_class in parent_class.__name__ for unroll_kwargs_class in UNROLL_KWARGS_CLASSES)) if unroll_kwargs: kwargs_parameters = [kwargs_param for _, kwargs_param in sig.parameters.items() if kwargs_param.kind == inspect.Parameter.VAR_KEYWORD] for kwarg_param in kwargs_parameters: if kwarg_param.annotation == inspect.Parameter.empty: continue kwargs_documentation = kwarg_param.annotation.__args__[0].__doc__ if kwargs_documentation is not None: documented_kwargs, _ = parse_docstring(kwargs_documentation) if model_name_lowercase is not None: documented_kwargs = format_args_docstring(documented_kwargs, model_name_lowercase) for param_name, param_type_annotation in kwarg_param.annotation.__args__[0].__annotations__.items(): param_type = str(param_type_annotation) optional = False if 'typing' in param_type: param_type = ''.join(param_type.split('typing.')).replace('transformers.', '~') else: param_type = f'{param_type.replace('transformers.', '~').replace('builtins', '')}.{param_name}' if 'ForwardRef' in param_type: param_type = re.sub("ForwardRef\\('([\\w.]+)'\\)", '\\1', param_type) if 'Optional' in param_type: param_type = re.sub('Optional\\[(.*?)\\]', '\\1', param_type) optional = True param_default = '' if parent_class is not None: param_default = str(getattr(parent_class, param_name, '')) param_default = f', defaults to `{param_default}`' if param_default != '' else '' param_type, optional_string, shape_string, additional_info, description, is_documented = _get_parameter_info(param_name, documented_kwargs, source_args_dict, param_type, optional) if is_documented: if param_type == '': print(f'🚨 {param_name} for {kwarg_param.annotation.__args__[0].__qualname__} in file {func.__code__.co_filename} has no type') param_type = param_type if '`' in param_type else f'`{param_type}`' if additional_info: docstring += set_min_indent(f'{param_name} ({param_type}{additional_info}):{description}', indent_level + 8) else: docstring += set_min_indent(f'{param_name} ({param_type}{shape_string}{optional_string}{param_default}):{description}', indent_level + 8) else: undocumented_parameters.append(f'🚨 `{param_name}` is part of {kwarg_param.annotation.__args__[0].__qualname__}, but not documented. Make sure to add it to the docstring of the function in {func.__code__.co_filename}.') return docstring
Process **kwargs parameters if needed. Args: sig (`inspect.Signature`): Function signature func (`function`): Function the parameters belong to parent_class (`class`): Parent class of the function model_name_lowercase (`str`): Lowercase model name documented_kwargs (`dict`): Dictionary of kwargs that are already documented indent_level (`int`): Indentation level undocumented_parameters (`list`): List to append undocumented parameters to
github-repos
def perform_extract_job(self, destination, job_id, table_reference, destination_format, project=None, include_header=True, compression=ExportCompression.NONE, use_avro_logical_types=False, job_labels=None): job_project = project or table_reference.projectId job_reference = bigquery.JobReference(jobId=job_id, projectId=job_project) request = bigquery.BigqueryJobsInsertRequest(projectId=job_project, job=bigquery.Job(configuration=bigquery.JobConfiguration(extract=bigquery.JobConfigurationExtract(destinationUris=destination, sourceTable=table_reference, printHeader=include_header, destinationFormat=destination_format, compression=compression, useAvroLogicalTypes=use_avro_logical_types), labels=_build_job_labels(job_labels)), jobReference=job_reference)) return self._start_job(request).jobReference
Starts a job to export data from BigQuery. Returns: bigquery.JobReference with the information about the job that was started.
github-repos
def decode(self, codes): assert codes.ndim == 2 N, M = codes.shape assert M == self.M assert codes.dtype == self.code_dtype vecs = np.empty((N, self.Ds * self.M), dtype=np.float32) for m in range(self.M): vecs[:, m * self.Ds : (m+1) * self.Ds] = self.codewords[m][codes[:, m], :] return vecs
Given PQ-codes, reconstruct original D-dimensional vectors approximately by fetching the codewords. Args: codes (np.ndarray): PQ-codes with shape=(N, M) and dtype=self.code_dtype. Each row is a PQ-code. Returns: np.ndarray: Reconstructed vectors with shape=(N, D) and dtype=np.float32.
juraj-google-style
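A hedged round-trip sketch assuming a nanopq-style product quantizer that has already been fitted; the training data and shapes are illustrative.

import numpy as np
import nanopq

X = np.random.random((1000, 128)).astype(np.float32)
pq = nanopq.PQ(M=8)            # 8 sub-spaces, so Ds = 128 / 8 = 16
pq.fit(X)
codes = pq.encode(X)           # shape (1000, 8), dtype = pq.code_dtype
recon = pq.decode(codes)       # shape (1000, 128), float32 approximation of X
print(np.mean((X - recon) ** 2))   # average reconstruction error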
def set_size(self, height=220, width=350, height_threshold=120, width_threshold=160): self.set_integer("height", height) self.set_integer("width", width) self.set_integer("small_height_threshold", height_threshold) self.set_integer("small_width_threshold", width_threshold)
Set the size of the chart. Args: height (int): height in pixels. width (int): width in pixels. height_threshold (int): height threshold in pixels. width_threshold (int): width threshold in pixels.
juraj-google-style
def window_design(self, window_length, beta): self.window = np.kaiser(window_length, beta) return self.window
Kaiser window design Args: window_length: Length of the window in number of samples beta: Beta value for Kaiser window design Returns: window: Window designed using the beta and length provided as inputs
juraj-google-style
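A short sketch of calling the designer; `designer` is a hypothetical instance of the class that owns window_design, and beta=8.6 is simply a common choice that approximates a Blackman window.

import numpy as np

win = designer.window_design(window_length=64, beta=8.6)   # hypothetical instance
assert win.shape == (64,)
print(win.max(), win[0])   # peak near 1.0 at the centre, small values at the edges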
def __init__(self, model_data, image, env=None): self.model_data = model_data self.image = image self.env = env
Create a definition of a model which can be part of an Inference Pipeline Args: model_data (str): The S3 location of a SageMaker model data ``.tar.gz`` file. image (str): A Docker image URI. env (dict[str, str]): Environment variables to run with ``image`` when hosted in SageMaker (default: None).
juraj-google-style
def load_hdf5(path): with h5py.File(path, 'r') as f: is_sparse = f['issparse'][...] if is_sparse: shape = tuple(f['shape'][...]) data = f['data'][...] indices = f['indices'][...] indptr = f['indptr'][...] X = sparse.csr_matrix((data, indices, indptr), shape=shape) else: X = f['data'][...] y = f['target'][...] return X, y
Load data from an HDF5 file. Args: path (str): A path to the HDF5 format file containing data. Returns: Data matrix X (dense, or CSR sparse if the file was written as sparse) and target vector y.
juraj-google-style
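A round-trip sketch: write a small dense file in the layout load_hdf5 expects ('issparse', 'data', 'target', plus the CSR components when sparse), then read it back; the filename is arbitrary.

import h5py
import numpy as np

with h5py.File('toy.h5', 'w') as f:
    f['issparse'] = 0
    f['data'] = np.random.random((10, 3))
    f['target'] = np.arange(10)

X, y = load_hdf5('toy.h5')
print(X.shape, y.shape)   # (10, 3) (10,)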
def get_version(self, id=None, endpoint=None): return self._call_endpoint(GET_VERSION, id=id, endpoint=endpoint)
Get the current version of the endpoint. Note: Not all endpoints currently implement this method Args: id: (int, optional) id to use for response tracking endpoint: (RPCEndpoint, optional) endpoint to specify to use Returns: json object of the result or the error encountered in the RPC call
juraj-google-style
def get_dimension(self, key, value, **kwargs): return self._get_object_by_name(self._DIMENSION_ENDPOINT_SUFFIX, '{0}/{1}'.format(key, value), **kwargs)
Get a dimension by key and value. Args: key (string): key of the dimension. value (string): value of the dimension. Returns: dictionary of response.
codesearchnet
def set_checkbox_value(w, value): save = w.blockSignals(True) try: w.setChecked(bool(value)) finally: w.blockSignals(save)
Sets a checkbox's "checked" property, with signal blocking and value tolerance. Args: w: QCheckBox instance. value: something that can be converted to a bool.
juraj-google-style
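A hedged Qt sketch; the original may target PyQt4 or PySide, so PyQt5 here is an assumption, and any truthy value is coerced to a checked state.

from PyQt5.QtWidgets import QApplication, QCheckBox

app = QApplication([])
cb = QCheckBox("Enable logging")
set_checkbox_value(cb, "yes")   # truthy string -> checked, with signals blocked during the change
print(cb.isChecked())           # True
set_checkbox_value(cb, 0)       # falsy -> unchecked
print(cb.isChecked())           # False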