Dataset columns: code (string, 20 to 4.93k chars), docstring (string, 33 to 1.27k chars), source (string, 3 classes).
def create_latin_hypercube_samples(order, dim=1): randoms = numpy.random.random(order*dim).reshape((dim, order)) for dim_ in range(dim): perm = numpy.random.permutation(order) randoms[dim_] = (perm + randoms[dim_])/order return randoms
Latin Hypercube sampling. Args: order (int): The order of the latin hyper-cube. Defines the number of samples. dim (int): The number of dimensions in the latin hyper-cube. Returns (numpy.ndarray): Latin hyper-cube with ``shape == (dim, order)``.
juraj-google-style
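A minimal usage sketch of the sampler above (assuming `numpy` is imported at module level, as the function body expects); each row is one dimension, and every dimension places exactly one point in each of the `order` strata.
import numpy

numpy.random.seed(0)
samples = create_latin_hypercube_samples(order=5, dim=2)
print(samples.shape)  # (2, 5)
# Flooring order * samples recovers each point's stratum index, so every row
# covers the strata 0..4 exactly once.
print(numpy.sort(numpy.floor(samples * 5).astype(int), axis=1))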
def get_attached_bytes_map(meta_graph): result = {} if ATTACHMENT_COLLECTION_SAVED not in meta_graph.collection_def: return result collection_def = meta_graph.collection_def[ATTACHMENT_COLLECTION_SAVED] if collection_def.WhichOneof("kind") != "bytes_list": raise ValueError( "Internal CollectionDef for attached messages has kind %s, " "expected bytes_list" % collection_def.WhichOneof("kind")) attachment = module_attachment_pb2.ModuleAttachment() for value in collection_def.bytes_list.value: attachment.ParseFromString(value) result[attachment.key] = attachment.value return result
Returns the dict of ModuleAttachments stored in `meta_graph`. Args: meta_graph: A MetaGraphDef, as built by SavedModelHandler.add_graph_copy() from some graph. Returns: A dict, containing the `(key, bytes)` items passed to `attach_bytes()` when the graph had been built. Raises: ValueError: if `meta-graph` is malformed.
juraj-google-style
def get_object_metadata(self, request): kwargs = {'Bucket': request.bucket, 'Key': request.object} try: boto_response = self.client.head_object(**kwargs) except Exception as e: raise messages.S3ClientError(str(e), get_http_error_code(e)) item = messages.Item(boto_response['ETag'], request.object, boto_response['LastModified'], boto_response['ContentLength'], boto_response['ContentType']) return item
Retrieves an object's metadata. Args: request: (GetRequest) input message Returns: (Object) The response message.
github-repos
def longest_one_seg_prefix(self, word): for i in range(self.longest_seg, 0, -1): if word[:i] in self.seg_dict: return word[:i] return ''
Return longest Unicode IPA prefix of a word Args: word (unicode): input word as Unicode IPA string Returns: unicode: longest single-segment prefix of `word` in database
juraj-google-style
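A self-contained sketch of how the prefix lookup behaves, using a hypothetical stand-in object; in the real class, `longest_seg` and `seg_dict` come from the loaded IPA segment inventory.
class _FakeSegmentInventory(object):
    # Hypothetical inventory containing the segments 't', 'ts' and 'a'.
    longest_seg = 2
    seg_dict = {'t': None, 'ts': None, 'a': None}

    def longest_one_seg_prefix(self, word):
        # Same lookup as above: try the longest candidate prefix first.
        for i in range(self.longest_seg, 0, -1):
            if word[:i] in self.seg_dict:
                return word[:i]
        return ''

inv = _FakeSegmentInventory()
print(inv.longest_one_seg_prefix('tsa'))  # 'ts' (the longest matching prefix)
print(inv.longest_one_seg_prefix('xa'))   # ''   (no prefix in the inventory)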
def StartsWith(self, value): self._awql = self._CreateSingleValueCondition(value, 'STARTS_WITH') return self._query_builder
Sets the type of the WHERE clause as "starts with". Args: value: The value to be used in the WHERE condition. Returns: The query builder that this WHERE builder links to.
codesearchnet
def cumulative_distribution(self, X): self.check_fit() low_bounds = self.model.dataset.mean() - (5 * self.model.dataset.std()) result = [] for value in X: result.append(self.model.integrate_box_1d(low_bounds, value)) return np.array(result)
Computes the integral of a 1-D pdf between two bounds Args: X(numpy.array): Shaped (1, n), containing the datapoints. Returns: numpy.array: estimated cumulative distribution.
juraj-google-style
def Corr(poly, dist=None, **kws): if isinstance(poly, distributions.Dist): (poly, dist) = (polynomials.variable(len(poly)), poly) else: poly = polynomials.Poly(poly) cov = Cov(poly, dist, **kws) var = numpy.diag(cov) vvar = numpy.sqrt(numpy.outer(var, var)) return numpy.where((vvar > 0), (cov / vvar), 0)
Correlation matrix of a distribution or polynomial. Args: poly (Poly, Dist): Input to take correlation on. Must have ``len(poly)>=2``. dist (Dist): Defines the space the correlation is taken on. It is ignored if ``poly`` is a distribution. Returns: (numpy.ndarray): Correlation matrix with ``correlation.shape == poly.shape+poly.shape``. Examples: >>> Z = chaospy.MvNormal([3, 4], [[2, .5], [.5, 1]]) >>> print(numpy.around(chaospy.Corr(Z), 4)) [[1. 0.3536] [0.3536 1. ]] >>> x = chaospy.variable() >>> Z = chaospy.Normal() >>> print(numpy.around(chaospy.Corr([x, x**2], Z), 4)) [[1. 0.] [0. 1.]]
codesearchnet
def _oai_to_xml(marc_oai): record = MARCXMLRecord(marc_oai) record.oai_marc = False return record.to_XML()
Convert OAI to MARC XML. Args: marc_oai (str): String with either OAI or MARC XML. Returns: str: String with MARC XML.
codesearchnet
def _add_tags(self, tags): alltagsadded = True for tag in tags: if (not self._add_tag(tag)): alltagsadded = False return alltagsadded
Add a list of tags. Args: tags (List[str]): list of tags to add Returns: bool: True if all tags were added or False if any was already present.
codesearchnet
def matches(self, stream): if self.match_type != stream.stream_type: return False if self.match_id is not None: return self.match_id == stream.stream_id if self.match_spec == DataStreamSelector.MatchUserOnly: return not stream.system elif self.match_spec == DataStreamSelector.MatchSystemOnly: return stream.system elif self.match_spec == DataStreamSelector.MatchUserAndBreaks: return (not stream.system) or (stream.system and (stream.stream_id in DataStream.KnownBreakStreams)) return True
Check if this selector matches the given stream Args: stream (DataStream): The stream to check Returns: bool: True if this selector matches the stream
juraj-google-style
def clean_title(title): date_pattern = re.compile(r'\W*' r'\d{1,2}' r'[/\-.]' r'\d{1,2}' r'[/\-.]' r'(?=\d*)(?:.{4}|.{2})' r'\W*') title = date_pattern.sub(' ', title) title = re.sub(r'\s{2,}', ' ', title) title = title.strip() return title
Clean title -> remove dates, remove duplicated spaces and strip title. Args: title (str): Title. Returns: str: Clean title without dates, duplicated, trailing and leading spaces.
juraj-google-style
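Two illustrative calls with made-up titles, assuming `re` is imported at module level as the function requires; the date and its surrounding separators collapse to a single space.
import re

print(clean_title("Weekly report 12/31/2018 final"))  # 'Weekly report final'
print(clean_title("Backup  03-07-19   notes"))        # 'Backup notes'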
def may_lose_data(self, unused_windowing: core.Windowing) -> DataLossReason: return DataLossReason.NO_POTENTIAL_LOSS
Returns whether or not this trigger could cause data loss. A trigger can cause data loss in the following scenarios: * The trigger has a chance to finish. For instance, AfterWatermark() without a late trigger would cause all late data to be lost. This scenario is only accounted for if the windowing strategy allows late data. Otherwise, the trigger is not responsible for the data loss. Note that this only returns the potential for loss. It does not mean that there will be data loss. It also only accounts for loss related to the trigger, not other potential causes. Args: windowing: The Windowing that this trigger belongs to. It does not need to be the top-level trigger. Returns: The DataLossReason. If there is no potential loss, DataLossReason.NO_POTENTIAL_LOSS is returned. Otherwise, all the potential reasons are returned as a single value.
github-repos
def get_tqdm_kwargs(**kwargs): default = dict(smoothing=0.5, dynamic_ncols=True, ascii=True, bar_format='{l_bar}{bar}|{n_fmt}/{total_fmt}[{elapsed}<{remaining},{rate_noinv_fmt}]') try: interval = float(os.environ['TENSORPACK_PROGRESS_REFRESH']) except KeyError: interval = _pick_tqdm_interval(kwargs.get('file', sys.stderr)) default['mininterval'] = interval default.update(kwargs) return default
Return default arguments to be used with tqdm. Args: kwargs: extra arguments to be used. Returns: dict:
codesearchnet
def execute_plan(plan): results = [action() for action in plan] return [result for result in results if actns.step_has_failed(result)]
Execute the plan. Args: plan (:obj:`list` of :obj:`actions.Step`): The plan we want to execute. Returns: (:obj:`list` of :obj:`actions.Step`): A list of failed actions.
juraj-google-style
def prepare_policy_template(self, scaling_type, period_sec, server_group): template_kwargs = { 'app': self.app, 'env': self.env, 'region': self.region, 'server_group': server_group, 'period_sec': period_sec, 'scaling_policy': self.settings['asg']['scaling_policy'], } if scaling_type == 'scale_up': template_kwargs['operation'] = 'increase' template_kwargs['comparisonOperator'] = 'GreaterThanThreshold' template_kwargs['scalingAdjustment'] = 1 elif scaling_type == 'scale_down': cur_threshold = int(self.settings['asg']['scaling_policy']['threshold']) self.settings['asg']['scaling_policy']['threshold'] = floor(cur_threshold * 0.5) template_kwargs['operation'] = 'decrease' template_kwargs['comparisonOperator'] = 'LessThanThreshold' template_kwargs['scalingAdjustment'] = -1 rendered_template = get_template(template_file='infrastructure/autoscaling_policy.json.j2', **template_kwargs) self.log.info('Creating a %s policy in %s for %s', scaling_type, self.env, self.app) wait_for_task(rendered_template) self.log.info('Successfully created a %s policy in %s for %s', scaling_type, self.env, self.app)
Renders scaling policy templates based on configs and variables. After rendering, POSTs the JSON to Spinnaker for creation. Args: scaling_type (str): ``scale_up`` or ``scale_down``. Type of policy. period_sec (int): Period of time to look at metrics for determining scale. server_group (str): The name of the server group to render the template for.
juraj-google-style
def similar_movies(self, **kwargs): path = self._get_id_path('similar_movies') response = self._GET(path, kwargs) self._set_attrs_to_values(response) return response
Get the similar movies for a specific movie id. Args: page: (optional) Minimum value of 1. Expected value is an integer. language: (optional) ISO 639-1 code. append_to_response: (optional) Comma separated, any movie method. Returns: A dict representation of the JSON returned from the API.
codesearchnet
async def _on_trace_notification(self, trace_event): conn_string = trace_event.get('connection_string') payload = trace_event.get('payload') (await self.notify_event(conn_string, 'trace', payload))
Callback function called when a trace chunk is received. Args: trace_event (dict): The received trace event, containing the connection string and trace payload
codesearchnet
def reset(self): self._will_reset() if self._has_backup: self._restore() else: _LIB.Reset(self._env) self._did_reset() self.done = False return self.screen
Resets the state of the environment and returns an initial observation. Returns: state (np.ndarray): the initial screen after the reset
codesearchnet
def usufyToXlsxExport(d, fPath): from pyexcel_xlsx import get_data try: oldData = {"OSRFramework": get_data(fPath) } except: oldData = {"OSRFramework":[]} tabularData = _generateTabularData(d, oldData) from pyexcel_xlsx import save_data save_data(fPath, tabularData)
Workaround to export to a .xlsx file. Args: ----- d: Data to export. fPath: File path for the output file.
juraj-google-style
def _OpenFileObject(self, path_spec): if not path_spec.HasParent(): raise errors.PathSpecError( 'Unsupported path specification without parent.') parent_path_spec = path_spec.parent file_system = resolver.Resolver.OpenFileSystem( parent_path_spec, resolver_context=self._resolver_context) segment_file_path_specs = ewf.EWFGlobPathSpec(file_system, path_spec) if not segment_file_path_specs: return None if parent_path_spec.IsSystemLevel(): self._resolver_context.SetMaximumNumberOfFileObjects( len(segment_file_path_specs) + 127) for segment_file_path_spec in segment_file_path_specs: file_object = resolver.Resolver.OpenFileObject( segment_file_path_spec, resolver_context=self._resolver_context) self._file_objects.append(file_object) ewf_handle = pyewf.handle() ewf_handle.open_file_objects(self._file_objects) return ewf_handle
Opens the file-like object defined by path specification. Args: path_spec (PathSpec): path specification. Returns: pyewf.handle: a file-like object or None. Raises: PathSpecError: if the path specification is invalid.
juraj-google-style
def onkeyup(self, key, keycode, ctrl, shift, alt): return (key, keycode, ctrl, shift, alt)
Called when the user types and releases a key. The widget should be able to receive the focus in order to emit the event. Assign a 'tabindex' attribute to make it focusable. Args: key (str): the character value keycode (str): the numeric char code ctrl: whether the CTRL modifier was active shift: whether the SHIFT modifier was active alt: whether the ALT modifier was active
juraj-google-style
def __init__(self, fn, job_id, *args, **kwargs): super(LambdaJob, self).__init__(job_id) self._future = _async.async.executor.submit(fn, *args, **kwargs)
Initializes an instance of a Job. Args: fn: the lambda function to execute asynchronously job_id: an optional ID for the job. If None, a UUID will be generated.
juraj-google-style
def size_filter(labeled_grid, min_size): out_grid = np.zeros(labeled_grid.shape, dtype=int) slices = find_objects(labeled_grid) j = 1 for (i, s) in enumerate(slices): box = labeled_grid[s] size = np.count_nonzero((box.ravel() == (i + 1))) if ((size >= min_size) and (box.shape[0] > 1) and (box.shape[1] > 1)): out_grid[np.where((labeled_grid == (i + 1)))] = j j += 1 return out_grid
Remove labeled objects that do not meet size threshold criteria. Args: labeled_grid: 2D output from label method. min_size: minimum size of object in pixels. Returns: labeled grid with smaller objects removed.
codesearchnet
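A small sketch of the filter, assuming `numpy` (as `np`) and `scipy.ndimage`'s `label`/`find_objects` are available in the module, which is what the function body implies.
import numpy as np
from scipy.ndimage import label, find_objects

field = np.zeros((6, 6), dtype=int)
field[1:4, 1:4] = 1   # a 9-pixel object
field[5, 5] = 1       # a 1-pixel object

labeled, _ = label(field)
filtered = size_filter(labeled, min_size=4)
# Only the 3x3 blob survives, relabeled as 1; the single pixel is removed.
print(np.unique(filtered))  # [0 1]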
def save_shared_file(self, sharekey=None): endpoint = '/api/sharedfile/{sharekey}/save'.format(sharekey=sharekey) data = self._make_request('POST', endpoint=endpoint, data=None) try: sf = SharedFile.NewFromJSON(data) sf.saved = True return sf except: raise Exception('{0}'.format(data['error']))
Save a SharedFile to your Shake. Args: sharekey (str): Sharekey for the file to save. Returns: SharedFile saved to your shake.
codesearchnet
def create_resource(self, resource_type=None, uri=None): if resource_type in [NonRDFSource, Binary, BasicContainer, DirectContainer, IndirectContainer]: return resource_type(self, uri) else: raise TypeError("expecting Resource type, such as BasicContainer or NonRDFSource")
Convenience method for creating a new resource Note: A Resource is instantiated, but is not yet created. Still requires resource.create(). Args: uri (rdflib.term.URIRef, str): uri of resource to create resource_type (NonRDFSource (Binary), BasicContainer, DirectContainer, IndirectContainer): resource type to create Returns: (NonRDFSource (Binary), BasicContainer, DirectContainer, IndirectContainer): instance of appropriate type
juraj-google-style
def get_file(profile, branch, file_path): branch_sha = get_branch_sha(profile, branch) tree = get_files_in_branch(profile, branch_sha) match = None for item in tree: if item.get("path") == file_path: match = item break file_sha = match.get("sha") blob = blobs.get_blob(profile, file_sha) content = blob.get("content") decoded_content = b64decode(content) return decoded_content.decode("utf-8")
Get a file from a branch. Args: profile A profile generated from ``simplygithub.authentication.profile``. Such profiles tell this module (i) the ``repo`` to connect to, and (ii) the ``token`` to connect with. branch The name of a branch. file_path The path of the file to fetch. Returns: The (UTF-8 encoded) content of the file, as a string.
juraj-google-style
def list_depth(list_, func=max, _depth=0): depth_list = [list_depth(item, func=func, _depth=(_depth + 1)) for item in list_ if util_type.is_listlike(item)] if (len(depth_list) > 0): return func(depth_list) else: return _depth
Returns the deepest level of nesting within a list of lists Args: list_ : a nested listlike object func : depth aggregation strategy (defaults to max) _depth : internal var Example: >>> # ENABLE_DOCTEST >>> from utool.util_list import * # NOQA >>> list_ = [[[[[1]]], [3]], [[1], [3]], [[1], [3]]] >>> result = (list_depth(list_, _depth=0)) >>> print(result)
codesearchnet
def _tavella_randell_nonuniform_grid(x_min, x_max, x_star, num_grid_points, alpha, dtype): c1 = tf.math.asinh((x_min - x_star) / alpha) c2 = tf.math.asinh((x_max - x_star) / alpha) i = tf.expand_dims(tf.range(0, num_grid_points + 1, 1, dtype=dtype), axis=-1) grid = x_star + alpha * tf.math.sinh(c2 * i / num_grid_points + c1 * (1 - i / num_grid_points)) return tf.transpose(grid)
Creates non-uniform grid clustered around a specified point. Args: x_min: A real `Tensor` of shape `(dim,)` specifying the lower limit of the grid. x_max: A real `Tensor` of same shape and dtype as `x_min` specifying the upper limit of the grid. x_star: A real `Tensor` of same shape and dtype as `x_min` specifying the location on the grid around which higher grid density is desired. num_grid_points: A scalar integer `Tensor` specifying the number of points on the grid. alpha: A scalar parameter which controls the degree of non-uniformity of the grid. The smaller values of `alpha` correspond to greater degree of clustering around `x_star`. dtype: The default dtype to use when converting values to `Tensor`s. Returns: A real `Tensor` of shape `(dim, num_grid_points+1)` containing the non-uniform grid.
github-repos
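A hypothetical call (assuming TensorFlow is importable as `tf`, which the helper already requires) that builds a 1-d grid on [0, 1] clustered around x_star = 0.5.
import tensorflow as tf

grid = _tavella_randell_nonuniform_grid(
    x_min=tf.constant([0.0], dtype=tf.float64),
    x_max=tf.constant([1.0], dtype=tf.float64),
    x_star=tf.constant([0.5], dtype=tf.float64),
    num_grid_points=10,
    alpha=0.2,
    dtype=tf.float64)
print(grid.shape)  # (1, 11); spacing between neighbouring points is smallest near 0.5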
def __call__(self, data: List) -> np.ndarray: max_length = max(len(x) for x in data) answer = np.zeros(shape=(len(data), max_length, self.dim), dtype=int) for i, sent in enumerate(data): for j, word in enumerate(sent): answer[i, j][self._get_word_indexes(word)] = 1 return answer
Transforms words to one-hot encoding according to the dictionary. Args: data: the batch of words Returns: a 3D array. answer[i][j][k] = 1 iff data[i][j] is the k-th word in the dictionary.
juraj-google-style
def object_upload(self, bucket, key, content, content_type): args = {'uploadType': 'media', 'name': key} headers = {'Content-Type': content_type} url = Api._UPLOAD_ENDPOINT + (Api._OBJECT_PATH % (bucket, '')) return google.datalab.utils.Http.request(url, args=args, data=content, headers=headers, credentials=self._credentials, raw_response=True)
Writes text content to the object. Args: bucket: the name of the bucket containing the object. key: the key of the object to be written. content: the text content to be written. content_type: the type of text content. Raises: Exception if the object could not be written to.
juraj-google-style
def receive_bytes(self, data): i = 0 n = len(data) responses = [] while (i < n): if (not self._receiving): bytes_to_read = min((4 - self._header.tell()), (n - i)) self._header.write(data[i:(i + bytes_to_read)]) i += bytes_to_read if (self._header.tell() == 4): self._header.seek(0) nbytes = Int32.decode(self._header) self._rbuffer = KafkaBytes(nbytes) self._receiving = True elif (self._header.tell() > 4): raise Errors.KafkaError('this should not happen - are you threading?') if self._receiving: total_bytes = len(self._rbuffer) staged_bytes = self._rbuffer.tell() bytes_to_read = min((total_bytes - staged_bytes), (n - i)) self._rbuffer.write(data[i:(i + bytes_to_read)]) i += bytes_to_read staged_bytes = self._rbuffer.tell() if (staged_bytes > total_bytes): raise Errors.KafkaError('Receive buffer has more bytes than expected?') if (staged_bytes != total_bytes): break self._receiving = False self._rbuffer.seek(0) resp = self._process_response(self._rbuffer) responses.append(resp) self._reset_buffer() return responses
Process bytes received from the network. Arguments: data (bytes): any length bytes received from a network connection to a kafka broker. Returns: responses (list of (correlation_id, response)): any/all completed responses, decoded from bytes to python objects. Raises: KafkaProtocolError: if the bytes received could not be decoded. CorrelationIdError: if the response does not match the request correlation id.
codesearchnet
def get_token(self, token_name, project_name, dataset_name): url = self.url() + "/nd/resource/dataset/{}".format(dataset_name)\ + "/project/{}".format(project_name)\ + "/token/{}/".format(token_name) req = self.remote_utils.get_url(url) if req.status_code != 200: raise RemoteDataUploadError('Could not find {}'.format(req.text)) else: return req.json()
Get a token with the given parameters. Arguments: token_name (str): Token name project_name (str): Project name dataset_name (str): Dataset name the project is based on Returns: dict: Token info
juraj-google-style
def unpack_small_tensors(tower_grads, packing): if (not packing): return tower_grads new_tower_grads = [] num_devices = len(tower_grads) num_packed = (len(packing.keys()) // num_devices) for (dev_idx, gv_list) in enumerate(tower_grads): new_gv_list = gv_list[num_packed:] for i in xrange(0, num_packed): k = ('%d:%d' % (dev_idx, i)) gpt = packing[k] gv = unpack_grad_tuple(gv_list[i], gpt) for (gi, idx) in enumerate(gpt.indices): assert (idx == gpt.indices[gi]) new_gv_list.insert(idx, gv[gi]) new_tower_grads.append(new_gv_list) return new_tower_grads
Undo the structure alterations to tower_grads done by pack_small_tensors. Args: tower_grads: List of List of (grad, var) tuples. packing: A dict generated by pack_small_tensors describing the changes it made to tower_grads. Returns: new_tower_grads: identical to tower_grads except that concatenations of small tensors have been split apart and returned to their original positions, paired with their original variables.
codesearchnet
def console_get_alignment(con: tcod.console.Console) -> int: return int(lib.TCOD_console_get_alignment(_console(con)))
Return this consoles current alignment mode. Args: con (Console): Any Console instance. .. deprecated:: 8.5 Check :any:`Console.default_alignment` instead.
codesearchnet
def GetSources(self, event): if self.DATA_TYPE != event.data_type: raise errors.WrongFormatter('Unsupported data type: {0:s}.'.format( event.data_type)) return self.SOURCE_SHORT, self.SOURCE_LONG
Determines the short and long source for an event object. Args: event (EventObject): event. Returns: tuple(str, str): short and long source string. Raises: WrongFormatter: if the event object cannot be formatted by the formatter.
juraj-google-style
def create_dummy_object(name: str, backend_name: str) -> str: if name.isupper(): return DUMMY_CONSTANT.format(name) elif name.islower(): return DUMMY_FUNCTION.format(name, backend_name) else: return DUMMY_CLASS.format(name, backend_name)
Create the code for a dummy object. Args: name (`str`): The name of the object. backend_name (`str`): The name of the backend required for that object. Returns: `str`: The code of the dummy object.
github-repos
def get(query): conversion_funcs = _tensor_conversion_func_cache.get(query) if conversion_funcs is None: with _tensor_conversion_func_lock: conversion_funcs = _tensor_conversion_func_cache.get(query) if conversion_funcs is None: conversion_funcs = [] for _, funcs_at_priority in sorted(_tensor_conversion_func_registry.items()): conversion_funcs.extend(((base_type, conversion_func) for base_type, conversion_func in funcs_at_priority if issubclass(query, base_type))) _tensor_conversion_func_cache[query] = conversion_funcs return conversion_funcs
Get conversion function for objects of `cls`. Args: query: The type to query for. Returns: A list of conversion functions in increasing order of priority.
github-repos
def DecryptMessage(self, encrypted_response): try: response_comms = rdf_flows.ClientCommunication.FromSerializedString(encrypted_response) return self.DecodeMessages(response_comms) except (rdfvalue.DecodeError, type_info.TypeValueError, ValueError, AttributeError) as e: raise DecodingError(('Error while decrypting messages: %s' % e))
Decrypt the serialized, encrypted string. Args: encrypted_response: A serialized and encrypted string. Returns: a Packed_Message_List rdfvalue
codesearchnet
def update_memo(self, task_id, task, r): if not self.memoize or not task['memoize']: return if task['hashsum'] in self.memo_lookup_table: logger.info('Updating appCache entry with latest %s:%s call' % (task['func_name'], task_id)) self.memo_lookup_table[task['hashsum']] = r else: self.memo_lookup_table[task['hashsum']] = r
Updates the memoization lookup table with the result from a task. Args: - task_id (int): Integer task id - task (dict) : A task dict from dfk.tasks - r (Result future): Result future A warning is issued when a hash collision occurs during the update. This is not likely.
juraj-google-style
def fixed_padding(inputs, kernel_size, data_format): pad_total = (kernel_size - 1) pad_beg = (pad_total // 2) pad_end = (pad_total - pad_beg) if (data_format == 'channels_first'): padded_inputs = tf.pad(inputs, [[0, 0], [0, 0], [pad_beg, pad_end], [pad_beg, pad_end]]) else: padded_inputs = tf.pad(inputs, [[0, 0], [pad_beg, pad_end], [pad_beg, pad_end], [0, 0]]) return padded_inputs
Pads the input along the spatial dimensions independently of input size. Args: inputs: A tensor of size [batch, channels, height_in, width_in] or [batch, height_in, width_in, channels] depending on data_format. kernel_size: The kernel to be used in the conv2d or max_pool2d operation. Should be a positive integer. data_format: The input format ('channels_last' or 'channels_first'). Returns: A tensor with the same format as the input with the data either intact (if kernel_size == 1) or padded (if kernel_size > 1).
codesearchnet
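A quick sketch of the padding arithmetic (assuming TensorFlow and the standard `pad_total // 2` split reconstructed above): for a 3x3 kernel the total padding is 2, split as 1 before and 1 after each spatial dimension.
import tensorflow as tf

x = tf.ones([1, 8, 8, 3])  # NHWC input
y = fixed_padding(x, kernel_size=3, data_format='channels_last')
print(y.shape)  # (1, 10, 10, 3): one row/column of zeros added on every spatial edge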
def from_parameters(cls, parameters: Dict[str, Any], dna_spec: DNASpec, use_literal_values: bool=False) -> 'DNA': del use_literal_values return cls.from_dict(parameters, dna_spec)
Create DNA from parameters based on DNASpec. Deprecated: use `from_dict` instead. Args: parameters: A 1-depth dict of parameter names to parameter values. dna_spec: DNASpec to interpret the parameters. use_literal_values: If True, parameter values are literal values from DNASpec. Returns: DNA instance bound with the DNASpec. Raises: ValueError: If parameters are not aligned with DNA spec.
github-repos
def query(botcust2, message): logger.debug('Getting Mitsuku reply') params = {'botid': 'f6a012073e345a08', 'amp;skin': 'chat'} headers = {'Accept-Encoding': 'gzip, deflate, br', 'Accept-Language': 'en-US,en;q=0.8', 'Cache-Control': 'max-age=0', 'Connection': 'keep-alive', 'Content-Length': str((len(message) + 34)), 'Content-Type': 'application/x-www-form-urlencoded', 'Cookie': ('botcust2=' + botcust2), 'DNT': '1', 'Host': 'kakko.pandorabots.com', 'Origin': 'https: data = {'botcust2': botcust2, 'message': message} logger.debug('Sending POST request') response = requests.post(url, params=params, headers=headers, data=data) logger.debug('POST response {}'.format(response)) parsed = lxml.html.parse(io.StringIO(response.text)).getroot() try: result = parsed[1][2][0][2].tail[1:] logger.debug('Getting botcust2 successful') except IndexError: result = False logger.critical('Getting botcust2 from html failed') return result
Sends a message to Mitsuku and retrieves the reply Args: botcust2 (str): The botcust2 identifier message (str): The message to send to Mitsuku Returns: reply (str): The message Mitsuku sent back
codesearchnet
def _update_general_statistics(a_float, dist): if (not dist.count): dist.count = 1 dist.maximum = a_float dist.minimum = a_float dist.mean = a_float dist.sumOfSquaredDeviation = 0 else: old_count = dist.count old_mean = dist.mean new_mean = (((old_count * old_mean) + a_float) / (old_count + 1)) delta_sum_squares = ((a_float - old_mean) * (a_float - new_mean)) dist.count += 1 dist.mean = new_mean dist.maximum = max(a_float, dist.maximum) dist.minimum = min(a_float, dist.minimum) dist.sumOfSquaredDeviation += delta_sum_squares
Adds a_float to distribution, updating the statistics fields. Args: a_float (float): a new value dist (:class:`endpoints_management.gen.servicecontrol_v1_messages.Distribution`): the Distribution being updated
codesearchnet
def node_filter(self, name, **kwargs): def decorator(func): self.filters[name] = NodeFilter(name, func, **kwargs) return decorator
Returns a decorator function for adding a node filter. Args: name (str): The name of the filter. **kwargs: Variable keyword arguments for the filter. Returns: Callable[[Callable[[Element, Any], bool]]]: A decorator function for adding a node filter.
codesearchnet
def get_appliance_event_after_time(self, location_id, since, per_page=None, page=None, min_power=None): url = 'https: headers = self.__gen_headers() headers['Content-Type'] = 'application/json' params = {'locationId': location_id, 'since': since} if min_power: params['minPower'] = min_power if per_page: params['perPage'] = per_page if page: params['page'] = page url = self.__append_url_params(url, params) r = requests.get(url, headers=headers) return r.json()
Get appliance events by location Id after a given time. Args: location_id (string): hexadecimal id of the sensor to query, e.g. ``0x0013A20040B65FAD`` since (string): ISO 8601 start time for getting the events that are created or updated after it. Maximum value allowed is 1 day from the current time. min_power (string): The minimum average power (in watts) for filtering. Only events with an average power above this value will be returned. (default: 400) per_page (string, optional): the number of returned results per page (min 1, max 500) (default: 10) page (string, optional): the page number to return (min 1, max 100000) (default: 1) Returns: list: dictionary objects containing appliance events meeting specified criteria
codesearchnet
def metaclass(*metaclasses): def _inner(cls): metabases = tuple( collections.OrderedDict( (c, None) for c in (metaclasses + (type(cls),)) ).keys() ) _Meta = metabases[0] for base in metabases[1:]: class _Meta(base, _Meta): pass return six.add_metaclass(_Meta)(cls) return _inner
Create the class using all metaclasses. Args: metaclasses: A tuple of metaclasses that will be used to generate and replace a specified class. Returns: A decorator that will recreate the class using the specified metaclasses.
juraj-google-style
def verify_token(id_token, request, audience=None, certs_url=_GOOGLE_OAUTH2_CERTS_URL): certs = _fetch_certs(request, certs_url) return jwt.decode(id_token, certs=certs, audience=audience)
Verifies an ID token and returns the decoded token. Args: id_token (Union[str, bytes]): The encoded token. request (google.auth.transport.Request): The object used to make HTTP requests. audience (str): The audience that this token is intended for. If None then the audience is not verified. certs_url (str): The URL that specifies the certificates to use to verify the token. This URL should return JSON in the format of ``{'key id': 'x509 certificate'}``. Returns: Mapping[str, Any]: The decoded token.
codesearchnet
def map(self, ID_s, FROM=None, TO=None, target_as_set=False, no_match_sub=None): def io_mode(ID_s): '\n Handles the input/output modalities of the mapping.\n ' unlist_return = False list_of_lists = False if isinstance(ID_s, str): ID_s = [ID_s] unlist_return = True elif isinstance(ID_s, list): if ((len(ID_s) > 0) and isinstance(ID_s[0], list)): list_of_lists = True return (ID_s, unlist_return, list_of_lists) if (FROM == TO): return ID_s (ID_s, unlist_return, list_of_lists) = io_mode(ID_s) if list_of_lists: mapped_ids = [self.map(ID, FROM, TO, target_as_set, no_match_sub) for ID in ID_s] else: mapped_ids = self._map(ID_s, FROM, TO, target_as_set, no_match_sub) if unlist_return: return mapped_ids[0] return Mapping(ID_s, mapped_ids)
The main method of this class and the essence of the package. It allows to "map" stuff. Args: ID_s: Nested lists with strings as leafs (plain strings also possible) FROM (str): Origin key for the mapping (default: main key) TO (str): Destination key for the mapping (default: main key) target_as_set (bool): Whether to summarize the output as a set (removes duplicates) no_match_sub: Object representing the status of an ID not being able to be matched (default: None) Returns: Mapping: a mapping object capturing the result of the mapping request
codesearchnet
def whois_nameservers(self, nameservers): api_name = 'opendns-whois-nameservers' fmt_url_path = u'whois/nameservers/{0}' return self._multi_get(api_name, fmt_url_path, nameservers)
Calls WHOIS Nameserver end point. Args: nameservers: An enumerable of nameservers Returns: A dict of {nameserver: domain_result}
codesearchnet
class PatchTSMixerPretrainHead(nn.Module): def __init__(self, config: PatchTSMixerConfig): super().__init__() self.dropout_layer = nn.Dropout(config.head_dropout) self.base_pt_block = nn.Linear(config.d_model, config.patch_length) def forward(self, hidden_features): hidden_features = self.dropout_layer(hidden_features) forecast = self.base_pt_block(hidden_features) return forecast
Pretraining head. Args: config (`PatchTSMixerConfig`): Configuration.
github-repos
def tracers(tracersfile): if (not tracersfile.is_file()): return None tra = {} with tracersfile.open('rb') as fid: readbin = partial(_readbin, fid) magic = readbin() if (magic > 8000): magic -= 8000 readbin() readbin = partial(readbin, file64=True) if (magic < 100): raise ParsingError(tracersfile, 'magic > 100 expected to get tracervar info') nblk = (magic % 100) readbin('f', 2) readbin() readbin('f') ninfo = readbin() ntra = readbin(nwords=nblk, unpack=False) readbin('f') curv = readbin() if curv: readbin('f') infos = [] for _ in range(ninfo): infos.append(b''.join(readbin('b', 16)).strip().decode()) tra[infos[(- 1)]] = [] if (magic > 200): ntrace_elt = readbin() if (ntrace_elt > 0): readbin('f', ntrace_elt) for ntrab in ntra: data = readbin('f', (ntrab * ninfo)) for (idx, info) in enumerate(infos): tra[info].append(data[idx::ninfo]) return tra
Extract tracers data. Args: tracersfile (:class:`pathlib.Path`): path of the binary tracers file. Returns: dict of list of numpy.array: Tracers data organized by attribute and block.
codesearchnet
def timedelta(self, time_input1, time_input2): time_input1 = self.any_to_datetime(time_input1) time_input2 = self.any_to_datetime(time_input2) diff = (time_input1 - time_input2) delta = relativedelta(time_input1, time_input2) total_months = ((delta.years * 12) + delta.months) total_weeks = (((delta.years * 52) + (total_months * 4)) + delta.weeks) total_days = diff.days total_hours = ((total_days * 24) + delta.hours) total_minutes = ((total_hours * 60) + delta.minutes) total_seconds = ((total_minutes * 60) + delta.seconds) total_microseconds = ((total_seconds * 1000) + delta.microseconds) return {'datetime_1': time_input1.isoformat(), 'datetime_2': time_input2.isoformat(), 'years': delta.years, 'months': delta.months, 'weeks': delta.weeks, 'days': delta.days, 'hours': delta.hours, 'minutes': delta.minutes, 'seconds': delta.seconds, 'microseconds': delta.microseconds, 'total_months': total_months, 'total_weeks': total_weeks, 'total_days': total_days, 'total_hours': total_hours, 'total_minutes': total_minutes, 'total_seconds': total_seconds, 'total_microseconds': total_microseconds}
Calculates time delta between two time expressions. Args: time_input1 (string): The time input string (see formats above). time_input2 (string): The time input string (see formats above). Returns: (dict): Dict with delta values.
codesearchnet
def CompileReport(self, mediator): results = {} for key, count in iter(self._counter.items()): search_engine, _, search_term = key.partition(':') results.setdefault(search_engine, {}) results[search_engine][search_term] = count lines_of_text = [] for search_engine, terms in sorted(results.items()): lines_of_text.append(' == ENGINE: {0:s} =='.format(search_engine)) for search_term, count in sorted( terms.items(), key=lambda x: (x[1], x[0]), reverse=True): lines_of_text.append('{0:d} {1:s}'.format(count, search_term)) lines_of_text.append('') lines_of_text.append('') report_text = '\n'.join(lines_of_text) analysis_report = reports.AnalysisReport( plugin_name=self.NAME, text=report_text) analysis_report.report_array = self._search_term_timeline analysis_report.report_dict = results return analysis_report
Compiles an analysis report. Args: mediator (AnalysisMediator): mediates interactions between analysis plugins and other components, such as storage and dfvfs. Returns: AnalysisReport: analysis report.
juraj-google-style
def start_standing_subprocess(cmd, shell=False, env=None, stdout=subprocess.PIPE, stderr=subprocess.PIPE): logging.debug('Starting standing subprocess with: %s', cmd) proc = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=stdout, stderr=stderr, shell=shell, env=env) proc.stdin.close() proc.stdin = None logging.debug('Started standing subprocess %d', proc.pid) return proc
Starts a long-running subprocess. This is not a blocking call and the subprocess started by it should be explicitly terminated with stop_standing_subprocess. For short-running commands, you should use subprocess.check_call, which blocks. Args: cmd: string, the command to start the subprocess with. shell: bool, True to run this command through the system shell, False to invoke it directly. See subprocess.Popen() docs. env: dict, a custom environment to run the standing subprocess. If not specified, inherits the current environment. See subprocess.Popen() docs. stdout: None, subprocess.PIPE, subprocess.DEVNULL, an existing file descriptor, or an existing file object. See subprocess.Popen() docs. stderr: None, subprocess.PIPE, subprocess.DEVNULL, an existing file descriptor, or an existing file object. See subprocess.Popen() docs. Returns: The subprocess that was started.
github-repos
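A usage sketch with a made-up long-running command; the library's own stop helper (mentioned in the docstring but not shown here) is the intended way to end it, so plain `terminate()`/`wait()` is only a stand-in.
import subprocess

proc = start_standing_subprocess(['ping', '127.0.0.1'])  # runs until stopped on most Unix-like systems
# ... read from proc.stdout / proc.stderr while it runs ...
proc.terminate()
proc.wait()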
def trees_by_issn(self, issn): return set( self.issn_db.get(issn, OOSet()).keys() )
Search trees by `issn`. Args: issn (str): :attr:`.Tree.issn` property of :class:`.Tree`. Returns: set: Set of matching :class:`Tree` instances.
juraj-google-style
def build(self, var_list): if self.built: return super().build(var_list) self.adam_momentums = {} self.adam_velocities = {} self.muon_momentums = {} self.muon_velocities = {} for var in var_list: if not self._overwrite_variable_with_gradient(var): self.adam_momentums[var.path] = self.add_variable_from_reference(reference_variable=var, name='momentum') if self._should_use_adamw(var): self.adam_velocities[var.path] = self.add_variable_from_reference(reference_variable=var, name='velocity')
Initialize optimizer variables. Adam optimizer has 3 types of variables: momentums, velocities and velocity_hat (only set when amsgrad is applied), Args: var_list: list of model variables to build Adam variables on.
github-repos
def silence(warning, silence=True): if (not isinstance(warning, int)): raise ValueError('Input to silence should be a warning object - not of type {}'.format(type(warning))) if silence: __silencers__.add(warning) elif (warning in __silencers__): __silencers__.remove(warning) return __silencers__
Silence a particular warning on all Bokeh models. Args: warning (Warning) : Bokeh warning to silence silence (bool) : Whether or not to silence the warning Returns: A set containing the all silenced warnings This function adds or removes warnings from a set of silencers which is referred to when running ``check_integrity``. If a warning is added to the silencers - then it will never be raised. .. code-block:: python >>> from bokeh.core.validation.warnings import EMPTY_LAYOUT >>> bokeh.core.validation.silence(EMPTY_LAYOUT, True) {1002} To turn a warning back on use the same method but with the silence argument set to false .. code-block:: python >>> bokeh.core.validation.silence(EMPTY_LAYOUT, False) set()
codesearchnet
def prange(N=1, dim=1): A = {} r = numpy.arange(N, dtype=int) key = numpy.zeros(dim, dtype=int) for i in range(N): key[-1] = i A[tuple(key)] = 1*(r==i) return Poly(A, dim, (N,), int)
Constructor to create a range of polynomials where the exponent vary. Args: N (int): Number of polynomials in the array. dim (int): The dimension the polynomial should span. Returns: (Poly): A polynomial array of length N containing simple polynomials with increasing exponent. Examples: >>> print(prange(4)) [1, q0, q0^2, q0^3] >>> print(prange(4, dim=3)) [1, q2, q2^2, q2^3]
juraj-google-style
def get_server(self, name): mech = self.get(name) return mech if isinstance(mech, ServerMechanism) else None
Like :meth:`.get`, but only mechanisms inheriting :class:`ServerMechanism` will be returned. Args: name: The SASL mechanism name. Returns: The mechanism object or ``None``
juraj-google-style
def of(cls, msg_header: MessageHeader) -> 'MessageDecoder': cte_hdr = msg_header.parsed.content_transfer_encoding return cls.of_cte(cte_hdr)
Return a decoder from the message header object. See Also: :meth:`.of_cte` Args: msg_header: The message header object.
juraj-google-style
def is_diagonal(matrix: np.ndarray, *, atol: float = 1e-8) -> bool: matrix = np.copy(matrix) for i in range(min(matrix.shape)): matrix[i, i] = 0 return tolerance.all_near_zero(matrix, atol=atol)
Determines if a matrix is approximately diagonal. A matrix is diagonal if i!=j implies m[i,j]==0. Args: matrix: The matrix to check. atol: The per-matrix-entry absolute tolerance on equality. Returns: Whether the matrix is diagonal within the given tolerance.
juraj-google-style
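A few illustrative checks, assuming `numpy` (as `np`) plus the module's `tolerance` helper are importable as the function expects.
import numpy as np

print(is_diagonal(np.diag([1.0, 2.0, 3.0])))              # True
print(is_diagonal(np.array([[1.0, 1e-10], [0.0, 2.0]])))  # True: off-diagonal entry is within atol
print(is_diagonal(np.array([[1.0, 0.5], [0.0, 2.0]])))    # False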
def get_acmg(acmg_terms): prediction = 'uncertain_significance' pvs = False ps_terms = [] pm_terms = [] pp_terms = [] ba = False bs_terms = [] bp_terms = [] for term in acmg_terms: if term.startswith('PVS'): pvs = True elif term.startswith('PS'): ps_terms.append(term) elif term.startswith('PM'): pm_terms.append(term) elif term.startswith('PP'): pp_terms.append(term) elif term.startswith('BA'): ba = True elif term.startswith('BS'): bs_terms.append(term) elif term.startswith('BP'): bp_terms.append(term) pathogenic = is_pathogenic(pvs, ps_terms, pm_terms, pp_terms) likely_pathogenic = is_likely_pathogenic(pvs, ps_terms, pm_terms, pp_terms) benign = is_benign(ba, bs_terms) likely_benign = is_likely_benign(bs_terms, bp_terms) if (pathogenic or likely_pathogenic): if (benign or likely_benign): prediction = 'uncertain_significance' elif pathogenic: prediction = 'pathogenic' else: prediction = 'likely_pathogenic' else: if benign: prediction = 'benign' if likely_benign: prediction = 'likely_benign' return prediction
Use the algorithm described in the ACMG paper to get an ACMG classification. Args: acmg_terms(set(str)): A collection of prediction terms Returns: prediction(str): one of 'uncertain_significance', 'benign', 'likely_benign', 'likely_pathogenic' or 'pathogenic'
codesearchnet
def generate_output_asn(self, json_data=None, hr=True, show_name=False, colorize=True): if (json_data is None): json_data = {} keys = {'asn', 'asn_cidr', 'asn_country_code', 'asn_date', 'asn_registry', 'asn_description'}.intersection(json_data) output = '' for key in keys: output += generate_output(line='0', short=(HR_ASN[key]['_short'] if hr else key), name=(HR_ASN[key]['_name'] if (hr and show_name) else None), value=(json_data[key] if ((json_data[key] is not None) and (len(json_data[key]) > 0) and (json_data[key] != 'NA')) else 'None'), colorize=colorize) return output
The function for generating CLI output ASN results. Args: json_data (:obj:`dict`): The data to process. Defaults to None. hr (:obj:`bool`): Enable human readable key translations. Defaults to True. show_name (:obj:`bool`): Show human readable name (default is to only show short). Defaults to False. colorize (:obj:`bool`): Colorize the console output with ANSI colors. Defaults to True. Returns: str: The generated output.
codesearchnet
def __init__(self, library, options=None): if platform.python_implementation() != 'CPython': raise RuntimeError('Delegates are currently only supported into CPythondue to missing immediate reference counting.') self._library = ctypes.pydll.LoadLibrary(library) self._library.tflite_plugin_create_delegate.argtypes = [ctypes.POINTER(ctypes.c_char_p), ctypes.POINTER(ctypes.c_char_p), ctypes.c_int, ctypes.CFUNCTYPE(None, ctypes.c_char_p)] self._library.tflite_plugin_create_delegate.restype = ctypes.c_void_p options = options or {} options_keys = (ctypes.c_char_p * len(options))() options_values = (ctypes.c_char_p * len(options))() for idx, (key, value) in enumerate(options.items()): options_keys[idx] = str(key).encode('utf-8') options_values[idx] = str(value).encode('utf-8') class ErrorMessageCapture: def __init__(self): self.message = '' def report(self, x): self.message += x if isinstance(x, str) else x.decode('utf-8') capture = ErrorMessageCapture() error_capturer_cb = ctypes.CFUNCTYPE(None, ctypes.c_char_p)(capture.report) self._delegate_ptr = self._library.tflite_plugin_create_delegate(options_keys, options_values, len(options), error_capturer_cb) if self._delegate_ptr is None: raise ValueError(capture.message)
Loads delegate from the shared library. Args: library: Shared library name. options: Dictionary of options that are required to load the delegate. All keys and values in the dictionary should be serializable. Consult the documentation of the specific delegate for required and legal options. (default None) Raises: RuntimeError: This is raised if the Python implementation is not CPython.
github-repos
def _GetPropertyValue(self, parser_mediator, properties, property_name): property_value = properties.get(property_name, None) if isinstance(property_value, py2to3.BYTES_TYPE): try: property_value = property_value.decode('utf-8') except UnicodeDecodeError: parser_mediator.ProduceExtractionWarning( 'unable to decode property: {0:s}'.format(property_name)) return property_value
Retrieves a property value. Args: parser_mediator (ParserMediator): mediates interactions between parsers and other components, such as storage and dfvfs. properties (dict[str, object]): properties. property_name (str): name of the property. Returns: str: property value.
juraj-google-style
def comment(data, what): data = data.splitlines() data = map(lambda x: "#" + x if what in x else x, data) return "\n".join(data)
Comments out the line containing `what` in string `data`. Args: data (str): Configuration file as a string. what (str): Substring identifying the line to be commented out. Returns: str: Configuration file with the matching line commented out.
juraj-google-style
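A small usage sketch, under the assumption that the predicate is the '#'-prefixing form reconstructed above; only the line containing the needle is commented out.
config = "host = localhost\nport = 8080\ndebug = false"
print(comment(config, "port"))
# host = localhost
# #port = 8080
# debug = false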
def scatter_div(self, sparse_delta, use_locking=False, name=None): raise NotImplementedError
Divide this variable by `tf.IndexedSlices`. Args: sparse_delta: `tf.IndexedSlices` to divide this variable by. use_locking: If `True`, use locking during the operation. name: the name of the operation. Returns: The updated variable. Raises: TypeError: if `sparse_delta` is not an `IndexedSlices`.
github-repos
def _post_process(self, feed_item, new_item): for third_party_url in feed_item.get('third_party_urls', []): third_party_url[FieldMap.CREATIVE_ID] = new_item['id'] third_party_url[FieldMap.CREATIVE_NAME] = new_item['name'] for association in feed_item.get('associations', []): association[FieldMap.CREATIVE_ID] = self.get(association)['id'] association[FieldMap.CREATIVE_NAME] = self.get(association)['name'] dcm_association = self.creative_asset_dao.get(association, required=True) if dcm_association: association[FieldMap.CREATIVE_ASSET_ID] = dcm_association.get('id', None) association[FieldMap.CREATIVE_ASSET_NAME] = dcm_association.get('name', None) backup_lp = self.landing_page_dao.get(feed_item, column_name=FieldMap.BACKUP_IMAGE_CLICK_THROUGH_LANDING_PAGE_ID) if backup_lp: association[FieldMap.BACKUP_IMAGE_CLICK_THROUGH_LANDING_PAGE_ID] = backup_lp['id'] association[FieldMap.BACKUP_IMAGE_CLICK_THROUGH_LANDING_PAGE_NAME] = backup_lp['name'] backup_asset = self.creative_asset_dao.get(association, column_name=FieldMap.CREATIVE_BACKUP_ASSET_ID) if backup_asset: association[FieldMap.CREATIVE_BACKUP_ASSET_ID] = backup_asset['id'] for click_tag in feed_item.get('click_tags', []): click_tag[FieldMap.CREATIVE_ID] = new_item['id'] click_tag[FieldMap.CREATIVE_NAME] = new_item['name'] click_tag_lp = self.landing_page_dao.get(click_tag, column_name=FieldMap.CLICK_TAG_LANDING_PAGE_ID) if click_tag_lp: click_tag[FieldMap.CLICK_TAG_LANDING_PAGE_ID] = click_tag_lp['id'] click_tag[FieldMap.CLICK_TAG_LANDING_PAGE_NAME] = click_tag_lp['name'] backup_asset = self.creative_asset_dao.get(feed_item, column_name=FieldMap.CREATIVE_BACKUP_ASSET_ID) if backup_asset: feed_item[FieldMap.CREATIVE_BACKUP_ASSET_ID] = backup_asset['id'] backup_lp = self.landing_page_dao.get(feed_item, column_name=FieldMap.BACKUP_IMAGE_CLICK_THROUGH_LANDING_PAGE_ID) if backup_lp: feed_item[FieldMap.BACKUP_IMAGE_CLICK_THROUGH_LANDING_PAGE_ID] = backup_lp['id'] feed_item[FieldMap.BACKUP_IMAGE_CLICK_THROUGH_LANDING_PAGE_NAME] = backup_lp['name']
Maps ids and names of related entities so they can be updated in the Bulkdozer feed. When Bulkdozer is done processing an item, it writes back the updated names and ids of related objects; this method makes sure those are updated in the creative feed. Args: feed_item: Feed item representing the creative from the Bulkdozer feed. new_item: The DCM creative being updated or created.
github-repos
def convert_into_by_batch(input_dir, output_format='csv', java_options=None, **kwargs): if input_dir is None or not os.path.isdir(input_dir): raise AttributeError("'input_dir' shoud be directory path") kwargs['format'] = _extract_format_for_conversion(output_format) if java_options is None: java_options = [] elif isinstance(java_options, str): java_options = shlex.split(java_options) kwargs['batch'] = input_dir _run(java_options, kwargs)
Convert tables from PDFs in a directory. Args: input_dir (str): Directory path. output_format (str, optional): Output format of this function (csv, json or tsv) java_options (list, optional): Set java options like `-Xmx256m`. kwargs (dict): Dictionary of option for tabula-java. Details are shown in `build_options()` Returns: Nothing. Outputs are saved into the same directory with `input_dir`
juraj-google-style
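A hypothetical invocation of the batch converter above (the directory path is made up); one output file per PDF is written next to the inputs, in the requested format.
# Convert every PDF under ./reports to JSON, giving the JVM a larger heap.
convert_into_by_batch('./reports', output_format='json', java_options=['-Xmx512m'])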
def optimize_with_repeates(self, fast=None, verbose=None, n_times=10, lambd=None, lambd_g=None, lambd_n=None): verbose = dlimix.getVerbose(verbose) if (not self.init): self._initGP(fast) opt_list = [] fixed0 = sp.zeros_like(self.gp.getParams()['dataTerm']) for i in range(n_times): scales1 = self._getScalesRand() fixed1 = (0.1 * sp.randn(fixed0.shape[0], fixed0.shape[1])) conv = self.trainGP(fast=fast, scales0=scales1, fixed0=fixed1, lambd=lambd, lambd_g=lambd_g, lambd_n=lambd_n) if conv: temp = 1 for j in range(len(opt_list)): if sp.allclose(abs(self.getScales()), abs(opt_list[j]['scales'])): temp = 0 opt_list[j]['counter'] += 1 break if (temp == 1): opt = {} opt['counter'] = 1 opt['LML'] = self.getLML() opt['scales'] = self.getScales() opt_list.append(opt) LML = sp.array([opt_list[i]['LML'] for i in range(len(opt_list))]) index = LML.argsort()[::(- 1)] out = [] if verbose: print('\nLocal mimima\n') print('n_times\t\tLML') print('------------------------------------') for i in range(len(opt_list)): out.append(opt_list[index[i]]) if verbose: print(('%d\t\t%f' % (opt_list[index[i]]['counter'], opt_list[index[i]]['LML']))) print('') return out
Train the model repeatedly up to a number specified by the user with random restarts and return a list of all relative minima that have been found. The list is sorted by log marginal likelihood, best first. Each list term is a dictionary with keys "counter", "LML", and "scales". After running this function, the vc object will be set at the last iteration. Thus, if you wish to get the vc object of one of the repeats, then set the scales. For example: vc.setScales(scales=optimize_with_repeates_output[0]["scales"]) Args: fast: Boolean. If set to True, initialize kronSumGP. verbose: Boolean. If set to True, verbose output is produced. (default True) n_times: number of re-starts of the optimization. (default 10)
codesearchnet
def error(self, error_msg): if (self.logger is not None): self.logger.error(error_msg) if (self.exc is not None): raise self.exc(error_msg)
Outputs error message on own logger. Also raises exceptions if need be. Args: error_msg: message to output
codesearchnet
def _name_search(cls, method, filters): filters = cls._get_name_filters(filters) return [cls.deserialize(cls._zeep_to_dict(row)) for row in method(filters)]
Helper for search methods that use name filters. Args: method (callable): The Five9 API method to call with the name filters. filters (dict): A dictionary of search parameters, keyed by the name of the field to search. This should conform to the schema defined in :func:`five9.Five9.create_criteria`. Returns: list[BaseModel]: A list of records representing the result.
codesearchnet
def Svn(url, fname, to=None): if (to is None): to = str(CFG['tmp_dir']) src_dir = (local.path(to) / fname) if (not source_required(src_dir)): Copy(src_dir, '.') return from benchbuild.utils.cmd import svn svn('co', url, src_dir) update_hash(src_dir) Copy(src_dir, '.')
Checkout the SVN repo. Args: url (str): The SVN SOURCE repo. fname (str): The name of the repo on disk. to (str): The name of the TARGET folder on disk. Defaults to ``CFG["tmp_dir"]``
codesearchnet
def generate_branches(scales=None, angles=None, shift_angle=0): branches = [] for (pos, scale) in enumerate(scales): angle = ((((- sum(angles)) / 2) + sum(angles[:pos])) + shift_angle) branches.append([scale, angle]) return branches
Generates branches with an alternative system. Args: scales (tuple/array): Indicating how the branch lengths develop from age to age. angles (tuple/array): Holding the branch and shift angle in radians. shift_angle (float): Holding the rotation angle for all branches. Returns: branches (2d-array): An array of [scale, angle] pairs, one per branch.
codesearchnet
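A worked call with illustrative numbers: the branch angles are spread symmetrically around `-sum(angles)/2 + shift_angle`.
branches = generate_branches(scales=(0.8, 0.6), angles=(0.5, 0.3), shift_angle=0.1)
print(branches)  # approximately [[0.8, -0.3], [0.6, 0.2]]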
def from_raw(self, raw: RawScalar) -> Optional[ScalarValue]: if isinstance(raw, str): return raw
Return a cooked value of the receiver type. Args: raw: Raw value obtained from JSON parser.
juraj-google-style
def GetMessages(self, soft_size_limit=None): with self._lock: ret = rdf_flows.MessageList() ret_size = 0 for message in self._Generate(): self._total_size -= len(message) ret.job.append(rdf_flows.GrrMessage.FromSerializedString(message)) ret_size += len(message) if soft_size_limit is not None and ret_size > soft_size_limit: break return ret
Retrieves and removes the messages from the queue. Args: soft_size_limit: int If there is more data in the queue than soft_size_limit bytes, the returned list of messages will be approximately this large. If None (default), returns all messages currently on the queue. Returns: rdf_flows.MessageList A list of messages that were .Put on the queue earlier.
juraj-google-style
def SampleTaskStatus(self, task, status): if self._tasks_profiler: self._tasks_profiler.Sample(task, status)
Takes a sample of the status of the task for profiling. Args: task (Task): a task. status (str): status.
codesearchnet
def __call__(self, shape, dtype=None, **kwargs): _validate_kwargs(self.__class__.__name__, kwargs) dtype = _assert_float_dtype(_get_dtype(dtype)) if _PARTITION_SHAPE in kwargs: shape = kwargs[_PARTITION_SHAPE] return self._random_generator.truncated_normal(shape, self.mean, self.stddev, dtype)
Returns a tensor object initialized to random normal values (truncated). Args: shape: Shape of the tensor. dtype: Optional dtype of the tensor. Only floating point types are supported. If not specified, `tf.keras.backend.floatx()` is used, which default to `float32` unless you configured it otherwise (via `tf.keras.backend.set_floatx(float_dtype)`) **kwargs: Additional keyword arguments.
github-repos
def emboss_pepstats_parser(infile): with open(infile) as f: lines = f.read().split('\n') info_dict = {} for l in lines[38:47]: info = l.split('\t') cleaninfo = list(filter(lambda x: x != '', info)) prop = cleaninfo[0] num = cleaninfo[2] percent = float(cleaninfo[-1]) / float(100) info_dict['mol_percent_' + prop.lower() + '-pepstats'] = percent return info_dict
Get dictionary of pepstats results. Args: infile: Path to pepstats outfile Returns: dict: Parsed information from pepstats TODO: Only currently parsing the bottom of the file for percentages of properties.
juraj-google-style
def __init__(self, timestamp=None): super(Filetime, self).__init__() self._precision = definitions.PRECISION_100_NANOSECONDS self._timestamp = timestamp
Initializes a FILETIME timestamp. Args: timestamp (Optional[int]): FILETIME timestamp.
juraj-google-style
def set_domain_id(self, value=None, default=False, disable=False): return self._configure_mlag('domain-id', value, default, disable)
Configures the mlag domain-id value Args: value (str): The value to configure the domain-id default (bool): Configures the domain-id using the default keyword disable (bool): Negates the domain-id using the no keyword Returns: bool: Returns True if the commands complete successfully
codesearchnet
def reset_for_retry(self, output_writer): self.input_reader = self.initial_input_reader self.slice_id = 0 self.retries += 1 self.output_writer = output_writer self.handler = self.mapreduce_spec.mapper.handler
Reset self for shard retry. Args: output_writer: new output writer that contains new output files.
juraj-google-style
def plot_term_kdes(self, words, **kwargs): stem = PorterStemmer().stem for word in words: kde = self.kde(stem(word), **kwargs) plt.plot(kde) plt.show()
Plot kernel density estimates for multiple words. Args: words (list): A list of unstemmed terms.
codesearchnet
def call_fn(fn: TransitionOperator, args: Union[Tuple[Any], Any]) -> Any: if isinstance(args, (list, tuple)) and not mcmc_util.is_namedtuple_like(args): args = args return fn(*args) else: return fn(args)
Calls a transition operator with args, unpacking args if its a sequence. Args: fn: A `TransitionOperator`. args: Arguments to `fn` Returns: ret: Return value of `fn`.
juraj-google-style
def reconnect(self): if self._auth_method == "userpass": self._mgr = manager.connect(host=self._conn[0], port=self._conn[1], username=self._auth[0], password=self._auth[1], hostkey_verify=self._hostkey_verify) elif self._auth_method == "key": self._mgr = manager.connect(host=self._conn[0], port=self._conn[1], username=self._auth[0], key_filename=self._auth_key, hostkey_verify=self._hostkey_verify) else: raise ValueError("auth_method incorrect value.") self._mgr.timeout = 600 return True
Reconnect session with device.

Args:
    None

Returns:
    bool: True if reconnect succeeds, False if not.

Raises:
    None
juraj-google-style
def update_submit_s3_uri(estimator, job_name):
    if estimator.uploaded_code is None:
        return

    pattern = '(?<=/)[^/]+?(?=/source/sourcedir.tar.gz)'

    submit_uri = estimator.uploaded_code.s3_prefix
    submit_uri = re.sub(pattern, job_name, submit_uri)
    script_name = estimator.uploaded_code.script_name
    estimator.uploaded_code = fw_utils.UploadedCode(submit_uri, script_name)
Updates the S3 URI of the framework source directory in the given estimator.

Args:
    estimator (sagemaker.estimator.Framework): The Framework estimator to update.
    job_name (str): The new job name included in the submit S3 URI

Returns:
    str: The updated S3 URI of framework source directory
codesearchnet
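A standalone sketch of the look-around substitution the row above performs on a submit URI; the bucket and job names are made up.

```python
import re

pattern = '(?<=/)[^/]+?(?=/source/sourcedir.tar.gz)'
submit_uri = 's3://my-bucket/old-job-2019/source/sourcedir.tar.gz'

print(re.sub(pattern, 'new-job-2020', submit_uri))
# s3://my-bucket/new-job-2020/source/sourcedir.tar.gz
```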
def decode(
    self,
    decoder_input_ids,
    encoder_outputs,
    encoder_attention_mask: Optional[jnp.ndarray] = None,
    decoder_attention_mask: Optional[jnp.ndarray] = None,
    decoder_position_ids: Optional[jnp.ndarray] = None,
    past_key_values: Optional[dict] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
    train: bool = False,
    params: Optional[dict] = None,
    dropout_rng: PRNGKey = None,
):
    output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
    output_hidden_states = (
        output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
    )
    return_dict = return_dict if return_dict is not None else self.config.return_dict

    encoder_hidden_states = encoder_outputs[0]
    if encoder_attention_mask is None:
        batch_size, sequence_length = encoder_hidden_states.shape[:2]
        encoder_attention_mask = jnp.ones((batch_size, sequence_length))

    batch_size, sequence_length = decoder_input_ids.shape
    if decoder_attention_mask is None:
        decoder_attention_mask = jnp.ones((batch_size, sequence_length))

    if decoder_position_ids is None:
        if past_key_values is not None:
            raise ValueError('Make sure to provide `decoder_position_ids` when passing `past_key_values`.')
        decoder_position_ids = jnp.broadcast_to(
            jnp.arange(sequence_length)[None, :], (batch_size, sequence_length)
        )

    rngs = {}
    if dropout_rng is not None:
        rngs['dropout'] = dropout_rng

    inputs = {'params': params or self.params}

    if past_key_values:
        inputs['cache'] = past_key_values
        mutable = ['cache']
    else:
        mutable = False

    def _decoder_forward(module, decoder_input_ids, decoder_attention_mask, decoder_position_ids, **kwargs):
        decoder_module = module._get_decoder_module()
        return decoder_module(decoder_input_ids, decoder_attention_mask, decoder_position_ids, **kwargs)

    outputs = self.module.apply(
        inputs,
        decoder_input_ids=jnp.array(decoder_input_ids, dtype='i4'),
        decoder_attention_mask=jnp.array(decoder_attention_mask, dtype='i4'),
        decoder_position_ids=jnp.array(decoder_position_ids, dtype='i4'),
        encoder_hidden_states=encoder_hidden_states,
        encoder_attention_mask=jnp.array(encoder_attention_mask, dtype='i4'),
        output_attentions=output_attentions,
        output_hidden_states=output_hidden_states,
        return_dict=return_dict,
        deterministic=not train,
        rngs=rngs,
        mutable=mutable,
        method=_decoder_forward,
    )

    if past_key_values is not None and return_dict:
        outputs, past = outputs
        outputs['past_key_values'] = unfreeze(past['cache'])
        return outputs
    elif past_key_values is not None and (not return_dict):
        outputs, past = outputs
        outputs = outputs[:1] + (unfreeze(past['cache']),) + outputs[1:]

    return outputs
Returns:

Example:

```python
>>> from transformers import AutoTokenizer, FlaxMBartForConditionalGeneration

>>> model = FlaxMBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")

>>> text = "My friends are cool but they eat too many carbs."
>>> inputs = tokenizer(text, max_length=1024, return_tensors="jax")
>>> encoder_outputs = model.encode(**inputs)

>>> decoder_start_token_id = model.config.decoder_start_token_id
>>> decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id

>>> outputs = model.decode(decoder_input_ids, encoder_outputs)
>>> last_decoder_hidden_states = outputs.last_hidden_state
```
github-repos
def norm(values, min=None, max=None):
    min = np.min(values) if min is None else min
    max = np.max(values) if max is None else max
    return (values - min) / (max - min)
Unity-based normalization to scale data into 0-1 range.

    (values - min) / (max - min)

Args:
    values: Array of values to be normalized
    min (float, optional): Lower bound of normalization range
    max (float, optional): Upper bound of normalization range

Returns:
    Array of normalized values
juraj-google-style
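A quick check of the unity-based normalization above on a made-up array.

```python
import numpy as np

values = np.array([2.0, 4.0, 6.0, 10.0])
normed = (values - values.min()) / (values.max() - values.min())
print(normed)  # approximately [0.   0.25 0.5  1.  ]
```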
def CheckCheck(filename, clean_lines, linenum, error):
    lines = clean_lines.elided
    (check_macro, start_pos) = FindCheckMacro(lines[linenum])
    if not check_macro:
        return

    (last_line, end_line, end_pos) = CloseExpression(clean_lines, linenum, start_pos)
    if end_pos < 0:
        return

    if not Match('\\s*;', last_line[end_pos:]):
        return

    if linenum == end_line:
        expression = lines[linenum][start_pos + 1:end_pos - 1]
    else:
        expression = lines[linenum][start_pos + 1:]
        for i in xrange(linenum + 1, end_line):
            expression += lines[i]
        expression += last_line[0:end_pos - 1]

    lhs = ''
    rhs = ''
    operator = None
    while expression:
        matched = Match('^\\s*(<<|<<=|>>|>>=|->\\*|->|&&|\\|\\||==|!=|>=|>|<=|<|\\()(.*)$',
                        expression)
        if matched:
            token = matched.group(1)
            if token == '(':
                expression = matched.group(2)
                (end, _) = FindEndOfExpressionInLine(expression, 0, ['('])
                if end < 0:
                    return
                lhs += '(' + expression[0:end]
                expression = expression[end:]
            elif token in ('&&', '||'):
                return
            elif token in ('<<', '<<=', '>>', '>>=', '->*', '->'):
                lhs += token
                expression = matched.group(2)
            else:
                operator = token
                rhs = matched.group(2)
                break
        else:
            matched = Match('^([^-=!<>()&|]+)(.*)$', expression)
            if not matched:
                matched = Match('^(\\s*\\S)(.*)$', expression)
                if not matched:
                    break
            lhs += matched.group(1)
            expression = matched.group(2)

    if not (lhs and operator and rhs):
        return

    if rhs.find('&&') > -1 or rhs.find('||') > -1:
        return

    lhs = lhs.strip()
    rhs = rhs.strip()
    match_constant = '^([-+]?(\\d+|0[xX][0-9a-fA-F]+)[lLuU]{0,3}|".*"|\\\'.*\\\')$'
    if Match(match_constant, lhs) or Match(match_constant, rhs):
        error(filename, linenum, 'readability/check', 2,
              'Consider using %s instead of %s(a %s b)' % (
                  _CHECK_REPLACEMENT[check_macro][operator],
                  check_macro, operator))
Checks the use of CHECK and EXPECT macros.

Args:
    filename: The name of the current file.
    clean_lines: A CleansedLines instance containing the file.
    linenum: The number of the line to check.
    error: The function to call with any errors found.
codesearchnet
def predict_on_batch(self, data: Union[list, tuple],
                     return_indexes: bool = False) -> List[List[str]]:
    X = self._transform_batch(data)
    (objects_number, lengths) = (len(X[0]), [len(elem) for elem in data[0]])
    Y = self.model_.predict_on_batch(X)
    labels = np.argmax(Y, axis=-1)
    answer: List[List[str]] = [None] * objects_number
    for (i, (elem, length)) in enumerate(zip(labels, lengths)):
        elem = elem[:length]
        answer[i] = elem if return_indexes else self.tags.idxs2toks(elem)
    return answer
Makes predictions on a single batch

Args:
    data: a batch of word sequences together with additional inputs
    return_indexes: whether to return tag indexes in vocabulary or tags themselves

Returns:
    a batch of label sequences
codesearchnet
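A numpy-only sketch of the argmax-and-trim step above, with invented per-token class scores and sequence lengths (no tagger model involved).

```python
import numpy as np

# Scores for a batch of 2 padded sequences, 4 timesteps, 3 tag classes.
Y = np.random.rand(2, 4, 3)
lengths = [3, 2]  # true (unpadded) lengths of each sequence

labels = np.argmax(Y, axis=-1)
answer = [row[:length].tolist() for row, length in zip(labels, lengths)]
print(answer)  # e.g. [[2, 0, 1], [1, 2]]
```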
def register_menu_item(self, items):
    for itm in items:
        if itm.group in self.menu_items:
            if itm not in self.menu_items[itm.group]['items']:
                self.menu_items[itm.group]['items'].append(itm)
        else:
            logger.warning('Tried registering menu item to unknown group {}'.format(itm.group))
Registers a view's menu items into the metadata for the application. Skips an item
if it is already present.

Args:
    items (`list` of `MenuItem`): A list of `MenuItem`s

Returns:
    `None`
codesearchnet
def fill_empty_rows(ragged_input, default_value, name=None):
    with ops.name_scope(name, 'RaggedFillEmptyRows', [ragged_input]):
        if not isinstance(ragged_input, ragged_tensor.RaggedTensor):
            raise TypeError(f'ragged_input must be RaggedTensor, got {type(ragged_input)}')
        default_value = ops.convert_to_tensor(default_value, dtype=ragged_input.dtype)
        (output_value_rowids, output_values, empty_row_indicator,
         unused_reverse_index_map) = gen_ragged_array_ops.ragged_fill_empty_rows(
             value_rowids=ragged_input.value_rowids(),
             values=ragged_input.values,
             nrows=ragged_input.nrows(),
             default_value=default_value)
        return (ragged_tensor.RaggedTensor.from_value_rowids(
            values=output_values,
            value_rowids=output_value_rowids,
            validate=False), empty_row_indicator)
Fills empty rows in the input `RaggedTensor` with rank 2 with a default value.

This op adds entries with the specified `default_value` for any row in the input
that does not already have a value. The op also returns an indicator vector such
that

    empty_row_indicator[i] = True iff row i was an empty row.

Args:
    ragged_input: A `RaggedTensor` with rank 2.
    default_value: The value to fill for empty rows, with the same type as
        `ragged_input.`
    name: A name prefix for the returned tensors (optional)

Returns:
    ragged_ordered_output: A `RaggedTensor` with all empty rows filled in with
        `default_value`.
    empty_row_indicator: A bool vector indicating whether each input row was empty.

Raises:
    TypeError: If `ragged_input` is not a `RaggedTensor`.
github-repos
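A hedged sketch of the described behavior using the public `tf.RaggedTensor` API; the filling itself is done by hand here rather than by the op in the row above, purely to illustrate the expected result.

```python
import tensorflow as tf

# Rows 1 and 3 are empty: value_rowids skips them.
rt = tf.RaggedTensor.from_value_rowids(
    values=[10, 20, 30], value_rowids=[0, 0, 2], nrows=4)
print(rt.to_list())  # [[10, 20], [], [30], []]

default_value = -1
filled = [row if row else [default_value] for row in rt.to_list()]
empty_row_indicator = [not row for row in rt.to_list()]
print(filled)               # [[10, 20], [-1], [30], [-1]]
print(empty_row_indicator)  # [False, True, False, True]
```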
def add_file(self, path, compress):
    if not os.path.isfile(path):
        raise ValueError('{} is not a file'.format(path))
    self.fileobj.seek(self.last_offset)

    with open(path, 'rb') as f:
        flags = os.stat(path).st_mode & 0o777
        self.add_fileobj(f, path, compress, flags)
Add a single file to the MAR file.

Args:
    path (str): path to a file to add to this MAR file.
    compress (str): One of 'xz', 'bz2', or None. Defaults to None.
juraj-google-style
def get_concatenated_pdf_from_disk(filenames: Iterable[str],
                                   start_recto: bool = True) -> bytes:
    if start_recto:
        writer = PdfFileWriter()
        for filename in filenames:
            if filename:
                if writer.getNumPages() % 2 != 0:
                    writer.addBlankPage()
                writer.appendPagesFromReader(
                    PdfFileReader(open(filename, 'rb')))
        return pdf_from_writer(writer)
    else:
        merger = PdfFileMerger()
        for filename in filenames:
            if filename:
                merger.append(open(filename, 'rb'))
        return pdf_from_writer(merger)
Concatenates PDFs from disk and returns them as an in-memory binary PDF.

Args:
    filenames: iterable of filenames of PDFs to concatenate
    start_recto: start a new right-hand page for each new PDF?

Returns:
    concatenated PDF, as ``bytes``
juraj-google-style
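A related merging sketch using the same older PyPDF2 API (`PdfFileMerger`, pre-3.0) that the row above relies on; the file paths are made up, and real PDFs would need to exist at those locations.

```python
from PyPDF2 import PdfFileMerger  # older PyPDF2 API, as in the row above
import io

merger = PdfFileMerger()
for filename in ['report_part1.pdf', 'report_part2.pdf']:  # hypothetical paths
    merger.append(open(filename, 'rb'))

buffer = io.BytesIO()
merger.write(buffer)
pdf_bytes = buffer.getvalue()  # in-memory concatenated PDF
```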
def merge_and_fit(self, track, pairings):
    for (self_seg_index, track_seg_index, _) in pairings:
        self_s = self.segments[self_seg_index]
        ss_start = self_s.points[0]

        track_s = track.segments[track_seg_index]
        tt_start = track_s.points[0]
        tt_end = track_s.points[-1]

        d_start = ss_start.distance(tt_start)
        d_end = ss_start.distance(tt_end)

        if d_start > d_end:
            track_s = track_s.copy()
            track_s.points = list(reversed(track_s.points))

        self_s.merge_and_fit(track_s)
    return self
Merges another track with this one, ordering the points based on a distance heuristic

Args:
    track (:obj:`Track`): Track to merge with
    pairings

Returns:
    :obj:`Segment`: self
codesearchnet
def _add_results(self, results, trial_id):
    for result in results:
        self.logger.debug("Appending result: %s" % result)
        result["trial_id"] = trial_id
        result_record = ResultRecord.from_json(result)
        result_record.save()
Add a list of results into db.

Args:
    results (list): A list of json results.
    trial_id (str): Id of the trial.
juraj-google-style
def reply_all(self, reply_comment):
    payload = '{ "Comment": "' + reply_comment + '"}'

    endpoint = 'https:

    self._make_api_call('post', endpoint, data=payload)
Replies to everyone on the email, including those on the CC line.

With great power, comes great responsibility.

Args:
    reply_comment: The string comment to send to everyone on the email.
codesearchnet
def plan(description, stack_action, context, tail=None, reverse=False):
    def target_fn(*args, **kwargs):
        return COMPLETE

    steps = [Step(stack, fn=stack_action, watch_func=tail)
             for stack in context.get_stacks()]
    steps += [Step(target, fn=target_fn) for target in context.get_targets()]

    graph = build_graph(steps)
    return build_plan(
        description=description,
        graph=graph,
        targets=context.stack_names,
        reverse=reverse)
A simple helper that builds a graph based plan from a set of stacks.

Args:
    description (str): a description of the plan.
    action (func): a function to call for each stack.
    context (:class:`stacker.context.Context`): a
        :class:`stacker.context.Context` to build the plan from.
    tail (func): an optional function to call to tail the stack progress.
    reverse (bool): if True, execute the graph in reverse (useful for
        destroy actions).

Returns:
    :class:`plan.Plan`: The resulting plan object
codesearchnet
def clean_up_tokenization(out_string: str) -> str:
    out_string = (
        out_string.replace(' .', '.')
        .replace(' ?', '?')
        .replace(' !', '!')
        .replace(' ,', ',')
        .replace(" ' ", "'")
        .replace(" n't", "n't")
        .replace(" 'm", "'m")
        .replace(" 's", "'s")
        .replace(" 've", "'ve")
        .replace(" 're", "'re")
    )
    return out_string
Clean up a list of simple English tokenization artifacts like spaces before punctuations and
abbreviated forms.

Args:
    out_string (`str`):
        The text to clean up.

Returns:
    `str`: The cleaned-up string.
github-repos
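A quick before/after check of the clean-up function above, assuming the function from the row is in scope; the input sentence is invented.

```python
raw = "Hello , world ! She 's here but she did n't stay ."
print(clean_up_tokenization(raw))
# Hello, world! She's here but she didn't stay.
```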
def get(self, resource_id=None, resource_action=None, resource_cls=None,
        single_resource=False):
    endpoint = self.endpoint

    if not resource_cls:
        resource_cls = self._cls

    if resource_id:
        endpoint = self._build_url(endpoint, resource_id)

    if resource_action:
        endpoint = self._build_url(endpoint, resource_action)

    response = self.api.execute('GET', endpoint)
    if not response.ok:
        raise Error.parse(response.json())

    if resource_id or single_resource:
        return resource_cls.parse(response.json())

    return [resource_cls.parse(resource) for resource in response.json()]
Gets the details for one or more resources by ID

Args:
    cls - gophish.models.Model - The resource class
    resource_id - str - The endpoint (URL path) for the resource
    resource_action - str - An action to perform on the resource
    resource_cls - cls - A class to use for parsing, if different than the
        base resource
    single_resource - bool - An override to tell Gophish that even though
        we aren't requesting a single resource, we expect a single response
        object

Returns:
    One or more instances of cls parsed from the returned JSON
codesearchnet