Columns: code (string, lengths 20 to 4.93k), docstring (string, lengths 33 to 1.27k), source (string, 3 classes).
def configure_logging(verbosity): root = logging.getLogger() formatter = logging.Formatter('%(asctime)s.%(msecs)03d %(levelname).3s %(name)s %(message)s', '%y-%m-%d %H:%M:%S') handler = logging.StreamHandler() handler.setFormatter(formatter) loglevels = [logging.CRITICAL, logging.ERROR, logging.WARNING, logging.INFO, logging.DEBUG] if (verbosity >= len(loglevels)): verbosity = (len(loglevels) - 1) level = loglevels[verbosity] root.setLevel(level) root.addHandler(handler)
Set up the global logging level. Args: verbosity (int): The logging verbosity
codesearchnet
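A self-contained sketch of the verbosity-to-level mapping used in the row above, showing how out-of-range verbosity values are clamped to DEBUG (standard-library logging only; nothing beyond what the sample itself does is assumed):

import logging

loglevels = [logging.CRITICAL, logging.ERROR, logging.WARNING, logging.INFO, logging.DEBUG]
for verbosity in range(7):
    # same clamping rule as configure_logging: anything past the end of the list maps to DEBUG
    level = loglevels[min(verbosity, len(loglevels) - 1)]
    print(verbosity, logging.getLevelName(level))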
def _get_client_address(self, req): try: forwarded_for = req.get_header('X-Forwarded-For', True) return forwarded_for.split(',')[0].strip() except (KeyError, HTTPMissingHeader): return (req.env.get('REMOTE_ADDR') if self.remote_address_fallback else None)
Get address from ``X-Forwarded-For`` header or use remote address. Remote address is used if the ``X-Forwarded-For`` header is not available. Note that it may not be safe to depend on either value without a proper authorization backend. Args: req (falcon.Request): falcon.Request object. Returns: str: client address.
codesearchnet
def clr(M, **kwargs): R = np.zeros(M.shape) Id = [[0, 0] for i in range(M.shape[0])] for i in range(M.shape[0]): mu_i = np.mean(M[(i, :)]) sigma_i = np.std(M[(i, :)]) Id[i] = [mu_i, sigma_i] for i in range(M.shape[0]): for j in range((i + 1), M.shape[0]): z_i = np.max([0, ((M[(i, j)] - Id[i][0]) / Id[i][0])]) z_j = np.max([0, ((M[(i, j)] - Id[j][0]) / Id[j][0])]) R[(i, j)] = np.sqrt(((z_i ** 2) + (z_j ** 2))) R[(j, i)] = R[(i, j)] return R
Implementation of the Context Likelihood or Relatedness Network algorithm. Args: mat (numpy.ndarray): matrix, if it is a square matrix, the program assumes it is a relevance matrix where mat(i,j) represents the similarity content between nodes i and j. Elements of matrix should be non-negative. Returns: mat_nd (numpy.ndarray): Output deconvolved matrix (direct dependency matrix). Its components represent direct edge weights of observed interactions. .. note:: Ref:Jeremiah J. Faith, Boris Hayete, Joshua T. Thaden, Ilaria Mogno, Jamey Wierzbowski, Guillaume Cottarel, Simon Kasif, James J. Collins, and Timothy S. Gardner. Large-scale mapping and validation of escherichia coli transcriptional regulation from a compendium of expression profiles. PLoS Biology, 2007
codesearchnet
def devserver(port, admin_port, clear): admin_port = admin_port or (port + 1) args = [ '--port={}'.format(port), '--admin_port={}'.format(admin_port) ] if clear: args += ['--clear_datastore=yes'] with conf.within_proj_dir(): shell.run('dev_appserver.py . {args}'.format(args=' '.join(args)))
Run devserver. Args: port (int): Port on which the app will be served. admin_port (int): Port on which the admin interface is served. clear (bool): If set to **True**, clear the datastore on startup.
juraj-google-style
def cancel_job(self, job_id=None, job_name=None): return self._delegator.cancel_job(job_id=job_id, job_name = job_name)
Cancel a running job. Args: job_id (str, optional): Identifier of job to be canceled. job_name (str, optional): Name of job to be canceled. Returns: dict: JSON response for the job cancel operation.
juraj-google-style
def get_email_message(self, message_uid, message_type="text/plain"): self._mail.select("inbox") result = self._mail.uid('fetch', message_uid, "(RFC822)") msg = email.message_from_string(result[1][0][1]) try: for part in msg.walk(): if part.get_content_type() == message_type: return part.get_payload(decode=True) except: return msg.get_payload(decode=True)
Fetch contents of email. Args: message_uid (int): IMAP Message UID number. Kwargs: message_type: MIME type to match, e.g. 'text/plain' or 'text/html'
juraj-google-style
def split(self, path: str) -> Tuple[str, str]: raise NotImplementedError
Splits the given path into two parts. Splits the path into a pair (head, tail) such that tail contains the last component of the path and head contains everything up to that. For file-systems other than the local file-system, head should include the prefix. Args: path: path as a string Returns: a pair of path components as strings.
github-repos
def __init__(self, message): super(IndexOutOfBounds, self).__init__( reason=enums.ResultReason.INDEX_OUT_OF_BOUNDS, message=message )
Create an IndexOutOfBounds exception. Args: message (string): A string containing information about the error.
juraj-google-style
def _handle_request(self, request: dict) -> dict: request_body: bytes = request['request_body'] signature_chain_url: str = request['signature_chain_url'] signature: str = request['signature'] alexa_request: dict = request['alexa_request'] if not self._verify_request(signature_chain_url, signature, request_body): return {'error': 'failed certificate/signature check'} timestamp_str = alexa_request['request']['timestamp'] timestamp_datetime = datetime.strptime(timestamp_str, '%Y-%m-%dT%H:%M:%SZ') now = datetime.utcnow() delta = now - timestamp_datetime if now >= timestamp_datetime else timestamp_datetime - now if abs(delta.seconds) > REQUEST_TIMESTAMP_TOLERANCE_SECS: log.error(f'Failed timestamp check for request: {request_body.decode("utf-8", "replace")}') return {'error': 'failed request timestamp check'} conversation_key = alexa_request['session']['user']['userId'] if conversation_key not in self.conversations.keys(): if self.config['multi_instance']: conv_agent = self._init_agent() log.info('New conversation instance level agent initiated') else: conv_agent = self.agent self.conversations[conversation_key] = \ Conversation(config=self.config, agent=conv_agent, conversation_key=conversation_key, self_destruct_callback=lambda: self._del_conversation(conversation_key)) log.info(f'Created new conversation, key: {conversation_key}') conversation = self.conversations[conversation_key] response = conversation.handle_request(alexa_request) return response
Processes Alexa requests from skill server and returns responses to Alexa. Args: request: Dict with Alexa request payload and metadata. Returns: result: Alexa formatted or error response.
juraj-google-style
def _find_suite_classes_in_module(module): test_suites = [] for _, module_member in module.__dict__.items(): if inspect.isclass(module_member): if issubclass(module_member, base_suite.BaseSuite): test_suites.append(module_member) return test_suites
Finds all test suite classes in the given module. Walks through module members and finds all classes that are subclasses of BaseSuite. Args: module: types.ModuleType, the module object in which to find test suite classes. Returns: A list of test suite classes.
github-repos
def __init__( self, maximum_number_of_file_objects=128, maximum_number_of_file_systems=16): super(Context, self).__init__() self._file_object_cache = cache.ObjectsCache( maximum_number_of_file_objects) self._file_system_cache = cache.ObjectsCache( maximum_number_of_file_systems)
Initializes the resolver context object. Args: maximum_number_of_file_objects (Optional[int]): maximum number of file-like objects cached in the context. maximum_number_of_file_systems (Optional[int]): maximum number of file system objects cached in the context.
juraj-google-style
def render_text(text, preformatted=False): return IPython.core.display.HTML(_html.HtmlBuilder.render_text(text, preformatted))
Return text formatted as HTML. Args: text: the text to render preformatted: whether the text should be rendered as preformatted
juraj-google-style
def plot_loss_history(history, figsize=(15, 8)): plt.figure(figsize=figsize) plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.xlabel('Epochs') plt.ylabel('Loss') plt.legend(['Training', 'Validation']) plt.title('Loss over time') plt.show()
Plots the learning history for a Keras model, assuming the validation data was provided to the 'fit' function. Args: history: The return value from the 'fit' function. figsize: The size of the plot.
codesearchnet
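A minimal usage sketch for the row above; FakeHistory is a hypothetical stand-in for the object returned by Keras fit, included only to show the input shape the function expects:

class FakeHistory:
    # mimics keras.callbacks.History: a .history dict with per-epoch metric lists
    def __init__(self):
        self.history = {"loss": [1.0, 0.6, 0.4], "val_loss": [1.1, 0.7, 0.5]}

# plot_loss_history(FakeHistory())  # would draw the training and validation curves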
def validate_primitive_json_representation(desc: descriptor.Descriptor, json_str: str) -> None: pattern = _pattern_for_primitive(desc) if pattern is not None and pattern.fullmatch(json_str) is None: raise fhir_errors.InvalidFhirError(f'Unable to find pattern: {pattern!r}.')
Ensures that json_str matches the associated regex pattern, if one exists. Args: desc: The Descriptor of the FHIR primitive to validate. json_str: The JSON string to validate. Raises: fhir_errors.InvalidFhirError: Raised in the event that pattern is unable to be matched on json_str.
github-repos
def _scalar_field_to_json(field, row_value): converter = _SCALAR_VALUE_TO_JSON_ROW.get(field.field_type) if (converter is None): return row_value return converter(row_value)
Maps a field and value to a JSON-safe value. Args: field (:class:`~google.cloud.bigquery.schema.SchemaField`): The SchemaField to use for type conversion and field name. row_value (any): Value to be converted, based on the field's type. Returns: any: A JSON-serializable object.
codesearchnet
def dtype(self): return self._dtype
The `tf.dtypes.DType` specified by this type for the RaggedTensor. Examples: >>> rt = tf.ragged.constant([["a"], ["b", "c"]], dtype=tf.string) >>> tf.type_spec_from_value(rt).dtype tf.string Returns: A `tf.dtypes.DType` of the values in the RaggedTensor.
github-repos
def _compile_aggregation_expression(self, expr: Expression, scope: Dict[str, TensorFluent], batch_size: Optional[int] = None, noise: Optional[List[tf.Tensor]] = None) -> TensorFluent: etype = expr.etype args = expr.args typed_var_list = args[:-1] vars_list = [var for _, (var, _) in typed_var_list] expr = args[-1] x = self._compile_expression(expr, scope) etype2aggr = { 'sum': x.sum, 'prod': x.prod, 'avg': x.avg, 'maximum': x.maximum, 'minimum': x.minimum, 'exists': x.exists, 'forall': x.forall } if etype[1] not in etype2aggr: raise ValueError('Invalid aggregation expression {}.'.format(expr)) aggr = etype2aggr[etype[1]] fluent = aggr(vars_list=vars_list) return fluent
Compile an aggregation expression `expr` into a TensorFluent in the given `scope` with optional batch size. Args: expr (:obj:`rddl2tf.expr.Expression`): A RDDL aggregation expression. scope (Dict[str, :obj:`rddl2tf.fluent.TensorFluent`]): A fluent scope. batch_size (Optional[int]): The batch size. Returns: :obj:`rddl2tf.fluent.TensorFluent`: The compiled expression as a TensorFluent.
juraj-google-style
def quote(src_string, return_expr=False): node = parse_string(src_string) body = node.body if (len(body) == 1): if (isinstance(body[0], gast.Expr) and (not return_expr)): out = body[0].value else: out = body[0] else: out = node return out
Go from source code to AST nodes. This function returns a tree without enclosing `Module` or `Expr` nodes. Args: src_string: The source code to parse. return_expr: Whether or not to return a containing expression. This can be set to `True` if the result is to be part of a series of statements. Returns: An AST of the given source code.
codesearchnet
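The same unwrapping idea expressed with the standard-library ast module (gast mirrors its node types), as a runnable sketch:

import ast

node = ast.parse("x + 1")
body = node.body
# a single expression with return_expr left False -> return the bare value node
out = body[0].value if (len(body) == 1 and isinstance(body[0], ast.Expr)) else node
print(type(out).__name__)  # BinOp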
def get_loss_func(self, C=1.0, k=1): def lf(x): mu, ln_var = self.encode(x) batchsize = len(mu.data) rec_loss = 0 for l in six.moves.range(k): z = F.gaussian(mu, ln_var) rec_loss += F.bernoulli_nll(x, self.decode(z, sigmoid=False)) \ / (k * batchsize) self.rec_loss = rec_loss self.loss = self.rec_loss + \ C * gaussian_kl_divergence(mu, ln_var) / batchsize return self.loss return lf
Get loss function of VAE. The loss value is equal to ELBO (Evidence Lower Bound) multiplied by -1. Args: C (float): Usually this is 1.0. Can be changed to control the second term of the ELBO bound, which works as regularization. k (int): Number of Monte Carlo samples used in encoded vector.
juraj-google-style
def predict(self, df_data, threshold=0.05, **kwargs): nb_jobs = kwargs.get("nb_jobs", SETTINGS.NB_JOBS) list_nodes = list(df_data.columns.values) if nb_jobs != 1: result_feature_selection = Parallel(n_jobs=nb_jobs)(delayed(self.run_feature_selection) (df_data, node, idx, **kwargs) for idx, node in enumerate(list_nodes)) else: result_feature_selection = [self.run_feature_selection(df_data, node, idx, **kwargs) for idx, node in enumerate(list_nodes)] for idx, i in enumerate(result_feature_selection): try: i.insert(idx, 0) except AttributeError: result_feature_selection[idx] = np.insert(i, idx, 0) matrix_results = np.array(result_feature_selection) matrix_results *= matrix_results.transpose() np.fill_diagonal(matrix_results, 0) matrix_results /= 2 graph = nx.Graph() for (i, j), x in np.ndenumerate(matrix_results): if matrix_results[i, j] > threshold: graph.add_edge(list_nodes[i], list_nodes[j], weight=matrix_results[i, j]) for node in list_nodes: if node not in graph.nodes(): graph.add_node(node) return graph
Predict the skeleton of the graph from raw data. Returns iteratively the feature selection algorithm on each node. Args: df_data (pandas.DataFrame): data to construct a graph from threshold (float): cutoff value for feature selection scores kwargs (dict): additional arguments for algorithms Returns: networkx.Graph: predicted skeleton of the graph.
juraj-google-style
def __init__( self, boltzmann_q_learning, init_state_key ): if isinstance(boltzmann_q_learning, BoltzmannQLearning): self.__boltzmann_q_learning = boltzmann_q_learning else: raise TypeError() self.__init_state_key = init_state_key
Init. Args: boltzmann_q_learning: is-a `BoltzmannQLearning`. init_state_key: First state key.
juraj-google-style
def synthesize(self, duration): sr = self.samplerate.samples_per_second seconds = duration / Seconds(1) samples = np.random.uniform(low=-1., high=1., size=int(sr * seconds)) return AudioSamples(samples, self.samplerate)
Synthesize white noise Args: duration (numpy.timedelta64): The duration of the synthesized sound
juraj-google-style
def get_vis_data_from_string(self, sess, input_string): encoded_inputs = self.encode(input_string) out = sess.run(self.samples, {self.inputs: encoded_inputs}) att_mats = sess.run(self.att_mats, {self.inputs: encoded_inputs, self.targets: np.reshape(out, [1, (- 1), 1, 1])}) output_string = self.decode(out) input_list = self.decode_list(encoded_inputs) output_list = self.decode_list(out) return (output_string, input_list, output_list, att_mats)
Constructs the data needed for visualizing attentions. Args: sess: A tf.Session object. input_string: The input sentence to be translated and visualized. Returns: Tuple of ( output_string: The translated sentence. input_list: Tokenized input sentence. output_list: Tokenized translation. att_mats: Tuple of attention matrices; ( enc_atts: Encoder self attention weights. A list of `num_layers` numpy arrays of size (batch_size, num_heads, inp_len, inp_len) dec_atts: Decoder self attention weights. A list of `num_layers` numpy arrays of size (batch_size, num_heads, out_len, out_len) encdec_atts: Encoder-Decoder attention weights. A list of `num_layers` numpy arrays of size (batch_size, num_heads, out_len, inp_len) )
codesearchnet
def get_url_param(self, index, default=None): params = self.get_url_params() return (params[index] if (index < len(params)) else default)
Return url parameter with given index. Args: - index: starts from zero, and comes after controller and action names in the url.
codesearchnet
def _update_size(self, size, future): with self._size_lock: if size > self._size and future.done: self._size = size
Keep track of the file size during writing. If specified size value is greater than the current size, update the current size using specified value. Used as callback in default "_flush" implementation for files supporting random write access. Args: size (int): Size value. future (concurrent.futures._base.Future): future.
juraj-google-style
def from_raw(self, robj: RawObject) -> RootNode: cooked = self.schema.from_raw(robj) return RootNode(cooked, self.schema, cooked.timestamp)
Create an instance node from a raw data tree. Args: robj: Dictionary representing a raw data tree. Returns: Root instance node.
juraj-google-style
def SetDefaultValue(self, scan_object): if (not isinstance(scan_object, PathFilterScanTreeNode) and not isinstance(scan_object, py2to3.STRING_TYPES)): raise TypeError('Unsupported scan object type.') if self.default_value: raise ValueError('Default value already set.') self.default_value = scan_object
Sets the default (non-match) value. Args: scan_object: a scan object, either a scan tree sub node (instance of PathFilterScanTreeNode) or a string containing a path. Raises: TypeError: if the scan object is of an unsupported type. ValueError: if the default value is already set.
juraj-google-style
def _parse_pem_data(pem_data): sep = '-----BEGIN CERTIFICATE-----' cert_chain = [six.b((sep + s)) for s in pem_data.split(sep)[1:]] certs = [] load_cert = x509.load_pem_x509_certificate for cert in cert_chain: try: certs.append(load_cert(cert, default_backend())) except ValueError: warnings.warn('Certificate is invalid.') return False return certs
Parse PEM-encoded X.509 certificate chain. Args: pem_data: str. PEM file retrieved from SignatureCertChainUrl. Returns: list or bool: If url is valid, returns the certificate chain as a list of cryptography.hazmat.backends.openssl.x509._Certificate certificates where certs[0] is the first certificate in the file; if url is invalid, returns False.
codesearchnet
def parse_json(json_file): if not os.path.exists(json_file): return None try: with open(json_file, "r") as f: info_str = f.readlines() info_str = "".join(info_str) json_info = json.loads(info_str) return unicode2str(json_info) except BaseException as e: logging.error(e.message) return None
Parse a whole json record from the given file. Returns None if the json file does not exist or an exception occurs. Args: json_file (str): File path to be parsed. Returns: A dict of json info.
juraj-google-style
class DepthProPreActResidualLayer(nn.Module): def __init__(self, config): super().__init__() self.use_batch_norm = config.use_batch_norm_in_fusion_residual use_bias_in_fusion_residual = config.use_bias_in_fusion_residual if config.use_bias_in_fusion_residual is not None else not self.use_batch_norm self.activation1 = nn.ReLU() self.convolution1 = nn.Conv2d(config.fusion_hidden_size, config.fusion_hidden_size, kernel_size=3, stride=1, padding=1, bias=use_bias_in_fusion_residual) self.activation2 = nn.ReLU() self.convolution2 = nn.Conv2d(config.fusion_hidden_size, config.fusion_hidden_size, kernel_size=3, stride=1, padding=1, bias=use_bias_in_fusion_residual) if self.use_batch_norm: self.batch_norm1 = nn.BatchNorm2d(config.fusion_hidden_size) self.batch_norm2 = nn.BatchNorm2d(config.fusion_hidden_size) def forward(self, hidden_state: torch.Tensor) -> torch.Tensor: residual = hidden_state hidden_state = self.activation1(hidden_state) hidden_state = self.convolution1(hidden_state) if self.use_batch_norm: hidden_state = self.batch_norm1(hidden_state) hidden_state = self.activation2(hidden_state) hidden_state = self.convolution2(hidden_state) if self.use_batch_norm: hidden_state = self.batch_norm2(hidden_state) return hidden_state + residual
ResidualConvUnit, pre-activate residual unit. Args: config (`[DepthProConfig]`): Model configuration class defining the model architecture.
github-repos
def lint(self, content, **kwargs): post_data = {'content': content} data = self.http_post('/ci/lint', post_data=post_data, **kwargs) return (data['status'] == 'valid', data['errors'])
Validate a gitlab CI configuration. Args: content (txt): The .gitlab-ci.yml content **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabVerifyError: If the validation could not be done Returns: tuple: (True, []) if the file is valid, (False, errors(list)) otherwise
juraj-google-style
def merge_default_with_oplog(graph, op_log=None, run_meta=None, add_trace=True, add_trainable_var=True): if not graph and (not context.executing_eagerly()): graph = ops.get_default_graph() tmp_op_log = tfprof_log_pb2.OpLogProto() if not graph: return tmp_op_log logged_ops, string_to_id = _get_logged_ops(graph, run_meta, add_trace=add_trace, add_trainable_var=add_trainable_var) if not op_log: tmp_op_log.log_entries.extend(logged_ops.values()) else: all_ops = {} for entry in op_log.log_entries: all_ops[entry.name] = entry for op_name, entry in logged_ops.items(): if op_name in all_ops: all_ops[op_name].types.extend(entry.types) if entry.float_ops > 0 and all_ops[op_name].float_ops == 0: all_ops[op_name].float_ops = entry.float_ops if entry.code_def.traces and (not all_ops[op_name].code_def.traces): all_ops[op_name].code_def.MergeFrom(entry.code_def) else: all_ops[op_name] = entry tmp_op_log.log_entries.extend(all_ops.values()) for s, i in string_to_id.items(): tmp_op_log.id_to_string[i] = s return tmp_op_log
Merge the tfprof default extra info with caller's op_log. Args: graph: tf.Graph. If None and eager execution is not enabled, use default graph. op_log: OpLogProto proto. run_meta: RunMetadata proto used to complete shape information. add_trace: Whether to add op trace information. add_trainable_var: Whether to assign tf.compat.v1.trainable_variables() op type '_trainable_variables'. Returns: tmp_op_log: Merged OpLogProto proto.
github-repos
def _get_outputs_tensor_info_from_meta_graph_def(meta_graph_def, signature_def_key): return meta_graph_def.signature_def[signature_def_key].outputs
Gets TensorInfos for all outputs of the SignatureDef. Returns a dictionary that maps each output key to its TensorInfo for the given signature_def_key in the meta_graph_def. Args: meta_graph_def: MetaGraphDef protocol buffer with the SignatureDef map to look up signature_def_key. signature_def_key: A SignatureDef key string. Returns: A dictionary that maps output tensor keys to TensorInfos.
github-repos
def set_setting(self, setting, value): if (setting not in (self._expected_settings + self._optional_settings)): raise exceptions.ConfigurationError("Setting '{0}' is not supported.".format(setting)) if (setting == 'hostname'): self._set_hostname(value) elif (setting == 'port'): self._set_port(value) elif (setting == 'certificate_path'): self._set_certificate_path(value) elif (setting == 'key_path'): self._set_key_path(value) elif (setting == 'ca_path'): self._set_ca_path(value) elif (setting == 'auth_suite'): self._set_auth_suite(value) elif (setting == 'policy_path'): self._set_policy_path(value) elif (setting == 'enable_tls_client_auth'): self._set_enable_tls_client_auth(value) elif (setting == 'tls_cipher_suites'): self._set_tls_cipher_suites(value) elif (setting == 'logging_level'): self._set_logging_level(value) else: self._set_database_path(value)
Set a specific setting value. This will overwrite the current setting value for the specified setting. Args: setting (string): The name of the setting to set (e.g., 'certificate_path', 'hostname'). Required. value (misc): The value of the setting to set. Type varies based on setting. Required. Raises: ConfigurationError: Raised if the setting is not supported or if the setting value is invalid.
codesearchnet
def upload(self, filename, filedata=None, filepath=None, **kwargs): if ((filepath is None) and (filedata is None)): raise GitlabUploadError('No file contents or path specified') if ((filedata is not None) and (filepath is not None)): raise GitlabUploadError('File contents and file path specified') if (filepath is not None): with open(filepath, 'rb') as f: filedata = f.read() url = ('/projects/%(id)s/uploads' % {'id': self.id}) file_info = {'file': (filename, filedata)} data = self.manager.gitlab.http_post(url, files=file_info) return {'alt': data['alt'], 'url': data['url'], 'markdown': data['markdown']}
Upload the specified file into the project. .. note:: Either ``filedata`` or ``filepath`` *MUST* be specified. Args: filename (str): The name of the file being uploaded filedata (bytes): The raw data of the file being uploaded filepath (str): The path to a local file to upload (optional) Raises: GitlabConnectionError: If the server cannot be reached GitlabUploadError: If the file upload fails GitlabUploadError: If ``filedata`` and ``filepath`` are not specified GitlabUploadError: If both ``filedata`` and ``filepath`` are specified Returns: dict: A ``dict`` with the keys: * ``alt`` - The alternate text for the upload * ``url`` - The direct url to the uploaded file * ``markdown`` - Markdown for the uploaded file
codesearchnet
def emboss_pepstats_parser(infile): with open(infile) as f: lines = f.read().split('\n') info_dict = {} for l in lines[38:47]: info = l.split('\t') cleaninfo = list(filter((lambda x: (x != '')), info)) prop = cleaninfo[0] num = cleaninfo[2] percent = (float(cleaninfo[(- 1)]) / float(100)) info_dict[(('mol_percent_' + prop.lower()) + '-pepstats')] = percent return info_dict
Get dictionary of pepstats results. Args: infile: Path to pepstats outfile Returns: dict: Parsed information from pepstats TODO: Only currently parsing the bottom of the file for percentages of properties.
codesearchnet
def has_ontime_pane(self): pass
Whether this trigger creates an empty pane even if there are no elements. Returns: True if this trigger guarantees that there will always be an ON_TIME pane even if there are no elements in that pane.
github-repos
def determine_intent(self, utterance, num_results=1, include_tags=False, context_manager=None): parser = Parser(self.tokenizer, self.tagger) parser.on('tagged_entities', (lambda result: self.emit("tagged_entities", result))) context = [] if context_manager: context = context_manager.get_context() for result in parser.parse(utterance, N=num_results, context=context): self.emit("parse_result", result) remaining_context = self.__get_unused_context(result, context) best_intent, tags = self.__best_intent(result, remaining_context) if best_intent and best_intent.get('confidence', 0.0) > 0: if include_tags: best_intent['__tags__'] = tags yield best_intent
Given an utterance, provide a valid intent. Args: utterance(str): an ascii or unicode string representing natural language speech include_tags(bool): includes the parsed tags (including position and confidence) as part of result context_manager(list): a context manager to provide context to the utterance num_results(int): a maximum number of results to be returned. Returns: A generator that yields dictionaries.
juraj-google-style
def create_struct(name): sid = idc.GetStrucIdByName(name) if (sid != idaapi.BADADDR): raise exceptions.SarkStructAlreadyExists('A struct names {!r} already exists.'.format(name)) sid = idc.AddStrucEx((- 1), name, 0) if (sid == idaapi.BADADDR): raise exceptions.SarkStructCreationFailed('Struct creation failed.') return sid
Create a structure. Args: name: The structure's name Returns: The struct ID Raises: exceptions.SarkStructAlreadyExists: A struct with the same name already exists exceptions.SarkStructCreationFailed: Struct creation failed
codesearchnet
def update_configuration(self, timeout=(- 1)): uri = '{}/configuration'.format(self.data['uri']) return self._helper.update(None, uri=uri, timeout=timeout)
Asynchronously applies or re-applies the logical interconnect configuration to all managed interconnects. Args: timeout: Timeout in seconds. Wait for task completion by default. The timeout does not abort the operation in OneView; it just stops waiting for its completion. Returns: dict: Logical Interconnect.
codesearchnet
def block(inputs, activation='swish', drop_rate=0.0, name='', filters_in=32, filters_out=16, kernel_size=3, strides=1, expand_ratio=1, se_ratio=0.0, id_skip=True): bn_axis = 3 if backend.image_data_format() == 'channels_last' else 1 filters = filters_in * expand_ratio if expand_ratio != 1: x = layers.Conv2D(filters, 1, padding='same', use_bias=False, kernel_initializer=CONV_KERNEL_INITIALIZER, name=name + 'expand_conv')(inputs) x = layers.BatchNormalization(axis=bn_axis, name=name + 'expand_bn')(x) x = layers.Activation(activation, name=name + 'expand_activation')(x) else: x = inputs if strides == 2: x = layers.ZeroPadding2D(padding=imagenet_utils.correct_pad(x, kernel_size), name=name + 'dwconv_pad')(x) conv_pad = 'valid' else: conv_pad = 'same' x = layers.DepthwiseConv2D(kernel_size, strides=strides, padding=conv_pad, use_bias=False, depthwise_initializer=CONV_KERNEL_INITIALIZER, name=name + 'dwconv')(x) x = layers.BatchNormalization(axis=bn_axis, name=name + 'bn')(x) x = layers.Activation(activation, name=name + 'activation')(x) if 0 < se_ratio <= 1: filters_se = max(1, int(filters_in * se_ratio)) se = layers.GlobalAveragePooling2D(name=name + 'se_squeeze')(x) if bn_axis == 1: se_shape = (filters, 1, 1) else: se_shape = (1, 1, filters) se = layers.Reshape(se_shape, name=name + 'se_reshape')(se) se = layers.Conv2D(filters_se, 1, padding='same', activation=activation, kernel_initializer=CONV_KERNEL_INITIALIZER, name=name + 'se_reduce')(se) se = layers.Conv2D(filters, 1, padding='same', activation='sigmoid', kernel_initializer=CONV_KERNEL_INITIALIZER, name=name + 'se_expand')(se) x = layers.multiply([x, se], name=name + 'se_excite') x = layers.Conv2D(filters_out, 1, padding='same', use_bias=False, kernel_initializer=CONV_KERNEL_INITIALIZER, name=name + 'project_conv')(x) x = layers.BatchNormalization(axis=bn_axis, name=name + 'project_bn')(x) if id_skip and strides == 1 and (filters_in == filters_out): if drop_rate > 0: x = layers.Dropout(drop_rate, noise_shape=(None, 1, 1, 1), name=name + 'drop')(x) x = layers.add([x, inputs], name=name + 'add') return x
An inverted residual block. Args: inputs: input tensor. activation: activation function. drop_rate: float between 0 and 1, fraction of the input units to drop. name: string, block label. filters_in: integer, the number of input filters. filters_out: integer, the number of output filters. kernel_size: integer, the dimension of the convolution window. strides: integer, the stride of the convolution. expand_ratio: integer, scaling coefficient for the input filters. se_ratio: float between 0 and 1, fraction to squeeze the input filters. id_skip: boolean. Returns: output tensor for the block.
github-repos
def validate_full_name(self, full_name, timeout=-1): uri = self.URI + '/validateUserName/' + full_name return self._client.create_with_zero_body(uri=uri, timeout=timeout)
Verifies if a fullName is already in use. Args: full_name: The fullName to be verified. timeout: Timeout in seconds. Wait for task completion by default. The timeout does not abort the operation in OneView, just stops waiting for its completion. Returns: True if full name is in use, False if it is not.
juraj-google-style
def get_tensors_by_names(names): ret = [] G = tfv1.get_default_graph() for n in names: (opn, varn) = get_op_tensor_name(n) ret.append(G.get_tensor_by_name(varn)) return ret
Get a list of tensors in the default graph by a list of names. Args: names (list):
codesearchnet
def unravel_staff(staff_data): staff_list = [] for role, staff_members in staff_data['data'].items(): for member in staff_members: member['role'] = role staff_list.append(member) return staff_list
Unravels staff role dictionary into flat list of staff members with ``role`` set as an attribute. Args: staff_data(dict): Data returned from :py:meth:`get_staff` Returns: list: Flat list of staff members with ``role`` set to role type (e.g. course_admin, instructor, TA, etc.)
juraj-google-style
def pad_nested_sequences(sequences, dtype='int32'): max_sent_len = 0 max_word_len = 0 for sent in sequences: max_sent_len = max(len(sent), max_sent_len) for word in sent: max_word_len = max(len(word), max_word_len) x = np.zeros((len(sequences), max_sent_len, max_word_len)).astype(dtype) for i, sent in enumerate(sequences): for j, word in enumerate(sent): x[i, j, :len(word)] = word return x
Pads nested sequences to the same length. This function transforms a list of list sequences into a 3D Numpy array of shape `(num_samples, max_sent_len, max_word_len)`. Args: sequences: List of lists of lists. dtype: Type of the output sequences. # Returns x: Numpy array.
juraj-google-style
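A small, self-contained illustration of the padded shape the function above produces (NumPy only; the values are arbitrary):

import numpy as np

sequences = [[[1, 2], [3]],   # sentence of 2 words
             [[4, 5, 6]]]     # sentence of 1 word
# pad_nested_sequences(sequences) returns an array of shape
# (num_samples=2, max_sent_len=2, max_word_len=3), zero-padded:
expected = np.array([[[1, 2, 0], [3, 0, 0]],
                     [[4, 5, 6], [0, 0, 0]]], dtype="int32")
print(expected.shape)  # (2, 2, 3)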
def str2tuple(str_in): tuple_out = safe_eval(str_in) if (not isinstance(tuple_out, tuple)): tuple_out = None return tuple_out
Extracts a tuple from a string. Args: str_in (string) that contains python tuple Returns: (tuple) or None if no valid tuple was found Raises: -
codesearchnet
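A runnable stand-in for the row above, using ast.literal_eval in place of the undefined safe_eval helper (an assumption; the original helper may handle malformed input differently):

from ast import literal_eval

def str2tuple_demo(str_in):
    try:
        value = literal_eval(str_in)
    except (ValueError, SyntaxError):
        return None
    return value if isinstance(value, tuple) else None

print(str2tuple_demo("(1, 2, 3)"))  # (1, 2, 3)
print(str2tuple_demo("[1, 2, 3]"))  # None, not a tuple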
class Swinv2PatchMerging(nn.Module): def __init__(self, input_resolution: Tuple[int], dim: int, norm_layer: nn.Module=nn.LayerNorm) -> None: super().__init__() self.input_resolution = input_resolution self.dim = dim self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False) self.norm = norm_layer(2 * dim) def maybe_pad(self, input_feature, height, width): should_pad = height % 2 == 1 or width % 2 == 1 if should_pad: pad_values = (0, 0, 0, width % 2, 0, height % 2) input_feature = nn.functional.pad(input_feature, pad_values) return input_feature def forward(self, input_feature: torch.Tensor, input_dimensions: Tuple[int, int]) -> torch.Tensor: height, width = input_dimensions batch_size, dim, num_channels = input_feature.shape input_feature = input_feature.view(batch_size, height, width, num_channels) input_feature = self.maybe_pad(input_feature, height, width) input_feature_0 = input_feature[:, 0::2, 0::2, :] input_feature_1 = input_feature[:, 1::2, 0::2, :] input_feature_2 = input_feature[:, 0::2, 1::2, :] input_feature_3 = input_feature[:, 1::2, 1::2, :] input_feature = torch.cat([input_feature_0, input_feature_1, input_feature_2, input_feature_3], -1) input_feature = input_feature.view(batch_size, -1, 4 * num_channels) input_feature = self.reduction(input_feature) input_feature = self.norm(input_feature) return input_feature
Patch Merging Layer. Args: input_resolution (`Tuple[int]`): Resolution of input feature. dim (`int`): Number of input channels. norm_layer (`nn.Module`, *optional*, defaults to `nn.LayerNorm`): Normalization layer class.
github-repos
def setData(self, data, setName=None): if not isinstance(data, DataFrame): if pd is not None and isinstance(data, pd.DataFrame): data = DataFrame.fromPandas(data) if setName is None: lock_and_call( lambda: self._impl.setData(data._impl), self._lock ) else: lock_and_call( lambda: self._impl.setData(data._impl, setName), self._lock )
Assign the data in the dataframe to the AMPL entities with the names corresponding to the column names. Args: data: The dataframe containing the data to be assigned. setName: The name of the set to which the indices values of the DataFrame are to be assigned. Raises: AMPLException: if the data assignment procedure was not successful.
juraj-google-style
def glue_convert_examples_to_features(examples: Union[List[InputExample], 'tf.data.Dataset'], tokenizer: PreTrainedTokenizer, max_length: Optional[int]=None, task=None, label_list=None, output_mode=None): warnings.warn(DEPRECATION_WARNING.format('function'), FutureWarning) if is_tf_available() and isinstance(examples, tf.data.Dataset): if task is None: raise ValueError('When calling glue_convert_examples_to_features from TF, the task parameter is required.') return _tf_glue_convert_examples_to_features(examples, tokenizer, max_length=max_length, task=task) return _glue_convert_examples_to_features(examples, tokenizer, max_length=max_length, task=task, label_list=label_list, output_mode=output_mode)
Loads a data file into a list of `InputFeatures` Args: examples: List of `InputExamples` or `tf.data.Dataset` containing the examples. tokenizer: Instance of a tokenizer that will tokenize the examples max_length: Maximum example length. Defaults to the tokenizer's max_len task: GLUE task label_list: List of labels. Can be obtained from the processor using the `processor.get_labels()` method output_mode: String indicating the output mode. Either `regression` or `classification` Returns: If the `examples` input is a `tf.data.Dataset`, will return a `tf.data.Dataset` containing the task-specific features. If the input is a list of `InputExamples`, will return a list of task-specific `InputFeatures` which can be fed to the model.
github-repos
def import_from_file_path(path): if not os.path.exists(path): raise OSError('Given file path does not exist.') module_name = os.path.basename(path) spec = util.spec_from_file_location(module_name, path) if spec is None: raise OSError('Unable to load module from specified path.') module = util.module_from_spec(spec) spec.loader.exec_module(module) return (module, module_name)
Performs a module import given the filename. Args: path (str): the path to the file to be imported. Raises: IOError: if the given file does not exist or importlib fails to load it. Returns: Tuple[ModuleType, str]: returns the imported module and the module name, usually extracted from the path itself.
github-repos
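A usage sketch for the row above: write a throwaway module to disk and import it by path (file and variable names here are hypothetical):

import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "demo_mod.py")
with open(path, "w") as f:
    f.write("GREETING = 'hello'\n")
# module, name = import_from_file_path(path)
# module.GREETING == 'hello'; name == 'demo_mod.py'
os.remove(path)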
def _get_non_string_match(self, key): expression = '(?:\\s*)'.join(['^', 'define', '\\(', "'{}'".format(key), ',', '(.*)', '\\)', ';']) pattern = re.compile(expression, re.MULTILINE) return pattern.search(self._content)
Gets a MatchObject for the given key, assuming a non-string value. Args: key (str): Key of the property to look-up. Return: MatchObject: The discovered match.
codesearchnet
def stat(filename): return stat_v2(filename)
Returns file statistics for a given path. Args: filename: string, path to a file Returns: FileStatistics struct that contains information about the path Raises: errors.OpError: If the operation fails.
github-repos
def add_notification_listener(self, notification_type, notification_callback): if notification_type not in self.notifications: self.notifications[notification_type] = [(self.notification_id, notification_callback)] else: if reduce(lambda a, b: a + 1, filter(lambda tup: tup[1] == notification_callback, self.notifications[notification_type]), 0) > 0: return -1 self.notifications[notification_type].append((self.notification_id, notification_callback)) ret_val = self.notification_id self.notification_id += 1 return ret_val
Add a notification callback to the notification center. Args: notification_type: A string representing the notification type from .helpers.enums.NotificationTypes notification_callback: closure of function to call when event is triggered. Returns: Integer notification id used to remove the notification or -1 if the notification has already been added.
juraj-google-style
def destroy_s3(app='', env='dev', **_): session = boto3.Session(profile_name=env) client = session.resource('s3') generated = get_details(app=app, env=env) archaius = generated.archaius() bucket = client.Bucket(archaius['bucket']) for item in bucket.objects.filter(Prefix=archaius['path']): item.Object().delete() LOG.info('Deleted: %s/%s', item.bucket_name, item.key) return True
Destroy S3 Resources for _app_ in _env_. Args: app (str): Application name env (str): Deployment environment/account name Returns: boolean: True if destroyed successfully
juraj-google-style
def new_from_json(cls, json_data): json_data_as_unicode = _helpers._from_bytes(json_data) data = json.loads(json_data_as_unicode) module_name = data['_module'] try: module_obj = __import__(module_name) except ImportError: module_name = module_name.replace('.googleapiclient', '') module_obj = __import__(module_name) module_obj = __import__(module_name, fromlist=module_name.split('.')[:-1]) kls = getattr(module_obj, data['_class']) return kls.from_json(json_data_as_unicode)
Utility class method to instantiate a Credentials subclass from JSON. Expects the JSON string to have been produced by to_json(). Args: json_data: string or bytes, JSON from to_json(). Returns: An instance of the subclass of Credentials that was serialized with to_json().
juraj-google-style
def add_sched_block_instance(self, config_dict): schema = self._get_schema() LOG.debug('Adding SBI with config: %s', config_dict) validate(config_dict, schema) updated_block = self._add_status(config_dict) scheduling_block_data, processing_block_data = \ self._split_sched_block_instance(updated_block) name = "scheduling_block:" + updated_block["id"] self._db.set_specified_values(name, scheduling_block_data) self._db.push_event(self.scheduling_event_name, updated_block["status"], updated_block["id"]) for value in processing_block_data: name = ("scheduling_block:" + updated_block["id"] + ":processing_block:" + value['id']) self._db.set_specified_values(name, value) self._db.push_event(self.processing_event_name, value["status"], value["id"])
Add Scheduling Block to the database. Args: config_dict (dict): SBI configuration
juraj-google-style
def double_width(self, action): if action == 'on': action = '1' elif action == 'off': action = '0' else: raise RuntimeError('Invalid action for function doubleWidth. Options are on and off') self.send(chr(27)+'W'+action)
Enable/cancel doublewidth printing Args: action: Enable or disable doublewidth printing. Options are 'on' and 'off' Returns: None Raises: RuntimeError: Invalid action.
juraj-google-style
def installed(name, source): ret = {'name': name, 'changes': {}, 'result': False, 'comment': ''} if not name: raise SaltInvocationError('Must specify a KB "name"') if not source: raise SaltInvocationError('Must specify a "source" file to install') if __salt__['wusa.is_installed'](name): ret['result'] = True ret['comment'] = '{0} already installed'.format(name) return ret if __opts__['test'] is True: ret['result'] = None ret['comment'] = '{0} would be installed'.format(name) ret['result'] = None return ret cached_source_path = __salt__['cp.cache_file'](path=source, saltenv=__env__) if not cached_source_path: msg = 'Unable to cache {0} from saltenv "{1}"'.format( salt.utils.url.redact_http_basic_auth(source), __env__) ret['comment'] = msg return ret __salt__['wusa.install'](cached_source_path) if __salt__['wusa.is_installed'](name): ret['comment'] = '{0} was installed'.format(name) ret['changes'] = {'old': False, 'new': True} ret['result'] = True else: ret['comment'] = '{0} failed to install'.format(name) return ret
Ensure an update is installed on the minion Args: name(str): Name of the Windows KB ("KB123456") source (str): Source of .msu file corresponding to the KB Example: .. code-block:: yaml KB123456: wusa.installed: - source: salt://kb123456.msu
juraj-google-style
def __init__(self, _lenient=False, **kwds): self._verify_keys(kwds, _lenient) self._set_values(kwds, _lenient)
Init. Args: _lenient: When true, no option is required. **kwds: keyword arguments for options and their values.
juraj-google-style
async def forget_ticket(self, request): session = (await get_session(request)) session.pop(self.cookie_name, '')
Called to forget the ticket data for a request. Args: request: aiohttp Request object.
codesearchnet
def infer_steps_for_dataset(model, dataset, steps, epochs=1, steps_name='steps'): assert isinstance(dataset, data_types.DatasetV2) if model._in_multi_worker_mode() and dataset.options().experimental_distribute.auto_shard_policy != options_lib.AutoShardPolicy.OFF: return None size = backend.get_value(cardinality.cardinality(dataset)) if size == cardinality.INFINITE and steps is None: raise ValueError('When passing an infinitely repeating dataset, you must specify the `%s` argument.' % (steps_name,)) if size >= 0: if steps is not None and steps * epochs > size: if epochs > 1: raise ValueError('The dataset you passed contains %s batches, but you passed `epochs=%s` and `%s=%s`, which is a total of %s steps. We cannot draw that many steps from this dataset. We suggest to set `%s=%s`.' % (size, epochs, steps_name, steps, steps * epochs, steps_name, size // epochs)) else: raise ValueError('The dataset you passed contains %s batches, but you passed `%s=%s`. We cannot draw that many steps from this dataset. We suggest to set `%s=%s`.' % (size, steps_name, steps, steps_name, size)) if steps is None: if size >= 0: return size return None return steps
Infers steps_per_epoch needed to loop through a dataset. Args: model: Keras model instance. dataset: Input data of type tf.data.Dataset. steps: Number of steps to draw from the dataset (may be None if unknown). epochs: Number of times to iterate over the dataset. steps_name: The string name of the steps argument, either `steps`, `validation_steps`, or `steps_per_epoch`. Only used for error message formatting. Returns: Integer or `None`. Inferred number of steps to loop through the dataset. `None` is returned if 1) the size of the dataset is unknown and `steps` was not specified, or 2) this is multi-worker training and auto sharding is enabled. Raises: ValueError: In case of invalid argument values.
github-repos
def plot_time_elapsed(filename, elapsed=False, unit='s', plot_kwargs=None): import matplotlib.pyplot as plt if (plot_kwargs is None): plot_kwargs = {} data_column = (3 if elapsed else 1) data = np.genfromtxt(filename, dtype='i8,f4', usecols=(0, data_column), names=['k', 'v']) index = data['k'] values = data['v'] if (unit == 's'): pass elif (unit == 'm'): values /= 60 elif (unit == 'h'): values /= 3600 elif (unit == 'd'): values /= (3600 * 24) else: raise ValueError('The argument `unit` must be chosen from {s|m|h|d}.') plt.plot(index, values, **plot_kwargs)
Plot series data from MonitorTimeElapsed output text file. Args: filename (str): Path to *.series.txt file produced by :obj:`~nnabla.MonitorSeries` class. elapsed (bool): If ``True``, it plots the total elapsed time. unit (str): Time unit chosen from ``'s'``, ``'m'``, ``'h'``, or ``'d'``. plot_kwargs (dict, optional): Keyword arguments passed to :func:`matplotlib.pyplot.plot`. Note: matplotlib package is required.
codesearchnet
def flatten(schedule: ScheduleComponent, name: str = None) -> Schedule: if name is None: name = schedule.name return Schedule(*schedule.instructions, name=name)
Create a flattened schedule. Args: schedule: Schedules to flatten name: Name of the new schedule. Defaults to first element of `schedules`
juraj-google-style
def SetTimelineOwner(self, username): self._timeline_owner = username logger.info('Owner of the timeline: {0!s}'.format(self._timeline_owner))
Sets the username of the user that should own the timeline. Args: username (str): username.
juraj-google-style
def to_las3(self, use_descriptions=False, dlm=',', source='Striplog'): data = self.to_csv(use_descriptions=use_descriptions, dlm=dlm, header=False) return templates.section.format(name='Lithology', short='LITH', source=source, data=data)
Returns an LAS 3.0 section string. Args: use_descriptions (bool): Whether to use descriptions instead of summaries, if available. dlm (str): The delimiter. source (str): The source of the data. Returns: str: A string forming the Lithology section of an LAS3 file.
codesearchnet
def _initialize_physical_devices(self, reinitialize=False): with self._device_lock: if not reinitialize and self._physical_devices is not None: return devs = pywrap_tfe.TF_ListPhysicalDevices() self._physical_devices = [PhysicalDevice(name=d.decode(), device_type=d.decode().split(':')[1]) for d in devs] self._physical_device_to_index = {p: i for i, p in enumerate(self._physical_devices)} pluggable_devs = pywrap_tfe.TF_ListPluggablePhysicalDevices() self._pluggable_devices = [PhysicalDevice(name=d.decode(), device_type=d.decode().split(':')[1]) for d in pluggable_devs] self._visible_device_list = list(self._physical_devices) self._memory_growth_map = {d: None for d in self._physical_devices if d.device_type == 'GPU' or d in self._pluggable_devices} self._import_config()
Gets local devices visible to the system. Args: reinitialize: If True, reinitializes self._physical_devices so that dynamic registered devices will also be visible to the python front-end.
github-repos
def related(self, *, exclude_self=False): manager = type(self)._default_manager queryset = manager.related_to(self) if exclude_self: queryset = queryset.exclude(id=self.id) return queryset
Get a QuerySet for all trigger log objects for the same connected model. Args: exclude_self (bool): Whether to exclude this log object from the result list
juraj-google-style
def __init__(self, value): if not (isinstance(value, tensor.Tensor) and value.dtype.is_floating): raise ValueError('Regression output value must be a float32 Tensor; got {}'.format(value)) self._value = value
Constructor for `RegressionOutput`. Args: value: a float `Tensor` giving the predicted values. Required. Raises: ValueError: if the value is not a `Tensor` with dtype tf.float32.
github-repos
def _validate_alias_command(alias_command): if (not alias_command): raise CLIError(EMPTY_ALIAS_ERROR) split_command = shlex.split(alias_command) boundary_index = len(split_command) for (i, subcommand) in enumerate(split_command): if ((not re.match('^[a-z]', subcommand.lower())) or (i > COLLISION_CHECK_LEVEL_DEPTH)): boundary_index = i break command_to_validate = ' '.join(split_command[:boundary_index]).lower() for command in azext_alias.cached_reserved_commands: if re.match('([a-z\\-]*\\s)*{}($|\\s)'.format(command_to_validate), command): return _validate_positional_arguments(shlex.split(alias_command))
Check if the alias command is valid. Args: alias_command: The command to validate.
codesearchnet
def genfile(*paths): path = genpath(*paths) gendir(os.path.dirname(path)) if not os.path.isfile(path): return io.open(path, 'w+b') return io.open(path, 'r+b')
Create or open ( for read/write ) a file path join. Args: *paths: A list of paths to join together to make the file. Notes: If the file already exists, the fd returned is opened in ``r+b`` mode. Otherwise, the fd is opened in ``w+b`` mode. Returns: io.BufferedRandom: A file-object which can be read/written to.
juraj-google-style
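The mode-selection branch from genfile, shown as a self-contained snippet (the path below is a hypothetical temp file):

import io
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "genfile_demo.bin")
mode = "r+b" if os.path.isfile(path) else "w+b"  # same branch genfile takes
with io.open(path, mode) as fd:
    fd.write(b"hello")
os.remove(path)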
def _build_udf(name, code, return_type, params, language, imports): params = ','.join([('%s %s' % named_param) for named_param in params]) imports = ','.join([('library="%s"' % i) for i in imports]) if (language.lower() == 'sql'): udf = (((('CREATE TEMPORARY FUNCTION {name} ({params})\n' + 'RETURNS {return_type}\n') + 'AS (\n') + '{code}\n') + ');') else: udf = (((((((('CREATE TEMPORARY FUNCTION {name} ({params})\n' + 'RETURNS {return_type}\n') + 'LANGUAGE {language}\n') + 'AS \n') + 'OPTIONS (\n') + '{imports}\n') + ');') return udf.format(name=name, params=params, return_type=return_type, language=language, code=code, imports=imports)
Creates the UDF part of a BigQuery query using its pieces Args: name: the name of the javascript function code: function body implementing the logic. return_type: BigQuery data type of the function return. See supported data types in the BigQuery docs params: dictionary of parameter names and types language: see list of supported languages in the BigQuery docs imports: a list of GCS paths containing further support code.
codesearchnet
def __call__(self, shape, dtype=None): dtype = tf.as_dtype(dtype or tf.keras.backend.floatx()) if isinstance(shape, tf.TensorShape): shape_dtype = tf.int32 shape_ = np.int32(shape) else: if not tf.is_tensor(shape): shape = tf.convert_to_tensor( value=shape, dtype_hint=tf.int32, name='shape') shape_dtype = shape.dtype.base_dtype shape_ = tf.get_static_value(shape, partial=True) sizes_ = tf.get_static_value(self.sizes) if sizes_ is not None: sizes_ = np.array(sizes_, shape_dtype.as_numpy_dtype) assertions = [] message = 'Rightmost dimension of shape must equal `sum(sizes)`.' n = shape[-1] if shape_ is None or shape_[-1] is None else shape_[-1] if sizes_ is not None and not tf.is_tensor(n): if sum(sizes_) != n: raise ValueError(message) elif self.validate_args: assertions.append(tf.compat.v1.assert_equal( shape[-1], tf.reduce_sum(input_tensor=self.sizes), message=message)) s = (shape[:-1] if shape_ is None or any(s is None for s in shape_[:-1]) else shape_[:-1]) if sizes_ is not None and isinstance(s, (np.ndarray, np.generic)): return tf.concat([ tf.keras.initializers.get(init)(np.concatenate([ s, np.array([e], shape_dtype.as_numpy_dtype)], axis=-1), dtype) for init, e in zip(self.initializers, sizes_.tolist()) ], axis=-1) sizes = tf.split(self.sizes, len(self.initializers)) return tf.concat([ tf.keras.initializers.get(init)(tf.concat([s, e], axis=-1), dtype) for init, e in zip(self.initializers, sizes) ], axis=-1)
Returns a tensor object initialized as specified by the initializer. Args: shape: Shape of the tensor. dtype: Optional dtype of the tensor. If not provided will return tensor of `tf.float32`.
juraj-google-style
def read_knmi_dataset(directory): filemask = '%s*.txt' % directory filelist = glob.glob(filemask) columns_hourly = ['temp', 'precip', 'glob', 'hum', 'wind', 'ssd'] ts = pd.DataFrame(columns=columns_hourly) first_call = True for file_i in filelist: print(file_i) current = read_single_knmi_file(file_i) if(first_call): ts = current first_call = False else: ts = pd.concat([ts, current]) return ts
Reads files from a directory and merges the time series. Please note: For each station, a separate directory must be provided! Data availability: www.knmi.nl/nederland-nu/klimatologie/uurgegevens Args: directory: directory including the files Returns: pandas data frame including the time series
juraj-google-style
def get_auth_header(self, user_payload): auth_token = self.get_auth_token(user_payload) return '{auth_header_prefix} {auth_token}'.format( auth_header_prefix=self.auth_header_prefix, auth_token=auth_token )
Returns the value for authorization header Args: user_payload(dict, required): A `dict` containing required information to create authentication token
juraj-google-style
def stream(self, accountID, **kwargs): request = Request('GET', '/v3/accounts/{accountID}/transactions/stream') request.set_path_param('accountID', accountID) request.set_stream(True) class Parser(): def __init__(self, ctx): self.ctx = ctx def __call__(self, line): j = json.loads(line.decode('utf-8')) type = j.get('type') if (type is None): return ('unknown', j) elif (type == 'HEARTBEAT'): return ('transaction.TransactionHeartbeat', self.ctx.transaction.TransactionHeartbeat.from_dict(j, self.ctx)) transaction = self.ctx.transaction.Transaction.from_dict(j, self.ctx) return ('transaction.Transaction', transaction) request.set_line_parser(Parser(self.ctx)) response = self.ctx.request(request) return response
Get a stream of Transactions for an Account starting from when the request is made. Args: accountID: Account Identifier Returns: v20.response.Response containing the results from submitting the request
codesearchnet
def arcsin(x): if any_symbolic_tensors((x,)): return Arcsin().symbolic_call(x) return backend.numpy.arcsin(x)
Inverse sine, element-wise. Args: x: Input tensor. Returns: Tensor of the inverse sine of each element in `x`, in radians and in the closed interval `[-pi/2, pi/2]`. Example: >>> x = keras.ops.convert_to_tensor([1, -1, 0]) >>> keras.ops.arcsin(x) array([ 1.5707964, -1.5707964, 0.], dtype=float32)
github-repos
def _validate_input(flattened_layouts: Sequence[layout_lib.Layout], flattened_elem_spec: Sequence[tensor_spec.TensorSpec], dataset_already_batched: bool): if not flattened_elem_spec: raise ValueError('Expected input element spec of at least one element, was empty.') first_elem_shape = flattened_elem_spec[0].shape for layout, elem_spec in zip(flattened_layouts, flattened_elem_spec): if elem_spec.shape.rank is None: raise ValueError('Dataset element shape must have a valid rank, got spec %s.' % elem_spec) expected_rank = elem_spec.shape.rank if not dataset_already_batched: expected_rank += 1 if layout.rank != expected_rank: raise ValueError('Expected layout with rank %d for element spec %s, got layout %s. Check that the dataset is not batched before passing to DTensorDataset.' % (expected_rank, elem_spec, layout.sharding_specs)) if dataset_already_batched: batch_dim_size = first_elem_shape.as_list()[0] if batch_dim_size is None: raise ValueError('Size of batch dimension of element spec %s is None. Ensure drop_remainder=True when batching the dataset.' % elem_spec) if elem_spec.shape.as_list()[0] != batch_dim_size: raise ValueError('Size of batch dimension of element spec %s does not match expected size %d.' % (elem_spec, batch_dim_size))
Checks that the dataset's layouts and element specs are compatible. Args: flattened_layouts: the flattened list of layouts used to distribute the dataset. flattened_elem_spec: the flattened list of element specs used in the dataset's components. dataset_already_batched: whether the dataset to be validated is already batched. Raises: ValueError: if the dataset's inputs are incompatible.
github-repos
def _add_cadd_score(self, variant_obj, info_dict): cadd_score = info_dict.get('CADD') if cadd_score: logger.debug("Updating cadd_score to: {0}".format( cadd_score)) variant_obj.cadd_score = float(cadd_score)
Add the cadd score to the variant Args: variant_obj (puzzle.models.Variant) info_dict (dict): A info dictionary
juraj-google-style
class Buffer(Generic[T]): queue: List[T] max_size: int flusher: Union[FlushFunction, NoReturn] def __init__(self, initlist: List[T], max_size: int, flusher: FlushFunction) -> None: self.queue = initlist self.max_size = max_size self.flusher = flusher def flush(self, force: bool=False) -> bool | Any: if force or len(self.queue) > self.max_size: result = self.flusher(self.queue) self.queue.clear() return result or True else: return False def push(self, item: T) -> bool | Any: self.queue.append(item) return self.flush()
Representation of a Buffer (FIFO queue) with the ability to consume the
current queue into a flush function when max_size is reached.

It can queue any list of items, e.g. logs, rows, and API calls.

Args:
    * initlist: Initial list of items
    * max_size: Maximum queue size
    * flusher: Function to be called with list of items
github-repos
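A short usage sketch for the Buffer class above; the flusher here is a stand-in that only prints the queued items, and max_size is an arbitrary illustrative value:

def write_rows(rows):
    # Stand-in flush function: in practice this could be a bulk insert
    # or a batched API call.
    print("flushing {} rows".format(len(rows)))

buf = Buffer([], max_size=3, flusher=write_rows)

for i in range(10):
    flushed = buf.push(i)    # flush() runs automatically once the queue exceeds max_size
    if flushed:
        print("buffer was flushed")

buf.flush(force=True)        # drain whatever is left at the end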
def download(self, file: Optional[IO[bytes]] = None,
             duration_timeout: Optional[float] = None):
    yield from self._current_session.download(
        file, duration_timeout=duration_timeout)
Download content.

Args:
    file: An optional file object for the document contents.
    duration_timeout: Maximum time in seconds of which the entire file must
        be read.

Returns:
    Response: An instance of :class:`.http.request.Response`.

See :meth:`WebClient.session` for proper usage of this function.

Coroutine.
codesearchnet
def on_http_error(error):
    def wrap(f):
        @functools.wraps(f)
        def wrapped_f(*args, **kwargs):
            try:
                return f(*args, **kwargs)
            except GitlabHttpError as e:
                raise error(e.error_message, e.response_code, e.response_body)
        return wrapped_f
    return wrap
Manage GitlabHttpError exceptions.

This decorator function can be used to catch GitlabHttpError exceptions and
raise specialized exceptions instead.

Args:
    error (Exception): The exception type to raise -- must inherit from
        GitlabError.
codesearchnet
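A sketch of how the decorator above is typically applied; the specialized error class and the http_get call are illustrative stand-ins for whatever the surrounding library provides:

class GitlabGetError(GitlabError):
    """Hypothetical specialized error raised when a GET request fails."""

@on_http_error(GitlabGetError)
def get_project(gl, project_id):
    # Any GitlabHttpError raised inside is re-raised as GitlabGetError,
    # preserving the message, status code, and response body.
    return gl.http_get("/projects/{}".format(project_id))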
def is_periodically_contiguous(self):
    edges = self.sites_at_edges()
    is_contiguous = [False, False, False]
    along_x = any([s2 in s1.p_neighbours for s1 in edges[0] for s2 in edges[1]])
    along_y = any([s2 in s1.p_neighbours for s1 in edges[2] for s2 in edges[3]])
    along_z = any([s2 in s1.p_neighbours for s1 in edges[4] for s2 in edges[5]])
    return (along_x, along_y, along_z)
Logical check whether a cluster connects with itself across the simulation
periodic boundary conditions.

Args:
    None

Returns:
    (Bool, Bool, Bool): Contiguity along the x, y, and z coordinate axes.
juraj-google-style
def Create(self, request, global_params=None):
    config = self.GetMethodConfig('Create')
    return self._RunMethod(config, request, global_params=global_params)
Creates a `WorkerPool`.

Args:
    request: (CloudbuildProjectsLocationsWorkerPoolsCreateRequest) input message
    global_params: (StandardQueryParameters, default: None) global arguments

Returns:
    (Operation) The response message.
github-repos
def get_urls(self):
    urls = super(DashboardSite, self).get_urls()
    custom_urls = [
        url('^$', self.admin_view(HomeView.as_view()), name='index'),
        url('^logs/', include(logs_urlpatterns(self.admin_view)))
    ]
    custom_urls += get_realtime_urls(self.admin_view)
    # Drop the default admin index view; the custom home view replaces it.
    del urls[0]
    return (custom_urls + urls)
Get urls method.

Returns:
    list: the list of url objects.
codesearchnet
def filesystem_set_configuration(scheme, key, value, name=None):
    return _gen_filesystem_ops.file_system_set_configuration(
        scheme, key=key, value=value, name=name)
Set configuration of the file system.

Args:
    scheme: File system scheme.
    key: The name of the configuration option.
    value: The value of the configuration option.
    name: A name for the operation (optional).

Returns:
    None.
github-repos
def degrees_to_compass(value):
    if value is None:
        return None
    if (value >= 348.75 and value <= 360) or (value >= 0 and value <= 11.25):
        return 'N'
    else:
        for direction in WIND_DIRECTION_MAP.keys():
            if value >= WIND_DIRECTION_MAP[direction]['f'] and value <= WIND_DIRECTION_MAP[direction]['t']:
                return direction
    return None
Turns direction from degrees value to compass direction.

Args:
    value: floating point representing the degrees from 0 to 360

Returns:
    String representing the compass direction.
github-repos
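For illustration, WIND_DIRECTION_MAP is assumed to map compass labels to 'f'/'t' degree bounds, as the lookup above implies; the entries below are a trimmed, assumed sketch of that table:

# Assumed shape of the lookup table used above (only a few entries shown).
WIND_DIRECTION_MAP = {
    'NNE': {'f': 11.25, 't': 33.75},
    'NE':  {'f': 33.75, 't': 56.25},
    'E':   {'f': 78.75, 't': 101.25},
}

degrees_to_compass(5.0)    # 'N'  (north wraps around the 0/360 boundary)
degrees_to_compass(45.0)   # 'NE'
degrees_to_compass(None)   # None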
def write_dftbp(filename, atoms):
    scale_pos = dftbpToBohr

    lines = ""

    natoms = atoms.get_number_of_atoms()
    lines += str(natoms)
    lines += ' S \n'

    expaned_symbols = atoms.get_chemical_symbols()
    symbols = get_reduced_symbols(expaned_symbols)
    lines += ' '.join(symbols) + '\n'

    atom_numbers = []
    for ss in expaned_symbols:
        atom_numbers.append(symbols.index(ss) + 1)

    positions = atoms.get_positions() / scale_pos

    for ii in range(natoms):
        pos = positions[ii]
        pos_str = "{:3d} {:3d} {:20.15f} {:20.15f} {:20.15f}\n".format(
            ii + 1, atom_numbers[ii], pos[0], pos[1], pos[2])
        lines += pos_str

    lines += '0.0 0.0 0.0\n'
    cell = atoms.get_cell() / scale_pos
    for ii in range(3):
        cell_str = "{:20.15f} {:20.15f} {:20.15f}\n".format(
            cell[ii][0], cell[ii][1], cell[ii][2])
        lines += cell_str

    outfile = open(filename, 'w')
    outfile.write(lines)
Writes DFTB+ readable, gen-formatted structure files.

Args:
    filename: name of the gen-file to be written
    atoms: object containing information about structure
juraj-google-style
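A usage sketch assuming an ASE-style Atoms object, since the writer relies on get_chemical_symbols/get_positions/get_cell; the geometry values are illustrative, and the helpers dftbpToBohr and get_reduced_symbols are assumed to come from the surrounding module:

from ase import Atoms

# Toy water molecule in a cubic box; values are illustrative only.
atoms = Atoms('H2O',
              positions=[[0.00, 0.00, 0.00],
                         [0.76, 0.59, 0.00],
                         [-0.76, 0.59, 0.00]],
              cell=[10.0, 10.0, 10.0])

write_dftbp('geo.gen', atoms)   # writes a periodic ('S') gen-format file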
def read_video_pyav(container, indices):
    frames = []
    container.seek(0)
    start_index = indices[0]
    end_index = indices[-1]
    for i, frame in enumerate(container.decode(video=0)):
        if i > end_index:
            break
        if i >= start_index and i in indices:
            frames.append(frame)
    return np.stack([x.to_ndarray(format='rgb24') for x in frames])
Decode the video with PyAV decoder.

Args:
    container (`av.container.input.InputContainer`): PyAV container.
    indices (`List[int]`): List of frame indices to decode.

Returns:
    result (np.ndarray): np array of decoded frames of shape
        (num_frames, height, width, 3).
github-repos
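A usage sketch with PyAV, sampling evenly spaced frame indices before decoding; the video path is a placeholder, and the frame count read from the stream metadata may be unavailable for some container formats:

import av
import numpy as np

container = av.open("sample_video.mp4")          # placeholder path
total_frames = container.streams.video[0].frames

# Sample 8 evenly spaced frames across the clip.
indices = np.linspace(0, total_frames - 1, num=8).astype(int)

frames = read_video_pyav(container, indices)     # shape (8, height, width, 3)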
def search_point(self, lat, lng, filters=None, startDate=None, endDate=None,
                 types=None, type=None):
    # Build a degenerate polygon around the point so the generic polygon
    # search endpoint can be reused.
    searchAreaWkt = 'POLYGON ((%s %s, %s %s, %s %s, %s %s, %s %s))' % (
        lng, lat, lng, lat, lng, lat, lng, lat, lng, lat)
    return self.search(searchAreaWkt=searchAreaWkt, filters=filters,
                       startDate=startDate, endDate=endDate, types=types)
Perform a catalog search over a specific point, specified by lat,lng.

Args:
    lat: latitude
    lng: longitude
    filters: Array of filters. Optional. Example:
        [
            "(sensorPlatformName = 'WORLDVIEW01' OR sensorPlatformName ='QUICKBIRD02')",
            "cloudCover < 10",
            "offNadirAngle < 10"
        ]
    startDate: string. Optional. Example: "2004-01-01T00:00:00.000Z"
    endDate: string. Optional. Example: "2004-01-01T00:00:00.000Z"
    types: Array of types to search for. Optional. Example (and default): ["Acquisition"]

Returns:
    catalog search resultset
codesearchnet
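An illustrative call against the catalog interface above; the coordinates, dates, and filters are placeholder values:

results = catalog.search_point(
    40.7128, -74.0060,                       # latitude, longitude (placeholders)
    filters=["cloudCover < 10", "offNadirAngle < 15"],
    startDate="2016-01-01T00:00:00.000Z",
    endDate="2016-12-31T23:59:59.000Z",
)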
def RegisterSourceType(cls, source_type_class):
    if source_type_class.TYPE_INDICATOR in cls._source_type_classes:
        raise KeyError(
            'Source type already set for type: {0:s}.'.format(
                source_type_class.TYPE_INDICATOR))

    cls._source_type_classes[source_type_class.TYPE_INDICATOR] = (
        source_type_class)
Registers a source type.

Source types are identified based on their type indicator.

Args:
    source_type_class (type): source type.

Raises:
    KeyError: if a source type is already set for the corresponding type
        indicator.
juraj-google-style
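A sketch of registering a custom source type through the classmethod above; the factory class name and the example source type are hypothetical stand-ins:

class ExampleSourceType(object):
    """Hypothetical source type; only TYPE_INDICATOR is needed for registration."""
    TYPE_INDICATOR = 'EXAMPLE'

# Assuming the classmethod above lives on a SourceTypeFactory-style registry:
SourceTypeFactory.RegisterSourceType(ExampleSourceType)

# Registering a second class with the same TYPE_INDICATOR raises KeyError.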
def _get_name(self):
    if self._known_keys[_InstrumentationKnownStatusKeys.TEST]:
        return self._known_keys[_InstrumentationKnownStatusKeys.TEST]
    else:
        return self.DEFAULT_INSTRUMENTATION_METHOD_NAME
Gets the method name of the test method for the instrumentation method block.

Returns:
    A string containing the name of the instrumentation test method's test
    or a default name if no name was parsed.
github-repos
def write_env_vars(env_vars=None):
    env_vars = env_vars or {}
    env_vars['PYTHONPATH'] = ':'.join(sys.path)

    for name, value in env_vars.items():
        os.environ[name] = value
Write the dictionary env_vars in the system, as environment variables.

Args:
    env_vars (dict): Mapping of environment variable names to string values.
        Defaults to an empty dictionary; PYTHONPATH is always added.

Returns:
    None
juraj-google-style
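A minimal usage sketch; the variable name and value are illustrative, and note that the helper always injects PYTHONPATH in addition to the variables passed in:

import os

write_env_vars({'SM_MODEL_DIR': '/opt/ml/model'})   # illustrative variable

assert os.environ['SM_MODEL_DIR'] == '/opt/ml/model'
assert 'PYTHONPATH' in os.environ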
def locate_resource(name, lang, filter=None):
    task_dir = resource_dir.get(name, name)
    package_id = u"{}.{}".format(task_dir, lang)
    p = path.join(polyglot_path, task_dir, lang)
    if not path.isdir(p):
        if downloader.status(package_id) != downloader.INSTALLED:
            raise ValueError("This resource is available in the index "
                             "but not downloaded, yet. Try to run\n\n"
                             "polyglot download {}".format(package_id))
    return path.join(p, os.listdir(p)[0])
Return filename that contains specific language resource name.

Args:
    name (string): Name of the resource.
    lang (string): language code to be loaded.
juraj-google-style
def __rmul__(self, left: torch.Tensor) -> Rotation:
    return self.__mul__(left)
Reverse pointwise multiplication of the rotation with a tensor.

Args:
    left: The left multiplicand

Returns:
    The product
github-repos
def json_merge_fields(recipe, parameters):
    if isinstance(recipe, dict):
        for key, value in list(recipe.items()):
            if isinstance(value, dict) and 'field' in value:
                if value['field']['name'] in parameters:
                    recipe[key] = json_merge_field(value, parameters[value['field']['name']])
            else:
                json_merge_fields(value, parameters)
    elif isinstance(recipe, list) or isinstance(recipe, tuple):
        for index, value in enumerate(recipe):
            if isinstance(value, dict) and 'field' in value:
                if value['field']['name'] in parameters:
                    recipe[index] = json_merge_field(value, parameters[value['field']['name']])
            else:
                json_merge_fields(value, parameters)

    return recipe
Recursively merges fields from an include.

Field has format: { "field":{ "name":"???", "kind":"???", "default":???, "description":"???" }}

Args:
    recipe: (dict) A dictionary representation of the JSON script.
    parameters: (dict) A key value pair, where the value could be another field.

Returns:
    fields: (list or dictionary) A list or dictionary representing each field
        recipe found in the JSON.
github-repos
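An illustrative input for the merge above; the recipe and parameters are made-up, and the nested json_merge_field helper (singular) is assumed to substitute the matching parameter value into each field entry:

recipe = {
    "dataset": {"field": {"name": "dataset", "kind": "string", "default": ""}},
    "tasks": [
        {"hello": {"field": {"name": "say", "kind": "string", "default": "hi"}}}
    ],
}

parameters = {
    "dataset": "analytics",
    "say": "hello world",
}

merged = json_merge_fields(recipe, parameters)
# Each {"field": {...}} placeholder whose name appears in parameters is
# replaced by the result of json_merge_field(value, parameters[name]).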
def dump_begin(self, selector_id):
    # Tear down any walker left over from a previous dump before starting.
    if self.dump_walker is not None:
        self.storage.destroy_walker(self.dump_walker)

    selector = DataStreamSelector.FromEncoded(selector_id)
    self.dump_walker = self.storage.create_walker(selector, skip_all=False)

    return (Error.NO_ERROR, Error.NO_ERROR, self.dump_walker.count())
Start dumping a stream.

Args:
    selector_id (int): The buffered stream we want to dump.

Returns:
    (int, int, int): Error code, second error code, number of available
        readings.
codesearchnet
def _ragged_tensor_mse(y_true, y_pred):
    return _ragged_tensor_apply_loss(mean_squared_error, y_true, y_pred)
Implements support for handling RaggedTensors.

Args:
    y_true: RaggedTensor truth values. shape = `[batch_size, d0, .. dN]`.
    y_pred: RaggedTensor predicted values. shape = `[batch_size, d0, .. dN]`.

Returns:
    Mean squared error values. shape = `[batch_size, d0, .. dN-1]`.
    When the number of dimensions of the batch feature vector [d0, .. dN] is
    greater than one the return value is a RaggedTensor. Otherwise a Dense
    tensor with dimensions [batch_size] is returned.
github-repos
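A sketch of the ragged case this wrapper handles, using the public Keras loss; values and shapes are illustrative, and the assumption is that the public mean_squared_error dispatches to the ragged implementation when given RaggedTensors:

import tensorflow as tf

y_true = tf.ragged.constant([[1.0, 2.0, 3.0], [4.0, 5.0]])
y_pred = tf.ragged.constant([[1.5, 2.0, 2.5], [4.0, 6.0]])

# Reduces over the innermost (ragged) dimension per example, producing one
# loss value per row.
loss = tf.keras.losses.mean_squared_error(y_true, y_pred)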
def set_pattern_step_setpoint(self, patternnumber, stepnumber, setpointvalue):
    _checkPatternNumber(patternnumber)
    _checkStepNumber(stepnumber)
    _checkSetpointValue(setpointvalue, self.setpoint_max)

    address = _calculateRegisterAddress('setpoint', patternnumber, stepnumber)
    self.write_register(address, setpointvalue, 1)
Set the setpoint value for a step.

Args:
    * patternnumber (integer): 0-7
    * stepnumber (integer): 0-7
    * setpointvalue (float): Setpoint value
juraj-google-style
def json_using_iso8601(__obj: Dict) -> Dict:
    for key, value in __obj.items():
        with suppress(TypeError, ValueError):
            __obj[key] = parse_datetime(value)
        with suppress(TypeError, ValueError):
            __obj[key] = parse_delta(value)
    return __obj
Parse ISO-8601 values from JSON databases.

See :class:`json.JSONDecoder`.

Args:
    __obj: Object to decode
codesearchnet
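A usage sketch with json.loads, passing the function as an object_hook so ISO-8601 strings are converted as each object is decoded; the sample payload is illustrative:

import json

payload = '{"created": "2019-03-21T14:30:00Z", "name": "example"}'

decoded = json.loads(payload, object_hook=json_using_iso8601)
# decoded["created"] is now a parsed datetime; non-date values pass through unchanged.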
def splat(f: Callable[..., A]) -> Callable[[Iterable], A]:
    def splatted(args):
        return f(*args)
    return splatted
Convert a function taking multiple arguments into a function taking a
single iterable argument.

Args:
    f: Any function

Returns:
    A function that accepts a single iterable argument. Each element of this
    iterable argument is passed as an argument to ``f``.

Example:
    $ def f(a, b, c):
    $     return a + b + c
    $
    $ f(1, 2, 3)  # 6
    $ g = splat(f)
    $ g([1, 2, 3])  # 6
juraj-google-style