Columns: code (string, 20 to 4.93k characters), docstring (string, 33 to 1.27k characters), source (string, 3 classes).
def parse(self, filepath, content):
    try:
        parsed = yaml.load(content)
    except yaml.YAMLError as exc:
        msg = 'No YAML object could be decoded from file: {}\n{}'
        raise SettingsBackendError(msg.format(filepath, exc))
    return parsed
Parse opened settings content using YAML parser. Args: filepath (str): Settings file path, depends on backend. content (str): Settings content from opened file, depends on backend. Raises: boussole.exceptions.SettingsBackendError: If parser can not decode a valid YAML object. Returns: dict: Dictionary containing parsed setting elements.
codesearchnet
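A minimal, self-contained sketch of the same parse step using PyYAML directly (the settings content below is made up, and yaml.safe_load stands in for the backend's yaml.load call):

import yaml

content = "SOURCES_PATH: scss\nTARGET_PATH: css"
try:
    parsed = yaml.safe_load(content)
except yaml.YAMLError as exc:
    raise RuntimeError("No YAML object could be decoded: {}".format(exc))
print(parsed)  # {'SOURCES_PATH': 'scss', 'TARGET_PATH': 'css'}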
def find_centroid_alleles(alleles, bp=28, t=0.025):
    centroid_alleles = set()
    len_allele = group_alleles_by_size(alleles)
    for length, seqs in len_allele.items():
        if len(seqs) == 1:
            centroid_alleles.add(seqs[0])
            continue
        seq_arr = seq_int_arr(seqs)
        starts_ends_idxs = group_alleles_by_start_end_Xbp(seq_arr, bp=bp)
        for k, idxs in starts_ends_idxs.items():
            if len(idxs) == 1:
                centroid_alleles.add(seqs[idxs[0]])
                continue
            seq_arr_subset = seq_arr[idxs]
            dists = pdist(seq_arr_subset, 'hamming')
            cl = allele_clusters(dists, t=t)
            dm_sq = squareform(dists)
            for cl_key, cl_idxs in cl.items():
                if len(cl_idxs) == 1 or len(cl_idxs) == 2:
                    centroid_alleles.add(seq_int_arr_to_nt(seq_arr_subset[cl_idxs[0]]))
                    continue
                dm_sub = dm_subset(dm_sq, cl_idxs)
                min_idx = min_row_dist_sum_idx(dm_sub)
                centroid_alleles.add(seq_int_arr_to_nt(seq_arr_subset[min_idx]))
    return centroid_alleles
Reduce list of alleles to set of centroid alleles based on size grouping, ends matching and hierarchical clustering Workflow for finding centroid alleles: - grouping by size (e.g. 100bp, 101bp, 103bp, etc) - then grouped by `bp` nucleotides at ends matching - size and ends grouped alleles hierarchically clustered (Hamming distance, complete linkage) - tree cutting at threshold `t` - select allele with minimum distance to other alleles in cluster as centroid Args: alleles (iterable): collection of allele nucleotide sequences bp (int): number of bp matching at allele ends for size grouping (default=28 due to default blastn megablast word size) t (float): cluster generation (tree cutting) distance threshold for size grouped alleles Returns: set of str: centroid alleles
codesearchnet
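A hypothetical call to the function above (the allele sequences are toy examples, and the helper functions such as group_alleles_by_size are assumed to be importable from the same module):

alleles = [
    "ACGTACGTACGTACGTACGTACGTACGTACGT",   # identical except for the last base
    "ACGTACGTACGTACGTACGTACGTACGTACGA",
    "ACGTACGTACGTACGTACGTACGTACGTACG",    # shorter allele, kept as its own size group
]
centroids = find_centroid_alleles(alleles, bp=28, t=0.025)
print(sorted(centroids))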
def _get_reference_classnames(self, classname, namespace, resultclass_name, role): self._validate_namespace(namespace) result_classes = self._classnamedict(resultclass_name, namespace) rtn_classnames_set = set() role = (role.lower() if role else role) for cl in self._get_association_classes(namespace): for prop in six.itervalues(cl.properties): if ((prop.type == 'reference') and self._ref_prop_matches(prop, classname, cl.classname, result_classes, role)): rtn_classnames_set.add(cl.classname) return list(rtn_classnames_set)
Get list of classnames that are references for which this classname is a target, filtered by the result_class and role parameters if they are not None. This is a common method used by all of the other reference and associator methods to create a list of reference classnames. Returns: list of classnames that satisfy the criteria.
codesearchnet
def __init__(self, error_name, error_id, error_msg, token_value):
    self.error_name = error_name
    self.error_id = error_id
    self.error_msg = error_msg
    self._token_value = token_value
Create a LexerError that matches |token_value|. Args: error_name: A short, human readable name for the error, using lowercase-with-dashes-format. error_id: An integer to identify a specific error: 100s: Lexer errors. 200s: Low level parsing errors. 300s: High level parsing errors. error_msg: A message to display with this error that describes clearly what caused the error. token_value: A string to match against the token that the lexer failed at (or None to match against every token). Returns: LexerError that matches against |token_value|.
github-repos
def nuc_p(msg):
    tc = typecode(msg)

    if typecode(msg) < 5 or typecode(msg) > 22:
        raise RuntimeError(
            "%s: Not a surface position message (5<TC<8), "
            "airborne position message (8<TC<19), "
            "or airborne position with GNSS height (20<TC<22)" % msg
        )

    try:
        NUCp = uncertainty.TC_NUCp_lookup[tc]
        HPL = uncertainty.NUCp[NUCp]['HPL']
        RCu = uncertainty.NUCp[NUCp]['RCu']
        RCv = uncertainty.NUCp[NUCp]['RCv']
    except KeyError:
        HPL, RCu, RCv = uncertainty.NA, uncertainty.NA, uncertainty.NA

    if tc in [20, 21]:
        RCv = uncertainty.NA

    return HPL, RCu, RCv
Calculate NUCp, Navigation Uncertainty Category - Position (ADS-B version 1) Args: msg (string): 28 bytes hexadecimal message string, Returns: int: Horizontal Protection Limit int: 95% Containment Radius - Horizontal (meters) int: 95% Containment Radius - Vertical (meters)
juraj-google-style
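A hedged usage sketch of the function above (the hex string is a commonly used airborne-position example message; the actual values returned depend on the uncertainty tables shipped with the library):

msg = "8D40621D58C382D690C8AC2863A7"   # illustrative airborne position message
hpl, rcu, rcv = nuc_p(msg)
print(hpl, rcu, rcv)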
def _update_in_hdx(self, object_type, id_field_name, file_to_upload=None, **kwargs):
    self._check_load_existing_object(object_type, id_field_name)
    self._merge_hdx_update(object_type, id_field_name, file_to_upload, **kwargs)
Helper method to check if HDX object exists in HDX and if so, update it Args: object_type (str): Description of HDX object type (for messages) id_field_name (str): Name of field containing HDX object identifier file_to_upload (Optional[str]): File to upload to HDX **kwargs: See below operation (string): Operation to perform eg. patch. Defaults to update. Returns: None
codesearchnet
def build_inputs_with_special_tokens(self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None) -> List[int]:
    if token_ids_1 is None:
        return self.prefix_tokens + token_ids_0 + self.suffix_tokens
    return self.prefix_tokens + token_ids_0 + token_ids_1 + self.suffix_tokens
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. The special tokens depend on calling set_lang. An NLLB sequence has the following format, where `X` represents the sequence: - `input_ids` (for encoder) `X [eos, src_lang_code]` - `decoder_input_ids`: (for decoder) `X [eos, tgt_lang_code]` BOS is never used. Pairs of sequences are not the expected use case, but they will be handled without a separator. Args: token_ids_0 (`List[int]`): List of IDs to which the special tokens will be added. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: list of [input IDs](../glossary#input-ids) with the appropriate special tokens.
github-repos
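A plain-Python sketch of the resulting layout (all token IDs are made up; for NLLB, prefix_tokens is empty and suffix_tokens is [eos, src_lang_code]):

prefix_tokens = []
suffix_tokens = [2, 256047]          # hypothetical eos and language-code IDs
token_ids_0 = [101, 102, 103]
token_ids_1 = [201, 202]

print(prefix_tokens + token_ids_0 + suffix_tokens)                 # single sequence
print(prefix_tokens + token_ids_0 + token_ids_1 + suffix_tokens)   # pair, no separator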
def print_stack_events(self): first_token = '7be7981bd6287dd8112305e8f3822a6f' keep_going = True next_token = first_token current_request_token = None rows = [] try: while (keep_going and next_token): if (next_token == first_token): response = self._cf_client.describe_stack_events(StackName=self._stack_name) else: response = self._cf_client.describe_stack_events(StackName=self._stack_name, NextToken=next_token) next_token = response.get('NextToken', None) for event in response['StackEvents']: row = [] event_time = event.get('Timestamp') request_token = event.get('ClientRequestToken', 'unknown') if (current_request_token is None): current_request_token = request_token elif (current_request_token != request_token): keep_going = False break row.append(event_time.strftime('%x %X')) row.append(event.get('LogicalResourceId')) row.append(event.get('ResourceStatus')) row.append(event.get('ResourceStatusReason', '')) rows.append(row) if (len(rows) > 0): print('\nEvents for the current upsert:') print(tabulate(rows, headers=['Time', 'Logical ID', 'Status', 'Message'])) return True else: print('\nNo stack events found\n') except Exception as wtf: print(wtf) return False
List events from the given stack. Args: None Returns: True if events were found and printed, False if an error occurred.
codesearchnet
def ssh(container, cmd='', user='root', password='root'): ip = get_ip(container) ssh_cmd = 'sshpass -p \'%s\' ssh -A -t -o StrictHostKeyChecking=no \'%s\'@%s' % (password, user, ip) local('ssh -A -t -o StrictHostKeyChecking=no -i "%s" %s@%s %s %s' % ( env.key_filename, env.user, env.host, ssh_cmd, cmd))
SSH into a running container, using the host as a jump host. This requires the container to have a running sshd process. Args: * container: Container name or ID * cmd='': Command to run in the container * user='root': SSH username * password='root': SSH password
juraj-google-style
def targets(self): return self._targets
Return the unique names of ops to run. Returns: A list of strings.
github-repos
def switch_to_window(page_class, webdriver):
    window_list = list(webdriver.window_handles)
    original_window = webdriver.current_window_handle
    for window_handle in window_list:
        webdriver.switch_to_window(window_handle)
        try:
            return PageFactory.create_page(page_class, webdriver)
        except:
            pass
    webdriver.switch_to_window(original_window)
    raise WindowNotFoundError(
        u("Window {0} not found.").format(page_class.__class__.__name__))
Utility method for switching between windows. It will search through currently open windows, then switch to the window matching the provided PageObject class. Args: page_class (PageObject): Page class to search for/instantiate. webdriver (WebDriver): Selenium webdriver. Usage:: WebUtils.switch_to_window(DetailsPopUpPage, driver) # switches to the pop up window.
juraj-google-style
def set(self, key, value, **kwargs):
    path = '%s/%s' % (self.path, key.replace('/', '%2F'))
    data = {'value': value}
    server_data = self.gitlab.http_put(path, post_data=data, **kwargs)
    return self._obj_cls(self, server_data)
Create or update the object. Args: key (str): The key of the object to create/update value (str): The value to set for the object **kwargs: Extra options to send to the server (e.g. sudo) Raises: GitlabAuthenticationError: If authentication is not correct GitlabSetError: If an error occured Returns: obj: The created/updated attribute
juraj-google-style
def write_fasta_file(self, outfile, force_rerun=False):
    if ssbio.utils.force_rerun(flag=force_rerun, outfile=outfile):
        SeqIO.write(self, outfile, "fasta")
    self.sequence_path = outfile
Write a FASTA file for the protein sequence, ``seq`` will now load directly from this file. Args: outfile (str): Path to new FASTA file to be written to force_rerun (bool): If an existing file should be overwritten
juraj-google-style
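A standalone illustration of the underlying SeqIO.write call (the sequence and file name are examples; the method above passes self, presumably because the object behaves like a SeqRecord):

from Bio import SeqIO
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord

record = SeqRecord(Seq("MKTAYIAKQR"), id="example_protein", description="")
SeqIO.write(record, "example_protein.faa", "fasta")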
def client(self): if self.proxy: proxyhandler = urllib.ProxyHandler({'http': self.proxy}) opener = urllib.build_opener(proxyhandler) urllib.install_opener(opener) transport = ProxyTransport() if (not hasattr(self, '_client')): transport = None if self.pypi: if self.proxy: logger.info('Using provided proxy: {0}.'.format(self.proxy)) self._client = xmlrpclib.ServerProxy(settings.PYPI_URL, transport=transport) self._client_set = True else: self._client = None return self._client
XMLRPC client for PyPI. Always returns the same instance. If the package is provided as a path to compressed source file, PyPI will not be used and the client will not be instantiated. Returns: XMLRPC client for PyPI or None.
codesearchnet
def psq2(d1, d2):
    d1, d2 = flatten(d1), flatten(d2)

    def f(p):
        return sum((p ** 2) * np.nan_to_num(np.log(p * len(p))))

    return abs(f(d1) - f(d2))
Compute the PSQ2 measure. Args: d1 (np.ndarray): The first distribution. d2 (np.ndarray): The second distribution.
juraj-google-style
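A small numeric sketch of the f(p) term used above, evaluated on a toy distribution (this assumes flatten simply ravels the input array):

import numpy as np

p = np.array([0.5, 0.25, 0.25])
f = np.sum(p ** 2 * np.nan_to_num(np.log(p * len(p))))
print(f)  # 0.5**2*log(1.5) + 2 * 0.25**2*log(0.75) ~ 0.0654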
def myRank(grade, badFormat, year, length): return int((sorted(everyonesAverage(year, badFormat, length), reverse=True).index(grade) + 1))
rank of candidateNumber in year Arguments: grade {int} -- a weighted average for a specific candidate number and year badFormat {dict} -- candNumber : [results for candidate] year {int} -- year you are in length {int} -- length of each row in badFormat divided by 2 Returns: int -- rank of candidateNumber in year
codesearchnet
def conversations_replies(self, *, channel: str, ts: str, **kwargs) -> SlackResponse:
    kwargs.update({"channel": channel, "ts": ts})
    return self.api_call("conversations.replies", http_verb="GET", params=kwargs)
Retrieve a thread of messages posted to a conversation Args: channel (str): Conversation ID to fetch thread from. e.g. 'C1234567890' ts (str): Unique identifier of a thread's parent message. e.g. '1234567890.123456'
juraj-google-style
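A hypothetical usage sketch with the modern slack_sdk client (the token, channel ID, and timestamp are placeholders):

from slack_sdk import WebClient

client = WebClient(token="xoxb-your-token")
response = client.conversations_replies(channel="C1234567890", ts="1234567890.123456")
for message in response["messages"]:
    print(message.get("text"))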
def __init__(self, scope, parent, name, result, args=None, paren=False): CodeExpression.__init__(self, scope, parent, name, result, paren) self.arguments = args or ()
Constructor for operators. Args: scope (CodeEntity): The program scope where this object belongs. parent (CodeEntity): This object's parent in the program tree. name (str): The name of the operator in the program. result (str): The return type of the operator in the program. Kwargs: args (tuple): Initial tuple of arguments. paren (bool): Whether the expression is enclosed in parentheses.
juraj-google-style
def NewFromJSON(data):
    if data.get('shakes', None):
        shakes = [Shake.NewFromJSON(shk) for shk in data.get('shakes')]
    else:
        shakes = None
    return User(
        id=data.get('id', None),
        name=data.get('name', None),
        profile_image_url=data.get('profile_image_url', None),
        about=data.get('about', None),
        website=data.get('website', None),
        shakes=shakes)
Create a new User instance from a JSON dict. Args: data (dict): JSON dictionary representing a user. Returns: A User instance.
juraj-google-style
def get_backend(self, name=None, **kwargs):
    backends = self.backends(name, **kwargs)
    if len(backends) > 1:
        raise QiskitBackendNotFoundError('More than one backend matches the criteria')
    elif not backends:
        raise QiskitBackendNotFoundError('No backend matches the criteria')
    return backends[0]
Return a single backend matching the specified filtering. Args: name (str): name of the backend. **kwargs (dict): dict used for filtering. Returns: BaseBackend: a backend matching the filtering. Raises: QiskitBackendNotFoundError: if no backend could be found or more than one backend matches.
codesearchnet
def decompress_dir(path):
    for parent, subdirs, files in os.walk(path):
        for f in files:
            decompress_file(os.path.join(parent, f))
Recursively decompresses all files in a directory. Args: path (str): Path to parent directory.
juraj-google-style
def validate_all_values_for_key(obj, key, validation_fun):
    for vkey, value in obj.items():
        if vkey == key:
            validation_fun(value)
        elif isinstance(value, dict):
            validate_all_values_for_key(value, key, validation_fun)
Validate value for all (nested) occurrence of `key` in `obj` using `validation_fun`. Args: obj (dict): dictionary object. key (str): key whose value is to be validated. validation_fun (function): function used to validate the value of `key`. Raises: ValidationError: `validation_fun` will raise this error on failure
juraj-google-style
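A small usage sketch of the function above with a made-up validator and nested dictionary:

def must_be_positive(value):
    if value <= 0:
        raise ValueError("expected a positive value, got {}".format(value))

obj = {"amount": 3, "details": {"amount": 7, "note": "ok"}}
validate_all_values_for_key(obj, "amount", must_be_positive)  # passes silently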
def be2le_state_by_state(tpm):
    le = np.empty(tpm.shape)
    N = tpm.shape[0]
    n = int(log2(N))
    for i in range(N):
        le[i, :] = tpm[be2le(i, n), :]
    return le
Convert a state-by-state TPM from big-endian to little-endian or vice versa. Args: tpm (np.ndarray): A state-by-state TPM. Returns: np.ndarray: The state-by-state TPM in the other indexing format. Example: >>> tpm = np.arange(16).reshape([4, 4]) >>> be2le_state_by_state(tpm) array([[ 0., 1., 2., 3.], [ 8., 9., 10., 11.], [ 4., 5., 6., 7.], [12., 13., 14., 15.]])
codesearchnet
def generate_orders(events, sell_delay=5, sep=','): sell_delay = (float(unicode(sell_delay)) or 1) for (i, (t, row)) in enumerate(events.iterrows()): for (sym, event) in row.to_dict().iteritems(): if (event and (not np.isnan(event))): if (event > 0): sell_event_i = min((i + sell_delay), (len(events) - 1)) sell_event_t = events.index[sell_event_i] sell_event = events[sym][sell_event_i] if np.isnan(sell_event): events[sym][sell_event_t] = (- 1) else: events[sym][sell_event_t] += (- 1) order = (t.year, t.month, t.day, sym, ('Buy' if (event > 0) else 'Sell'), (abs(event) * 100)) if isinstance(sep, basestring): (yield sep.join(order)) (yield order)
Generate CSV orders based on events indicated in a DataFrame Arguments: events (pandas.DataFrame): Table of NaNs or 1's, one column for each symbol. 1 indicates a BUY event. -1 a SELL event. nan or 0 is a nonevent. sell_delay (float): Number of days to wait before selling back the shares bought sep (str or None): if sep is None, orders will be returns as tuples of `int`s, `float`s, and `str`s otherwise the separator will be used to join the order parameters into the yielded str Returns: generator of str: yielded CSV rows in the format (yr, mo, day, symbol, Buy/Sell, shares)
codesearchnet
def __query_node(self, ip, host): host = util.shorten_host_name(host, self.config.host_domains) (node, node_updated) = self.__get_known_node(ip, host) if (node == None): node = natlas_node() node.name = host node.ip = [ip] state = NODE_NEW else: if (node.snmpobj.success == 1): return (node, NODE_KNOWN) if (node_updated == 1): state = NODE_NEWIP else: state = NODE_KNOWN node.name = host if (ip == 'UNKNOWN'): return (node, state) if ((ip == '0.0.0.0') | (ip == '')): return (node, state) if (node.try_snmp_creds(self.config.snmp_creds) == 0): return (node, state) node.name = node.get_system_name(self.config.host_domains) if (node.name != host): if (state == NODE_NEW): (node2, node_updated2) = self.__get_known_node(ip, host) if ((node2 != None) & (node_updated2 == 0)): return (node, NODE_KNOWN) if (node_updated2 == 1): state = NODE_NEWIP if ((node.name == None) | (node.name == '')): node.name = node.get_ipaddr() node.opts.get_serial = True node.query_node() return (node, state)
Query this node. Return node details and if we already knew about it or if this is a new node. Don't save the node to the known list, just return info about it. Args: ip: IP Address of the node. host: Hostname of this node (if known from CDP/LLDP) Returns: natlas_node: Node of this object int: NODE_NEW = Newly discovered node NODE_NEWIP = Already knew about this node but not by this IP NODE_KNOWN = Already knew about this node
codesearchnet
def replace_batch_norm(model):
    for name, module in model.named_children():
        if isinstance(module, nn.BatchNorm2d):
            new_module = DetaFrozenBatchNorm2d(module.num_features)

            if not module.weight.device == torch.device('meta'):
                new_module.weight.data.copy_(module.weight)
                new_module.bias.data.copy_(module.bias)
                new_module.running_mean.data.copy_(module.running_mean)
                new_module.running_var.data.copy_(module.running_var)

            model._modules[name] = new_module

        if len(list(module.children())) > 0:
            replace_batch_norm(module)
Recursively replace all `torch.nn.BatchNorm2d` with `DetaFrozenBatchNorm2d`. Args: model (torch.nn.Module): input model
github-repos
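An illustrative usage sketch (assumes replace_batch_norm and DetaFrozenBatchNorm2d above are importable; the toy model is made up):

import torch
from torch import nn

model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3), nn.BatchNorm2d(8), nn.ReLU())
replace_batch_norm(model)
print(type(model[1]).__name__)  # DetaFrozenBatchNorm2d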
def locate_module(module_id: str, module_type: str = None):
    entry_point = None
    if module_type:
        entry_point = 'ehforwarderbot.%s' % module_type
    # assumption: strip an optional "#instance_id" suffix from the module ID
    module_id = module_id.split('#', 1)[0]
    if entry_point:
        for i in pkg_resources.iter_entry_points(entry_point):
            if i.name == module_id:
                return i.load()
    return pydoc.locate(module_id)
Locate module by module ID Args: module_id: Module ID module_type: Type of module, one of ``'master'``, ``'slave'`` and ``'middleware'``
codesearchnet
def change_t(self, t):
    t = super().change_t(t)
    self.__now_cycles += 1
    if self.__now_cycles % self.__reannealing_per == 0:
        t = t * self.__thermostat
        if t < self.__t_min:
            t = self.__t_default
    return t
Change temperature. Override. Args: t: Now temperature. Returns: Next temperature.
juraj-google-style
def set_parameter_vector(self, vector, include_frozen=False): v = self.parameter_vector if include_frozen: v[:] = vector else: v[self.unfrozen_mask] = vector self.parameter_vector = v self.dirty = True
Set the parameter values to the given vector Args: vector (array[vector_size] or array[full_size]): The target parameter vector. This must be in the same order as ``parameter_names`` and it should only include frozen parameters if ``include_frozen`` is ``True``. include_frozen (Optional[bool]): Should the frozen parameters be included in the returned value? (default: ``False``)
juraj-google-style
def _pearson_correlation(self, imgs_to_decode): (x, y) = (imgs_to_decode.astype(float), self.feature_images.astype(float)) return self._xy_corr(x, y)
Decode images using Pearson's r. Computes the correlation between each input image and each feature image across voxels. Args: imgs_to_decode: An ndarray of images to decode, with voxels in rows and images in columns. Returns: An n_features x n_images 2D array, with each cell representing the pearson correlation between the i'th feature and the j'th image across all voxels.
codesearchnet
def process(self, tensor): for processor in self.preprocessors: tensor = processor.process(tensor=tensor) return tensor
Process state. Args: tensor: tensor to process Returns: processed state
juraj-google-style
def substitute(self, var_map, cont=False, tag=None): return self.apply(substitute, var_map=var_map, cont=cont, tag=tag)
Substitute sub-expressions both on the lhs and rhs Args: var_map (dict): Dictionary with entries of the form ``{expr: substitution}``
juraj-google-style
def to_sql(self, view: views.View, limit: Optional[int]=None) -> str: sql_generator = self._build_sql_generator(view) sql_statement = sql_generator.build_sql_statement() view_table_name = f'{self._value_set_codes_table.project}.{self._value_set_codes_table.dataset_id}.{self._value_set_codes_table.table_id}' valuesets_clause = sql_generator.build_valueset_expression(view_table_name) if limit is not None and limit < 1: raise ValueError('Query limits must be positive integers.') limit_clause = '' if limit is None else f' LIMIT {limit}' return f'{valuesets_clause}{sql_statement}{limit_clause}'
Returns the SQL used to run the given view in BigQuery. Args: view: the view used to generate the SQL. limit: optional limit to attach to the generated SQL. Returns: The SQL used to run the given view.
github-repos
def etm_supported(self):
    res = self._dll.JLINKARM_ETM_IsPresent()
    if res == 1:
        return True

    info = ctypes.c_uint32(0)
    index = enums.JLinkROMTable.ETM
    res = self._dll.JLINKARM_GetDebugInfo(index, ctypes.byref(info))
    if res == 1:
        return False
    return True
Returns if the CPU core supports ETM. Args: self (JLink): the ``JLink`` instance. Returns: ``True`` if the CPU has the ETM unit, otherwise ``False``.
juraj-google-style
def pred(scores: jax.Array, rows: jax.Array, cols: jax.Array, N: int) -> jax.Array:
    r: jax.Array = 2 * jax.ops.segment_sum(scores.take(cols), rows, N) - scores.sum()
    return r > 0
Predicts the target output from the learned scores and input entries. Args: scores (jax.Array): Contribution scores of features. rows (jax.Array): Row indices of True values in the input. cols (jax.Array): Column indices of True values in the input. N (int): The number of input entries. Returns: res (jax.Array): A prediction of the target.
github-repos
def convert_source_tokens_to_target_tokens(self, input_ids, source_tokenizer, destination_tokenizer):
    text = source_tokenizer.batch_decode(input_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)
    dest_ids = destination_tokenizer(text, add_special_tokens=True, return_tensors='pt')['input_ids']
    return dest_ids.to(input_ids.device)
Convert token IDs from one tokenizer to another. Args: input_ids: The input token IDs. source_tokenizer: The source tokenizer. destination_tokenizer: The destination tokenizer. Returns: The converted token IDs.
github-repos
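A hedged round-trip sketch with two Hugging Face tokenizers (model names are only examples and are downloaded on first use):

from transformers import AutoTokenizer

source_tok = AutoTokenizer.from_pretrained("gpt2")
dest_tok = AutoTokenizer.from_pretrained("bert-base-uncased")

input_ids = source_tok("hello world", return_tensors="pt")["input_ids"]
text = source_tok.batch_decode(input_ids, skip_special_tokens=True)
dest_ids = dest_tok(text, add_special_tokens=True, return_tensors="pt")["input_ids"]
print(dest_ids)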
def _CreatePerformanceTarget(client, campaign_group_id): cgpt_service = client.GetService('CampaignGroupPerformanceTargetService', version='v201809') operations = [{'operator': 'ADD', 'operand': {'campaignGroupId': campaign_group_id, 'performanceTarget': {'efficiencyTargetType': 'CPC_LESS_THAN_OR_EQUAL_TO', 'efficiencyTargetValue': 3000000, 'spendTargetType': 'MAXIMUM', 'spendTarget': {'microAmount': 500000000}, 'volumeGoalType': 'MAXIMIZE_CLICKS', 'volumeTargetValue': 3000, 'startDate': datetime.datetime.now().strftime('%Y%m%d'), 'endDate': (datetime.datetime.now() + datetime.timedelta(90)).strftime('%Y%m%d')}}}] cgpt = cgpt_service.mutate(operations)['value'][0] print(('Campaign performance target with ID "%d" was added for campaign group ID "%d".' % (cgpt['id'], cgpt['campaignGroupId'])))
Creates a performance target for the campaign group. Args: client: an AdWordsClient instance. campaign_group_id: an integer ID for the campaign group.
codesearchnet
def create_proxy_api_files(output_files, proxy_module_root, output_dir): for file_path in output_files: module = get_module(os.path.dirname(file_path), output_dir) if not os.path.isdir(os.path.dirname(file_path)): os.makedirs(os.path.dirname(file_path)) contents = f'from {proxy_module_root}.{module} import *' with open(file_path, 'w') as fp: fp.write(contents)
Creates __init__.py files in proxy format for the Python API. Args: output_files: List of __init__.py file paths to create. proxy_module_root: Module root for proxy-import format. If specified, proxy files with content like `from proxy_module_root.proxy_module import *` will be created to enable import resolution under TensorFlow. output_dir: output API root directory.
github-repos
def __init__(self, to_track: Dict): self.to_track = to_track self._seen: Set[str] = set()
This class "tracks" a python dictionary by keeping track of which item is accessed. Args: to_track (Dict): The dictionary we wish to track
github-repos
def major_complex(network, state):
    log.info('Calculating major complex...')
    result = complexes(network, state)
    if result:
        result = max(result)
    else:
        empty_subsystem = Subsystem(network, state, ())
        result = _null_sia(empty_subsystem)
    log.info('Finished calculating major complex.')
    return result
Return the major complex of the network. Args: network (Network): The |Network| of interest. state (tuple[int]): The state of the network (a binary tuple). Returns: SystemIrreducibilityAnalysis: The |SIA| for the |Subsystem| with maximal |big_phi|.
codesearchnet
def show_available_noise_curves(return_curves=True, print_curves=False): if ((return_curves is False) and (print_curves is False)): raise ValueError(('Both return curves and print_curves are False.' + ' You will not see the options')) cfd = os.path.dirname(os.path.abspath(__file__)) curves = [curve.split('.')[0] for curve in os.listdir((cfd + '/noise_curves/'))] if print_curves: for f in curves: print(f) if return_curves: return curves return
List available sensitivity curves This function lists the available sensitivity curve strings in noise_curves folder. Args: return_curves (bool, optional): If True, return a list of curve options. print_curves (bool, optional): If True, print each curve option. Returns: (optional list of str): List of curve options. Raises: ValueError: Both args are False.
codesearchnet
def run_processor(
        processorClass,
        ocrd_tool=None,
        mets_url=None,
        resolver=None,
        workspace=None,
        page_id=None,
        log_level=None,
        input_file_grp=None,
        output_file_grp=None,
        parameter=None,
        working_dir=None,
):
    workspace = _get_workspace(workspace, resolver, mets_url, working_dir)
    if parameter is not None:
        # assumption: parameter strings without '://' are treated as local paths
        if not '://' in parameter:
            fname = os.path.abspath(parameter)
        else:
            fname = workspace.download_url(parameter)
        with open(fname, 'r') as param_json_file:
            parameter = json.load(param_json_file)
    else:
        parameter = {}
    log.debug("Running processor %s", processorClass)
    processor = processorClass(
        workspace,
        ocrd_tool=ocrd_tool,
        page_id=page_id,
        input_file_grp=input_file_grp,
        output_file_grp=output_file_grp,
        parameter=parameter
    )
    ocrd_tool = processor.ocrd_tool
    name = '%s v%s' % (ocrd_tool['executable'], processor.version)
    otherrole = ocrd_tool['steps'][0]
    log.debug("Processor instance %s (%s doing %s)", processor, name, otherrole)
    processor.process()
    workspace.mets.add_agent(
        name=name,
        _type='OTHER',
        othertype='SOFTWARE',
        role='OTHER',
        otherrole=otherrole
    )
    workspace.save_mets()
    return processor
Create a workspace for mets_url and run processor through it Args: parameter (string): URL to the parameter
juraj-google-style
def set_status(self, trial, status):
    trial.status = status
    if status in [Trial.TERMINATED, Trial.ERROR]:
        self.try_checkpoint_metadata(trial)
Sets status and checkpoints metadata if needed. Only checkpoints metadata if trial status is a terminal condition. PENDING, PAUSED, and RUNNING switches have checkpoints taken care of in the TrialRunner. Args: trial (Trial): Trial to checkpoint. status (Trial.status): Status to set trial to.
juraj-google-style
def unnest_collection(collection, df_list):
    for item in collection['link']['item']:
        if item['class'] == 'dataset':
            df_list.append(Dataset.read(item['href']).write('dataframe'))
        elif item['class'] == 'collection':
            nested_collection = request(item['href'])
            unnest_collection(nested_collection, df_list)
Unnest collection structure extracting all its datasets and converting them to Pandas Dataframes. Args: collection (OrderedDict): data in JSON-stat format, previously deserialized to a python object by json.load() or json.loads(), df_list (list): list variable which will contain the converted datasets. Returns: Nothing.
juraj-google-style
def retransmit(self, data): if data["method"] == "REGISTER": if not self.registered and self.register_retries < self.max_retries: logger.debug("<%s> Timeout exceeded. " % str(self.cuuid) + \ "Retransmitting REGISTER request.") self.register_retries += 1 self.register(data["address"], retry=False) else: logger.debug("<%s> No need to retransmit." % str(self.cuuid)) if data["method"] == "EVENT": if data["euuid"] in self.event_uuids: self.event_uuids[data["euuid"]]["retry"] += 1 if self.event_uuids[data["euuid"]]["retry"] > self.max_retries: logger.debug("<%s> Max retries exceeded. Timed out waiting " "for server for event: %s" % (data["cuuid"], data["euuid"])) logger.debug("<%s> <euuid:%s> Deleting event from currently " "processing event uuids" % (data["cuuid"], str(data["euuid"]))) del self.event_uuids[data["euuid"]] else: self.listener.send_datagram( serialize_data(data, self.compression, self.encryption, self.server_key), self.server) logger.debug("<%s> <euuid:%s> Scheduling to retry in %s " "seconds" % (data["cuuid"], str(data["euuid"]), str(self.timeout))) self.listener.call_later( self.timeout, self.retransmit, data) else: logger.debug("<%s> <euuid:%s> No need to " "retransmit." % (str(self.cuuid), str(data["euuid"])))
Processes messages that have been delivered from the transport protocol. Args: data (dict): A dictionary containing the packet data to resend. Returns: None Examples: >>> data {'method': 'REGISTER', 'address': ('192.168.0.20', 40080)}
juraj-google-style
class DPTFeatureFusionLayer(nn.Module):

    def __init__(self, config, align_corners=True):
        super().__init__()
        self.align_corners = align_corners
        self.projection = nn.Conv2d(config.fusion_hidden_size, config.fusion_hidden_size, kernel_size=1, bias=True)
        self.residual_layer1 = DPTPreActResidualLayer(config)
        self.residual_layer2 = DPTPreActResidualLayer(config)

    def forward(self, hidden_state, residual=None):
        if residual is not None:
            if hidden_state.shape != residual.shape:
                residual = nn.functional.interpolate(
                    residual, size=(hidden_state.shape[2], hidden_state.shape[3]),
                    mode='bilinear', align_corners=False)
            hidden_state = hidden_state + self.residual_layer1(residual)
        hidden_state = self.residual_layer2(hidden_state)
        hidden_state = nn.functional.interpolate(
            hidden_state, scale_factor=2, mode='bilinear', align_corners=self.align_corners)
        hidden_state = self.projection(hidden_state)
        return hidden_state
Feature fusion layer, merges feature maps from different stages. Args: config (`[DPTConfig]`): Model configuration class defining the model architecture. align_corners (`bool`, *optional*, defaults to `True`): The align_corner setting for bilinear upsample.
github-repos
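A standalone sketch of the two resize steps in forward (tensor shapes and channel count are assumptions): the residual is matched to hidden_state, the sum is refined, and the fused map is upsampled by 2x.

import torch
from torch import nn

hidden_state = torch.randn(1, 256, 12, 12)   # coarser stage
residual = torch.randn(1, 256, 24, 24)       # finer stage

residual = nn.functional.interpolate(
    residual, size=(hidden_state.shape[2], hidden_state.shape[3]),
    mode="bilinear", align_corners=False)
fused = hidden_state + residual              # residual layers omitted in this sketch
out = nn.functional.interpolate(fused, scale_factor=2, mode="bilinear", align_corners=True)
print(out.shape)  # torch.Size([1, 256, 24, 24])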
def proto_refactor_files(dest_dir, namespace, namespace_path): for dn, dns, fns in os.walk(dest_dir): for fn in fns: fn = os.path.join(dn, fn) if fnmatch.fnmatch(fn, '*.proto'): data = proto_refactor(fn, namespace, namespace_path) with open(fn, 'w') as f: f.write(data)
This method runs the refactoring on all the Protobuf files in the Dropsonde repo. Args: dest_dir (str): directory where the Protobuf files lives. namespace (str): the desired package name (i.e. "dropsonde.py2") namespace_path (str): the desired path corresponding to the package name (i.e. "dropsonde/py2")
juraj-google-style
def _create(self, monomer, mon_vector): while self.length != (self.n_units-1): if self.linear_chain: move_direction = np.array(mon_vector) / np.linalg.norm(mon_vector) else: move_direction = self._next_move_direction() self._add_monomer(monomer.copy(), mon_vector, move_direction)
create the polymer from the monomer Args: monomer (Molecule) mon_vector (numpy.array): molecule vector that starts from the start atom index to the end atom index
juraj-google-style
def save_as(self, filename: str) -> None: lib.TCOD_image_save(self.image_c, filename.encode('utf-8'))
Save the Image to a 32-bit .bmp or .png file. Args: filename (Text): File path to save this Image.
codesearchnet
def start(self):
    if not self.started:
        self.started = True
        self.executor = ThreadPoolExecutor(max_workers=32)
        self.poller = self.executor.submit(self.poll_events)
    else:
        raise IllegalStateError('Dispatcher is already started.')
Starts the event dispatcher. Initiates executor and start polling events. Raises: IllegalStateError: Can't start a dispatcher again when it's already running.
codesearchnet
def report_fhir_path_warning(self, element_path: str, fhir_path_constraint: str, msg: str) -> None:
Reports a FHIRPath constraint warning during validation and/or encoding. Args: element_path: The path to the field that the constraint is defined on. fhir_path_constraint: The FHIRPath constraint expression. msg: The warning message produced.
github-repos
def color_gen_map(colors: Iterable[Tuple[int, int, int]], indexes: Iterable[int]) -> List[Color]:
    ccolors = ffi.new('TCOD_color_t[]', colors)
    cindexes = ffi.new('int[]', indexes)
    cres = ffi.new('TCOD_color_t[]', max(indexes) + 1)
    lib.TCOD_color_gen_map(cres, len(ccolors), ccolors, cindexes)
    return [Color._new_from_cdata(cdata) for cdata in cres]
Return a smoothly defined scale of colors. If ``indexes`` is [0, 3, 9] for example, the first color from ``colors`` will be returned at 0, the 2nd will be at 3, and the 3rd will be at 9. All in-betweens will be filled with a gradient. Args: colors (Iterable[Union[Tuple[int, int, int], Sequence[int]]]): Array of colors to be sampled. indexes (Iterable[int]): A list of indexes. Returns: List[Color]: A list of Color instances. Example: >>> tcod.color_gen_map([(0, 0, 0), (255, 128, 0)], [0, 5]) [Color(0, 0, 0), Color(51, 25, 0), Color(102, 51, 0), \ Color(153, 76, 0), Color(204, 102, 0), Color(255, 128, 0)]
codesearchnet
def from_node(cls, node): if (not isinstance(node, aioxmpp.stanza.Message)): raise AttributeError('node must be a aioxmpp.stanza.Message instance') msg = cls() msg._to = node.to msg._sender = node.from_ if (None in node.body): msg.body = node.body[None] else: for key in node.body.keys(): msg.body = node.body[key] break for data in node.xep0004_data: if (data.title == SPADE_X_METADATA): for field in data.fields: if (field.var != '_thread_node'): msg.set_metadata(field.var, field.values[0]) else: msg.thread = field.values[0] return msg
Creates a new spade.message.Message from an aioxmpp.stanza.Message. Args: node (aioxmpp.stanza.Message): an aioxmpp Message Returns: spade.message.Message: a new spade Message
codesearchnet
def _render_fluent_timestep(self, fluent_type: str, fluents: Sequence[Tuple[str, np.array]], fluent_variables: Sequence[Tuple[str, List[str]]]) -> None: for fluent_pair, variable_list in zip(fluents, fluent_variables): name, fluent = fluent_pair _, variables = variable_list print(name) fluent = fluent.flatten() for variable, value in zip(variables, fluent): print('- {}: {} = {}'.format(fluent_type, variable, value)) print()
Prints `fluents` of given `fluent_type` as list of instantiated variables with corresponding values. Args: fluent_type (str): Fluent type. fluents (Sequence[Tuple[str, np.array]]): List of pairs (fluent_name, fluent_values). fluent_variables (Sequence[Tuple[str, List[str]]]): List of pairs (fluent_name, args).
juraj-google-style
def console_print_ex( con: tcod.console.Console, x: int, y: int, flag: int, alignment: int, fmt: str, ) -> None: lib.TCOD_console_printf_ex(_console(con), x, y, flag, alignment, _fmt(fmt))
Print a string on a console using a blend mode and alignment mode. Args: con (Console): Any Console instance. x (int): Character x position from the left. y (int): Character y position from the top. .. deprecated:: 8.5 Use :any:`Console.print_` instead.
juraj-google-style
def bulkWrite(self, endpoint, buffer, timeout=100):
    return self.dev.write(endpoint, buffer, timeout)
r"""Perform a bulk write request to the endpoint specified. Arguments: endpoint: endpoint number. buffer: sequence data buffer to write. This parameter can be any sequence type. timeout: operation timeout in milliseconds. (default: 100) Returns the number of bytes written.
juraj-google-style
def _recursive_remove_blank_dirs(self, path):
    path = os.path.abspath(path)

    if path == self.path or len(path) <= len(self.path):
        return

    if not os.path.exists(path):
        return self._recursive_remove_blank_dirs(
            os.path.dirname(path)
        )

    if os.listdir(path):
        return

    shutil.rmtree(path)
    return self._recursive_remove_blank_dirs(
        os.path.dirname(path)
    )
Make sure, that blank directories are removed from the storage. Args: path (str): Path which you suspect that is blank.
juraj-google-style
def triangle_area(point1, point2, point3):
    a = point_distance(point1, point2)
    b = point_distance(point1, point3)
    c = point_distance(point2, point3)
    s = (a + b + c) / 2.0
    return math.sqrt(s * (s - a) * (s - b) * (s - c))
Uses Heron's formula to find the area of a triangle based on the coordinates of three points. Args: point1: list or tuple, the x y coordinate of point one. point2: list or tuple, the x y coordinate of point two. point3: list or tuple, the x y coordinate of point three. Returns: The area of a triangle as a floating point number. Requires: The math module, point_distance().
juraj-google-style
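A quick standalone check of Heron's formula for a 3-4-5 right triangle (expected area 6.0):

import math

a, b, c = 3.0, 4.0, 5.0           # side lengths of the 3-4-5 triangle
s = (a + b + c) / 2.0
print(math.sqrt(s * (s - a) * (s - b) * (s - c)))  # 6.0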
def grad_dot(dy, x1, x2):
    if len(numpy.shape(x1)) == 1:
        dy = numpy.atleast_2d(dy)
    elif len(numpy.shape(x2)) == 1:
        dy = numpy.transpose(numpy.atleast_2d(dy))
        x2 = numpy.transpose(numpy.atleast_2d(x2))
    x2_t = numpy.transpose(numpy.atleast_2d(
        numpy.sum(x2, axis=tuple(numpy.arange(numpy.ndim(x2) - 2)))))
    dy_x2 = numpy.sum(dy, axis=tuple(-numpy.arange(numpy.ndim(x2) - 2) - 2))
    return numpy.reshape(numpy.dot(dy_x2, x2_t), numpy.shape(x1))
Gradient of NumPy dot product w.r.t. to the left hand side. Args: dy: The gradient with respect to the output. x1: The left hand side of the `numpy.dot` function. x2: The right hand side Returns: The gradient with respect to `x1` i.e. `x2.dot(dy.T)` with all the broadcasting involved.
juraj-google-style
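A quick shape check of grad_dot for matrix inputs (values are arbitrary): for y = x1.dot(x2), the gradient with respect to x1 must have the shape of x1.

import numpy

x1 = numpy.random.rand(3, 4)
x2 = numpy.random.rand(4, 5)
dy = numpy.ones((3, 5))            # upstream gradient, same shape as x1.dot(x2)
print(grad_dot(dy, x1, x2).shape)  # (3, 4)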
def _publish_internal(self, push_messages): import requests response = requests.post(((self.host + self.api_url) + '/push/send'), data=json.dumps([pm.get_payload() for pm in push_messages]), headers={'accept': 'application/json', 'accept-encoding': 'gzip, deflate', 'content-type': 'application/json'}) try: response_data = response.json() except ValueError: response.raise_for_status() raise PushServerError('Invalid server response', response) if ('errors' in response_data): raise PushServerError('Request failed', response, response_data=response_data, errors=response_data['errors']) if ('data' not in response_data): raise PushServerError('Invalid server response', response, response_data=response_data) response.raise_for_status() if (len(push_messages) != len(response_data['data'])): raise PushServerError(('Mismatched response length. Expected %d %s but only received %d' % (len(push_messages), ('receipt' if (len(push_messages) == 1) else 'receipts'), len(response_data['data']))), response, response_data=response_data) receipts = [] for (i, receipt) in enumerate(response_data['data']): receipts.append(PushResponse(push_message=push_messages[i], status=receipt.get('status', PushResponse.ERROR_STATUS), message=receipt.get('message', ''), details=receipt.get('details', None))) return receipts
Send push notifications The server will validate any type of syntax errors and the client will raise the proper exceptions for the user to handle. Each notification is of the form: { 'to': 'ExponentPushToken[xxx]', 'body': 'This text gets display in the notification', 'badge': 1, 'data': {'any': 'json object'}, } Args: push_messages: An array of PushMessage objects.
codesearchnet
def pubsub_pop_message(self, deadline=None): if not self.subscribed: excep = ClientError("you must subscribe before using " "pubsub_pop_message") raise tornado.gen.Return(excep) reply = None try: reply = self._reply_list.pop(0) raise tornado.gen.Return(reply) except IndexError: pass if deadline is not None: td = timedelta(seconds=deadline) yield self._condition.wait(timeout=td) else: yield self._condition.wait() try: reply = self._reply_list.pop(0) except IndexError: pass raise tornado.gen.Return(reply)
Pops a message for a subscribed client. Args: deadline (int): max number of seconds to wait (None => no timeout) Returns: Future with the popped message as result (or None if timeout or ConnectionError object in case of connection errors or ClientError object if you are not subscribed)
juraj-google-style
def _prepare_for_training(self, job_name=None): if (job_name is not None): self._current_job_name = job_name else: if self.base_job_name: base_name = self.base_job_name elif isinstance(self, sagemaker.algorithm.AlgorithmEstimator): base_name = self.algorithm_arn.split('/')[(- 1)] else: base_name = base_name_from_image(self.train_image()) self._current_job_name = name_from_base(base_name) if (self.output_path is None): local_code = get_config_value('local.local_code', self.sagemaker_session.config) if (self.sagemaker_session.local_mode and local_code): self.output_path = '' else: self.output_path = 's3:
Set any values in the estimator that need to be set before training. Args: * job_name (str): Name of the training job to be created. If not specified, one is generated, using the base name given to the constructor if applicable.
codesearchnet
def DeserializeExclusiveData(self, reader): self.Type = TransactionType.StateTransaction self.Descriptors = reader.ReadSerializableArray('neo.Core.State.StateDescriptor.StateDescriptor')
Deserialize full object. Args: reader (neo.IO.BinaryReader): Raises: Exception: If the transaction type is incorrect or if there are no claims.
juraj-google-style
def __call__(self, raw_speech: Union[np.ndarray, List[float], List[np.ndarray], List[List[float]]], padding: Union[bool, str, PaddingStrategy]=False, max_length: Optional[int]=None, pad_to_multiple_of: Optional[int]=None, padding_side: Optional[str]=None, return_tensors: Optional[Union[str, TensorType]]=None, verbose: bool=True, **kwargs) -> BatchEncoding: is_batched_numpy = isinstance(raw_speech, np.ndarray) and len(raw_speech.shape) > 1 if is_batched_numpy and len(raw_speech.shape) > 2: raise ValueError(f'Only mono-channel audio is supported for input to {self}') is_batched = is_batched_numpy or (isinstance(raw_speech, (list, tuple)) and isinstance(raw_speech[0], (np.ndarray, tuple, list))) if is_batched and (not isinstance(raw_speech[0], np.ndarray)): raw_speech = [np.asarray(speech) for speech in raw_speech] elif not is_batched and (not isinstance(raw_speech, np.ndarray)): raw_speech = np.asarray(raw_speech) if not is_batched: raw_speech = [raw_speech] if self.do_normalize: raw_speech = [(x - np.mean(x)) / np.sqrt(np.var(x) + 1e-05) for x in raw_speech] encoded_inputs = BatchEncoding({'input_values': raw_speech}) padded_inputs = self.pad(encoded_inputs, padding=padding, max_length=max_length, pad_to_multiple_of=pad_to_multiple_of, padding_side=padding_side, return_attention_mask=self.return_attention_mask, return_tensors=return_tensors, verbose=verbose) return padded_inputs
Main method to tokenize and prepare for the model one or several sequence(s) or one or several pair(s) of sequences. Args: raw_speech (`np.ndarray`, `List[float]`, `List[np.ndarray]`, `List[List[float]]`): The sequence or batch of sequences to be padded. Each sequence can be a numpy array, a list of float values, a list of numpy array or a list of list of float values. Must be mono channel audio, not stereo, i.e. single float per timestep. padding_side (`str`, *optional*): The side on which the model should have padding applied. Should be selected between ['right', 'left']. Default value is picked from the class attribute of the same name.
github-repos
def run(self, args): jlink = self.create_jlink(args) if args.list: print('Built-in Licenses: %s' % ', '.join(jlink.licenses.split(','))) print('Custom Licenses: %s' % ', '.join(jlink.custom_licenses.split(','))) elif args.add is not None: if jlink.add_license(args.add): print('Successfully added license.') else: print('License already exists.') elif args.erase: if jlink.erase_licenses(): print('Successfully erased all custom licenses.') else: print('Failed to erase custom licenses.')
Runs the license command. Args: self (LicenseCommand): the ``LicenseCommand`` instance args (Namespace): the arguments passed on the command-line Returns: ``None``
juraj-google-style
def use_gradient(grad_f): grad_f_name = register_to_random_name(grad_f) def function_wrapper(f): def inner(*inputs): state = {'out_value': None} out = f(*inputs) def store_out(out_value): 'Store the value of out to a python variable.' state['out_value'] = out_value store_name = ('store_' + f.__name__) store = tf.py_func(store_out, [out], (), stateful=True, name=store_name) def mock_f(*inputs): 'Mimic f by retrieving the stored value of out.' return state['out_value'] with tf.control_dependencies([store]): with gradient_override_map({'PyFunc': grad_f_name}): mock_name = ('mock_' + f.__name__) mock_out = tf.py_func(mock_f, inputs, out.dtype, stateful=True, name=mock_name) mock_out.set_shape(out.get_shape()) return mock_out return inner return function_wrapper
Decorator for easily setting custom gradients for TensorFlow functions. * DO NOT use this function if you need to serialize your graph. * This function will cause the decorated function to run slower. Example: def _foo_grad(op, grad): ... @use_gradient(_foo_grad) def foo(x1, x2, x3): ... Args: grad_f: function to use as gradient. Returns: A decorator to apply to the function you wish to override the gradient of.
codesearchnet
def MergeBaseClass(cls, base): bases = tuple((b for b in cls.bases if b != base)) bases += tuple((b for b in base.bases if b not in bases)) method_names = [m.name for m in cls.methods] methods = cls.methods + tuple((m for m in base.methods if m.name not in method_names)) constant_names = [c.name for c in cls.constants] constants = cls.constants + tuple((c for c in base.constants if c.name not in constant_names)) class_names = [c.name for c in cls.classes] classes = cls.classes + tuple((c for c in base.classes if c.name not in class_names)) decorators = cls.decorators or base.decorators if cls.slots: slots = cls.slots + tuple((s for s in base.slots or () if s not in cls.slots)) else: slots = base.slots return pytd.Class(name=cls.name, keywords=cls.keywords or base.keywords, bases=bases, methods=methods, constants=constants, classes=classes, decorators=decorators, slots=slots, template=cls.template or base.template)
Merge a base class into a subclass. Arguments: cls: The subclass to merge values into. pytd.Class. base: The superclass whose values will be merged. pytd.Class. Returns: a pytd.Class of the two merged classes.
github-repos
def check_mailfy(self, query, kwargs={}): import re import requests s = requests.Session() r1 = s.get("https: csrf_token = re.findall("csrf_token", r1.text)[0] r2 = s.post( 'https: data={"email": query}, headers={"X-CSRFToken": csrf_token} ) if '{"email": [{"message": "Another account is using' in r2.text: return r2.text else: return None
Verifying a mailfy query in this platform. This might be redefined in any class inheriting from Platform. The only condition is that any of this should return a dictionary as defined. Args: ----- query: The element to be searched. kwargs: Dictionary with extra parameters. Just in case. Return: ------- Returns the collected data if exists or None if not.
juraj-google-style
def get_frame(self, index=None, onset=None):
    if onset:
        index = int(onset * self.fps)
    return super(VideoStim, self).get_frame(index)
Overrides the default behavior by giving access to the onset argument. Args: index (int): Positional index of the desired frame. onset (float): Onset (in seconds) of the desired frame.
juraj-google-style
def is_process_running(process_name):
    is_running = False
    if os.path.isfile('/usr/bin/pgrep'):
        dev_null = open(os.devnull, 'wb')
        returncode = subprocess.call(['/usr/bin/pgrep', process_name], stdout=dev_null)
        is_running = bool(returncode == 0)
    return is_running
Check if a process with the given name is running. Args: (str): Process name, e.g. "Sublime Text" Returns: (bool): True if the process is running
juraj-google-style
def get_variant_genotypes(self, variant): if not self.has_index: raise NotImplementedError("Not implemented when IMPUTE2 file is " "not indexed (see genipe)") try: impute2_chrom = CHROM_STR_TO_INT[variant.chrom.name] except KeyError: raise ValueError( "Invalid chromosome ('{}') for IMPUTE2.".format(variant.chrom) ) variant_info = self._impute2_index[ (self._impute2_index.chrom == impute2_chrom) & (self._impute2_index.pos == variant.pos) ] if variant_info.shape[0] == 0: logging.variant_not_found(variant) return [] elif variant_info.shape[0] == 1: return self._get_biallelic_variant(variant, variant_info) else: return self._get_multialleic_variant(variant, variant_info)
Get the genotypes from a well formed variant instance. Args: marker (Variant): A Variant instance. Returns: A list of Genotypes instance containing a pointer to the variant as well as a vector of encoded genotypes.
juraj-google-style
def post_warning(self, name, message):
    self.post_command(OPERATIONS.CMD_POST_MESSAGE,
                      _create_message(name, states.WARNING_LEVEL, message))
Asynchronously post a user facing warning message about a service. Args: name (string): The name of the service message (string): The user facing warning message that will be stored for the service and can be queried later.
codesearchnet
def list(self, container_or_share_name, container=None, account=None): key = self.storage_client.storage_accounts.list_keys(self.resource_group_name, account).keys[0].value if container: bs = BlockBlobService(account_name=account, account_key=key) container_list = [] for i in bs.list_blobs(container_or_share_name).items: container_list.append(i.name) return container_list elif not container: fs = FileService(account_name=account, account_key=key) container_list = [] for i in fs.list_directories_and_files(container_or_share_name).items: container_list.append(i.name) return container_list else: raise ValueError("You have to pass a value for container param")
List the blobs/files inside a container/share_name. Args: container_or_share_name(str): Name of the container/share_name where we want to list the blobs/files. container(bool): flag to know it you are listing files or blobs. account(str): The name of the storage account.
juraj-google-style
def step1_get_authorize_url(self, redirect_uri=None, state=None): if (redirect_uri is not None): logger.warning('The redirect_uri parameter for OAuth2WebServerFlow.step1_get_authorize_url is deprecated. Please move to passing the redirect_uri in via the constructor.') self.redirect_uri = redirect_uri if (self.redirect_uri is None): raise ValueError('The value of redirect_uri must not be None.') query_params = {'client_id': self.client_id, 'redirect_uri': self.redirect_uri, 'scope': self.scope} if (state is not None): query_params['state'] = state if (self.login_hint is not None): query_params['login_hint'] = self.login_hint if self._pkce: if (not self.code_verifier): self.code_verifier = _pkce.code_verifier() challenge = _pkce.code_challenge(self.code_verifier) query_params['code_challenge'] = challenge query_params['code_challenge_method'] = 'S256' query_params.update(self.params) return _helpers.update_query_params(self.auth_uri, query_params)
Returns a URI to redirect to the provider. Args: redirect_uri: string, Either the string 'urn:ietf:wg:oauth:2.0:oob' for a non-web-based application, or a URI that handles the callback from the authorization server. This parameter is deprecated, please move to passing the redirect_uri in via the constructor. state: string, Opaque state string which is passed through the OAuth2 flow and returned to the client as a query parameter in the callback. Returns: A URI as a string to redirect the user to begin the authorization flow.
codesearchnet
def setup_service(api_name, api_version, credentials=None): if not credentials: credentials = oauth2client.client.GoogleCredentials.get_application_default( ) return apiclient.discovery.build( api_name, api_version, credentials=credentials)
Configures genomics API client. Args: api_name: Name of the Google API (for example: "genomics") api_version: Version of the API (for example: "v2alpha1") credentials: Credentials to be used for the gcloud API calls. Returns: A configured Google Genomics API client with appropriate credentials.
juraj-google-style
def flatten(self, max_value: int) -> FrozenSet[int]: return frozenset(self.iter(max_value))
Return a set of all values contained in the sequence set. Args: max_value: The maximum value, in place of any ``*``.
juraj-google-style
def recoverURL(self, url): self.setUserAgent() if "https: self.setProxy(protocol = "https") else: self.setProxy(protocol = "http") if ".onion" in url: try: pass except: pass url = url.replace(".onion", ".onion.cab") try: recurso = self.br.open(url) except: return None html = recurso.read() return html
Public method to recover a resource. Args: ----- url: The URL to be collected. Returns: -------- Returns a resource that has to be read, for instance, with html = self.br.read()
juraj-google-style
def print_type(self, t, literal=False) -> str:
Returns a string of the type of t. For example, if t is `0`, then this method returns "int" with literal=False or `Literal[0]` with literal=True. Args: t: An abstract value. literal: Whether to print literals literally.
github-repos
def render_table(data, headers=None): builder = HtmlBuilder() builder._render_objects(data, headers, datatype='dict') return builder._to_html()
Return a dictionary list formatted as a HTML table. Args: data: a list of dictionaries, one per row. headers: the keys in the dictionary to use as table columns, in order.
juraj-google-style
def run_benchmarks(benchmark_suite, verbose=True): def run(benchmark: BenchmarkFactoryFn, size: int): benchmark_instance_callable = benchmark(size) start = time.time() _ = benchmark_instance_callable() return time.time() - start cost_series = collections.defaultdict(list) size_series = collections.defaultdict(list) for benchmark_config in benchmark_suite: name = str(benchmark_config) num_runs = benchmark_config.num_runs if isinstance(benchmark_config, LinearRegressionBenchmarkConfig): size = benchmark_config.starting_point step = benchmark_config.increment else: assert isinstance(benchmark_config, BenchmarkConfig) size = benchmark_config.size step = 0 for run_id in range(num_runs): gc.collect() time_cost = run(benchmark_config.benchmark, size) cost_series[name].append(time_cost) size_series[name].append(size) if verbose: per_element_cost = time_cost / size print('%s: run %d of %d, per element time cost: %g sec' % (name, run_id + 1, num_runs, per_element_cost)) size += step if verbose: print('') if verbose: pad_length = max([len(str(bc)) for bc in benchmark_suite]) for benchmark_config in benchmark_suite: name = str(benchmark_config) if isinstance(benchmark_config, LinearRegressionBenchmarkConfig): from scipy import stats print() gradient, intercept, r_value, p_value, std_err = stats.linregress(size_series[name], cost_series[name]) print('Fixed cost ', intercept) print('Per-element ', gradient) print('R^2 ', r_value ** 2) else: assert isinstance(benchmark_config, BenchmarkConfig) per_element_median_cost = numpy.median(cost_series[name]) / benchmark_config.size std = numpy.std(cost_series[name]) / benchmark_config.size print('%s: p. element median time cost: %g sec, relative std: %.2f%%' % (name.ljust(pad_length, ' '), per_element_median_cost, std * 100 / per_element_median_cost)) return (size_series, cost_series)
Runs benchmarks, and collects execution times. A simple instrumentation to run a callable several times, collect and print its execution times. Args: benchmark_suite: A list of BenchmarkConfig. verbose: bool, whether to print benchmark results to stdout. Returns: A dictionary of the form string -> list of floats. Keys of the dictionary are benchmark names, values are execution times in seconds for each run.
github-repos
def erfinv(x):
    if any_symbolic_tensors((x,)):
        return Erfinv().symbolic_call(x)
    x = backend.convert_to_tensor(x)
    return backend.math.erfinv(x)
Computes the inverse error function of `x`, element-wise. Args: x: Input tensor. Returns: A tensor with the same dtype as `x`. Example: >>> x = np.array([-0.5, -0.2, -0.1, 0.0, 0.3]) >>> keras.ops.erfinv(x) array([-0.47694, -0.17914, -0.08886, 0. , 0.27246], dtype=float32)
github-repos
def closest_point(a, b, p):
    ap = [(p[0] - a[0]), (p[1] - a[1])]
    ab = [(b[0] - a[0]), (b[1] - a[1])]
    mag = float(((ab[0] ** 2) + (ab[1] ** 2)))
    proj = dot(ap, ab)
    if (mag == 0):
        dist = 0
    else:
        dist = (proj / mag)
    if (dist < 0):
        return [a[0], a[1]]
    elif (dist > 1):
        return [b[0], b[1]]
    else:
        return [(a[0] + (ab[0] * dist)), (a[1] + (ab[1] * dist))]
Finds closest point in a line segment Args: a ([float, float]): x and y coordinates. Line start b ([float, float]): x and y coordinates. Line end p ([float, float]): x and y coordinates. Point to find in the segment Returns: (float, float): x and y coordinates of the closest point
codesearchnet
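A short worked call, assuming closest_point and the dot() helper it relies on (not shown in the snippet) are in scope; the coordinates are made up.

# Hypothetical usage; assumes closest_point and its dot() helper are in scope.
a, b = [0.0, 0.0], [10.0, 0.0]           # horizontal segment
print(closest_point(a, b, [3.0, 5.0]))   # [3.0, 0.0] -> perpendicular projection onto the segment
print(closest_point(a, b, [-2.0, 4.0]))  # [0.0, 0.0] -> projection falls before the start, so it is clamped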
def getfileversion(self):
    (status, major_v, minor_v, release, info) = _C.Hgetfileversion(self._id)
    _checkErr('getfileversion', status, 'cannot get file version')
    return (major_v, minor_v, release, info)
Get file version info. Args: no argument Returns: 4-element tuple with the following components: -major version number (int) -minor version number (int) -complete library version number (int) -additional information (string) C library equivalent : Hgetlibversion
codesearchnet
def __init__(self, *args, **kwargs):
    super(ClaimTransaction, self).__init__(*args, **kwargs)
    self.Type = TransactionType.ClaimTransaction
Create an instance. Args: *args: **kwargs:
juraj-google-style
def add_ensembl_info(genes, ensembl_lines):
    LOG.info("Adding ensembl coordinates")
    if isinstance(ensembl_lines, DataFrame):
        ensembl_genes = parse_ensembl_gene_request(ensembl_lines)
    else:
        ensembl_genes = parse_ensembl_genes(ensembl_lines)
    for ensembl_gene in ensembl_genes:
        gene_obj = genes.get(ensembl_gene['hgnc_id'])
        if not gene_obj:
            continue
        gene_obj['chromosome'] = ensembl_gene['chrom']
        gene_obj['start'] = ensembl_gene['gene_start']
        gene_obj['end'] = ensembl_gene['gene_end']
        gene_obj['ensembl_gene_id'] = ensembl_gene['ensembl_gene_id']
Add the coordinates from Ensembl Args: genes(dict): Dictionary with all genes ensembl_lines(iterable): Iterable with raw ensembl info
juraj-google-style
def AddArguments(cls, argument_group):
    argument_group.add_argument(
        '--server', dest='server', type=str, action='store',
        default=cls._DEFAULT_SERVER, metavar='HOSTNAME',
        help='The hostname or server IP address of the server.')
    argument_group.add_argument(
        '--port', dest='port', type=int, action='store',
        default=cls._DEFAULT_PORT, metavar='PORT',
        help='The port number of the server.')
Adds command line arguments the helper supports to an argument group. This function takes an argument parser or an argument group object and adds to it all the command line arguments this helper supports. Args: argument_group (argparse._ArgumentGroup|argparse.ArgumentParser): argparse group.
juraj-google-style
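A wiring sketch, assuming AddArguments is exposed as a classmethod on a helper class (ServerArgumentsHelper is a made-up name) that also defines the _DEFAULT_SERVER and _DEFAULT_PORT attributes it reads.

import argparse

# Hypothetical wiring; only the AddArguments signature comes from the snippet above.
parser = argparse.ArgumentParser(description='Connect to a remote server.')
group = parser.add_argument_group('server arguments')
ServerArgumentsHelper.AddArguments(group)   # adds --server and --port to the group
options = parser.parse_args(['--server', '10.0.0.5', '--port', '8080'])
print(options.server, options.port)         # 10.0.0.5 8080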
def _clean_url(url):
    if url == 'default':
        url = DEFAULT_SERVER_HTTP_URL
    if url.startswith("ws"):
        raise ValueError("url should be the http or https URL for the server, not the websocket URL")
    return url.rstrip("/")
Produce a canonical Bokeh server URL. Args: url (str) A URL to clean, or "default". If "default" then the ``BOKEH_SERVER_HTTP_URL`` will be returned. Returns: str
juraj-google-style
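A few illustrative calls; the concrete URLs, and the value of DEFAULT_SERVER_HTTP_URL shown in the comment, are assumptions for the sketch.

# Hypothetical calls illustrating the normalisation rules above.
print(_clean_url('http://localhost:5006/'))   # 'http://localhost:5006' (trailing slash removed)
print(_clean_url('default'))                  # whatever DEFAULT_SERVER_HTTP_URL is, e.g. 'http://localhost:5006'
_clean_url('ws://localhost:5006/ws')          # raises ValueError: websocket URLs are rejected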
async def skip(self, query="1"):
    if not self.state == 'ready':
        logger.debug("Trying to skip from wrong state '{}'".format(self.state))
        return
    if query == "":
        query = "1"
    elif query == "all":
        query = str(len(self.queue) + 1)
    try:
        num = int(query)
    except TypeError:
        self.statuslog.error("Skip argument must be a number")
    except ValueError:
        self.statuslog.error("Skip argument must be a number")
    else:
        self.statuslog.info("Skipping")
        for i in range(num - 1):
            if len(self.queue) > 0:
                self.prev_queue.append(self.queue.pop(0))
        try:
            self.streamer.stop()
        except Exception as e:
            logger.exception(e)
The skip command Args: query (str): The number of items to skip
juraj-google-style
def rotate(self, vecs):
    assert vecs.dtype == np.float32
    assert vecs.ndim in [1, 2]
    if vecs.ndim == 2:
        return vecs @ self.R
    elif vecs.ndim == 1:
        return (vecs.reshape(1, -1) @ self.R).reshape(-1)
Rotate input vector(s) by the rotation matrix. Args: vecs (np.ndarray): Input vector(s) with dtype=np.float32. The shape can be a single vector (D, ) or several vectors (N, D) Returns: np.ndarray: Rotated vectors with the same shape and dtype as the input vecs.
juraj-google-style
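A small sketch of the shape handling. The snippet is a method, so this example calls it unbound with a stand-in object whose only attribute is the R rotation matrix it reads; the 2-D rotation and the _HasR class are made up for illustration.

import numpy as np

# Hypothetical stand-in object: only the R attribute used by rotate() is modelled.
class _HasR:
    def __init__(self, R):
        self.R = R

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=np.float32)
obj = _HasR(R)

single = np.array([1.0, 0.0], dtype=np.float32)
batch = np.eye(2, dtype=np.float32)
print(rotate(obj, single))   # shape (2,): the single vector rotated via vecs @ R
print(rotate(obj, batch))    # shape (2, 2): each row rotated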
def commit(self, sourcedir, targetdir, abs_config, abs_sourcedir, abs_targetdir):
    config_path, config_filename = os.path.split(abs_config)
    if not os.path.exists(config_path):
        os.makedirs(config_path)
    if not os.path.exists(abs_sourcedir):
        os.makedirs(abs_sourcedir)
    if not os.path.exists(abs_targetdir):
        os.makedirs(abs_targetdir)
    self.backend_engine.dump({
        'SOURCES_PATH': sourcedir,
        'TARGET_PATH': targetdir,
        "LIBRARY_PATHS": [],
        "OUTPUT_STYLES": "nested",
        "SOURCE_COMMENTS": False,
        "EXCLUDES": []
    }, abs_config, indent=4)
Commit project structure and configuration file Args: sourcedir (string): Source directory path. targetdir (string): Compiled files target directory path. abs_config (string): Configuration file absolute path. abs_sourcedir (string): ``sourcedir`` expanded as absolute path. abs_targetdir (string): ``targetdir`` expanded as absolute path.
juraj-google-style
def abspath(self, path):
    if not path.startswith(os.path.sep) or path.startswith('~'):
        path = os.path.expanduser(os.path.join(self.base_path, path))
    return path
Transform the path to an absolute path Args: path (string): The path to transform to an absolute path Returns: string: The absolute path to the file
juraj-google-style
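A usage sketch assuming POSIX paths. The method is called unbound here with a minimal stand-in object exposing only the base_path attribute it reads; both the class and the paths are hypothetical.

# Hypothetical usage; only the base_path attribute read by abspath() is modelled.
class _Ctx:
    base_path = '/srv/project'

print(abspath(_Ctx(), 'settings.json'))   # '/srv/project/settings.json' (relative path joined to base_path)
print(abspath(_Ctx(), '/etc/hosts'))      # '/etc/hosts' (already absolute, returned unchanged)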
def LockedWrite(self, cache_data):
    if isinstance(cache_data, six.text_type):
        cache_data = cache_data.encode(encoding=self._encoding)
    with self._thread_lock:
        if not self._EnsureFileExists():
            return False
        with self._process_lock_getter() as acquired_plock:
            if not acquired_plock:
                return False
            with open(self._filename, 'wb') as f:
                f.write(cache_data)
            return True
Acquire an interprocess lock and write a string. This method safely acquires the locks then writes a string to the cache file. If the string is written successfully the function will return True, if the write fails for any reason it will return False. Args: cache_data: string or bytes to write. Returns: bool: success
juraj-google-style
def __clean__(struct: Union[dict, list]) -> Union[dict, list]:
    if isinstance(struct, dict):
        for key, value in struct.items():
            if isinstance(value, bytes):
                struct[key] = base64.standard_b64encode(value).decode('ascii')
            elif isinstance(value, date):
                struct[key] = str(value)
            else:
                API.__clean__(value)
    elif isinstance(struct, list):
        for index, value in enumerate(struct):
            if isinstance(value, bytes):
                struct[index] = base64.standard_b64encode(value).decode('ascii')
            elif isinstance(value, date):
                struct[index] = str(value)
            else:
                API.__clean__(value)
    return struct
Helper to recursively clean up JSON data for API call. Converts bytes -> base64. Converts date -> str (yyyy-mm-dd). TODO: add conversion of datetime and time -> string. Args: struct: The kwargs being cleaned up. Returns: struct: The kwargs with replacements.
github-repos
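A short worked call, assuming the API class shown above is importable; the payload contents are made up.

from datetime import date

# Hypothetical call; assumes the API class defining __clean__ is in scope.
payload = {
    'day': date(2024, 1, 31),
    'blob': b'\x00\x01',
    'nested': [{'when': date(2024, 2, 1)}],
}
print(API.__clean__(payload))
# {'day': '2024-01-31', 'blob': 'AAE=', 'nested': [{'when': '2024-02-01'}]}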
def get_public_tokens(self):
    r = self.remote_utils.get_url((self.url() + 'public_tokens/'))
    return r.json()
Get a list of public tokens available on this server. Arguments: None Returns: str[]: list of public tokens
codesearchnet
def setOutputHandler(self, outputhandler):

    class OutputHandlerInternal(amplpython.OutputHandler):
        def output(self, kind, msg):
            outputhandler.output(kind, msg)

    self._outputhandler = outputhandler
    self._outputhandler_internal = OutputHandlerInternal()
    lock_and_call(
        lambda: self._impl.setOutputHandler(
            self._outputhandler_internal
        ),
        self._lock
    )
Sets a new output handler. Args: outputhandler: The function handling the AMPL output derived from interpreting user commands.
juraj-google-style
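A usage sketch: the internal wrapper forwards every (kind, msg) pair to outputhandler.output, so whatever is passed in needs an output() method. EchoOutputHandler is illustrative, and `ampl` is assumed to be an existing interpreter object exposing the method above.

# Hypothetical handler; the object passed in only needs an output(kind, msg) method.
class EchoOutputHandler:
    def output(self, kind, msg):
        print('[%s] %s' % (kind, msg.rstrip()))

# Assuming `ampl` is an existing interpreter instance exposing setOutputHandler.
ampl.setOutputHandler(EchoOutputHandler())
ampl.eval('display 1 + 1;')   # interpreter output now flows through EchoOutputHandler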
def add_scalar_value(self, value_buf):
    self.__container_node.add_child(_Node(value_buf))
    self.current_container_length += len(value_buf)
Add a node to the tree containing a scalar value. Args: value_buf (bytearray): bytearray containing the scalar value.
codesearchnet
def _move_bee(self, bee, new_values):
    score = np.nan_to_num(new_values[0])
    if (bee.score > score):
        bee.failed_trials += 1
    else:
        bee.values = new_values[1]
        bee.score = score
        bee.error = new_values[2]
        bee.failed_trials = 0
        self._logger.log('debug', 'Bee assigned to new merged position')
Moves a bee to a new position if new fitness score is better than the bee's current fitness score Args: bee (EmployerBee): bee to move new_values (tuple): (new score, new values, new fitness function return value)
codesearchnet
def merge(profile, branch, merge_into):
    data = merges.merge(profile, branch, merge_into)
    return data
Merge a branch into another branch. Args: profile A profile generated from ``simplygithub.authentication.profile``. Such profiles tell this module (i) the ``repo`` to connect to, and (ii) the ``token`` to connect with. branch The name of the branch to merge. merge_into The name of the branch you want to merge into. Returns: A dict with data about the merge.
codesearchnet
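A minimal call sketch; `profile` would come from simplygithub.authentication.profile, and the branch names are made up.

# Hypothetical call; assumes `profile` was built via simplygithub.authentication.profile.
result = merge(profile, branch='feature/login', merge_into='master')
print(result)   # dict describing the resulting merge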
def set_brightness(self, brightness):
    if not 25 <= brightness <= 255:
        raise ValueError("The brightness needs to be between 25 and 255.")
    payload = self.generate_payload(SET, {self.DPS_INDEX_BRIGHTNESS: brightness})
    data = self._send_receive(payload)
    return data
Set the brightness value of an rgb bulb. Args: brightness(int): Value for the brightness (25-255).
juraj-google-style
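A usage sketch; `bulb` stands in for an instance of the device class the method belongs to, which is an assumption here.

# Hypothetical usage; `bulb` is an instance of the bulb/device class above.
bulb.set_brightness(128)        # mid brightness, within the accepted 25-255 range
try:
    bulb.set_brightness(10)     # below 25 -> raises ValueError before anything is sent
except ValueError as err:
    print(err)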
def _accept(random_sample: float, cost_diff: float, temp: float) -> Tuple[bool, float]:
    exponent = -cost_diff / temp
    if exponent >= 0.0:
        return True, 1.0
    else:
        probability = math.exp(exponent)
        return probability > random_sample, probability
Calculates probability and draws if solution should be accepted. Based on exp(-Delta*E/T) formula. Args: random_sample: Uniformly distributed random number in the range [0, 1). cost_diff: Cost difference between new and previous solutions. temp: Current temperature. Returns: Tuple of boolean and float, with boolean equal to True if solution is accepted, and False otherwise. The float value is acceptance probability.
juraj-google-style
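Two worked calls with made-up numbers, assuming _accept is in scope; they show that improvements are always kept and that worse solutions are kept with probability exp(-cost_diff/temp).

import random

# Illustrative values only; temp and cost_diff are chosen for the example.
print(_accept(random.random(), cost_diff=-0.3, temp=1.0))  # (True, 1.0): a cost decrease is always accepted
accepted, prob = _accept(0.5, cost_diff=0.7, temp=1.0)
print(round(prob, 4))   # 0.4966 = exp(-0.7); accepted only when the random sample falls below it
print(accepted)         # False here, because 0.4966 > 0.5 is False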