Columns:
code: string, length 20 to 4.93k characters
docstring: string, length 33 to 1.27k characters
source: string, one of 3 classes
def write_makeconfig(_path):
    http_proxy = str(CFG["gentoo"]["http_proxy"])
    ftp_proxy = str(CFG["gentoo"]["ftp_proxy"])
    rsync_proxy = str(CFG["gentoo"]["rsync_proxy"])
    path.mkfile_uchroot(local.path('/') / _path)
    with open(_path, 'w') as makeconf:
        lines = ''  # base make.conf template elided in the source
        makeconf.write(lines)
        mounts = CFG["container"]["mounts"].value
        tmp_dir = str(CFG["tmp_dir"])
        mounts.append({"src": tmp_dir, "tgt": "/mnt/distfiles"})
        CFG["container"]["mounts"] = mounts
        if http_proxy is not None:
            http_s = "http_proxy={0}".format(http_proxy)
            https_s = "https_proxy={0}".format(http_proxy)
            makeconf.write(http_s + "\n")
            makeconf.write(https_s + "\n")
        if ftp_proxy is not None:
            fp_s = "ftp_proxy={0}".format(ftp_proxy)
            makeconf.write(fp_s + "\n")
        if rsync_proxy is not None:
            rp_s = "RSYNC_PROXY={0}".format(rsync_proxy)
            makeconf.write(rp_s + "\n")
Write a valid gentoo make.conf file to :path:. Args: path - The output path of the make.conf
juraj-google-style
def _get_lr_tensor(self): lr = (tf.squared_difference(1.0, tf.sqrt(self._mu)) / self._h_min) return lr
Get lr minimizing the surrogate. Returns: The lr_t.
codesearchnet
def _print_download_progress_msg(self, msg, flush=False): if self._interactive_mode(): self._max_prog_str = max(self._max_prog_str, len(msg)) sys.stdout.write("\r%-{}s".format(self._max_prog_str) % msg) sys.stdout.flush() if flush: print("\n") else: logging.info(msg)
Prints a message about download progress either to the console or TF log. Args: msg: Message to print. flush: Indicates whether to flush the output (only used in interactive mode).
juraj-google-style
def get_images_by_tail_number(self, tail_number, page=1, limit=100): url = REG_BASE.format(tail_number, str(self.AUTH_TOKEN), page, limit) return self._fr24.get_aircraft_image_data(url)
Fetch the images of a particular aircraft by its tail number. The images come in three sizes, so you can use whichever suits your need. Args: tail_number (str): The tail number, e.g. VT-ANL page (int): Optional page number; users on a paid flightradar24 plan can pass higher page numbers to get more data limit (int): Optional limit on the number of records returned Returns: A dict with the images of the aircraft in various sizes Example:: from pyflightdata import FlightData f=FlightData() #optional login f.login(myemail,mypassword) f.get_images_by_tail_number('VT-ANL') f.get_images_by_tail_number('VT-ANL',page=1,limit=10)
codesearchnet
def generate(cls, country_code, bank_code, account_code): spec = _get_iban_spec(country_code) bank_code_length = code_length(spec, 'bank_code') branch_code_length = code_length(spec, 'branch_code') bank_and_branch_code_length = (bank_code_length + branch_code_length) account_code_length = code_length(spec, 'account_code') if (len(bank_code) > bank_and_branch_code_length): raise ValueError('Bank code exceeds maximum size {}'.format(bank_and_branch_code_length)) if (len(account_code) > account_code_length): raise ValueError('Account code exceeds maximum size {}'.format(account_code_length)) bank_code = bank_code.rjust(bank_and_branch_code_length, '0') account_code = account_code.rjust(account_code_length, '0') iban = (((country_code + '??') + bank_code) + account_code) return cls(iban)
Generate an IBAN from its components. If the bank-code and/or account-number have fewer digits than required by their country-specific representation, the respective component is padded with zeros. Examples: To generate an IBAN do the following:: >>> bank_code = '37040044' >>> account_code = '532013000' >>> iban = IBAN.generate('DE', bank_code, account_code) >>> iban.formatted 'DE89 3704 0044 0532 0130 00' Args: country_code (str): The ISO 3166 alpha-2 country code. bank_code (str): The country-specific bank-code. account_code (str): The customer-specific account-code.
codesearchnet
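The '??' placeholder written into the IBAN string above stands for the two check digits, which the IBAN constructor is left to fill in. A minimal standalone sketch of that step, using the standard ISO 7064 mod-97-10 rule (the function name is illustrative and not part of the library above):

def compute_iban_check_digits(country_code, bban):
    """Return the two ISO 7064 mod-97-10 check digits for an IBAN."""
    # Append the country code plus '00', then map letters to numbers (A=10 ... Z=35).
    digits = ''.join(str(int(c, 36)) for c in bban + country_code + '00')
    return '{:02d}'.format(98 - int(digits) % 97)

# The German example from the docstring above: DE89 3704 0044 0532 0130 00
print(compute_iban_check_digits('DE', '370400440532013000'))  # 89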
def process_files(self, path, recursive=False): self._logger.info('Processing files in "%s"', path) for (path, file) in files_generator(path, recursive): if not file.endswith(BATCH_EXTENSION): self.process_file(os.path.join(path, file))
Apply normalizations over all files in the given directory. Iterate over all files in a given directory. Normalizations will be applied to each file, storing the result in a new file. The extension for the new file will be the one defined in BATCH_EXTENSION. Args: path: Path to the directory. recursive: Whether to find files recursively or not.
juraj-google-style
def _FindFileContainingSymbolInDb(self, symbol): try: file_proto = self._internal_db.FindFileContainingSymbol(symbol) except KeyError as error: if self._descriptor_db: file_proto = self._descriptor_db.FindFileContainingSymbol(symbol) else: raise error if not file_proto: raise KeyError('Cannot find a file containing %s' % symbol) return self._ConvertFileProtoToFileDescriptor(file_proto)
Finds the file in descriptor DB containing the specified symbol. Args: symbol: The name of the symbol to search for. Returns: A FileDescriptor that contains the specified symbol. Raises: KeyError: if the file cannot be found in the descriptor database.
juraj-google-style
def __init__(self, agent_interface_format=None, map_size=None): if not agent_interface_format: raise ValueError("Please specify agent_interface_format") self._agent_interface_format = agent_interface_format aif = self._agent_interface_format if (aif.use_feature_units or aif.use_camera_position or aif.use_raw_units): self.init_camera( aif.feature_dimensions, map_size, aif.camera_width_world_units) self._valid_functions = _init_valid_functions( aif.action_dimensions)
Initialize a Features instance matching the specified interface format. Args: agent_interface_format: See the documentation for `AgentInterfaceFormat`. map_size: The size of the map in world units, needed for feature_units. Raises: ValueError: if agent_interface_format isn't specified. ValueError: if map_size isn't specified when use_feature_units or use_camera_position is.
juraj-google-style
def _create_or_restore_slot_variable(self, slot_variable_position, slot_name, variable): named_slots = self._slot_dict(slot_name) variable_key = _var_key(variable) slot_variable = named_slots.get(variable_key, None) if slot_variable is None and context.executing_eagerly() and slot_variable_position.is_simple_variable() and (not ops.get_default_graph()._variable_creator_stack): initializer = trackable.CheckpointInitialValueCallable(checkpoint_position=slot_variable_position) slot_variable = self._get_or_make_slot_with_initializer(var=variable, initializer=initializer, shape=variable.shape, dtype=variable.dtype, slot_name=slot_name, op_name=self._name) if slot_variable is not None: slot_variable_position.restore(slot_variable) else: self._deferred_slot_restorations.setdefault(slot_name, {}).setdefault(variable_key, []).append(slot_variable_position)
Restore a slot variable's value, possibly creating it. Called when a variable which has an associated slot variable is created or restored. When executing eagerly, we create the slot variable with a restoring initializer. No new variables are created when graph building. Instead, _restore_slot_variable catches these after normal creation and adds restore ops to the graph. This method is nonetheless important when graph building for the case when a slot variable has already been created but `variable` has just been added to a dependency graph (causing us to realize that the slot variable needs to be restored). Args: slot_variable_position: A `trackable._CheckpointPosition` object indicating the slot variable `Trackable` object to be restored. slot_name: The name of this `Optimizer`'s slot to restore into. variable: The variable object this slot is being created for.
github-repos
def _ReadSelectedVolumes(self, volume_system, prefix='v'): volume_identifiers_string = self._input_reader.Read() volume_identifiers_string = volume_identifiers_string.strip() if (not volume_identifiers_string): return [] selected_volumes = self._ParseVolumeIdentifiersString(volume_identifiers_string, prefix=prefix) if (selected_volumes == ['all']): return ['{0:s}{1:d}'.format(prefix, volume_index) for volume_index in range(1, (volume_system.number_of_volumes + 1))] return selected_volumes
Reads the selected volumes provided by the user. Args: volume_system (APFSVolumeSystem): volume system. prefix (Optional[str]): volume identifier prefix. Returns: list[str]: selected volume identifiers including prefix. Raises: KeyboardInterrupt: if the user requested to abort. ValueError: if the volume identifiers string could not be parsed.
codesearchnet
def image_from_console(console: tcod.console.Console) -> tcod.image.Image: return tcod.image.Image._from_cdata(ffi.gc(lib.TCOD_image_from_console(_console(console)), lib.TCOD_image_delete))
Return an Image with a Console's pixel data. This effectively takes a screenshot of the Console. Args: console (Console): Any Console instance.
codesearchnet
def __exit__(self, exc_type, exc_val, exc_tb) -> bool: if self._worker_pool is not None: self._worker_pool.close() self._worker_pool = None self._context_manager_active = False if exc_type is ChildProcessError: sys.stderr.write(str(exc_val)) return True return False
Context manager cleanup. Closes the worker pool if it exists. Args: exc_type: The type of the raised exception, if any. exc_val: The raised exception, if any. exc_tb: The traceback of the raised exception, if any. Returns: `True` if an exception should be suppressed, `False` otherwise.
github-repos
def __mul__(self, rhs): if isinstance(rhs, scipy.sparse.spmatrix): def qIter(qs): for j in range(qs.shape[1]): qi = qs.getcol(j).toarray().ravel() yield qi return else: def qIter(qs): for j in range(qs.shape[1]): qi = qs[:, j] yield qi return result = np.empty(rhs.shape, dtype=np.complex128) for i, q in enumerate(qIter(rhs)): result[:, i] = self._solve(q) return result
Carries out the action of solving for wavefields. Args: rhs (sparse matrix): Right-hand side vector(s) Returns: np.ndarray: Wavefields
juraj-google-style
def _FormatAttrToken(self, token_data): return {'mode': token_data.file_mode, 'uid': token_data.user_identifier, 'gid': token_data.group_identifier, 'system_id': token_data.file_system_identifier, 'node_id': token_data.file_identifier, 'device': token_data.device}
Formats an attribute token as a dictionary of values. Args: token_data (bsm_token_data_attr32|bsm_token_data_attr64): AUT_ATTR32 or AUT_ATTR64 token data. Returns: dict[str, str]: token values.
codesearchnet
def conv_block(name, x, mid_channels, dilations=None, activation="relu", dropout=0.0): with tf.variable_scope(name, reuse=tf.AUTO_REUSE): x_shape = common_layers.shape_list(x) is_2d = len(x_shape) == 4 num_steps = x_shape[1] if is_2d: first_filter = [3, 3] second_filter = [1, 1] else: if num_steps == 1: first_filter = [1, 3, 3] else: first_filter = [2, 3, 3] second_filter = [1, 1, 1] x = conv("1_1", x, output_channels=mid_channels, filter_size=first_filter, dilations=dilations) x = tf.nn.relu(x) x = get_dropout(x, rate=dropout) if activation == "relu": x = conv("1_2", x, output_channels=mid_channels, filter_size=second_filter, dilations=dilations) x = tf.nn.relu(x) elif activation == "gatu": x_tanh = conv("1_tanh", x, output_channels=mid_channels, filter_size=second_filter, dilations=dilations) x_sigm = conv("1_sigm", x, output_channels=mid_channels, filter_size=second_filter, dilations=dilations) x = tf.nn.tanh(x_tanh) * tf.nn.sigmoid(x_sigm) x = get_dropout(x, rate=dropout) return x
2 layer conv block used in the affine coupling layer. Args: name: variable scope. x: 4-D or 5-D Tensor. mid_channels: Output channels of the second layer. dilations: Optional, list of integers. activation: relu or gatu. If relu, the second layer is relu(W*x) If gatu, the second layer is tanh(W1*x) * sigmoid(W2*x) dropout: Dropout probability. Returns: x: 4-D Tensor: Output activations.
juraj-google-style
def assign_nested_vars(variables, tensors, indices=None): if isinstance(variables, (tuple, list)): return tf.group(*[ assign_nested_vars(variable, tensor) for variable, tensor in zip(variables, tensors)]) if indices is None: return variables.assign(tensors) else: return tf.scatter_update(variables, indices, tensors)
Assign tensors to matching nested tuple of variables. Args: variables: Nested tuple or list of variables to update. tensors: Nested tuple or list of tensors to assign. indices: Batch indices to assign to; default to all. Returns: Operation.
juraj-google-style
def set(self, key, value): changed = super().set(key=key, value=value) if not changed: return False self._log.info('Saving configuration to "%s"...', self._filename) with open(self._filename, 'w') as stream: stream.write(self.content) self._log.info('Saved configuration to "%s".', self._filename) return True
Updates the value of the given key in the file. Args: key (str): Key of the property to update. value (str): New value of the property. Return: bool: Indicates whether or not a change was made.
juraj-google-style
def delete(self, file_path):
    now = datetime.datetime.now().isoformat()
    # The original referenced undefined `upload_path` and `file_name` variables;
    # the full Ndrive path passed in as `file_path` is used here instead.
    url = nurls['put'] + file_path
    headers = {'userid': self.user_id,
               'useridx': self.useridx,
               'Content-Type': "application/x-www-form-urlencoded; charset=UTF-8",
               'charset': 'UTF-8',
               'Origin': 'http:'}  # Origin URL truncated in the source
    r = self.session.delete(url=url, headers=headers)
    return self.resultManager(r.text)
Delete a file from Ndrive. Args: file_path: Full path of the file you want to delete, e.g. under /Picture/ Returns: True: delete succeeded False: delete failed
juraj-google-style
def sections_list(self, cmd=None): sections = list(self.common.sections) if not cmd: if self.bare is not None: sections.extend(self.bare.sections) return sections return [] sections.extend(self.subcmds[cmd].sections) if cmd in self._conf: sections.append(cmd) return sections
List of config sections used by a command. Args: cmd (str): command name, set to ``None`` or ``''`` for the bare command. Returns: list of str: list of configuration sections used by that command.
juraj-google-style
def handle_message_registered(self, msg_data, host): response = None if msg_data["method"] == "EVENT": logger.debug("<%s> <euuid:%s> Event message " "received" % (msg_data["cuuid"], msg_data["euuid"])) response = self.event(msg_data["cuuid"], host, msg_data["euuid"], msg_data["event_data"], msg_data["timestamp"], msg_data["priority"]) elif msg_data["method"] == "OK EVENT": logger.debug("<%s> <euuid:%s> Event confirmation message " "received" % (msg_data["cuuid"], msg_data["euuid"])) try: del self.event_uuids[msg_data["euuid"]] except KeyError: logger.warning("<%s> <euuid:%s> Euuid does not exist in event " "buffer. Key was removed before we could process " "it." % (msg_data["cuuid"], msg_data["euuid"])) elif msg_data["method"] == "OK NOTIFY": logger.debug("<%s> <euuid:%s> Ok notify " "received" % (msg_data["cuuid"], msg_data["euuid"])) try: del self.event_uuids[msg_data["euuid"]] except KeyError: logger.warning("<%s> <euuid:%s> Euuid does not exist in event " "buffer. Key was removed before we could process " "it." % (msg_data["cuuid"], msg_data["euuid"])) return response
Processes messages that have been delivered by a registered client. Args: msg_data (dict): The unserialized packet data delivered from the listener, processed based on the packet's method. host (tuple): The (address, host) tuple of the source message. Returns: A response that will be sent back to the client via the listener.
juraj-google-style
def _map_condition(self, wire_map, condition): if (condition is None): new_condition = None else: bit0 = (condition[0], 0) new_condition = (wire_map.get(bit0, bit0)[0], condition[1]) return new_condition
Use the wire_map dict to change the condition tuple's creg name. Args: wire_map (dict): a map from wires to wires condition (tuple): (ClassicalRegister,int) Returns: tuple(ClassicalRegister,int): new condition
codesearchnet
def compose_q(self, r: Rotation, normalize_quats: bool=True) -> Rotation: q1 = self.get_quats() q2 = r.get_quats() new_quats = quat_multiply(q1, q2) return Rotation(rot_mats=None, quats=new_quats, normalize_quats=normalize_quats)
Compose the quaternions of the current Rotation object with those of another. Depending on whether either Rotation was initialized with quaternions, this function may call torch.linalg.eigh. Args: r: An update rotation object Returns: An updated rotation object
github-repos
def ssa(scatterer, h_pol=True): ext_xs = ext_xsect(scatterer, h_pol=h_pol) return sca_xsect(scatterer, h_pol=h_pol)/ext_xs if ext_xs > 0.0 else 0.0
Single-scattering albedo for the current setup, with polarization. Args: scatterer: a Scatterer instance. h_pol: If True (default), use horizontal polarization. If False, use vertical polarization. Returns: The single-scattering albedo.
juraj-google-style
def __init__( self, optimizer, ls_max_iterations=10, ls_accept_ratio=0.9, ls_mode='exponential', ls_parameter=0.5, ls_unroll_loop=False, scope='optimized-step', summary_labels=() ): self.solver = LineSearch( max_iterations=ls_max_iterations, accept_ratio=ls_accept_ratio, mode=ls_mode, parameter=ls_parameter, unroll_loop=ls_unroll_loop ) super(OptimizedStep, self).__init__(optimizer=optimizer, scope=scope, summary_labels=summary_labels)
Creates a new optimized step meta optimizer instance. Args: optimizer: The optimizer which is modified by this meta optimizer. ls_max_iterations: Maximum number of line search iterations. ls_accept_ratio: Line search acceptance ratio. ls_mode: Line search mode, see LineSearch solver. ls_parameter: Line search parameter, see LineSearch solver. ls_unroll_loop: Unroll line search loop if true.
juraj-google-style
def __init__(self, unit_def): if isinstance(unit_def, str): unit = collections.defaultdict(int) import re for m in re.finditer(r"([A-Za-z]+)\s*\^*\s*([\-0-9]*)", unit_def): p = m.group(2) p = 1 if not p else int(p) k = m.group(1) unit[k] += p else: unit = {k: v for k, v in dict(unit_def).items() if v != 0} self._unit = check_mappings(unit)
Constructs a unit. Args: unit_def: A definition for the unit. Either a mapping of unit to powers, e.g., {"m": 2, "s": -1} represents "m^2 s^-1", or simply as a string "kg m^2 s^-1". Note that the supported format uses "^" as the power operator and all units must be space-separated.
juraj-google-style
def state_scope(self, state_fluents: Sequence[tf.Tensor]) -> Dict[(str, TensorFluent)]: return dict(zip(self.rddl.domain.state_fluent_ordering, state_fluents))
Returns a partial scope with current state-fluents. Args: state_fluents (Sequence[tf.Tensor]): The current state fluents. Returns: A mapping from state fluent names to :obj:`rddl2tf.fluent.TensorFluent`.
codesearchnet
def _GetDecodedStreamSize(self): self._file_object.seek(0, os.SEEK_SET) self._decoder = self._GetDecoder() self._decoded_data = b'' encoded_data_offset = 0 encoded_data_size = self._file_object.get_size() decoded_stream_size = 0 while (encoded_data_offset < encoded_data_size): read_count = self._ReadEncodedData(self._ENCODED_DATA_BUFFER_SIZE) if (read_count == 0): break encoded_data_offset += read_count decoded_stream_size += self._decoded_data_size return decoded_stream_size
Retrieves the decoded stream size. Returns: int: decoded stream size.
codesearchnet
def make_sample_her_transitions(replay_strategy, replay_k, reward_fun): if (replay_strategy == 'future'): future_p = (1 - (1.0 / (1 + replay_k))) else: future_p = 0 def _sample_her_transitions(episode_batch, batch_size_in_transitions): 'episode_batch is {key: array(buffer_size x T x dim_key)}\n ' T = episode_batch['u'].shape[1] rollout_batch_size = episode_batch['u'].shape[0] batch_size = batch_size_in_transitions episode_idxs = np.random.randint(0, rollout_batch_size, batch_size) t_samples = np.random.randint(T, size=batch_size) transitions = {key: episode_batch[key][(episode_idxs, t_samples)].copy() for key in episode_batch.keys()} her_indexes = np.where((np.random.uniform(size=batch_size) < future_p)) future_offset = (np.random.uniform(size=batch_size) * (T - t_samples)) future_offset = future_offset.astype(int) future_t = ((t_samples + 1) + future_offset)[her_indexes] future_ag = episode_batch['ag'][(episode_idxs[her_indexes], future_t)] transitions['g'][her_indexes] = future_ag info = {} for (key, value) in transitions.items(): if key.startswith('info_'): info[key.replace('info_', '')] = value reward_params = {k: transitions[k] for k in ['ag_2', 'g']} reward_params['info'] = info transitions['r'] = reward_fun(**reward_params) transitions = {k: transitions[k].reshape(batch_size, *transitions[k].shape[1:]) for k in transitions.keys()} assert (transitions['u'].shape[0] == batch_size_in_transitions) return transitions return _sample_her_transitions
Creates a sample function that can be used for HER experience replay. Args: replay_strategy (in ['future', 'none']): the HER replay strategy; if set to 'none', regular DDPG experience replay is used replay_k (int): the ratio between HER replays and regular replays (e.g. k = 4 -> 4 times as many HER replays as regular replays are used) reward_fun (function): function to re-compute the reward with substituted goals
codesearchnet
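For readers unfamiliar with hindsight experience replay, here is a minimal NumPy-only sketch of the 'future' relabeling used above (names and shapes are illustrative, not the baselines API): for a random subset of sampled transitions, the stored goal is replaced by an achieved goal observed later in the same episode.

import numpy as np

def her_future_relabel(achieved_goals, goals, t_samples, future_p, rng=np.random):
    """achieved_goals: (T, goal_dim) for one episode; goals: (batch, goal_dim);
    t_samples: (batch,) time indices of the sampled transitions."""
    T = achieved_goals.shape[0]
    batch = len(t_samples)
    relabel = rng.uniform(size=batch) < future_p
    # Pick a random timestep strictly after each sampled transition (clamped to the episode end).
    offsets = (rng.uniform(size=batch) * (T - t_samples)).astype(int)
    future_t = np.minimum(t_samples + 1 + offsets, T - 1)
    goals = goals.copy()
    goals[relabel] = achieved_goals[future_t[relabel]]
    return goals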
async def find(self, seq_set: SequenceSet, selected: SelectedMailbox, requirement: FetchRequirement=FetchRequirement.METADATA) -> AsyncIterable[Tuple[(int, MessageT)]]: for (seq, cached_msg) in selected.messages.get_all(seq_set): msg = (await self.get(cached_msg.uid, cached_msg, requirement)) if (msg is not None): (yield (seq, msg))
Find the active message UID and message pairs in the mailbox that are contained in the given sequences set. Message sequence numbers are resolved by the selected mailbox session. Args: seq_set: The sequence set of the desired messages. selected: The selected mailbox session. requirement: The data required from each message.
codesearchnet
def impulse_noise(x, severity=1): c = [0.03, 0.06, 0.09, 0.17, 0.27][(severity - 1)] x = tfds.core.lazy_imports.skimage.util.random_noise((np.array(x) / 255.0), mode='s&p', amount=c) x_clip = (np.clip(x, 0, 1) * 255) return around_and_astype(x_clip)
Impulse noise corruption to images. Args: x: numpy array, uncorrupted image, assumed to have uint8 pixel in [0,255]. severity: integer, severity of corruption. Returns: numpy array, image with uint8 pixels in [0,255]. Added impulse noise.
codesearchnet
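The skimage call above applies salt-and-pepper noise; a rough NumPy-only equivalent (illustrative, not the tfds helper) flips a fraction `amount` of pixels to pure black or white:

import numpy as np

def salt_and_pepper(image, amount, rng=np.random):
    """Set a random fraction `amount` of pixels of a uint8 image to 0 or 255."""
    noisy = image.copy()
    corrupted = rng.uniform(size=image.shape[:2]) < amount
    salt = rng.uniform(size=image.shape[:2]) < 0.5
    noisy[corrupted & salt] = 255    # salt: white pixels
    noisy[corrupted & ~salt] = 0     # pepper: black pixels
    return noisy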
def loader(self, file_name, bad_steps=None, **kwargs): new_tests = [] if (not os.path.isfile(file_name)): self.logger.info(('Missing file_\n %s' % file_name)) return None filesize = os.path.getsize(file_name) hfilesize = humanize_bytes(filesize) txt = ('Filesize: %i (%s)' % (filesize, hfilesize)) self.logger.debug(txt) temp_dir = tempfile.gettempdir() temp_filename = os.path.join(temp_dir, os.path.basename(file_name)) shutil.copy2(file_name, temp_dir) self.logger.debug(('tmp file: %s' % temp_filename)) self.logger.debug('HERE WE LOAD THE DATA') data = DataSet() fid = FileID(file_name) test_no = 1 data.test_no = test_no data.loaded_from = file_name data.channel_index = None data.channel_number = None data.creator = None data.item_ID = None data.schedule_file_name = None data.start_datetime = None data.test_ID = None data.test_name = None data.raw_data_files.append(fid) self.logger.debug('reading raw-data') self.mpr_data = None self.mpr_log = None self.mpr_settings = None self._load_mpr_data(temp_filename, bad_steps) length_of_test = self.mpr_data.shape[0] self.logger.debug(f'length of test: {length_of_test}') self.logger.debug('renaming columns') self._rename_headers() summary_df = self._create_summary_data() if summary_df.empty: txt = '\nCould not find any summary (stats-file)!' txt += ' (summary_df.empty = True)' txt += '\n -> issue make_summary(use_cellpy_stat_file=False)' warnings.warn(txt) data.dfsummary = summary_df data.dfdata = self.mpr_data data.raw_data_files_length.append(length_of_test) new_tests.append(data) self._clean_up(temp_filename) return new_tests
Loads data from biologics .mpr files. Args: file_name (str): path to .mpr file. bad_steps (list of tuples): (c, s) tuples of steps s (in cycle c) to skip loading. Returns: new_tests (list of data objects)
codesearchnet
def _table_viewer(table, rows_per_page=25, fields=None): if not table.exists(): raise Exception('Table %s does not exist' % table.full_name) if not table.is_listable(): return "Done" _HTML_TEMPLATE = u if fields is None: fields = google.datalab.utils.commands.get_field_list(fields, table.schema) div_id = google.datalab.utils.commands.Html.next_id() meta_count = ('rows: %d' % table.length) if table.length >= 0 else '' meta_name = table.full_name if table.job is None else ('job: %s' % table.job.id) if table.job: if table.job.cache_hit: meta_cost = 'cached' else: bytes = bigquery._query_stats.QueryStats._size_formatter(table.job.bytes_processed) meta_cost = '%s processed' % bytes meta_time = 'time: %.1fs' % table.job.total_time else: meta_cost = '' meta_time = '' data, total_count = google.datalab.utils.commands.get_data(table, fields, first_row=0, count=rows_per_page) if total_count < 0: fetched_count = len(data['rows']) if fetched_count < rows_per_page: total_count = fetched_count chart = 'table' if 0 <= total_count <= rows_per_page else 'paged_table' meta_entries = [meta_count, meta_time, meta_cost, meta_name] meta_data = '(%s)' % (', '.join([entry for entry in meta_entries if len(entry)])) return _HTML_TEMPLATE.format(div_id=div_id, static_table=google.datalab.utils.commands.HtmlBuilder .render_chart_data(data), meta_data=meta_data, chart_style=chart, source_index=google.datalab.utils.commands .get_data_source_index(table.full_name), fields=','.join(fields), total_rows=total_count, rows_per_page=rows_per_page, data=json.dumps(data, cls=google.datalab.utils.JSONEncoder))
Return a table viewer. This includes a static rendering of the first page of the table, that gets replaced by the charting code in environments where Javascript is executable and BQ is available. Args: table: the table to view. rows_per_page: how many rows to display at one time. fields: an array of field names to display; default is None which uses the full schema. Returns: A string containing the HTML for the table viewer.
juraj-google-style
def update_paths_and_config(self, config, pkg_dir_name, pkg_cache_dir=None): if pkg_cache_dir is None: pkg_cache_dir = self.package_cache_dir cached_dir_path = os.path.join(pkg_cache_dir, pkg_dir_name) if config.get('paths'): for path in config['paths']: path_to_append = os.path.join(cached_dir_path, path) logger.debug("Appending \"%s\" to python sys.path", path_to_append) sys.path.append(path_to_append) else: sys.path.append(cached_dir_path) if config.get('configs'): for config_filename in config['configs']: self.configs_to_merge.append(os.path.join(cached_dir_path, config_filename))
Handle remote source defined sys.paths & configs. Args: config (dict): git config dictionary pkg_dir_name (string): directory name of the stacker archive pkg_cache_dir (string): fully qualified path to the stacker cache directory
juraj-google-style
def __init__(self, left, right): if isinstance(left, Dist) and len(left) > 1: if (not isinstance(left, J) or evaluation.get_dependencies(*list(left.inverse_map))): raise StochasticallyDependentError( "Joint distribution with dependencies not supported.") if isinstance(right, Dist) and len(right) > 1: if (not isinstance(right, J) or evaluation.get_dependencies(*list(right.inverse_map))): raise StochasticallyDependentError( "Joint distribution with dependencies not supported.") assert isinstance(left, Dist) or isinstance(right, Dist) Dist.__init__(self, left=left, right=right)
Constructor. Args: left (Dist, numpy.ndarray) : Left hand side. right (Dist, numpy.ndarray) : Right hand side.
juraj-google-style
def add_update_user(self, user, capacity=None): if isinstance(user, str): user = hdx.data.user.User.read_from_hdx(user, configuration=self.configuration) elif isinstance(user, dict): user = hdx.data.user.User(user, configuration=self.configuration) if isinstance(user, hdx.data.user.User): users = self.data.get('users') if users is None: users = list() self.data['users'] = users if capacity is not None: user['capacity'] = capacity self._addupdate_hdxobject(users, 'name', user) return raise HDXError('Type %s cannot be added as a user!' % type(user).__name__)
Add new or update existing user in organization with new metadata. Capacity eg. member, admin must be supplied either within the User object or dictionary or using the capacity argument (which takes precedence). Args: user (Union[User,Dict,str]): Either a user id or user metadata either from a User object or a dictionary capacity (Optional[str]): Capacity of user eg. member, admin. Defaults to None. Returns: None
juraj-google-style
def __extract_directory(self, path, files, destination): destination_path = os.path.join(destination, path) if not os.path.exists(destination_path): os.makedirs(destination_path) for name, contents in files.items(): item_path = os.path.join(path, name) if 'files' in contents: self.__extract_directory( item_path, contents['files'], destination ) continue self.__extract_file(item_path, contents, destination)
Extracts a single directory to the specified directory on disk. Args: path (str): Relative (to the root of the archive) path of the directory to extract. files (dict): A dictionary of files from a *.asar file header. destination (str): The path to extract the files to.
juraj-google-style
def calculate(self, token_list_x, token_list_y): x, y = self.unique(token_list_x, token_list_y) try: result = len(x & y) / len(x | y) except ZeroDivisionError: result = 0.0 return result
Calculate similarity with the Jaccard coefficient. Concrete method. Args: token_list_x: [token, token, token, ...] token_list_y: [token, token, token, ...] Returns: Similarity.
juraj-google-style
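As a standalone illustration of the Jaccard coefficient computed above (plain Python, independent of the class and its `unique` helper):

def jaccard(tokens_x, tokens_y):
    """Intersection over union of the unique tokens of each list; 0.0 if both are empty."""
    x, y = set(tokens_x), set(tokens_y)
    union = x | y
    return len(x & y) / len(union) if union else 0.0

# Two of four unique tokens are shared, so the similarity is 0.5.
print(jaccard(["deep", "learning", "model"], ["deep", "model", "tree"]))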
def plot_seebeck_temp(self, doping='all', output='average'): import matplotlib.pyplot as plt if output == 'average': sbk = self._bz.get_seebeck(output='average') elif output == 'eigs': sbk = self._bz.get_seebeck(output='eigs') plt.figure(figsize=(22, 14)) tlist = sorted(sbk['n'].keys()) doping = self._bz.doping['n'] if doping == 'all' else doping for i, dt in enumerate(['n', 'p']): plt.subplot(121 + i) for dop in doping: d = self._bz.doping[dt].index(dop) sbk_temp = [] for temp in tlist: sbk_temp.append(sbk[dt][temp][d]) if output == 'average': plt.plot(tlist, sbk_temp, marker='s', label=str(dop) + ' $cm^{-3}$') elif output == 'eigs': for xyz in range(3): plt.plot(tlist, zip(*sbk_temp)[xyz], marker='s', label=str(xyz) + ' ' + str(dop) + ' $cm^{-3}$') plt.title(dt + '-type', fontsize=20) if i == 0: plt.ylabel("Seebeck \n coefficient ($\\mu$V/K)", fontsize=30.0) plt.xlabel('Temperature (K)', fontsize=30.0) p = 'lower right' if i == 0 else '' plt.legend(loc=p, fontsize=15) plt.grid() plt.xticks(fontsize=25) plt.yticks(fontsize=25) plt.tight_layout() return plt
Plot the Seebeck coefficient as a function of temperature for different doping levels. Args: doping: the default 'all' plots all the doping levels in the analyzer. Specify a list of doping levels if you want to plot only some. output: with 'average' you get an average of the three directions; with 'eigs' you get all three directions. Returns: a matplotlib object
juraj-google-style
def random_array(shape, mean=128., std=20.): x = np.random.random(shape) x = (x - np.mean(x)) / (np.std(x) + K.epsilon()) x = (x * std) + mean return x
Creates a uniformly distributed random array with the given `mean` and `std`. Args: shape: The desired shape mean: The desired mean (Default value = 128) std: The desired std (Default value = 20) Returns: Random numpy array of given `shape` uniformly distributed with desired `mean` and `std`.
juraj-google-style
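A quick NumPy-only check of the recipe used above (standardize a uniform sample, then rescale it; the small epsilon stands in for `K.epsilon()`):

import numpy as np

x = np.random.random((64, 64))
x = (x - np.mean(x)) / (np.std(x) + 1e-7)  # zero mean, unit std
x = x * 20.0 + 128.0                       # desired std and mean
print(round(float(np.mean(x)), 2), round(float(np.std(x)), 2))  # ~128.0 ~20.0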
def start(self): self._server.start()
Starts this server. >>> dispatcher = tf.data.experimental.service.DispatchServer(start=False) >>> dispatcher.start() Raises: tf.errors.OpError: Or one of its subclasses if an error occurs while starting the server.
github-repos
def create_dir(path): full_path = abs_path(path) if (not os.path.exists(full_path)): try: os.makedirs(full_path) except OSError as e: if (e.errno != os.errno.EEXIST): raise
Creates a directory if it does not exist already. Args: path: The path of the directory to create.
codesearchnet
def response(self, in_thread: Optional[bool] = None) -> "Message": data = {"channel": self["channel"]} if in_thread: if "message" in self: data["thread_ts"] = ( self["message"].get("thread_ts") or self["message"]["ts"] ) else: data["thread_ts"] = self.get("thread_ts") or self["ts"] elif in_thread is None: if "message" in self and "thread_ts" in self["message"]: data["thread_ts"] = self["message"]["thread_ts"] elif "thread_ts" in self: data["thread_ts"] = self["thread_ts"] return Message(data)
Create a response message. Depending on the incoming message the response can be in a thread. By default the response follows where the incoming message was posted. Args: in_thread (boolean): Overwrite the `threading` behaviour Returns: a new :class:`slack.event.Message`
juraj-google-style
def __init__(self, optimizer_path: str, optimizer_args: ListOrTuple[str], worker_count: Optional[int]=None, expand_to_input: str=_DEFAULT_INPUT_FILE_EXPANSION_TOKEN): if worker_count is not None and worker_count < 1: raise ValueError(f'The `worker_count` argument must be either `None` or a positive integer; got {worker_count}.') self._optimizer_path: str = optimizer_path self._optimizer_args: list[str] = list(optimizer_args) self._worker_count: Optional[int] = worker_count self._expand_to_input: str = expand_to_input self._worker_pool: Optional[multiprocessing.pool.Pool] = None self._context_manager_active: bool = False
TestCheckWriter constructor. Args: optimizer_path: The program to use for optimizing the HLO. optimizer_args: The arguments to pass into the optimizer tool. worker_count: The number of worker threads to use for parallel test-case transformations. If `None`, the worker count will be inferred. If 1, or if the instance is used without a context manager (i.e. a `with` block), the transformations will be performed sequentially. expand_to_input: When running the optimizer on a test case, all instances of this substring in `optimizer_args` will expand to the path of a temporary file containing the text of that test case.
github-repos
def prod(x, axis=None, keepdims=False): from .function_bases import prod as prod_base if axis is None: axis = range(x.ndim) elif not hasattr(axis, '__iter__'): axis = [axis] return prod_base(x, axis, keepdims)
Reduction along axes with product operation. Args: x (Variable): An input variable. axis (None, int or tuple of ints): Axis or axes along which product is calculated. Passing the default value `None` will reduce all dimensions. keepdims (bool): Flag whether the reduced axes are kept as a dimension with 1 element. Returns: ~nnabla.Variable: N-D array. Note: Backward computation is not accurate in a zero value input.
juraj-google-style
def get_special_tokens_mask(self, token_ids_0: List[int], token_ids_1: Optional[List[int]]=None, already_has_special_tokens: bool=False) -> List[int]: if already_has_special_tokens: return super().get_special_tokens_mask(token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True) if token_ids_1 is None: return [0] * len(token_ids_0) + [1] return [0] * len(token_ids_0) + [1] + [0] * len(token_ids_1) + [1]
Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer `prepare_for_model` method. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. already_has_special_tokens (`bool`, *optional*, defaults to `False`): Whether or not the token list is already formatted with special tokens for the model. Returns: `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token.
github-repos
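A pure-Python sketch of the mask layout produced above (illustrative, independent of the transformers API): zeros for sequence tokens and a one for the special token appended after each sequence.

def special_tokens_mask(ids_a, ids_b=None):
    if ids_b is None:
        return [0] * len(ids_a) + [1]
    return [0] * len(ids_a) + [1] + [0] * len(ids_b) + [1]

print(special_tokens_mask([11, 12, 13]))            # [0, 0, 0, 1]
print(special_tokens_mask([11, 12], [21, 22, 23]))  # [0, 0, 1, 0, 0, 0, 1]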
def _load_from_file_object(self, f): subtoken_strings = [] for line in f: s = line.strip() if ((s.startswith("'") and s.endswith("'")) or (s.startswith('"') and s.endswith('"'))): s = s[1:(- 1)] subtoken_strings.append(native_to_unicode(s)) self._init_subtokens_from_list(subtoken_strings) self._init_alphabet_from_tokens(subtoken_strings)
Load from a file object. Args: f: File object to load vocabulary from
codesearchnet
def get_rms_dist(self, struct1, struct2): struct1, struct2 = self._process_species([struct1, struct2]) struct1, struct2, fu, s1_supercell = self._preprocess(struct1, struct2) match = self._match(struct1, struct2, fu, s1_supercell, use_rms=True, break_on_match=False) if match is None: return None else: return match[0], max(match[1])
Calculate RMS displacement between two structures Args: struct1 (Structure): 1st structure struct2 (Structure): 2nd structure Returns: rms displacement normalized by (Vol / nsites) ** (1/3) and maximum distance between paired sites. If no matching lattice is found None is returned.
juraj-google-style
def gene_to_panels(self, case_obj): LOG.info("Building gene to panels") gene_dict = {} for panel_info in case_obj.get('panels', []): panel_name = panel_info['panel_name'] panel_version = panel_info['version'] panel_obj = self.gene_panel(panel_name, version=panel_version) if not panel_obj: LOG.warning("Panel: {0}, version {1} does not exist in database".format(panel_name, panel_version)) for gene in panel_obj['genes']: hgnc_id = gene['hgnc_id'] if hgnc_id not in gene_dict: gene_dict[hgnc_id] = set([panel_name]) continue gene_dict[hgnc_id].add(panel_name) LOG.info("Gene to panels done") return gene_dict
Fetch all gene panels and group them by gene Args: case_obj(scout.models.Case) Returns: gene_dict(dict): A dictionary with gene as keys and a set of panel names as value
juraj-google-style
def run_pass_pipeline(mlir_txt, pass_pipeline, show_debug_info=False): return pywrap_mlir.experimental_run_pass_pipeline(mlir_txt, pass_pipeline, show_debug_info)
Runs a pipeline over input module. Args: mlir_txt: Textual representation of the MLIR module. pass_pipeline: Pass pipeline to run on module. show_debug_info: Whether to include locations in the emitted textual form. Returns: A textual representation of the MLIR module corresponding to the transformed module.
github-repos
def make_merged_spec(self, dev): return self.__class__(*self._get_combined_properties(dev))
Returns a new DeviceSpec which incorporates `dev`. When combining specs, `dev` will take precedence over the current spec. So for instance: ``` first_spec = tf.DeviceSpec(job=0, device_type="CPU") second_spec = tf.DeviceSpec(device_type="GPU") combined_spec = first_spec.make_merged_spec(second_spec) ``` is equivalent to: ``` combined_spec = tf.DeviceSpec(job=0, device_type="GPU") ``` Args: dev: a `DeviceSpec` Returns: A new `DeviceSpec` which combines `self` and `dev`
github-repos
def append(self, data): if (isinstance(data, list) and (len(data) > 0)): self.nodes.append(data) else: self.nodes.append([data])
Appends items or lists to the Lattice Args: data (item,list) : The Item or List to be added to the Lattice
codesearchnet
def tersoff_input(self, structure, periodic=False, uc=True, *keywords): gin = self.keyword_line(*keywords) gin += self.structure_lines(structure, cell_flg=periodic, frac_flg=periodic, anion_shell_flg=False, cation_shell_flg=False, symm_flg=(not uc)) gin += self.tersoff_potential(structure) return gin
Gets a GULP input with Tersoff potential for an oxide structure Args: structure: pymatgen.core.structure.Structure periodic (Default=False): Flag denoting whether periodic boundary conditions are used uc (Default=True): Unit Cell Flag. keywords: GULP first line keywords.
codesearchnet
def reply(self, text): data = {'text': text, 'vchannel_id': self['vchannel_id']} if self.is_p2p(): data['type'] = RTMMessageType.P2PMessage data['to_uid'] = self['uid'] else: data['type'] = RTMMessageType.ChannelMessage data['channel_id'] = self['channel_id'] return RTMMessage(data)
Replies with a text message Args: text(str): message content Returns: RTMMessage
codesearchnet
def check_list_type(objects, allowed_type, name, allow_none=True): if (objects is None): if (not allow_none): raise TypeError(('%s is None, which is not allowed.' % name)) return objects if (not isinstance(objects, (tuple, list))): raise TypeError(('%s is not a list.' % name)) if (not all((isinstance(i, allowed_type) for i in objects))): type_list = sorted(list(set((type(obj) for obj in objects)))) raise TypeError(("%s contains types that don't match %s: %s" % (name, allowed_type.__name__, type_list))) return objects
Verify that objects in list are of the allowed type or raise TypeError. Args: objects: The list of objects to check. allowed_type: The allowed type of items in 'settings'. name: Name of the list of objects, added to the exception. allow_none: If set, None is also allowed. Raises: TypeError: if object is not of the allowed type. Returns: The list of objects, for convenient use in assignment.
codesearchnet
def sanitize_spec_name(name: str) -> str: if not name: return 'unknown' swapped = ''.join([c if c.isalnum() else '_' for c in name.lower()]) if swapped[0].isalpha(): return swapped else: return 'tensor_' + swapped
Sanitizes Spec names. Matches Graph Node and Python naming conventions. Without sanitization, names that are not legal Python parameter names can be set which makes it challenging to represent callables supporting the named calling capability. Args: name: The name to sanitize. Returns: A string that meets Python parameter conventions.
github-repos
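A few illustrative calls showing the sanitization rules described above (the function is restated so the snippet runs on its own; its behaviour mirrors the code two rows up):

def sanitize_spec_name(name):
    if not name:
        return 'unknown'
    swapped = ''.join(c if c.isalnum() else '_' for c in name.lower())
    return swapped if swapped[0].isalpha() else 'tensor_' + swapped

print(sanitize_spec_name('My Spec/1'))  # my_spec_1
print(sanitize_spec_name('1st input'))  # tensor_1st_input
print(sanitize_spec_name(''))           # unknown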
def __init__(self, channel): self.Login = channel.unary_unary( '/api.Dgraph/Login', request_serializer=api__pb2.LoginRequest.SerializeToString, response_deserializer=api__pb2.Response.FromString, ) self.Query = channel.unary_unary( '/api.Dgraph/Query', request_serializer=api__pb2.Request.SerializeToString, response_deserializer=api__pb2.Response.FromString, ) self.Mutate = channel.unary_unary( '/api.Dgraph/Mutate', request_serializer=api__pb2.Mutation.SerializeToString, response_deserializer=api__pb2.Assigned.FromString, ) self.Alter = channel.unary_unary( '/api.Dgraph/Alter', request_serializer=api__pb2.Operation.SerializeToString, response_deserializer=api__pb2.Payload.FromString, ) self.CommitOrAbort = channel.unary_unary( '/api.Dgraph/CommitOrAbort', request_serializer=api__pb2.TxnContext.SerializeToString, response_deserializer=api__pb2.TxnContext.FromString, ) self.CheckVersion = channel.unary_unary( '/api.Dgraph/CheckVersion', request_serializer=api__pb2.Check.SerializeToString, response_deserializer=api__pb2.Version.FromString, )
Constructor. Args: channel: A grpc.Channel.
juraj-google-style
def save(nifti_filename, numpy_data): nifti_filename = os.path.expanduser(nifti_filename) try: nifti_img = nib.Nifti1Image(numpy_data, numpy.eye(4)) nib.save(nifti_img, nifti_filename) except Exception as e: raise ValueError("Could not save file {0}.".format(nifti_filename)) return nifti_filename
Export a numpy array to a nifti file. TODO: currently using dummy headers and identity matrix affine transform. This can be expanded. Arguments: nifti_filename (str): A filename to which to save the nifti data numpy_data (numpy.ndarray): The numpy array to save to nifti Returns: String. The expanded filename that now holds the nifti data
juraj-google-style
def get_authority(config, metrics, rrset_channel, **kwargs): builder = authority.GCEAuthorityBuilder(config, metrics, rrset_channel, **kwargs) return builder.build_authority()
Get a GCEAuthority client. A factory function that validates configuration and creates a proper GCEAuthority. Args: config (dict): GCEAuthority related configuration. metrics (obj): :interface:`IMetricRelay` implementation. rrset_channel (asyncio.Queue): Queue used for sending messages to the reconciler plugin. kw (dict): Additional keyword arguments to pass to the Authority. Returns: A :class:`GCEAuthority` instance.
codesearchnet
def has_no_unchecked_field(self, locator, **kwargs): kwargs['checked'] = False return self.has_no_selector('field', locator, **kwargs)
Checks if the page or current node has no radio button or checkbox with the given label, value, or id, that is currently unchecked. Args: locator (str): The label, name, or id of an unchecked field. **kwargs: Arbitrary keyword arguments for :class:`SelectorQuery`. Returns: bool: Whether it doesn't exist.
codesearchnet
def _real_set(self, obj, old, value, hint=None, setter=None): if (self.property.matches(value, old) and (hint is None)): return was_set = (self.name in obj._property_values) if was_set: old_attr_value = obj._property_values[self.name] else: old_attr_value = old if (old_attr_value is not value): if isinstance(old_attr_value, PropertyValueContainer): old_attr_value._unregister_owner(obj, self) if isinstance(value, PropertyValueContainer): value._register_owner(obj, self) if (self.name in obj._unstable_themed_values): del obj._unstable_themed_values[self.name] if (self.name in obj._unstable_default_values): del obj._unstable_default_values[self.name] obj._property_values[self.name] = value self._trigger(obj, old, value, hint=hint, setter=setter)
Internal implementation helper to set property values. This function handles bookkeeping around noting whether values have been explicitly set, etc. Args: obj (HasProps) : The object the property is being set on. old (obj) : The previous value of the property, to compare against. value (obj) : The new value of the property. hint (event hint or None, optional) : An optional update event hint, e.g. ``ColumnStreamedEvent`` (default: None) Update event hints are usually used at times when better update performance can be obtained by special-casing in some way (e.g. streaming or patching column data sources) setter (ClientSession or ServerSession or None, optional) : This is used to prevent "boomerang" updates to Bokeh apps. (default: None) In the context of a Bokeh server application, incoming updates to properties will be annotated with the session that is doing the updating. This value is propagated through any subsequent change notifications that the update triggers. The session can compare the event setter to itself, and suppress any updates that originate from itself. Returns: None
codesearchnet
def slice(self, begin, end): if begin < 0 or end < 0: raise ValueError('Encountered negative index.') lines = self.lines[begin:end] font_attr_segs = {} for key in self.font_attr_segs: if key >= begin and key < end: font_attr_segs[key - begin] = self.font_attr_segs[key] annotations = {} for key in self.annotations: if not isinstance(key, int): annotations[key] = self.annotations[key] elif key >= begin and key < end: annotations[key - begin] = self.annotations[key] return RichTextLines(lines, font_attr_segs=font_attr_segs, annotations=annotations)
Slice a RichTextLines object. The object itself is not changed. A sliced instance is returned. Args: begin: (int) Beginning line index (inclusive). Must be >= 0. end: (int) Ending line index (exclusive). Must be >= 0. Returns: (RichTextLines) Sliced output instance of RichTextLines. Raises: ValueError: If begin or end is negative.
github-repos
def _read_from_seg(self, n): result = self._seg.read(size=n) if (result == ''): return result offset = self._seg.tell() if (offset > self._seg_valid_length): extra = (offset - self._seg_valid_length) result = result[:((- 1) * extra)] self._offset += len(result) return result
Read from current seg. Args: n: max number of bytes to read. Returns: valid bytes from the current seg. "" if no more is left.
codesearchnet
def GetUnicodeString(value): if isinstance(value, list): value = [GetUnicodeString(item) for item in value] return ''.join(value) if isinstance(value, py2to3.INTEGER_TYPES): value = '{0:d}'.format(value) if (not isinstance(value, py2to3.UNICODE_TYPE)): return codecs.decode(value, 'utf8', 'ignore') return value
Attempts to convert the argument to a Unicode string. Args: value (list|int|bytes|str): value to convert. Returns: str: string representation of the argument.
codesearchnet
def prune_non_existent_outputs(compound_match_query): if (len(compound_match_query.match_queries) == 1): return compound_match_query elif (len(compound_match_query.match_queries) == 0): raise AssertionError(u'Received CompoundMatchQuery with an empty list of MatchQuery objects.') else: match_queries = [] for match_query in compound_match_query.match_queries: match_traversals = match_query.match_traversals output_block = match_query.output_block present_locations_tuple = _get_present_locations(match_traversals) (present_locations, present_non_optional_locations) = present_locations_tuple new_output_fields = {} for (output_name, expression) in six.iteritems(output_block.fields): if isinstance(expression, OutputContextField): (location_name, _) = expression.location.get_location_name() if (location_name not in present_locations): raise AssertionError(u'Non-optional output location {} was not found in present_locations: {}'.format(expression.location, present_locations)) new_output_fields[output_name] = expression elif isinstance(expression, FoldedContextField): base_location = expression.fold_scope_location.base_location (location_name, _) = base_location.get_location_name() if (location_name not in present_locations): raise AssertionError(u'Folded output location {} was found in present_locations: {}'.format(base_location, present_locations)) new_output_fields[output_name] = expression elif isinstance(expression, TernaryConditional): (location_name, _) = expression.if_true.location.get_location_name() if (location_name in present_locations): if (location_name in present_non_optional_locations): new_output_fields[output_name] = expression.if_true else: new_output_fields[output_name] = expression else: raise AssertionError(u'Invalid expression of type {} in output block: {}'.format(type(expression).__name__, output_block)) match_queries.append(MatchQuery(match_traversals=match_traversals, folds=match_query.folds, output_block=ConstructResult(new_output_fields), where_block=match_query.where_block)) return CompoundMatchQuery(match_queries=match_queries)
Remove non-existent outputs from each MatchQuery in the given CompoundMatchQuery. Each of the 2^n MatchQuery objects (except one) has been pruned to exclude some Traverse blocks, For each of these, remove the outputs (that have been implicitly pruned away) from each corresponding ConstructResult block. Args: compound_match_query: CompoundMatchQuery object containing 2^n pruned MatchQuery objects (see convert_optional_traversals_to_compound_match_query) Returns: CompoundMatchQuery with pruned ConstructResult blocks for each of the 2^n MatchQuery objects
codesearchnet
def getServerSSLContext(self, hostname=None): sslctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH) if (hostname is None): hostname = socket.gethostname() certfile = self.getHostCertPath(hostname) if (certfile is None): raise s_exc.NoCertKey(('Missing .crt for %s' % hostname)) keyfile = self.getHostKeyPath(hostname) if (keyfile is None): raise s_exc.NoCertKey(('Missing .key for %s' % hostname)) sslctx.load_cert_chain(certfile, keyfile) return sslctx
Returns an ssl.SSLContext appropriate to listen on a socket Args: hostname: if None, the value from socket.gethostname is used to find the key in the servers directory. This name should match the not-suffixed part of two files ending in .key and .crt in the hosts subdirectory
codesearchnet
def __setattr__(cls, name, value): if cls.__initialized and name not in _POST_INIT_ATTRIBUTE_NAMES: raise AttributeError('May not change values: %s' % name) else: type.__setattr__(cls, name, value)
Overridden to avoid setting variables after init. Setting attributes on a class must work during the period of initialization to set the enumeration value class variables and build the name/number maps. Once __init__ has set the __initialized flag to True, setting any more values on the class is prohibited. The class is in effect frozen. Args: name: Name of value to set. value: Value to set.
juraj-google-style
def fit(self, X, truncated=3): self.n_sample, self.n_var = X.shape self.columns = X.columns self.tau_mat = X.corr(method='kendall').values self.u_matrix = np.empty([self.n_sample, self.n_var]) self.truncated = truncated self.depth = self.n_var - 1 self.trees = [] self.unis, self.ppfs = [], [] for i, col in enumerate(X): uni = self.model() uni.fit(X[col]) self.u_matrix[:, i] = uni.cumulative_distribution(X[col]) self.unis.append(uni) self.ppfs.append(uni.percent_point) self.train_vine(self.vine_type) self.fitted = True
Fit a vine model to the data. Args: X(numpy.ndarray): data to be fitted. truncated(int): max level to build the vine.
juraj-google-style
def create_identity_with_grad_check_fn(expected_gradient, expected_dtype=None): @custom_gradient.custom_gradient def _identity_with_grad_check(x): x = array_ops.identity(x) def grad(dx): if expected_dtype: assert dx.dtype == expected_dtype, 'dx.dtype should be %s but is: %s' % (expected_dtype, dx.dtype) expected_tensor = tensor_conversion.convert_to_tensor_v2(expected_gradient, dtype=dx.dtype, name='expected_gradient') with ops.control_dependencies([x]): assert_op = check_ops.assert_equal(dx, expected_tensor) with ops.control_dependencies([assert_op]): dx = array_ops.identity(dx) return dx return (x, grad) def identity_with_grad_check(x): return _identity_with_grad_check(x) return identity_with_grad_check
Returns a function that asserts its gradient has a certain value. This serves as a hook to assert intermediate gradients have a certain value. This returns an identity function. The identity's gradient function is also the identity function, except it asserts that the gradient equals `expected_gradient` and has dtype `expected_dtype`. Args: expected_gradient: The gradient function asserts that the gradient is this value. expected_dtype: The gradient function asserts the gradient has this dtype. Returns: An identity function whose gradient function asserts the gradient has a certain value.
github-repos
def _prefix_exists_in_gcs(gcs_prefix, credentials=None):
    gcs_service = _get_storage_service(credentials)
    # The slicing expression was truncated in the source at 'gs:'; stripping the
    # 'gs://' scheme and splitting off the bucket name is assumed here.
    bucket_name, prefix = gcs_prefix[len('gs://'):].split('/', 1)
    request = gcs_service.objects().list(bucket=bucket_name, prefix=prefix, maxResults=1)
    response = request.execute()
    return response.get('items', None)
Check whether there is a GCS object whose name starts with the prefix. Since GCS doesn't actually have folders, this is how we check instead. Args: gcs_prefix: The path; should start with 'gs://'. credentials: Optional credential to be used to load the file from gcs. Returns: True if the prefix matches at least one object in GCS. Raises: errors.HttpError: if it can't talk to the server
juraj-google-style
def print_dict(d, show_missing=True): for (k, v) in sorted(d.items()): if ((not v) and show_missing): print('{} -'.format(k)) elif isinstance(v, list): print(k) for item in v: print(' {}'.format(item)) elif isinstance(v, dict): print(k) for (kk, vv) in sorted(v.items()): print(' {:<20} {}'.format(kk, vv))
Prints a shallow dict to console. Args: d: Dict to print. show_missing: Whether to show keys with empty values.
codesearchnet
def encode(self, vecs): assert vecs.dtype == np.float32 assert vecs.ndim == 2 N, D = vecs.shape assert D == self.Ds * self.M, "input dimension must be Ds * M" codes = np.empty((N, self.M), dtype=self.code_dtype) for m in range(self.M): if self.verbose: print("Encoding the subspace: {} / {}".format(m, self.M)) vecs_sub = vecs[:, m * self.Ds : (m+1) * self.Ds] codes[:, m], _ = vq(vecs_sub, self.codewords[m]) return codes
Encode input vectors into PQ-codes. Args: vecs (np.ndarray): Input vectors with shape=(N, D) and dtype=np.float32. Returns: np.ndarray: PQ codes with shape=(N, M) and dtype=self.code_dtype
juraj-google-style
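A compact NumPy-only sketch of the product-quantization encode step above (shapes and names are illustrative; the class itself relies on scipy's `vq`): each vector is split into M sub-vectors and each sub-vector is assigned the index of its nearest codeword.

import numpy as np

def pq_encode(vecs, codewords):
    """vecs: (N, D); codewords: (M, Ks, Ds) with D == M * Ds. Returns (N, M) codes."""
    N, D = vecs.shape
    M, Ks, Ds = codewords.shape
    assert D == M * Ds
    codes = np.empty((N, M), dtype=np.uint8)
    for m in range(M):
        sub = vecs[:, m * Ds:(m + 1) * Ds]                              # (N, Ds)
        dists = ((sub[:, None, :] - codewords[m][None]) ** 2).sum(-1)   # (N, Ks)
        codes[:, m] = dists.argmin(axis=1)
    return codes

rng = np.random.default_rng(0)
codes = pq_encode(rng.normal(size=(5, 8)).astype(np.float32),
                  rng.normal(size=(4, 16, 2)).astype(np.float32))
print(codes.shape)  # (5, 4)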
def AddBackpropIndexedSlicesAccumulator(self, op: ops.Operation, grad): values = grad.values indices = grad.indices dense_shape = grad.dense_shape self.Exit() if self.outer_context: self.outer_context.Enter() if values.get_shape().is_fully_defined(): values_shape = tensor_shape.TensorShape([tensor_shape.Dimension(1)] + values.get_shape().dims[1:]) if self.outer_context: self.outer_context.Enter() values_acc = constant_op.constant(0, values.dtype, shape=values_shape, name='b_acc') if self.outer_context: self.outer_context.Exit() else: values_shape = _resource_safe_shape(op.inputs[0])[1:] values_shape = array_ops.concat([[1], values_shape], 0) values_acc = array_ops.zeros(values_shape, dtype=values.dtype) indices_acc = constant_op.constant([0], indices.dtype) shape_acc = None if dense_shape is not None: if dense_shape.get_shape().is_fully_defined(): if self.outer_context: self.outer_context.Enter() shape_acc = constant_op.constant(0, dense_shape.dtype, shape=dense_shape.get_shape()) if self.outer_context: self.outer_context.Exit() else: shape_acc = array_ops.zeros_like(array_ops.shape_internal(op.inputs[0], optimize=False, out_type=dense_shape.dtype), optimize=False) if self.outer_context: self.outer_context.Exit() self.Enter() self.AddName(values_acc.name) self.AddName(indices_acc.name) init_acc = [indices_acc, values_acc] if shape_acc is not None: self.AddName(shape_acc.name) init_acc.append(shape_acc) enter_acc = [_Enter(x, self._name, is_constant=False, parallel_iterations=self._parallel_iterations, use_input_shape=False, name='b_acc') for x in init_acc] enter_acc[0].set_shape([None]) if values_acc.shape.dims is not None: enter_acc[1].set_shape([None] + values_acc.shape.as_list()[1:]) self.loop_enters.extend(enter_acc) merge_acc = [merge([x, x], name='b_acc')[0] for x in enter_acc] switch_acc = [switch(x, self._pivot) for x in merge_acc] acc_indexed_slices = [array_ops.concat([xa[1], xv], 0) for xa, xv in zip(switch_acc[:2], [indices, values])] if shape_acc is not None: acc_indexed_slices.append(math_ops.maximum(dense_shape, switch_acc[2][1])) next_acc = [_NextIteration(x) for x in acc_indexed_slices] for xm, xn in zip(merge_acc, next_acc): xm.op._update_input(1, xn) exit_acc = [exit(x[0], name='b_acc') for x in switch_acc] self.loop_exits.extend(exit_acc) self.ExitResult(exit_acc) return indexed_slices.IndexedSlices(indices=exit_acc[0], values=exit_acc[1], dense_shape=exit_acc[2] if shape_acc is not None else None)
This is used for accumulating gradients that are IndexedSlices. This is essentially the equivalent of AddBackpropAccumulator but optimized for things like updating embeddings from within a while loop. Args: op: The Enter op for a loop invariant. grad: The partial gradients represented as an IndexedSlices. Returns: The accumulated IndexedSlices gradient of the loop invariant.
github-repos
def diff_toDelta(self, diffs):
    text = []
    for (op, data) in diffs:
        if (op == self.DIFF_INSERT):
            # High ascii will raise UnicodeDecodeError.  Use Unicode instead.
            data = data.encode('utf-8')
            text.append(('+' + urllib.quote(data, "!~*'();/?:@&=+$,
        elif (op == self.DIFF_DELETE):
            text.append(('-%d' % len(data)))
        elif (op == self.DIFF_EQUAL):
            text.append(('=%d' % len(data)))
    return '\t'.join(text)
Crush the diff into an encoded string which describes the operations required to transform text1 into text2. E.g. =3\t-2\t+ing -> Keep 3 chars, delete 2 chars, insert 'ing'. Operations are tab-separated. Inserted text is escaped using %xx notation. Args: diffs: Array of diff tuples. Returns: Delta text.
codesearchnet
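A hedged usage sketch against the diff_match_patch library this method belongs to (Python 2-era API, given the urllib.quote call above):

from diff_match_patch import diff_match_patch

dmp = diff_match_patch()
diffs = dmp.diff_main('good dog', 'bad dog')      # list of (op, text) tuples
delta = dmp.diff_toDelta(diffs)                   # tab-separated '=n', '-n', '+text' operations
restored = dmp.diff_fromDelta('good dog', delta)  # round-trips back to the same diff list
assert restored == diffs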
def register(config_class, processor_class, exist_ok=False): PROCESSOR_MAPPING.register(config_class, processor_class, exist_ok=exist_ok)
Register a new processor for this class. Args: config_class ([`PretrainedConfig`]): The configuration corresponding to the model to register. processor_class ([`ProcessorMixin`]): The processor to register.
github-repos
async def storm(self, text, opts=None, num=None, cmdr=False): mesgs = (await self._runStorm(text, opts, cmdr)) if (num is not None): nodes = [m for m in mesgs if (m[0] == 'node')] if (len(nodes) != num): raise AssertionError(f'Expected {num} nodes, got {len(nodes)}') return mesgs
A helper for executing a storm command and getting a list of storm messages. Args: text (str): Storm command to execute. opts (dict): Options to pass to the cortex during execution. num (int): Number of nodes to expect in the query output. Checked with an assert statement. cmdr (bool): If True, executes the line via the Cmdr CLI and will send output to outp. Notes: The opts dictionary will not be used if cmdr=True. Returns: list: A list of storm messages.
codesearchnet
def __init__(self, input_energy: energy.BitstringEnergy, num_expectation_samples: int, num_burnin_samples: int, name: Union[None, str]=None): super().__init__(input_energy, num_expectation_samples, name=name) self._kernel = GibbsWithGradientsKernel(input_energy) self._chain_state = tf.Variable(tfp.distributions.Bernoulli(probs=[0.5] * self.energy.num_bits, dtype=tf.int8).sample(), trainable=False) self.num_burnin_samples = num_burnin_samples
Initializes a GibbsWithGradientsInference. Args: input_energy: The parameterized energy function which defines this distribution via the equations of an energy based model. This class assumes that all parameters of `energy` are `tf.Variable`s and that they are all returned by `energy.variables`. num_expectation_samples: Number of samples to draw and use for estimating the expectation value. num_burnin_samples: Number of samples to discard when letting the chain equilibrate after updating the parameters of `input_energy`. name: Optional name for the model.
github-repos
def build_and_pickle_dump(self, abivalidate=False): self.build() if not abivalidate: return self.pickle_dump() isok, errors = self.abivalidate_inputs() if isok: return self.pickle_dump() errlines = [] for i, e in enumerate(errors): errlines.append("[%d] %s" % (i, e)) raise ValueError("\n".join(errlines))
Build dirs and files of the `Flow` and save the object in pickle format. Returns 0 if success. Args: abivalidate: If True, all the input files are validated by calling the abinit parser. If the validation fails, ValueError is raised.
juraj-google-style
def __init__(self, vlan_pcp=None): super().__init__(action_type=ActionType.OFPAT_SET_VLAN_PCP, length=8) self.vlan_pcp = vlan_pcp
Create an ActionVlanPCP with the optional parameters below. Args: vlan_pcp (int): VLAN Priority. .. note:: The vlan_pcp field is 8 bits long, but only the lower 3 bits have meaning.
juraj-google-style
def truncate_to_field_length(self, field, value): max_len = getattr(self.__class__, field).prop.columns[0].type.length if (value and (len(value) > max_len)): return value[:max_len] else: return value
Truncate the value of a string field to the field's max length. Use this in a validator to check/truncate values before inserting them into the database. Copy the below example code after ``@validates`` to your model class and replace ``field1`` and ``field2`` with your field name(s). :Example: from sqlalchemy.orm import validates # ... omitting other imports ... class MyModel(base.Base): field1 = Column(String(128)) field2 = Column(String(64)) @validates('field1', 'field2') def truncate(self, field, value): return self.truncate_to_field_length(field, value) Args: field (str): field name to validate value (str/unicode): value to validate Returns: str/unicode: value truncated to field max length
codesearchnet
def json_to_data(fn=None, return_json=True): def json_to_data_decorator(fn): @handle_type_error @wraps(fn) def get_data_wrapper(*args, **kwargs): kwargs['data'] = decode_json_body() if (not return_json): return fn(*args, **kwargs) return encode_json_body(fn(*args, **kwargs)) return get_data_wrapper if fn: return json_to_data_decorator(fn) return json_to_data_decorator
Decode JSON from the request and add it as the ``data`` parameter for the wrapped function. Args: return_json (bool, default True): Should the decorator automatically convert the returned value to JSON?
codesearchnet
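A hedged sketch of how such a decorator is typically applied in a Bottle-style handler. The route, payload, and import path are made up; only json_to_data itself comes from the code above:

from bottle import post

from my_api_helpers import json_to_data   # hypothetical module path for the decorator above

@post('/api/items')
@json_to_data
def create_item(data):
    # `data` is the decoded JSON request body injected by the decorator
    return {'created': data.get('name')}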
def _escaped_token_to_subtoken_strings(self, escaped_token): ret = [] start = 0 token_len = len(escaped_token) while start < token_len: for end in xrange(min(token_len, start + self._max_subtoken_len), start, -1): subtoken = escaped_token[start:end] if subtoken in self._all_subtoken_strings: ret.append(subtoken) start = end break else: assert False, "Token substring not found in subtoken vocabulary." return ret
Converts an escaped token string to a list of subtoken strings. Args: escaped_token: An escaped token as a unicode string. Returns: A list of subtokens as unicode strings.
juraj-google-style
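A self-contained sketch of the same greedy longest-match-first idea on a toy vocabulary (standalone function, not the class method above):

def greedy_subtokens(token, vocab, max_len):
    ret, start = [], 0
    while start < len(token):
        for end in range(min(len(token), start + max_len), start, -1):
            piece = token[start:end]              # try the longest candidate first
            if piece in vocab:
                ret.append(piece)
                start = end
                break
        else:
            raise ValueError('substring not in vocabulary')
    return ret

vocab = {'un', 'break', 'able', 'a', 'b', 'l', 'e', 'u', 'n'}
print(greedy_subtokens('unbreakable', vocab, max_len=5))   # ['un', 'break', 'able']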
def _FormatField(self, field): if (self._FIELD_DELIMITER and isinstance(field, py2to3.STRING_TYPES)): return field.replace(self._FIELD_DELIMITER, ' ') return field
Formats a field. Args: field (str): field value. Returns: str: formatted field value.
codesearchnet
def match(self, path): match = self._re.search(path) if match is None: return None kwargs_indexes = match.re.groupindex.values() args_indexes = [i for i in range(1, match.re.groups + 1) if i not in kwargs_indexes] args = [match.group(i) for i in args_indexes] kwargs = {} for name, index in match.re.groupindex.items(): kwargs[name] = match.group(index) return self._callback, args, kwargs
Return route handler with arguments if path matches this route. Arguments: path (str): Request path Returns: tuple or None: A tuple of three items: 1. Route handler (callable) 2. Positional arguments (list) 3. Keyword arguments (dict) ``None`` if the route does not match the path.
juraj-google-style
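A self-contained sketch of what match() does with named and unnamed groups, using a bare compiled regex as a minimal stand-in for the router class:

import re

pattern = re.compile(r'^/users/(?P<name>\w+)/posts/(\d+)$')

def match(path):
    m = pattern.search(path)
    if m is None:
        return None
    kwargs_indexes = m.re.groupindex.values()
    args = [m.group(i) for i in range(1, m.re.groups + 1) if i not in kwargs_indexes]
    kwargs = {name: m.group(i) for name, i in m.re.groupindex.items()}
    return args, kwargs

print(match('/users/alice/posts/42'))   # (['42'], {'name': 'alice'})
print(match('/users/alice'))            # None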
def strace_configure(self, port_width): if (port_width not in [1, 2, 4]): raise ValueError(('Invalid port width: %s' % str(port_width))) config_string = ('PortWidth=%d' % port_width) res = self._dll.JLINK_STRACE_Config(config_string.encode()) if (res < 0): raise errors.JLinkException('Failed to configure STRACE port') return None
Configures the trace port width for tracing. Note that configuration cannot occur while STRACE is running. Args: self (JLink): the ``JLink`` instance port_width (int): the trace port width to use. Returns: ``None`` Raises: ValueError: if ``port_width`` is not ``1``, ``2``, or ``4``. JLinkException: on error.
codesearchnet
def maybe_download(directory, filename, uri): tf.gfile.MakeDirs(directory) filepath = os.path.join(directory, filename) if tf.gfile.Exists(filepath): tf.logging.info(('Not downloading, file already found: %s' % filepath)) return filepath tf.logging.info(('Downloading %s to %s' % (uri, filepath))) try: tf.gfile.Copy(uri, filepath) except tf.errors.UnimplementedError: if uri.startswith('http'): inprogress_filepath = (filepath + '.incomplete') (inprogress_filepath, _) = urllib.urlretrieve(uri, inprogress_filepath, reporthook=download_report_hook) print() tf.gfile.Rename(inprogress_filepath, filepath) else: raise ValueError(('Unrecognized URI: ' + filepath)) statinfo = os.stat(filepath) tf.logging.info(('Successfully downloaded %s, %s bytes.' % (filename, statinfo.st_size))) return filepath
Download filename from uri unless it's already in directory. Copies a remote file to local if that local file does not already exist. If the local file pre-exists this function call, it does not check that the local file is a copy of the remote. Remote filenames can be filepaths, any URI readable by tensorflow.gfile, or a URL. Args: directory: path to the directory that will be used. filename: name of the file to download to (do nothing if it already exists). uri: URI to copy (or download) from. Returns: The path to the downloaded file.
codesearchnet
def has_shell_command(self, command): try: output = self.shell(['command', '-v', command]).decode('utf-8').strip() return command in output except AdbError: return False
Checks to see if a given command exists on the device. Args: command: A string that is the name of the command to check. Returns: A boolean that is True if the command exists and False otherwise.
juraj-google-style
def decode_response(status: int, headers: MutableMapping, body: bytes) -> dict: data = decode_body(headers, body) raise_for_status(status, headers, data) raise_for_api_error(headers, data) return data
Decode incoming response Args: status: Response status headers: Response headers body: Response body Returns: Response data
codesearchnet
def tangent(f): node = annotate.resolve_calls(f) RemoveWith().visit(node) wrapped = functools.wraps(f)(compile_.compile_function(node)) wrapped.tangent = f return wrapped
A decorator which removes the `with insert_grad_of` statement. This allows the function to be called as usual. Args: f: A function Returns: A function with any `with insert_grad_of` context managers removed.
juraj-google-style
def get_build_info(): return build_info.build_info
Get a dictionary describing TensorFlow's build environment. Values are generated when TensorFlow is compiled, and are static for each TensorFlow package. The return value is a dictionary with string keys such as: - cuda_version - cudnn_version - is_cuda_build - is_rocm_build - msvcp_dll_names - nvcuda_dll_name - cudart_dll_name - cudnn_dll_name Note that the actual keys and values returned by this function is subject to change across different versions of TensorFlow or across platforms. Returns: A Dictionary describing TensorFlow's build environment.
github-repos
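This is exposed publicly as tf.sysconfig.get_build_info(); a short usage sketch (keys vary by platform and TF version, as the docstring notes):

import tensorflow as tf

info = tf.sysconfig.get_build_info()
print(info.get('is_cuda_build'))   # True on CUDA-enabled packages, False on CPU-only builds
print(info.get('cuda_version'))    # e.g. '11.8' on a GPU build; may be absent otherwise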
def get_layer_index_bound_by_layer_name(layers, layer_range=None): if layer_range is not None: if len(layer_range) != 2: raise ValueError(f'layer_range must be a list or tuple of length 2. Received: layer_range = {layer_range} of length {len(layer_range)}') if not isinstance(layer_range[0], str) or not isinstance(layer_range[1], str): raise ValueError(f'layer_range should contain string type only. Received: {layer_range}') else: return [0, len(layers)] lower_index = [idx for idx, layer in enumerate(layers) if re.match(layer_range[0], layer.name)] upper_index = [idx for idx, layer in enumerate(layers) if re.match(layer_range[1], layer.name)] if not lower_index or not upper_index: raise ValueError(f'Passed layer_names do not match the layer names in the model. Received: {layer_range}') if min(lower_index) > max(upper_index): return [min(upper_index), max(lower_index) + 1] return [min(lower_index), max(upper_index) + 1]
Get the layer indexes from the model based on layer names. The layer indexes can be used to slice the model into sub models for display. Args: layers: a list of `Layer` instances from a `Model`. layer_range: a list or tuple of 2 strings, the starting layer name and ending layer name (both inclusive) for the result. All layers will be included when `None` is provided. Returns: The index bounds of the matched layers as [first_layer_index, last_layer_index + 1].
github-repos
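This helper backs the layer_range argument of Model.summary(); a hedged usage sketch (layer names are made up, and summary(layer_range=...) requires a reasonably recent Keras/TF release):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, name='dense_in'),
    tf.keras.layers.Dense(16, name='dense_mid'),
    tf.keras.layers.Dense(1, name='dense_out'),
])
model.build((None, 8))

# Only the rows from 'dense_in' through 'dense_mid' (inclusive) are printed.
model.summary(layer_range=['dense_in', 'dense_mid'])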
def local_variables_initializer(): if context.executing_eagerly(): return control_flow_ops.no_op(name='local_variables_initializer') return variables_initializer(local_variables())
Returns an Op that initializes all local variables. This is just a shortcut for `variables_initializer(local_variables())` @compatibility(TF2) In TF2, variables are initialized immediately when they are created. There is no longer a need to run variable initializers before using them. @end_compatibility Returns: An Op that initializes all local variables in the graph.
github-repos
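A TF1-style usage sketch in graph mode (in TF2 eager execution the op above is a no-op because variables initialize on creation):

import tensorflow.compat.v1 as tf

tf.disable_eager_execution()
counter = tf.get_local_variable('counter', shape=[], initializer=tf.zeros_initializer())
with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())   # initializes local (non-global) variables
    print(sess.run(counter))                     # 0.0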
def get_template(template): from cloud_inquisitor.database import db tmpl = db.Template.find_one(template_name=template) if not tmpl: raise InquisitorError('No such template found: {}'.format(template)) tmplenv = Environment(loader=BaseLoader, autoescape=True) tmplenv.filters['json_loads'] = json.loads tmplenv.filters['slack_quote_join'] = lambda data: ', '.join('`{}`'.format(x) for x in data) return tmplenv.from_string(tmpl.template)
Return a Jinja2 template by name, loaded from the database. Args: template (str): Name of the template to return Returns: A Jinja2 Template object
juraj-google-style
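A self-contained sketch of the two custom Jinja2 filters registered above, outside the database lookup (the template string and data are made up):

import json
from jinja2 import BaseLoader, Environment

env = Environment(loader=BaseLoader, autoescape=True)
env.filters['json_loads'] = json.loads
env.filters['slack_quote_join'] = lambda data: ', '.join('`{}`'.format(x) for x in data)

tmpl = env.from_string('Open issues: {{ issues | slack_quote_join }}')
print(tmpl.render(issues=['i-123', 'i-456']))   # Open issues: `i-123`, `i-456`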
def pretty_emit(self, record, is_header=False, task_level=None): task = record.task or self.cur_task if task_level is None: task_level = self.cur_depth_level if is_header: extra_prefix = ( self.get_task_indicator(task_level - 1) + ' ' + ('' if self.am_i_main_thread else '[%s] ' % self.cur_thread) + task + ': ' ) record.levelno = logging.INFO else: extra_prefix = ' ' + self.get_task_indicator(task_level) + ' ' if task: record.msg = ( ' ' * (task_level - 1) + extra_prefix + str(record.msg) ) super().emit(record) super().flush()
Wrapper around the :class:`logging.StreamHandler` emit method that adds task decoration to the message. Args: record (logging.LogRecord): log record to emit is_header (bool): if this record is a header, usually a start-of-task or end-of-task message task_level (int): If passed, will be used as the current nested task level instead of calculating it from the current tasks Returns: None
juraj-google-style
def group_alleles_by_start_end_Xbp(arr, bp=28):
    starts = arr[:, 0:bp]
    ends = arr[:, -bp:]
    starts_ends_idxs = defaultdict(list)
    (l, seq_len) = arr.shape
    for i in range(l):
        start_i = starts[i]
        end_i = ends[i]
        start_i_str = ''.join([str(x) for x in start_i])
        end_i_str = ''.join([str(x) for x in end_i])
        starts_ends_idxs[(start_i_str + end_i_str)].append(i)
    return starts_ends_idxs
Group alleles by matching starts and ends. Args: arr (numpy.array): 2D int matrix of alleles bp (int): length of the starts and ends to group by Returns: dict of lists: key of start + end strings to list of indices of alleles with matching starts and ends
codesearchnet
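A tiny run of the grouping function above (bp shortened to 2 so the toy matrix stays readable; assumes numpy and collections.defaultdict are imported in its module):

import numpy as np

arr = np.array([
    [1, 0, 1, 1, 0, 1],
    [1, 0, 0, 0, 0, 1],   # same first two and last two columns as row 0
    [0, 1, 1, 1, 1, 0],
], dtype=int)

groups = group_alleles_by_start_end_Xbp(arr, bp=2)
print(dict(groups))   # {'1001': [0, 1], '0110': [2]}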
def _parse_session_run_index(self, event): metadata_string = event.log_message.message try: metadata = json.loads(metadata_string) except ValueError as e: logger.error( "Could not decode metadata string '%s' for step value: %s", metadata_string, e) return constants.SENTINEL_FOR_UNDETERMINED_STEP try: return metadata["session_run_index"] except KeyError: logger.error( "The session_run_index is missing from the metadata: %s", metadata_string) return constants.SENTINEL_FOR_UNDETERMINED_STEP
Parses the session_run_index value from the event proto. Args: event: The event with metadata that contains the session_run_index. Returns: The int session_run_index value. Or constants.SENTINEL_FOR_UNDETERMINED_STEP if it could not be determined.
juraj-google-style
def prune(self, limit=None, n=None, percentile=None, keep_ends=False): strip = self.copy() if (not (limit or n or percentile)): m = 'You must provide a limit or n or percentile for pruning.' raise StriplogError(m) if limit: prune = [i for (i, iv) in enumerate(strip) if (iv.thickness < limit)] if n: prune = strip.thinnest(n=n, index=True) if percentile: n = np.floor(((len(strip) * percentile) / 100)) prune = strip.thinnest(n=n, index=True) if keep_ends: (first, last) = (0, (len(strip) - 1)) if (first in prune): prune.remove(first) if (last in prune): prune.remove(last) del strip[prune] return strip
Remove intervals below a certain limit thickness. Operates on a copy and returns it; the original striplog is not modified. Args: limit (float): Anything thinner than this will be pruned. n (int): The n thinnest beds will be pruned. percentile (float): The thinnest specified percentile will be pruned. keep_ends (bool): Whether to keep the first and last intervals, regardless of whether they meet the pruning criteria. Returns: Striplog: A new striplog with the thin intervals removed.
codesearchnet
def start_task(self, method, *args, **kwargs):
    thread = threading.Thread(target=method, args=args, kwargs=kwargs)
    thread.daemon = False  # keep the worker thread alive independently of the main thread
    thread.start()
    self.threads.append(thread)
Start a task in a separate thread. Args: method: the method to run in a separate thread args, kwargs: positional and keyword arguments passed through to the method
codesearchnet
def filepaths_in_dir(path): filepaths = [] for (root, directories, filenames) in os.walk(path): for filename in filenames: filepath = os.path.join(root, filename) filepath = filepath.replace(path, '').lstrip('/') filepaths.append(filepath) return filepaths
Find all files in a directory, and return the relative paths to those files. Args: path (str): the directory path to walk Returns: list: the list of relative paths to all files inside of ``path`` or its subdirectories.
codesearchnet
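A quick usage sketch (the directory layout is hypothetical):

# Given /tmp/project/README.md and /tmp/project/src/main.py on disk:
print(filepaths_in_dir('/tmp/project'))
# ['README.md', 'src/main.py']  (order follows os.walk)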
def save_config(self, lookup_key, config): with self._config_lock: self._configs[lookup_key] = config
Save a configuration to the cache of configs. Args: lookup_key: A string containing the cache lookup key. config: The dict containing the configuration to save to the cache.
codesearchnet
def delete_keys(d: Dict[(Any, Any)], keys_to_delete: List[Any], keys_to_keep: List[Any]) -> None: for k in keys_to_delete: if ((k in d) and (k not in keys_to_keep)): del d[k]
Deletes keys from a dictionary, in place. Args: d: dictionary to modify keys_to_delete: if any keys are present in this list, they are deleted... keys_to_keep: ... unless they are present in this list.
codesearchnet
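A short usage sketch:

d = {'a': 1, 'b': 2, 'c': 3}
delete_keys(d, keys_to_delete=['a', 'b', 'x'], keys_to_keep=['b'])
print(d)   # {'b': 2, 'c': 3} -- 'a' removed, 'b' protected by keys_to_keep, missing 'x' ignored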