Dataset fields: code (string, 20 to 4.93k chars), docstring (string, 33 to 1.27k chars), source (string, 3 classes).
def call_plugins(self, step):
    for plugin in self.plugins:
        try:
            getattr(plugin, step)()
        except AttributeError:
            self.logger.debug("{} doesn't exist on plugin {}".format(step, plugin))
        except TypeError:
            self.logger.debug("{} on plugin {} is not callable".format(step, plugin))
For each plugin, check whether a "step" method exists on it, and call it. Args: step (str): The method to search for and call on each plugin.
juraj-google-style
def get_num_bytes(self, batch: Sequence[str]) -> int:
    return sum(sys.getsizeof(element) for element in batch)
Returns: The number of bytes of input batch elements.
github-repos
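A quick way to see what this size accounting measures — the following standalone sketch applies the same sys.getsizeof sum to a small batch of strings (these are Python-object sizes, not encoded byte lengths):

import sys

batch = ["alpha", "beta", "gamma"]
# Same accounting as get_num_bytes, outside the class.
num_bytes = sum(sys.getsizeof(element) for element in batch)
print(num_bytes)  # exact value is interpreter-dependent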
def stl(A, b):
    from numpy import asarray
    from scipy.linalg import solve_triangular

    A = asarray(A, float)
    b = asarray(b, float)
    return solve_triangular(A, b, lower=True, check_finite=False)
r"""Shortcut to ``solve_triangular(A, b, lower=True, check_finite=False)``. Solve linear systems :math:`\mathrm A \mathbf x = \mathbf b` when :math:`\mathrm A` is a lower-triangular matrix. Args: A (array_like): A lower-triangular matrix. b (array_like): Ordinate values. Returns: :class:`numpy.ndarray`: Solution ``x``. See Also -------- scipy.linalg.solve_triangular: Solve triangular linear equations.
juraj-google-style
def get_error(self, block=False, timeout=None):
    try:
        return self._errors.get(block=block, timeout=timeout)
    except Exception:
        return None
Removes and returns an error from self._errors. Args: block (bool): if True, block until an error is available, else return None when self._errors is empty timeout (int): block at most timeout seconds Returns: error if self._errors is not empty, else None
codesearchnet
def __type_matches(self, obj: Any, type_: Type) -> bool:
    if is_generic_union(type_):
        for t in generic_type_args(type_):
            if self.__type_matches(obj, t):
                return True
        return False
    elif is_generic_list(type_):
        if not isinstance(obj, list):
            return False
        for item in obj:
            if not self.__type_matches(item, generic_type_args(type_)[0]):
                return False
        return True
    elif is_generic_dict(type_):
        if not isinstance(obj, OrderedDict):
            return False
        for key, value in obj.items():  # iterate key/value pairs, not just keys
            if not isinstance(key, generic_type_args(type_)[0]):
                return False
            if not isinstance(value, generic_type_args(type_)[1]):
                return False
        return True
    else:
        return isinstance(obj, type_)
Checks that the object matches the given type. Like isinstance(), but will work with union types using Union, Dict and List. Args: obj: The object to check type_: The type to check against Returns: True iff obj is of type type_
codesearchnet
def __driver_helper(self, line):
    if line.strip() == '?':
        self.stdout.write('\n')
        self.stdout.write(self.doc_string())
    else:
        toks = shlex.split(line[:-1])
        try:
            msg = self.__get_help_message(toks)
        except Exception:
            self.stderr.write('\n')
            self.stderr.write(traceback.format_exc())
            self.stderr.flush()
        self.stdout.write('\n')
        self.stdout.write(msg)
    # Re-display the prompt and the current input line.
    self.stdout.write('\n')
    self.stdout.write(self.prompt)
    self.stdout.write(line)
    self.stdout.flush()
Driver level helper method. 1. Display help message for the given input. Internally calls self.__get_help_message() to obtain the help message. 2. Re-display the prompt and the input line. Arguments: line: The input line. Raises: Errors from helper methods print stack trace without terminating this shell. Other exceptions will terminate this shell.
juraj-google-style
def _Open(self, hostname, port):
    try:
        self._xmlrpc_server = SimpleXMLRPCServer.SimpleXMLRPCServer(
            (hostname, port), logRequests=False, allow_none=True)
    except SocketServer.socket.error as exception:
        logger.warning((
            'Unable to bind a RPC server on {0:s}:{1:d} with error: '
            '{2!s}').format(hostname, port, exception))
        return False

    self._xmlrpc_server.register_function(
        self._callback, self._RPC_FUNCTION_NAME)
    return True
Opens the RPC communication channel for clients. Args: hostname (str): hostname or IP address to connect to for requests. port (int): port to connect to for requests. Returns: bool: True if the communication channel was successfully opened.
juraj-google-style
def _create_resource(self):
    assert self._default_value.get_shape().ndims == 0
    table_ref = gen_simple_hash_table_op.examples_simple_hash_table_create(
        key_dtype=self._key_dtype, value_dtype=self._value_dtype,
        name=self._name)
    return table_ref
Create the resource tensor handle. `_create_resource` is an override of a method in base class `TrackableResource` that is required for SavedModel support. It can be called by the `resource_handle` property defined by `TrackableResource`. Returns: A tensor handle to the lookup table.
github-repos
def create_container_definition(container_name, image, port=80, cpu=1.0,
                                memgb=1.5, environment=None):
    container = {'name': container_name}
    container_properties = {'image': image}
    container_properties['ports'] = [{'port': port}]
    container_properties['resources'] = {
        'requests': {'cpu': cpu, 'memoryInGB': memgb}}
    container['properties'] = container_properties
    if environment is not None:
        container_properties['environmentVariables'] = environment
    return container
Makes a python dictionary of container properties. Args: container_name: The name of the container. image (str): Container image string. E.g. nginx. port (int): TCP port number. E.g. 8080. cpu (float): Amount of CPU to allocate to container. E.g. 1.0. memgb (float): Memory in GB to allocate to container. E.g. 1.5. environment (list): A list of [{'name':'envname', 'value':'envvalue'}]. Sets environment variables in the container. Returns: A Python dictionary of container properties, pass a list of these to create_container_group().
codesearchnet
def _create_plugin(self, config):
    if config is None:
        raise ValueError('No plugin config to create plugin from.')

    name = config.pop('name', None)
    if name is None:
        raise cfg.AitConfigMissing('plugin name')

    module_name = name.rsplit('.', 1)[0]
    class_name = name.rsplit('.', 1)[-1]

    if class_name in [x.name for x in (self.outbound_streams +
                                       self.inbound_streams +
                                       self.servers +
                                       self.plugins)]:
        raise ValueError('Plugin "{}" already loaded. Only one plugin of a '
                         'given name is allowed'.format(class_name))

    plugin_inputs = config.pop('inputs', None)
    if plugin_inputs is None:
        log.warn('No plugin inputs specified for {}'.format(name))
        plugin_inputs = []

    subscribers = config.pop('outputs', None)
    if subscribers is None:
        log.warn('No plugin outputs specified for {}'.format(name))
        subscribers = []

    module = import_module(module_name)
    plugin_class = getattr(module, class_name)
    instance = plugin_class(
        plugin_inputs,
        subscribers,
        zmq_args={'zmq_context': self.broker.context,
                  'zmq_proxy_xsub_url': self.broker.XSUB_URL,
                  'zmq_proxy_xpub_url': self.broker.XPUB_URL},
        **config)

    return instance
Creates a plugin from its config. Params: config: plugin configuration as read by ait.config Returns: plugin: a Plugin Raises: ValueError: if any of the required config values are missing
codesearchnet
def pow(cls, x: 'TensorFluent', y: 'TensorFluent') -> 'TensorFluent':
    return cls._binary_op(x, y, tf.pow, tf.float32)
Returns a TensorFluent for the pow function. Args: x: The first operand. y: The second operand. Returns: A TensorFluent wrapping the pow function.
juraj-google-style
def _GetPathSegmentSeparator(self, path):
    if path.startswith('\\') or path[1:].startswith(':\\'):
        return '\\'

    if path.startswith('/'):
        return '/'

    if '/' in path and '\\' in path:
        # Both separators appear: guess from whichever occurs more often.
        forward_count = len(path.split('/'))
        backward_count = len(path.split('\\'))

        if forward_count > backward_count:
            return '/'
        return '\\'

    if '/' in path:
        return '/'

    return '\\'
Given a path give back the path separator as a best guess. Args: path (str): path. Returns: str: path segment separator.
codesearchnet
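To illustrate the heuristic, here is the same logic as a hypothetical standalone helper (mirroring the fixed method above) with a couple of calls:

def guess_separator(path):
    # Mirrors _GetPathSegmentSeparator outside its class.
    if path.startswith('\\') or path[1:].startswith(':\\'):
        return '\\'
    if path.startswith('/'):
        return '/'
    if '/' in path and '\\' in path:
        # Guess from whichever separator occurs more often.
        return '/' if len(path.split('/')) > len(path.split('\\')) else '\\'
    return '/' if '/' in path else '\\'

print(guess_separator('C:\\Windows\\System32'))  # \
print(guess_separator('/usr/local/bin'))         # /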
def generate(self, step, params):
    subfactory = self.get_factory()
    logger.debug(
        "SubFactory: Instantiating %s.%s(%s), create=%r",
        subfactory.__module__, subfactory.__name__,
        utils.log_pprint(kwargs=params),
        step,
    )
    force_sequence = step.sequence if self.FORCE_SEQUENCE else None
    return step.recurse(subfactory, params, force_sequence=force_sequence)
Evaluate the current definition and fill its attributes. Args: step: a factory.builder.BuildStep params (dict): additional, call-time added kwargs for the step.
juraj-google-style
def wait_for_import(self, connection_id, wait_interval):
    self.stdout.write(self.style.NOTICE('Waiting for import'), ending='')
    state = utils.ConnectionStates.IMPORT_CONFIGURATION
    while state == utils.ConnectionStates.IMPORT_CONFIGURATION:
        self.stdout.write(self.style.NOTICE('.'), ending='')
        time.sleep(wait_interval)
        try:
            connection = utils.get_connection(connection_id)
        except requests.HTTPError as e:
            raise CommandError("Failed to fetch connection information.") from e
        else:
            state = connection['state']
    self.stdout.write(self.style.NOTICE(' Done!'))
Wait until connection state is no longer ``IMPORT_CONFIGURATION``. Args: connection_id (str): Heroku Connect connection to monitor. wait_interval (int): How frequently to poll in seconds. Raises: CommandError: If fetch connection information fails.
juraj-google-style
def parseMagnitude(m):
    m = NumberService().parse(m)

    def toDecimalPrecision(n, k):
        return float("%.*f" % (k, round(n, k)))

    # Cast to two decimal places, extending the precision until the
    # rounded value is non-zero.
    digits = 2
    magnitude = toDecimalPrecision(m, digits)
    while not magnitude:
        digits += 1
        magnitude = toDecimalPrecision(m, digits)

    if m < 1.0:
        magnitude = toDecimalPrecision(m, digits + 1)
    if int(magnitude) == magnitude:
        magnitude = int(magnitude)

    magString = str(magnitude)
    magString = re.sub(r'(\d)e-(\d+)',
                       r'\g<1> times ten to the negative \g<2>', magString)
    magString = re.sub(r'(\d)e\+(\d+)',
                       r'\g<1> times ten to the \g<2>', magString)
    magString = re.sub(r'-(\d+)', r'negative \g<1>', magString)
    magString = re.sub(r'\b0(\d+)', r'\g<1>', magString)
    return magString
Parses a number m into a human-ready string representation. For example, crops off floats if they're too accurate. Arguments: m (float): Floating-point number to be cleaned. Returns: Human-ready string description of the number.
juraj-google-style
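The regex rewriting step is the interesting part: scientific notation is turned into spoken English. A minimal sketch of just that step (the full function additionally parses and rounds via NumberService):

import re

mag_string = '3.1e-05'
mag_string = re.sub(r'(\d)e-(\d+)', r'\g<1> times ten to the negative \g<2>', mag_string)
mag_string = re.sub(r'\b0(\d+)', r'\g<1>', mag_string)  # strip the exponent's leading zero
print(mag_string)  # 3.1 times ten to the negative 5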
def get_video_features(self, pixel_values: torch.FloatTensor,
                       vision_feature_layer: Optional[Union[int, List[int]]] = None,
                       vision_feature_select_strategy: Optional[str] = None):
    vision_feature_layer = (
        vision_feature_layer if vision_feature_layer is not None
        else self.config.vision_feature_layer)
    vision_feature_select_strategy = (
        vision_feature_select_strategy if vision_feature_select_strategy is not None
        else self.config.vision_feature_select_strategy)

    batch_size, frames, channels, height, width = pixel_values.shape
    pixel_values = pixel_values.reshape(batch_size * frames, channels, height, width)
    video_features = self.vision_tower(pixel_values, output_hidden_states=True)

    if isinstance(vision_feature_layer, int):
        selected_video_features = video_features.hidden_states[vision_feature_layer]
    else:
        hs_pool = [video_features.hidden_states[layer_idx]
                   for layer_idx in vision_feature_layer]
        selected_video_features = torch.cat(hs_pool, dim=-1)

    if vision_feature_select_strategy == 'default':
        selected_video_features = selected_video_features[:, 1:]
    elif vision_feature_select_strategy == 'full':
        selected_video_features = selected_video_features

    video_features = self.vision_resampler(selected_video_features)
    video_features = self.multi_modal_projector(video_features)
    video_features = torch.split(video_features, frames, dim=0)
    return video_features
Obtains video last hidden states from the vision tower and applies multimodal projection. Args: pixel_values (`torch.FloatTensor` of shape `(batch_size, num_frames, channels, height, width)`): The tensors corresponding to the input video. vision_feature_layer (`Union[int, List[int]]`, *optional*): The index of the layer to select the vision feature. If multiple indices are provided, the vision feature of the corresponding indices will be concatenated to form the vision features. vision_feature_select_strategy (`str`, *optional*): The feature selection strategy used to select the vision feature from the vision backbone. Can be one of `"default"` or `"full"` Returns: video_features (List[`torch.Tensor`]): List of video feature tensors, each containing all the visual features of all patches, of shape `(num_videos, video_length, embed_dim)`.
github-repos
def infer_paths(output_dir, **subdirs):
    directories = {}
    for name, path in six.iteritems(subdirs):
        directories[name] = path if path else os.path.join(output_dir, name)
    directories["output_dir"] = output_dir
    return directories
Infers standard paths to policy and model directories. Example: >>> infer_paths("/some/output/dir/", policy="", model="custom/path") {"policy": "/some/output/dir/policy", "model": "custom/path", "output_dir":"/some/output/dir/"} Args: output_dir: output directory. **subdirs: sub-directories. Returns: a dictionary with the directories.
juraj-google-style
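A Python-3-only sketch of the same behavior without the six dependency, using the docstring's example as a usage check:

import os

def infer_paths_py3(output_dir, **subdirs):
    # Empty strings fall back to <output_dir>/<name>; explicit paths win.
    directories = {name: path if path else os.path.join(output_dir, name)
                   for name, path in subdirs.items()}
    directories["output_dir"] = output_dir
    return directories

print(infer_paths_py3("/some/output/dir/", policy="", model="custom/path"))
# {'policy': '/some/output/dir/policy', 'model': 'custom/path', 'output_dir': '/some/output/dir/'}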
def add_newlines(f, output, char):
    line_count = get_line_count(f)
    # Context managers ensure both files are closed (the original left them open).
    with open(f, 'r+') as infile, open(output, 'r+') as outfile:
        for line in range(line_count):
            string = infile.readline()
            string = re.sub(char, char + '\n', string)
            outfile.write(string)
Adds line breaks after every occurrence of a given character in a file. Args: f: string, path to input file. output: string, path to output file. Returns: None.
codesearchnet
def _get_and_write_archive(self, hunt, output_file_path):
    hunt_archive = hunt.GetFilesArchive()
    hunt_archive.WriteToFile(output_file_path)
Gets and writes a hunt archive. Function is necessary for the _check_approval_wrapper to work. Args: hunt: The GRR hunt object. output_file_path: The output path where to write the Hunt Archive.
codesearchnet
def register_handler(self, callable_obj, entrypoint, methods=('GET',)):
    router_obj = Route.wrap_callable(
        uri=entrypoint,
        methods=methods,
        callable_obj=callable_obj)
    if router_obj.is_valid:
        self._routes.add(router_obj)
        return self
    raise RouteError('Missing params: methods: {} - entrypoint: {}'.format(
        methods, entrypoint))
Register a handler callable to a specific route. Args: entrypoint (str): The uri relative path. methods (tuple): A tuple of valid method strings. callable_obj (callable): The callable object. Returns: The Router instance (for chaining purposes). Raises: RouteError, for missing routing params or invalid callable object type.
codesearchnet
async def update_read_timestamp(self, read_timestamp=None):
    if read_timestamp is None:
        read_timestamp = (self.events[-1].timestamp if self.events
                          else datetime.datetime.now(datetime.timezone.utc))
    if read_timestamp > self.latest_read_timestamp:
        logger.info(
            'Setting {} latest_read_timestamp from {} to {}'
            .format(self.id_, self.latest_read_timestamp, read_timestamp)
        )
        state = self._conversation.self_conversation_state
        state.self_read_state.latest_read_timestamp = (
            parsers.to_timestamp(read_timestamp)
        )
        try:
            await self._client.update_watermark(
                hangouts_pb2.UpdateWatermarkRequest(
                    request_header=self._client.get_request_header(),
                    conversation_id=hangouts_pb2.ConversationId(id=self.id_),
                    last_read_timestamp=parsers.to_timestamp(read_timestamp),
                )
            )
        except exceptions.NetworkError as e:
            logger.warning('Failed to update read timestamp: {}'.format(e))
            raise
Update the timestamp of the latest event which has been read. This method will avoid making an API request if it will have no effect. Args: read_timestamp (datetime.datetime): (optional) Timestamp to set. Defaults to the timestamp of the newest event. Raises: .NetworkError: If the timestamp cannot be updated.
juraj-google-style
def _access_control(self, access_control, my_media_group=None):
    extension = None

    if access_control is AccessControl.Private:
        if my_media_group:
            my_media_group.private = gdata.media.Private()
    elif access_control is AccessControl.Unlisted:
        from gdata.media import YOUTUBE_NAMESPACE
        from atom import ExtensionElement
        kwargs = {
            'namespace': YOUTUBE_NAMESPACE,
            'attributes': {'action': 'list', 'permission': 'denied'},
        }
        extension = [ExtensionElement('accessControl', **kwargs)]
    return extension
Prepares the extension element for access control Extension element is the optional parameter for the YouTubeVideoEntry We use extension element to modify access control settings Returns: tuple of extension elements
codesearchnet
def downloadRecords(search_result, from_doc=1):
    downer = Downloader()

    if 'set_number' not in search_result:
        return []

    set_number = str(search_result['set_number'])
    if len(set_number) < 6:
        set_number = (6 - len(set_number)) * '0' + set_number

    records = []
    for cnt in range(search_result['no_records']):
        doc_number = from_doc + cnt

        if cnt >= MAX_RECORDS or doc_number > search_result['no_records']:
            break

        set_data = downer.download(
            ALEPH_URL + Template(RECORD_URL_TEMPLATE).substitute(
                SET_NUM=set_number,
                RECORD_NUM=doc_number,
            )
        )
        records.append(set_data)

    return records
Download `MAX_RECORDS` documents from `search_result` starting from `from_doc`. Args: search_result (dict): returned from :func:`searchInAleph`. from_doc (int, default 1): Start from document number `from_doc`. Returns: list: List of XML strings with documents in MARC OAI.
codesearchnet
def enable(profile='allprofiles'):
    cmd = ['netsh', 'advfirewall', 'set', profile, 'state', 'on']
    ret = __salt__['cmd.run_all'](cmd, python_shell=False, ignore_retcode=True)
    if ret['retcode'] != 0:
        raise CommandExecutionError(ret['stdout'])
    return True
.. versionadded:: 2015.5.0 Enable firewall profile Args: profile (Optional[str]): The name of the profile to enable. Default is ``allprofiles``. Valid options are: - allprofiles - domainprofile - privateprofile - publicprofile Returns: bool: True if successful Raises: CommandExecutionError: If the command fails CLI Example: .. code-block:: bash salt '*' firewall.enable
juraj-google-style
def input_list_parser(infile_list):
    final_list_of_files = []
    for x in infile_list:
        if op.isdir(x):
            # Glob with the directory joined in so results keep their paths
            # (a bare chdir + glob('*') would return bare filenames).
            final_list_of_files.extend(glob.glob(op.join(x, '*')))
        elif op.isfile(x):
            final_list_of_files.append(x)
    return final_list_of_files
Always return a list of files with varying input. >>> input_list_parser(['/path/to/folder/']) ['/path/to/folder/file1.txt', '/path/to/folder/file2.txt', '/path/to/folder/file3.txt'] >>> input_list_parser(['/path/to/file.txt']) ['/path/to/file.txt'] >>> input_list_parser(['file1.txt']) ['file1.txt'] Args: infile_list: List of arguments Returns: list: Standardized list of files
codesearchnet
def get_url_reports(self, resources):
    api_name = 'virustotal-url-reports'

    all_responses, resources = self._bulk_cache_lookup(api_name, resources)
    resource_chunks = self._prepare_resource_chunks(resources, '\n')
    response_chunks = self._request_reports("resource", resource_chunks, 'url/report')

    self._extract_response_chunks(all_responses, response_chunks, api_name)
    return all_responses
Retrieves a scan report on a given URL. Args: resources: list of URLs. Returns: A dict with the URL as key and the VT report as value.
juraj-google-style
def kill(self, procname):
    for proc in psutil.process_iter():
        if proc.name() == procname:
            self.info_log(
                '[pid:%s][name:%s] killed' % (proc.pid, proc.name())
            )
            proc.kill()
Kill by process name Args: procname (str)
juraj-google-style
def write(self, data):
    if isinstance(data, WriteBuffer):
        self._write_buffer.append(data)
    else:
        if len(data) > 0:
            self._write_buffer.append(data)
    if self.aggressive_write:
        self._handle_write()
    if self._write_buffer._total_length > 0:
        self._register_or_update_event_handler(write=True)
Buffers some data to be sent to the host:port in a non blocking way. So the data is always buffered and not sent on the socket in a synchronous way. You can give a WriteBuffer as parameter. The internal Connection WriteBuffer will be extended with this one (without copying). Args: data (str or WriteBuffer): string (or WriteBuffer) to write to the host:port.
juraj-google-style
def __getitem__(self, key):
    if isinstance(key, str):
        if key in self.unmaterialized_cols:
            return self.unmaterialized_cols[key]
        raw_column = self.df[key].values
        dtype = str(raw_column.dtype)
        if dtype == 'object':
            raw_column = self.raw_columns[key]
            weld_type = WeldVec(WeldChar())
        else:
            weld_type = grizzly_impl.numpy_to_weld_type_mapping[dtype]
        if self.predicates is None:
            return SeriesWeld(raw_column, weld_type, self, key)
        return SeriesWeld(
            grizzly_impl.filter(
                raw_column,
                self.predicates.expr,
                weld_type
            ),
            weld_type,
            self,
            key
        )
    elif isinstance(key, list):
        return DataFrameWeld(self.df[key], self.predicates)
    elif isinstance(key, SeriesWeld):
        if self.predicates is not None:
            return DataFrameWeld(self.df, key.per_element_and(self.predicates))
        return DataFrameWeld(self.df, key)
    raise Exception("Invalid type in __getitem__")
Retrieve a column, a sub-DataFrame, or a filtered view. Args: key (str | list | SeriesWeld): a column name, a list of column names, or a boolean SeriesWeld predicate. Returns: SeriesWeld for a single column, DataFrameWeld otherwise. Raises: Exception: if key is of an unsupported type.
juraj-google-style
def on_change(self, attr, *callbacks):
    if attr not in self.properties():
        raise ValueError("attempted to add a callback on nonexistent %s.%s property" % (
            self.__class__.__name__, attr))
    super(Model, self).on_change(attr, *callbacks)
Add a callback on this object to trigger when ``attr`` changes. Args: attr (str) : an attribute name on this object *callbacks (callable) : callback functions to register Returns: None Example: .. code-block:: python widget.on_change('value', callback1, callback2, ..., callback_n)
juraj-google-style
def dp010(self, value=None):
    if value is not None:
        try:
            value = float(value)
        except ValueError:
            raise ValueError('value {} need to be of type float '
                             'for field `dp010`'.format(value))
    self._dp010 = value
Corresponds to IDD Field `dp010` Dew-point temperature corresponding to 1.0% annual cumulative frequency of occurrence Args: value (float): value for IDD Field `dp010` Unit: C if `value` is None it will not be checked against the specification and is assumed to be a missing value Raises: ValueError: if `value` is not a valid value
juraj-google-style
def connect_input(self, index, walker, trigger=None):
    if trigger is None:
        trigger = TrueTrigger()

    if index >= len(self.inputs):
        raise TooManyInputsError('Input index exceeded max number of inputs',
                                 index=index, max_inputs=len(self.inputs),
                                 stream=self.stream)

    self.inputs[index] = (walker, trigger)
Connect an input to a stream walker. If the input is already connected to something an exception is thrown. Otherwise the walker is used to read inputs for that input. A triggering condition can optionally be passed that will determine when this input will be considered as triggered. Args: index (int): The index of the input that we want to connect walker (StreamWalker): The stream walker to use for the input trigger (InputTrigger): The trigger to use for the input. If no trigger is specified, the input is considered to always be triggered (so TrueTrigger is used)
codesearchnet
def __init__(self, word_count=None):
    if isinstance(word_count, dict):
        word_count = iteritems(word_count)
    sorted_counts = list(sorted(word_count, key=lambda wc: wc[1], reverse=True))
    words = [w for w, c in sorted_counts]
    super(CountedVocabulary, self).__init__(words=words)
    self.word_count = dict(sorted_counts)
Build attributes word_id and id_word from input. Args: word_count (dictionary): A dictionary of the type word:count or list of tuples of the type (word, count).
juraj-google-style
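The core of the constructor is the count-descending sort that fixes the word ordering; a standalone sketch of just that step:

word_count = {"the": 10, "sat": 5, "cat": 3}
sorted_counts = sorted(word_count.items(), key=lambda wc: wc[1], reverse=True)
words = [w for w, c in sorted_counts]
print(words)  # ['the', 'sat', 'cat'] -- ids are assigned in this order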
def _PrintDictAsTable(self, src_dict):
    key_list = list(src_dict.keys())
    key_list.sort()

    print('|', end='')
    for key in key_list:
        print(' {0:s} |'.format(key), end='')
    print('')

    print('|', end='')
    for key in key_list:
        print(' :---: |', end='')
    print('')

    print('|', end='')
    for key in key_list:
        print(' {0!s} |'.format(src_dict[key]), end='')
    print('\n')
Prints a table of artifact definitions. Args: src_dict (dict[str, ArtifactDefinition]): artifact definitions by name.
juraj-google-style
def import_demonstrations(self, demonstrations):
    if isinstance(demonstrations, dict):
        if self.unique_state:
            demonstrations['states'] = dict(state=demonstrations['states'])
        if self.unique_action:
            demonstrations['actions'] = dict(action=demonstrations['actions'])

        self.model.import_demo_experience(**demonstrations)

    else:
        if self.unique_state:
            states = dict(state=list())
        else:
            states = {name: list() for name in demonstrations[0]['states']}
        internals = {name: list() for name in demonstrations[0]['internals']}
        if self.unique_action:
            actions = dict(action=list())
        else:
            actions = {name: list() for name in demonstrations[0]['actions']}
        terminal = list()
        reward = list()

        for demonstration in demonstrations:
            if self.unique_state:
                states['state'].append(demonstration['states'])
            else:
                for name, state in states.items():
                    state.append(demonstration['states'][name])
            for name, internal in internals.items():
                internal.append(demonstration['internals'][name])
            if self.unique_action:
                actions['action'].append(demonstration['actions'])
            else:
                for name, action in actions.items():
                    action.append(demonstration['actions'][name])
            terminal.append(demonstration['terminal'])
            reward.append(demonstration['reward'])

        self.model.import_demo_experience(
            states=states,
            internals=internals,
            actions=actions,
            terminal=terminal,
            reward=reward
        )
Imports demonstrations, i.e. expert observations. Note that for large numbers of observations, set_demonstrations is more appropriate, which directly sets memory contents to an array and expects a different layout. Args: demonstrations: List of observation dicts
juraj-google-style
def lengths_to_area_mask(feature_length, length, max_area_size):
    paddings = tf.cast(tf.expand_dims(
        tf.logical_not(
            tf.sequence_mask(feature_length, maxlen=length)), 2), tf.float32)
    _, _, area_sum, _, _ = compute_area_features(
        paddings, max_area_width=max_area_size)
    mask = tf.squeeze(tf.logical_not(tf.cast(area_sum, tf.bool)), [2])
    return mask
Generates a non-padding mask for areas based on lengths. Args: feature_length: a tensor of [batch_size] length: the length of the batch max_area_size: the maximum area size considered Returns: mask: a tensor in shape of [batch_size, num_areas]
juraj-google-style
def fibo(max_value=None):
    a = 1
    b = 1
    while True:
        if max_value is None or a < max_value:
            yield a
            a, b = b, a + b
        else:
            yield max_value
Generator for Fibonacci decay. Args: max_value: The maximum value to yield. Once the value in the true Fibonacci sequence exceeds this, the value of max_value will forever after be yielded.
codesearchnet
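Usage sketch: once the true Fibonacci value would pass max_value, the generator pins its output to max_value forever, which is what makes it usable as a capped backoff schedule:

gen = fibo(max_value=10)
print([next(gen) for _ in range(8)])  # [1, 1, 2, 3, 5, 8, 10, 10]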
def log(cls, x: 'TensorFluent') -> 'TensorFluent':
    return cls._unary_op(x, tf.log, tf.float32)
Returns a TensorFluent for the log function. Args: x: The input fluent. Returns: A TensorFluent wrapping the log function.
juraj-google-style
def datestr2date(date_str):
    if any(c not in '0123456789-/' for c in date_str):
        raise ValueError('Illegal character in date string')

    if '/' in date_str:
        try:
            m, d, y = date_str.split('/')
        except ValueError:
            raise ValueError('Date {} must have no or exactly 2 slashes. {}'.
                             format(date_str, VALID_DATE_FORMATS_TEXT))
    elif '-' in date_str:
        try:
            d, m, y = date_str.split('-')
        except ValueError:
            raise ValueError('Date {} must have no or exactly 2 dashes. {}'.
                             format(date_str, VALID_DATE_FORMATS_TEXT))
    elif len(date_str) == 8 or len(date_str) == 6:
        d = date_str[-2:]
        m = date_str[-4:-2]
        y = date_str[:-4]
    else:
        raise ValueError('Date format not recognised. {}'.format(
            VALID_DATE_FORMATS_TEXT))

    if len(y) == 2:
        year = 2000 + int(y)
    elif len(y) == 4:
        year = int(y)
    else:
        raise ValueError('year must be 2 or 4 digits')

    for s in (m, d):
        if 1 <= len(s) <= 2:
            month, day = int(m), int(d)
        else:
            raise ValueError('m and d must be 1 or 2 digits')

    try:
        return datetime.date(year, month, day)
    except ValueError:
        raise ValueError('Invalid date {}. {}'.format(
            date_str, VALID_DATE_FORMATS_TEXT))
Turns a string into a datetime.date object. This will only work if the format can be "guessed", so the string must have one of the formats from VALID_DATE_FORMATS_TEXT. Args: date_str (str) a string that represents a date Returns: datetime.date object Raises: ValueError if the input string does not have a valid format.
juraj-google-style
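A few illustrative calls, one per accepted format (assuming the function and its VALID_DATE_FORMATS_TEXT constant are in scope):

print(datestr2date('12/31/19'))    # 2019-12-31  (slashes: month/day/year)
print(datestr2date('31-12-2019'))  # 2019-12-31  (dashes: day-month-year)
print(datestr2date('20191231'))    # 2019-12-31  (compact: yyyymmdd)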
def full_name(decl, with_defaults=True):
    if None is decl:
        raise RuntimeError("Unable to generate full name for None object!")
    if with_defaults:
        if not decl.cache.full_name:
            path = declaration_path(decl)
            if path == [""]:
                decl.cache.full_name = ""
            else:
                decl.cache.full_name = full_name_from_declaration_path(path)
        return decl.cache.full_name
    else:
        if not decl.cache.full_partial_name:
            path = partial_declaration_path(decl)
            if path == [""]:
                decl.cache.full_partial_name = ""
            else:
                decl.cache.full_partial_name = \
                    full_name_from_declaration_path(path)
        return decl.cache.full_partial_name
Returns declaration full qualified name. If `decl` belongs to anonymous namespace or class, the function will return C++ illegal qualified name. Args: decl (declaration_t): declaration for which the full qualified name should be calculated. Returns: str: full name of the declaration.
juraj-google-style
def defect_concentration(self, chemical_potentials, temperature=300, fermi_level=0.0):
    n = self.multiplicity * 1e24 / self.defect.bulk_structure.volume
    conc = n * np.exp(-1.0 * self.formation_energy(chemical_potentials,
                                                   fermi_level=fermi_level) /
                      (kb * temperature))
    return conc
Get the defect concentration for a temperature and Fermi level. Args: chemical_potentials (dict): the chemical potentials of the defect species temperature: the temperature in K fermi_level: the fermi level in eV (with respect to the VBM) Returns: defects concentration in cm^-3
juraj-google-style
def wrap_query_in_nested_if_field_is_nested(query, field, nested_fields):
    for element in nested_fields:
        match_pattern = r'^{}.'.format(element)
        if re.match(match_pattern, field):
            return generate_nested_query(element, query)

    return query
Helper for wrapping a query into a nested query if the field being queried is nested. Args: query : The query to be wrapped. field : The field that is being queried. nested_fields : List of fields which are nested. Returns: (dict): The nested query
juraj-google-style
def _InitializeGraph(self, os_name, artifact_list):
    dependencies = artifact_registry.REGISTRY.SearchDependencies(
        os_name, artifact_list)
    artifact_names, attribute_names = dependencies
    self._AddAttributeNodes(attribute_names)
    self._AddArtifactNodesAndEdges(artifact_names)
Creates the nodes and directed edges of the dependency graph. Args: os_name: String specifying the OS name. artifact_list: List of requested artifact names.
codesearchnet
def delete_vmss_vms(access_token, subscription_id, resource_group, vmss_name, vm_ids):
    endpoint = ''.join([get_rm_endpoint(),
                        '/subscriptions/', subscription_id,
                        '/resourceGroups/', resource_group,
                        '/providers/Microsoft.Compute/virtualMachineScaleSets/',
                        vmss_name,
                        '/delete?api-version=', COMP_API])
    body = '{"instanceIds" : ' + vm_ids + '}'
    return do_post(endpoint, body, access_token)
Delete a VM in a VM Scale Set. Args: access_token (str): A valid Azure authentication token. subscription_id (str): Azure subscription id. resource_group (str): Azure resource group name. vmss_name (str): Name of the virtual machine scale set. vm_ids (str): String representation of a JSON list of VM IDs. E.g. '[1,2]'. Returns: HTTP response.
codesearchnet
def get_all_reqs(self):
    # Check that the config file is readable before parsing.
    try:
        open(self.req_file, 'rb')
    except IOError:
        msg = "[Error] Cannot read file '%s'." % self.req_file
        logging.error(msg)
        sys.exit(1)

    curr_status = True

    parser = configparser.ConfigParser()
    parser.read(self.req_file)

    if not parser.sections():
        err_msg = '[Error] Empty config file. '
        err_msg += '(file = %s, ' % str(self.req_file)
        err_msg += 'parser sections = %s)' % str(parser.sections())
        self.error_msg.append(err_msg)
        logging.error(err_msg)
        curr_status = False

    required_dict = {}
    optional_dict = {}
    unsupported_dict = {}
    dependency_dict = {}

    for section in parser.sections():
        all_configs = parser.options(section)
        for config in all_configs:
            spec = parser.get(section, config)

            if section == 'Dependency':
                dependency_dict[config] = []
                spec_split = spec.split(',\n')

                # Strip the surrounding `[` and `]` brackets.
                if spec_split[0] == '[':
                    spec_split = spec_split[1:]
                elif '[' in spec_split[0]:
                    spec_split[0] = spec_split[0].replace('[', '')
                else:
                    warn_msg = '[Warning] Config file format error: Missing `[`.'
                    warn_msg += '(section = %s, ' % str(section)
                    warn_msg += 'config = %s)' % str(config)
                    logging.warning(warn_msg)
                    self.warning_msg.append(warn_msg)

                if spec_split[-1] == ']':
                    spec_split = spec_split[:-1]
                elif ']' in spec_split[-1]:
                    spec_split[-1] = spec_split[-1].replace(']', '')
                else:
                    warn_msg = '[Warning] Config file format error: Missing `]`.'
                    warn_msg += '(section = %s, ' % str(section)
                    warn_msg += 'config = %s)' % str(config)
                    logging.warning(warn_msg)
                    self.warning_msg.append(warn_msg)

                for rule in spec_split:
                    spec_dict = self.filter_dependency(rule)
                    cfg_name = spec_dict['cfg']
                    dep_name = spec_dict['cfgd']
                    cfg_req = self._Reqs(
                        self.convert_to_list(spec_dict['cfg_spec'], ' '),
                        config=cfg_name, section=section)
                    dep_req = self._Reqs(
                        self.convert_to_list(spec_dict['cfgd_spec'], ' '),
                        config=dep_name, section=section)

                    cfg_req_status = cfg_req.get_status
                    dep_req_status = dep_req.get_status
                    if not cfg_req_status[0] or not dep_req_status[0]:
                        msg = '[Error] Failed to create _Reqs() instance for a '
                        msg += 'dependency item. (config = %s, ' % str(cfg_name)
                        msg += 'dep = %s)' % str(dep_name)
                        logging.error(msg)
                        self.error_msg.append(cfg_req_status[1])
                        self.error_msg.append(dep_req_status[1])
                        curr_status = False
                        break
                    else:
                        dependency_dict[config].append(
                            [cfg_name, cfg_req, dep_name, dep_req])

                if not curr_status:
                    break

            else:
                if section == 'Required':
                    add_to = required_dict
                elif section == 'Optional':
                    add_to = optional_dict
                elif section == 'Unsupported':
                    add_to = unsupported_dict
                else:
                    msg = '[Error] Section name `%s` is not accepted.' % str(section)
                    msg += 'Accepted section names are `Required`, `Optional`, '
                    msg += '`Unsupported`, and `Dependency`.'
                    logging.error(msg)
                    self.error_msg.append(msg)
                    curr_status = False
                    break

                req_list = self.convert_to_list(self.filter_line(spec), ' ')
                add_to[config] = self._Reqs(req_list, config=config, section=section)

            if not curr_status:
                break

        if not curr_status:
            break

    return_dict = {
        'required': required_dict,
        'optional': optional_dict,
        'unsupported': unsupported_dict,
        'dependency': dependency_dict,
    }
    return return_dict
Parses all compatibility specifications listed in the `.ini` config file. Reads and parses each and all compatibility specifications from the `.ini` config file by sections. It then populates appropriate dicts that represent each section (e.g. `self.required`) and returns a dict of the populated dicts. Returns: Dict of dict { `required`: Dict of `Required` configs and supported versions, `optional`: Dict of `Optional` configs and supported versions, `unsupported`: Dict of `Unsupported` configs and supported versions, `dependency`: Dict of `Dependency` configs and supported versions }
github-repos
def parse_clnsig(acc, sig, revstat, transcripts):
    clnsig_accsessions = []
    if acc:
        try:
            acc = int(acc)
        except ValueError:
            pass

        if isinstance(acc, int):
            revstat_groups = []
            if revstat:
                revstat_groups = [rev.lstrip('_') for rev in revstat.split(',')]

            sig_groups = []
            if sig:
                for significance in sig.split('/'):
                    splitted_word = significance.split('_')
                    sig_groups.append(' '.join(splitted_word[:2]))

            for sign_term in sig_groups:
                clnsig_accsessions.append({
                    'value': sign_term,
                    'accession': int(acc),
                    'revstat': ', '.join(revstat_groups),
                })
        else:
            acc_groups = acc.split('|')
            sig_groups = sig.split('|')
            revstat_groups = revstat.split('|')
            for acc_group, sig_group, revstat_group in zip(
                    acc_groups, sig_groups, revstat_groups):
                accessions = acc_group.split(',')
                significances = sig_group.split(',')
                revstats = revstat_group.split(',')
                for accession, significance, revstat in zip(
                        accessions, significances, revstats):
                    clnsig_accsessions.append({
                        'value': int(significance),
                        'accession': accession,
                        'revstat': revstat,
                    })
    elif transcripts:
        clnsig = set()
        for transcript in transcripts:
            for annotation in transcript.get('clinsig', []):
                clnsig.add(annotation)
        for annotation in clnsig:
            clnsig_accsessions.append({'value': annotation})

    return clnsig_accsessions
Get the clnsig information Args: acc(str): The clnsig accession number, raw from vcf sig(str): The clnsig significance score, raw from vcf revstat(str): The clnsig revstat, raw from vcf transcripts(iterable(dict)) Returns: clnsig_accsessions(list): A list with clnsig accessions
codesearchnet
def __init__(self, url: str):
    self.url = url
    parsed_url = urlparse(self.url)
    self.scheme = parsed_url.scheme if parsed_url.scheme else 'file'
    self.netloc = parsed_url.netloc
    self.path = parsed_url.path
    self.filename = os.path.basename(self.path)
Construct a File object from a url string. Args: - url (string) : url string of the file e.g. - 'input.txt' - 'file:///scratch/proj101/input.txt' - 'globus://go#ep1/~/data/input.txt' - 'globus://ddb59aef-6d04-11e5-ba46-22000b92c6ec/home/johndoe/data/input.txt'
juraj-google-style
def _add(self, frame, strict):
    if not isinstance(frame, Frame):
        raise TypeError("%r not a Frame instance" % frame)

    orig_frame = frame
    frame = frame._upgrade_frame()
    if frame is None:
        if not strict:
            return
        raise TypeError(
            "Can't upgrade %r frame" % type(orig_frame).__name__)

    hash_key = frame.HashKey
    if strict or hash_key not in self:
        self[hash_key] = frame
        return

    while True:
        old_frame = self[hash_key]
        new_frame = old_frame._merge_frame(frame)
        new_hash = new_frame.HashKey
        if new_hash == hash_key:
            self[hash_key] = new_frame
            break
        else:
            assert new_frame is frame
            if new_hash not in self:
                self[new_hash] = new_frame
                break
            hash_key = new_hash
Add a frame. Args: frame (Frame): the frame to add strict (bool): if this should raise in case it can't be added and frames shouldn't be merged.
juraj-google-style
def HandleBlockReceived(self, inventory):
    block = IOHelper.AsSerializableWithType(inventory, 'neo.Core.Block.Block')
    if not block:
        return

    blockhash = block.Hash.ToBytes()
    try:
        if blockhash in BC.Default().BlockRequests:
            BC.Default().BlockRequests.remove(blockhash)
    except KeyError:
        pass
    try:
        if blockhash in self.myblockrequests:
            self.heart_beat(HEARTBEAT_BLOCKS)
            self.myblockrequests.remove(blockhash)
    except KeyError:
        pass

    self.leader.InventoryReceived(block)
Process a Block inventory payload. Args: inventory (neo.Network.Inventory):
codesearchnet
def bessel_k0e(x, name=None):
    with ops.name_scope(name, 'bessel_k0e', [x]):
        return gen_special_math_ops.bessel_k0e(x)
Computes the Bessel k0e function of `x` element-wise. Modified Bessel function of order 0. >>> tf.math.special.bessel_k0e([0.5, 1., 2., 4.]).numpy() array([1.52410939, 1.14446308, 0.84156822, 0.60929767], dtype=float32) Args: x: A `Tensor` or `SparseTensor`. Must be one of the following types: `half`, `float32`, `float64`. name: A name for the operation (optional). Returns: A `Tensor` or `SparseTensor`, respectively. Has the same type as `x`. @compatibility(scipy) Equivalent to scipy.special.k0e @end_compatibility
github-repos
def verify_link_in_task_graph(chain, decision_link, task_link):
    log.info('Verifying the {} {} task definition is part of the {} {} '
             'task graph...'.format(
                 task_link.name, task_link.task_id,
                 decision_link.name, decision_link.task_id))
    if task_link.task_id in decision_link.task_graph:
        graph_defn = deepcopy(decision_link.task_graph[task_link.task_id])
        verify_task_in_task_graph(task_link, graph_defn)
        log.info("Found {} in the graph; it's a match".format(task_link.task_id))
        return
    raise_on_errors(["Can't find task {} {} in {} {} task-graph.json!".format(
        task_link.name, task_link.task_id,
        decision_link.name, decision_link.task_id)])
Compare the runtime task definition against the decision task graph. Args: chain (ChainOfTrust): the chain we're operating on. decision_link (LinkOfTrust): the decision task link task_link (LinkOfTrust): the task link we're testing Raises: CoTError: on failure.
codesearchnet
def compute_g_values(self, input_ids: torch.LongTensor) -> torch.LongTensor:
    self._check_input_ids_shape(input_ids)
    ngrams = input_ids.unfold(dimension=1, size=self.ngram_len, step=1)
    ngram_keys = self.compute_ngram_keys(ngrams)
    return self.sample_g_values(ngram_keys)
Computes g values for each ngram from the given sequence of tokens. Args: input_ids (`torch.LongTensor`): Input token ids (batch_size, input_len). Returns: G values (batch_size, input_len - (ngram_len - 1), depth).
github-repos
def load(self, auth, state=None, sync=True):
    self._keep_api.setAuth(auth)
    self._reminders_api.setAuth(auth)
    self._media_api.setAuth(auth)
    if state is not None:
        self.restore(state)
    if sync:
        self.sync(True)
Authenticate to Google with a prepared authentication object & sync. Args: auth (APIAuth): Authentication object. state (dict): Serialized state to load. Raises: LoginException: If there was a problem logging in.
juraj-google-style
def parse_key(key):
    hkey, lkey = struct.unpack('<II', key[0:UBIFS_SK_LEN])
    ino_num = hkey & UBIFS_S_KEY_HASH_MASK
    key_type = lkey >> UBIFS_S_KEY_BLOCK_BITS
    khash = lkey

    return {'type': key_type, 'ino_num': ino_num, 'khash': khash}
Parse node key Arguments: Str:key -- Hex string literal of node key. Returns: Int:key_type -- Type of key, data, ino, dent, etc. Int:ino_num -- Inode number. Int:khash -- Key hash.
codesearchnet
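A self-contained sketch of the bit unpacking, with the UBIFS constants assumed from ubifs-media.h (the values here are assumptions, not taken from this source):

import struct

UBIFS_SK_LEN = 8                    # assumed: key prefix length in bytes
UBIFS_S_KEY_HASH_MASK = 0x1FFFFFFF  # assumed: low 29 bits of hkey
UBIFS_S_KEY_BLOCK_BITS = 29         # assumed: type lives above bit 29 of lkey

key = struct.pack('<II', 42, (1 << UBIFS_S_KEY_BLOCK_BITS) | 0x1234)
hkey, lkey = struct.unpack('<II', key[0:UBIFS_SK_LEN])
print(hkey & UBIFS_S_KEY_HASH_MASK)    # 42 -> inode number
print(lkey >> UBIFS_S_KEY_BLOCK_BITS)  # 1  -> key type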
def get_country_by_id(self, country_id: int) -> typing.Optional['Country']:
    VALID_POSITIVE_INT.validate(country_id, 'get_country_by_id')
    if country_id not in self._countries_by_id:
        for country in self.countries:
            if country.country_id == country_id:
                self._countries_by_id[country_id] = country
                return country
        raise ValueError(country_id)
    else:
        return self._countries_by_id[country_id]
Gets a country from its ID. Args: country_id: country id Returns: Country
juraj-google-style
def update_summary(self, w):
    old = self.summary.v
    reviewers = self._graph.retrieve_reviewers(self)
    reviews = [self._graph.retrieve_review(r, self).score for r in reviewers]
    weights = [w(r.anomalous_score) for r in reviewers]
    if sum(weights) == 0:
        self.summary = np.mean(reviews)
    else:
        self.summary = np.average(reviews, weights=weights)
    return abs(self.summary.v - old)
Update summary. The new summary is a weighted average of reviews i.e. .. math:: \\frac{\\sum_{r \\in R} \\mbox{weight}(r) \\times \\mbox{review}(r)} {\\sum_{r \\in R} \\mbox{weight}(r)}, where :math:`R` is a set of reviewers reviewing this product, :math:`\\mbox{review}(r)` and :math:`\\mbox{weight}(r)` are the review and weight of the reviewer :math:`r`, respectively. Args: w: A weight function. Returns: absolute difference between old summary and updated one.
codesearchnet
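The weighted-average formula in the docstring reduces to numpy's np.average; a small numeric check:

import numpy as np

reviews = [3.0, 4.5, 5.0]
weights = [0.2, 0.5, 0.3]
# sum(w_i * r_i) / sum(w_i) = (0.6 + 2.25 + 1.5) / 1.0
print(np.average(reviews, weights=weights))  # 4.35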
def set_function_defaults(self, node: cfg.CFGNode, defaults_var: cfg.Variable) -> None:
    defaults = self._extract_defaults(defaults_var)
    new_sigs = []
    for sig in self.signatures:
        if defaults:
            new_sigs.append(sig.set_defaults(defaults))
        else:
            d = sig.param_types
            if hasattr(self, 'parent'):
                d = d[1:]  # drop the `self` parameter type
            new_sigs.append(sig.set_defaults(d))
    self.signatures = new_sigs
    if hasattr(self, 'parent'):
        self.parent._member_map[self.name] = self.to_pytd_def(node, self.name)
Attempts to set default arguments for a function's signatures. If defaults_var is not an unambiguous tuple (i.e. one that can be processed by abstract_utils.get_atomic_python_constant), every argument is made optional and a warning is issued. This function emulates __defaults__. If this function is part of a class (or has a parent), that parent is updated so the change is stored. Args: node: the node that defaults are being set at. defaults_var: a Variable with a single binding to a tuple of default values.
github-repos
def post_process(self, outputs, target_sizes):
    logger.warning_once(
        '`post_process` is deprecated and will be removed in v5 of Transformers, '
        'please use `post_process_object_detection` instead, with `threshold=0.` '
        'for equivalent results.')

    out_logits, out_bbox = outputs.logits, outputs.pred_boxes

    if len(out_logits) != len(target_sizes):
        raise ValueError('Make sure that you pass in as many target sizes as the '
                         'batch dimension of the logits')
    if target_sizes.shape[1] != 2:
        raise ValueError('Each element of target_sizes must contain the size '
                         '(h, w) of each image of the batch')

    prob = nn.functional.softmax(out_logits, -1)
    scores, labels = prob[..., :-1].max(-1)

    boxes = center_to_corners_format(out_bbox)
    img_h, img_w = target_sizes.unbind(1)
    scale_fct = torch.stack([img_w, img_h, img_w, img_h], dim=1).to(boxes.device)
    boxes = boxes * scale_fct[:, None, :]

    results = [{'scores': s, 'labels': l, 'boxes': b}
               for s, l, b in zip(scores, labels, boxes)]
    return results
Converts the raw output of [`DetrForObjectDetection`] into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch. Args: outputs ([`DetrObjectDetectionOutput`]): Raw outputs of the model. target_sizes (`torch.Tensor` of shape `(batch_size, 2)`): Tensor containing the size (height, width) of each image of the batch. For evaluation, this must be the original image size (before any data augmentation). For visualization, this should be the image size after data augment, but before padding. Returns: `List[Dict]`: A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model.
github-repos
def _lookup_global(self, symbol):
    assert symbol.parts
    namespace = self.namespaces
    if len(symbol.parts) == 1:
        namespace = self.namespaces[None]
    try:
        return self._lookup_namespace(symbol, namespace)
    except Error as orig_exc:
        try:
            namespace = self.namespaces[None]
            return self._lookup_namespace(symbol, namespace)
        except Error:
            raise orig_exc
Helper for lookup_symbol that only looks up global variables. Args: symbol: Symbol
juraj-google-style
def env(mounts):
    f_mounts = [m.strip("/") for m in mounts]

    root = local.path("/")

    ld_libs = [root / m / "lib" for m in f_mounts]
    ld_libs.extend([root / m / "lib64" for m in f_mounts])

    paths = [root / m / "bin" for m in f_mounts]
    paths.extend([root / m / "sbin" for m in f_mounts])
    paths.extend([root / m for m in f_mounts])

    return paths, ld_libs
Compute the environment of the change root for the user. Args: mounts: The mountpoints of the current user. Returns: A (paths, ld_libs) tuple of binary search paths and library paths.
juraj-google-style
def create(self, vid):
    command = 'vlan %s' % vid
    return self.configure(command) if isvlan(vid) else False
Creates a new VLAN resource Args: vid (str): The VLAN ID to create Returns: True if create was successful otherwise False
codesearchnet
def expand(sql, args=None):
    sql, args = SqlModule.get_sql_statement_with_environment(sql, args)
    return _sql_statement.SqlStatement.format(sql._sql, args)
Expand a SqlStatement, query string or SqlModule with a set of arguments. Args: sql: a SqlStatement, %%sql module, or string containing a query. args: a string of command line arguments or a dictionary of values. If a string, it is passed to the argument parser for the SqlModule associated with the SqlStatement or SqlModule. If a dictionary, it is used to override any default arguments from the argument parser. If the sql argument is a string then args must be None or a dictionary as in this case there is no associated argument parser. Returns: The expanded SQL, list of referenced scripts, and list of referenced external tables.
codesearchnet
def update_dict_recursive(editable_dict: dict, editing_dict: dict) -> None:
    for k, v in editing_dict.items():
        if isinstance(v, collections.abc.Mapping):
            # setdefault keeps the recursion working on a dict actually stored
            # in editable_dict (get(k, {}) would edit a throwaway dict when
            # the key is missing).
            update_dict_recursive(editable_dict.setdefault(k, {}), v)
        else:
            editable_dict[k] = v
Updates dict recursively You need to use this function to update dictionary if depth of editing_dict is more than 1 Args: editable_dict: dictionary, that will be edited editing_dict: dictionary, that contains edits Returns: None
codesearchnet
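Usage sketch showing why a plain dict.update would be wrong here — a shallow update would replace the whole 'db' sub-dict instead of merging into it:

config = {'db': {'host': 'localhost', 'port': 5432}, 'debug': False}
update_dict_recursive(config, {'db': {'port': 5433}})
print(config)  # {'db': {'host': 'localhost', 'port': 5433}, 'debug': False}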
def AddArguments(cls, argument_group):
    argument_group.add_argument(
        '--virustotal-api-key', '--virustotal_api_key',
        dest='virustotal_api_key', type=str, action='store', default=None,
        metavar='API_KEY', help='Specify the API key for use with VirusTotal.')

    argument_group.add_argument(
        '--virustotal-free-rate-limit', '--virustotal_free_rate_limit',
        dest='virustotal_free_rate_limit', action='store_false',
        default=cls._DEFAULT_RATE_LIMIT, help=(
            'Limit Virustotal requests to the default free API key rate of '
            '4 requests per minute. Set this to false if you have a key '
            'for the private API.'))

    argument_group.add_argument(
        '--virustotal-hash', '--virustotal_hash', dest='virustotal_hash',
        type=str, action='store', choices=['md5', 'sha1', 'sha256'],
        default=cls._DEFAULT_HASH, metavar='HASH', help=(
            'Type of hash to query VirusTotal, the default is: {0:s}'.format(
                cls._DEFAULT_HASH)))
Adds command line arguments the helper supports to an argument group. This function takes an argument parser or an argument group object and adds to it all the command line arguments this helper supports. Args: argument_group (argparse._ArgumentGroup|argparse.ArgumentParser): argparse group.
juraj-google-style
def previous(self) -> 'ArrayEntry':
    try:
        newval, nbef = self.before.pop()
    except IndexError:
        raise NonexistentInstance(self.json_pointer(), 'previous of first') from None
    return ArrayEntry(
        self.index - 1, nbef, self.after.cons(self.value), newval,
        self.parinst, self.schema_node, self.timestamp)
Return an instance node corresponding to the previous entry. Raises: NonexistentInstance: If the receiver is the first entry of the parent array.
codesearchnet
def get_variant_id(variant_dict=None, variant_line=None):
    if variant_dict:
        chrom = variant_dict['CHROM']
        position = variant_dict['POS']
        ref = variant_dict['REF']
        alt = variant_dict['ALT']
    elif variant_line:
        splitted_line = variant_line.rstrip().split('\t')
        chrom = splitted_line[0]
        position = splitted_line[1]
        ref = splitted_line[3]
        alt = splitted_line[4]
    else:
        raise Exception("Have to provide variant dict or variant line")

    return '_'.join([chrom, position, ref, alt])
Build a variant id The variant id is a string made of CHROM_POS_REF_ALT Args: variant_dict (dict): A variant dictionary variant_line (str): A vcf-formatted variant line Returns: variant_id (str)
juraj-google-style
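Usage sketch for both input shapes:

variant = {'CHROM': '1', 'POS': '880086', 'REF': 'A', 'ALT': 'G'}
print(get_variant_id(variant_dict=variant))               # 1_880086_A_G
print(get_variant_id(variant_line='1\t880086\t.\tA\tG'))  # 1_880086_A_G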
def has_no_current_path(self, path, **kwargs):
    try:
        return self.assert_no_current_path(path, **kwargs)
    except ExpectationNotMet:
        return False
Checks if the page doesn't have the given path. Args: path (str | RegexObject): The string or regex that the current "path" should match. **kwargs: Arbitrary keyword arguments for :class:`CurrentPathQuery`. Returns: bool: Whether it doesn't match.
codesearchnet
def get_usedby_and_readonly(self, id):
    uri = self.URI + '/' + id + '/usedby/readonly'
    return self._client.get(uri)
Gets the build plan details of the selected plan script as per the selected attributes. Args: id: ID of the Plan Script. Returns: array of build plans
codesearchnet
def is_same_vectors(self, vec_set1, vec_set2):
    if np.absolute(rel_strain(vec_set1[0], vec_set2[0])) > self.max_length_tol:
        return False
    elif np.absolute(rel_strain(vec_set1[1], vec_set2[1])) > self.max_length_tol:
        return False
    elif np.absolute(rel_angle(vec_set1, vec_set2)) > self.max_angle_tol:
        return False
    else:
        return True
Determine if two sets of vectors are the same within length and angle tolerances Args: vec_set1(array[array]): an array of two vectors vec_set2(array[array]): second array of two vectors
codesearchnet
def execute_command(self, tab_name, panel_name,
                    command_module, command_class, command_data=None):
    command_data = {} if command_data is None else command_data

    cmdclassname = '{}.{}'.format(command_module, command_class)
    self._add_entry(templates.EXTERNAL_COMMAND.format(
        external_command_tab=tab_name,
        external_command_panel=panel_name,
        command_class_name=command_class,
        command_class=cmdclassname))

    data_count = len(command_data.keys())
    if data_count > 0:
        data_str_list = []
        for k, v in command_data.items():
            data_str_list.append(' "{}" , "{}"'.format(k, v))

        data_str = '_\n ,'.join(data_str_list)
        self._add_entry(templates.EXTERNAL_COMMANDDATA.format(
            data_count=data_count,
            data_string=data_str))
Append an execute external command entry to the journal. This instructs Revit to execute the provided command from the provided module, tab, and panel. Args: tab_name (str): name of ribbon tab that contains the command panel_name (str): name of ribbon panel that contains the command command_module (str): name of module that provides the command command_class (str): name of command class inside command module command_data (dict): dict of string data to be passed to command Examples: >>> jm = JournalMaker() >>> cmdata = {'key1':'value1', 'key2':'value2'} >>> jm.execute_command(tab_name='Add-Ins', ... panel_name='Panel Name', ... command_module='Addon App Namespace', ... command_class='Command Classname', ... command_data=cmdata)
codesearchnet
def __call__(self, string):
    texts = []
    floats = []
    for i, part in enumerate(self._FLOAT_RE.split(string)):
        if i % 2 == 0:
            texts.append(part)
        else:
            floats.append(float(part))
    return texts, np.array(floats)
Extracts floats from a string. >>> text_parts, floats = _FloatExtractor()("Text 1.0 Text") >>> text_parts ["Text ", " Text"] >>> floats np.array([1.0]) Args: string: the string to extract floats from. Returns: A (string, array) pair, where `string` has each float replaced by "..." and `array` is a `float32` `numpy.array` containing the extracted floats.
github-repos
def connect(backend=None, host=None, port=None, name=None, max_tries=None,
            connection_timeout=None, replicaset=None, ssl=None, login=None,
            password=None, ca_cert=None, certfile=None, keyfile=None,
            keyfile_passphrase=None, crlfile=None):
    backend = backend or bigchaindb.config['database']['backend']
    host = host or bigchaindb.config['database']['host']
    port = port or bigchaindb.config['database']['port']
    dbname = name or bigchaindb.config['database']['name']
    replicaset = replicaset or bigchaindb.config['database'].get('replicaset')
    ssl = ssl if ssl is not None else bigchaindb.config['database'].get('ssl', False)
    login = login or bigchaindb.config['database'].get('login')
    password = password or bigchaindb.config['database'].get('password')
    ca_cert = ca_cert or bigchaindb.config['database'].get('ca_cert', None)
    certfile = certfile or bigchaindb.config['database'].get('certfile', None)
    keyfile = keyfile or bigchaindb.config['database'].get('keyfile', None)
    keyfile_passphrase = keyfile_passphrase or bigchaindb.config['database'].get(
        'keyfile_passphrase', None)
    crlfile = crlfile or bigchaindb.config['database'].get('crlfile', None)

    try:
        module_name, _, class_name = BACKENDS[backend].rpartition('.')
        Class = getattr(import_module(module_name), class_name)
    except KeyError:
        raise ConfigurationError('Backend `{}` is not supported. '
                                 'BigchainDB currently supports '
                                 '{}'.format(backend, BACKENDS.keys()))
    except (ImportError, AttributeError) as exc:
        raise ConfigurationError('Error loading backend `{}`'.format(backend)) from exc

    logger.debug('Connection: {}'.format(Class))
    return Class(host=host, port=port, dbname=dbname,
                 max_tries=max_tries, connection_timeout=connection_timeout,
                 replicaset=replicaset, ssl=ssl, login=login, password=password,
                 ca_cert=ca_cert, certfile=certfile, keyfile=keyfile,
                 keyfile_passphrase=keyfile_passphrase, crlfile=crlfile)
Create a new connection to the database backend. All arguments default to the current configuration's values if not given. Args: backend (str): the name of the backend to use. host (str): the host to connect to. port (int): the port to connect to. name (str): the name of the database to use. replicaset (str): the name of the replica set (only relevant for MongoDB connections). Returns: An instance of :class:`~bigchaindb.backend.connection.Connection` based on the given (or defaulted) :attr:`backend`. Raises: :exc:`~ConnectionError`: If the connection to the database fails. :exc:`~ConfigurationError`: If the given (or defaulted) :attr:`backend` is not supported or could not be loaded. :exc:`~AuthenticationError`: If there is a OperationFailure due to Authentication failure after connecting to the database.
codesearchnet
def compute_v(self, memory_antecedent):
    if self.shared_kv:
        raise ValueError('compute_v cannot be called with shared_kv')
    ret = mtf.einsum([memory_antecedent, self.wv],
                     reduced_dims=[self.memory_input_dim])
    if self.combine_dims:
        ret = mtf.replace_dimensions(ret, ret.shape.dims[-1], self.v_dims)
    return ret
Compute value Tensor v. Args: memory_antecedent: a Tensor with dimensions {memory_input_dim} + other_dims Returns: a Tensor with dimensions memory_heads_dims + {value_dim} + other_dims
codesearchnet
def _determine_profiles(self):
    mp_insts = self._conn.EnumerateInstances('CIM_RegisteredProfile',
                                             namespace=self.interop_ns)
    self._profiles = mp_insts
Determine the WBEM management profiles advertised by the WBEM server, by communicating with it and enumerating the instances of `CIM_RegisteredProfile`. If the profiles could be determined, this method sets the :attr:`profiles` property of this object to the list of `CIM_RegisteredProfile` instances (as :class:`~pywbem.CIMInstance` objects), and returns. Otherwise, it raises an exception. Raises: Exceptions raised by :class:`~pywbem.WBEMConnection`. CIMError: CIM_ERR_NOT_FOUND, Interop namespace could not be determined.
codesearchnet
def get_hash(path, hash_alg='sha256'): h = hashlib.new(hash_alg) with open(path, 'rb') as f: for chunk in iter(functools.partial(f.read, 4096), b''): h.update(chunk) return h.hexdigest()
Get the hash of the file at ``path``. I'd love to make this async, but evidently file i/o is always ready Args: path (str): the path to the file to hash. hash_alg (str, optional): the algorithm to use. Defaults to 'sha256'. Returns: str: the hexdigest of the hash.
codesearchnet
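For example, hashing the same file with the default algorithm and an alternative one (the path is illustrative):

digest = get_hash('/tmp/example.bin')                   # sha256 hex digest
md5_digest = get_hash('/tmp/example.bin', hash_alg='md5')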
def report(self, name, owner=None, **kwargs):
    return Report(self.tcex, name, owner=owner, **kwargs)
Create the Report TI object.

    Args:
        name: The name of the Report.
        owner: The name of the owner for the Report.
        **kwargs: Additional keyword arguments passed through to the
            Report object.

    Returns:
        A Report TI object.
juraj-google-style
def save_data_files(bs, prefix=None, directory=None):
    filename = 'phonon_band.dat'
    filename = '{}_phonon_band.dat'.format(prefix) if prefix else filename
    directory = directory if directory else '.'
    filename = os.path.join(directory, filename)

    with open(filename, 'w') as f:
        # Header string reconstructed; the original literal was lost in
        # extraction. Columns are k-point distance and mode frequency.
        header = '#k-distance frequency[THz]\n'
        f.write(header)
        for band in bs.bands:
            for d, e in zip(bs.distance, band):
                f.write('{:.8f} {:.8f}\n'.format(d, e))
            f.write('\n')

    return filename
Write the phonon band structure data files to disk. Args: bs (:obj:`~pymatgen.phonon.bandstructure.PhononBandStructureSymmLine`): The phonon band structure. prefix (:obj:`str`, optional): Prefix for data file. directory (:obj:`str`, optional): Directory in which to save the data. Returns: str: The filename of the written data file.
juraj-google-style
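A usage sketch, assuming `bs` is a PhononBandStructureSymmLine instance; the prefix and directory are illustrative:

filename = save_data_files(bs, prefix='mp-149', directory='phonon')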
def _iter_errors_custom(instance, checks, options): for v_function in checks: try: result = v_function(instance) except TypeError: result = v_function(instance, options) if isinstance(result, Iterable): for x in result: yield x elif result is not None: yield result for field in instance: if type(instance[field]) is list: for obj in instance[field]: if _is_stix_obj(obj): for err in _iter_errors_custom(obj, checks, options): yield err
Perform additional validation not possible merely with JSON schemas. Args: instance: The STIX object to be validated. checks: A sequence of callables which do the checks. Each callable may be written to accept 1 arg, which is the object to check, or 2 args, which are the object and a ValidationOptions instance. options: ValidationOptions instance with settings affecting how validation should be done.
juraj-google-style
def get_destination(self, filepath, targetdir=None): dst = self.change_extension(filepath, 'css') if targetdir: dst = os.path.join(targetdir, dst) return dst
Return destination path from given source file path.

        Destination is always a file with extension ``.css``.

        Args:
            filepath (str): A file path, always relative to the sources
                directory. If the path is absolute, ``targetdir`` won't
                be joined (per ``os.path.join`` semantics).
            targetdir (str, optional): If given, it is joined at the
                beginning of the destination path.

        Returns:
            str: Destination filepath.
juraj-google-style
def screenshot(self, png_filename=None, format='raw'): value = self.http.get('screenshot').value raw_value = base64.b64decode(value) png_header = b"\x89PNG\r\n\x1a\n" if not raw_value.startswith(png_header) and png_filename: raise WDAError(-1, "screenshot png format error") if png_filename: with open(png_filename, 'wb') as f: f.write(raw_value) if format == 'raw': return raw_value elif format == 'pillow': from PIL import Image buff = io.BytesIO(raw_value) return Image.open(buff) else: raise ValueError("unknown format")
Screenshot with PNG format

        Args:
            png_filename (string): optional, save file name
            format (string): return format, "pillow" or "raw" (default)

        Returns:
            raw PNG data or PIL.Image

        Raises:
            WDAError
juraj-google-style
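A usage sketch, assuming `c` is a connected client/session object exposing this method:

raw_png = c.screenshot()                      # raw PNG bytes
img = c.screenshot(format='pillow')           # PIL.Image instance
c.screenshot(png_filename='screen.png')       # also writes the file to disk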
def symlink(self, link_target, path, dir_fd=None): link_target = self._path_with_dir_fd(link_target, self.symlink, dir_fd) self.filesystem.create_symlink( path, link_target, create_missing_dirs=False)
Creates the specified symlink, pointed at the specified link target. Args: link_target: The target of the symlink. path: Path to the symlink to create. dir_fd: If not `None`, the file descriptor of a directory, with `link_target` being relative to this directory. New in Python 3.3. Raises: OSError: if the file already exists.
juraj-google-style
def get_structure_by_material_id(self, material_id, final=True, conventional_unit_cell=False): prop = "final_structure" if final else "initial_structure" data = self.get_data(material_id, prop=prop) if conventional_unit_cell: data[0][prop] = SpacegroupAnalyzer(data[0][prop]). \ get_conventional_standard_structure() return data[0][prop]
Get a Structure corresponding to a material_id. Args: material_id (str): Materials Project material_id (a string, e.g., mp-1234). final (bool): Whether to get the final structure, or the initial (pre-relaxation) structure. Defaults to True. conventional_unit_cell (bool): Whether to get the standard conventional unit cell Returns: Structure object.
juraj-google-style
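For example, assuming `rester` is an authenticated MPRester instance (mp-149 is the Materials Project id for silicon):

structure = rester.get_structure_by_material_id('mp-149')
conventional = rester.get_structure_by_material_id('mp-149', conventional_unit_cell=True)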
def rename_object(self, object_name, new_name): def rename_fn(weights_dict, source_name, target_name): weights_dict[target_name] = weights_dict[source_name] weights_dict.pop(source_name) self._edit_object(rename_fn, object_name, new_name)
Rename an object in the file (e.g. a layer). Args: object_name: String, name or path of the object to rename (e.g. `"dense_2"` or `"layers/dense_2"`). new_name: String, new name of the object.
github-repos
def done(self, metadata: Optional[Dict[str, Any]]=None, related_links: Optional[Dict[str, str]]=None) -> None:
    ...  # body omitted in the source; presumably supplied by the tuning backend
Marks current trial as done. Args: metadata: Additional metadata to add to current trial. related_links: Additional links to add to current trial.
github-repos
def _RawGlobPathSpecWithNumericSchema( file_system, parent_path_spec, segment_format, location, segment_number): segment_files = [] while True: segment_location = segment_format.format(location, segment_number) kwargs = path_spec_factory.Factory.GetProperties(parent_path_spec) kwargs['location'] = segment_location if parent_path_spec.parent is not None: kwargs['parent'] = parent_path_spec.parent segment_path_spec = path_spec_factory.Factory.NewPathSpec( parent_path_spec.type_indicator, **kwargs) if not file_system.FileEntryExistsByPathSpec(segment_path_spec): break segment_files.append(segment_path_spec) segment_number += 1 return segment_files
Globs for path specifications according to a numeric naming schema. Args: file_system (FileSystem): file system. parent_path_spec (PathSpec): parent path specification. segment_format (str): naming schema of the segment file location. location (str): the base segment file location string. segment_number (int): first segment number. Returns: list[PathSpec]: path specifications that match the glob.
juraj-google-style
def get_embedded_tweet(tweet): if (tweet.retweeted_tweet is not None): return tweet.retweeted_tweet elif (tweet.quoted_tweet is not None): return tweet.quoted_tweet else: return None
Get the retweeted Tweet OR the quoted Tweet and return it as a dictionary

    Args:
        tweet (Tweet): A Tweet object (not simply a dict)

    Returns:
        dict (or None, if the Tweet is neither a Quote Tweet nor a
        Retweet): a dictionary representing the quoted Tweet or the
        Retweet
codesearchnet
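A usage sketch, assuming `tweet` is a parsed Tweet object from this library:

embedded = get_embedded_tweet(tweet)
if embedded is not None:
    print('tweet embeds a Retweet or Quote Tweet')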
def is_smart(self, value): self.set_bool("is_smart", value) if value is True: if self.find("criteria") is None: self.criteria = ElementTree.SubElement(self, "criteria")
Set group is_smart property to value. Args: value: Boolean.
juraj-google-style
def quad_genz_keister_16(order): order = sorted(GENZ_KEISTER_16.keys())[order] (abscissas, weights) = GENZ_KEISTER_16[order] abscissas = numpy.array(abscissas) weights = numpy.array(weights) weights /= numpy.sum(weights) abscissas *= numpy.sqrt(2) return (abscissas, weights)
Hermite Genz-Keister 16 rule. Args: order (int): The quadrature order. Must be in the interval (0, 8). Returns: (:py:data:typing.Tuple[numpy.ndarray, numpy.ndarray]): Abscissas and weights Examples: >>> abscissas, weights = quad_genz_keister_16(1) >>> print(numpy.around(abscissas, 4)) [-1.7321 0. 1.7321] >>> print(numpy.around(weights, 4)) [0.1667 0.6667 0.1667]
codesearchnet
def check_secret(self, secret): try: return hmac.compare_digest(secret, self.secret) except AttributeError: return (secret == self.secret)
Checks if the secret string used in the authentication attempt matches the "known" secret string. Some mechanisms will override this method to control how this comparison is made. Args: secret: The secret string to compare against what was used in the authentication attempt. Returns: True if the given secret matches the authentication attempt.
codesearchnet
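hmac.compare_digest is used because it compares in constant time, which resists timing attacks; the AttributeError fallback covers Python versions where hmac lacks compare_digest. A usage sketch, assuming `mech` is a mechanism instance with its secret set:

mech.secret = 'hunter2'
assert mech.check_secret('hunter2')       # matching secret
assert not mech.check_secret('wrong')     # non-matching secret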
def export_to_tf_tensor(self, x, laid_out_x): tensor_layout = self.tensor_layout(x.shape) if not tensor_layout.is_fully_replicated: raise NotImplementedError( "SimdMeshImpl only supports export_to_tf_tensor of fully-replicated " "Tensors. Try reshaping to new dimension names. " " x.shape = %s tensor_layout=%s" % (x.shape, tensor_layout)) return laid_out_x.one_slice
Turn a Tensor into a tf.Tensor. Args: x: a Tensor laid_out_x: a LaidOutTensor Returns: a tf.Tensor
juraj-google-style
def create_token_type_ids_from_sequences(self, token_ids_0: List[int], token_ids_1: Optional[List[int]]=None) -> List[int]: bos_token = [self.bos_token_id] eos_token = [self.eos_token_id] if token_ids_1 is None: return len(bos_token + token_ids_0 + eos_token) * [0] return len(bos_token + token_ids_0 + eos_token + eos_token + token_ids_1 + eos_token) * [0]
Create a mask from the two sequences passed. CLIP does not make use of token type ids, therefore a list of zeros is returned. Args: token_ids_0 (`List[int]`): List of IDs. token_ids_1 (`List[int]`, *optional*): Optional second list of IDs for sequence pairs. Returns: `List[int]`: List of zeros.
github-repos
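For example, with illustrative token id lists and `tokenizer` standing in for an instance of this tokenizer class (the result is all zeros regardless of whether a second sequence is passed):

tokenizer.create_token_type_ids_from_sequences([5, 6])        # [0, 0, 0, 0]
tokenizer.create_token_type_ids_from_sequences([5, 6], [7])   # [0, 0, 0, 0, 0, 0, 0]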
def declare(self, name): if (name in self._data): raise KeyError('Declared name {} that already existed'.format(name)) self._data[name] = self._loop.create_future()
Declare that a key will be set in the future. This will create a future for the key that is used to hold its result and allow awaiting it. Args: name (str): The unique key that will be used.
codesearchnet
def report_factory(app, report_name, **kwargs): created = pendulum.now().to_rfc3339_string() user_model = app._swimlane.user.as_usergroup_selection() return Report( app, { "$type": Report._type, "groupBys": [], "aggregates": [], "applicationIds": [app.id], "columns": [], "sorts": { "$type": "System.Collections.Generic.Dictionary`2" "[[System.String, mscorlib]," "[Core.Models.Search.SortTypes, Core]], mscorlib", }, "filters": [], "defaultSearchReport": False, "allowed": [], "permissions": { "$type": "Core.Models.Security.PermissionMatrix, Core" }, "createdDate": created, "modifiedDate": created, "createdByUser": user_model, "modifiedByUser": user_model, "id": None, "name": report_name, "disabled": False, "keywords": "" }, **kwargs )
Report instance factory populating boilerplate raw data

    Args:
        app (App): Swimlane App instance
        report_name (str): Generated Report name

    Keyword Args:
        **kwargs: Kwargs to pass to the Report class
juraj-google-style
def tournament_name2number(self, name): tournaments = self.get_tournaments() d = {t['name']: t['tournament'] for t in tournaments} return d.get(name, None)
Translate tournament name to tournament number. Args: name (str): tournament name to translate Returns: number (int): number of the tournament or `None` if unknown. Examples: >>> NumerAPI().tournament_name2number('delta') 4 >>> NumerAPI().tournament_name2number('foo') None
juraj-google-style
def genClientCert(self, name, outp=None): ucert = self.getUserCert(name) if not ucert: raise s_exc.NoSuchFile('missing User cert') cacert = self._loadCertPath(self._getCaPath(ucert)) if not cacert: raise s_exc.NoSuchFile('missing CA cert') ukey = self.getUserKey(name) if not ukey: raise s_exc.NoSuchFile('missing User private key') ccert = crypto.PKCS12() ccert.set_friendlyname(name.encode('utf-8')) ccert.set_ca_certificates([cacert]) ccert.set_certificate(ucert) ccert.set_privatekey(ukey) crtpath = self._saveP12To(ccert, 'users', '%s.p12' % name) if outp is not None: outp.printf('client cert saved: %s' % (crtpath,))
Generates a user PKCS #12 archive.

        Please note that the resulting file will contain private key
        material.

        Args:
            name (str): The name of the user keypair.
            outp (synapse.lib.output.Output): The output buffer.

        Examples:
            Make the PKCS #12 object for user "myuser":

                myuserpkcs12 = cdir.genClientCert('myuser')

        Returns:
            OpenSSL.crypto.PKCS12: The PKCS #12 archive.
juraj-google-style
def append(self, header, f, _left=False): self.items_length += len(header) if _left: self.deque.appendleft((header, f)) else: self.deque.append((header, f))
Add a column to the table.

        Args:
            header (str): Column header
            f (function(datum) -> str): Makes the row string from the
                datum. The string returned by ``f`` should have the same
                width as the header.
juraj-google-style
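A usage sketch, assuming `table` is an instance of the class this method belongs to; header widths match the row-format widths per the contract above:

table.append('Name      ', lambda d: '{:<10}'.format(d['name']))
table.append('ID  ', lambda d: '{:<4}'.format(d['id']))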
def latexify_spacegroup(spacegroup_symbol): sym = re.sub(r"_(\d+)", r"$_{\1}$", spacegroup_symbol) return re.sub(r"-(\d)", r"$\\overline{\1}$", sym)
Generates a latex formatted spacegroup. E.g., P2_1/c is converted to P2$_{1}$/c and P-1 is converted to P$\\overline{1}$. Args: spacegroup_symbol (str): A spacegroup symbol Returns: A latex formatted spacegroup with proper subscripts and overlines.
juraj-google-style
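For example:

latexify_spacegroup('P2_1/c')   # returns 'P2$_{1}$/c'
latexify_spacegroup('P-1')      # returns 'P$\overline{1}$'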
def load_file_to_base64_str(f_path): path = abs_path(f_path) with io.open(path, 'rb') as f: f_bytes = f.read() base64_str = base64.b64encode(f_bytes).decode('utf-8') return base64_str
Loads the content of a file into a base64 string. Args: f_path: full path to the file including the file name. Returns: A base64 string representing the content of the file in utf-8 encoding.
github-repos
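For example, round-tripping a small file (the path is illustrative):

encoded = load_file_to_base64_str('/tmp/example.bin')
assert base64.b64decode(encoded) == open('/tmp/example.bin', 'rb').read()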
def config(self, commands, **kwargs): commands = make_iterable(commands) commands = list(commands) commands.insert(0, 'configure terminal') response = self.run_commands(commands, **kwargs) if self.autorefresh: self.refresh() response.pop(0) return response
Configures the node with the specified commands This method is used to send configuration commands to the node. It will take either a string or a list and prepend the necessary commands to put the session into config mode. Args: commands (str, list): The commands to send to the node in config mode. If the commands argument is a string it will be cast to a list. The list of commands will also be prepended with the necessary commands to put the session in config mode. **kwargs: Additional keyword arguments for expanded eAPI functionality. Only supported eAPI params are used in building the request Returns: The config method will return a list of dictionaries with the output from each command. The function will strip the response from any commands it prepends.
codesearchnet
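A usage sketch, assuming `node` is a connected eAPI node object exposing this method:

node.config('hostname switch-01')                           # single command
node.config(['interface Ethernet1', 'description uplink'])  # list of commands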
def create_contentkey_authorization_policy_options(access_token, key_delivery_type="2", \ name="HLS Open Authorization Policy", key_restriction_type="0"): path = '/ContentKeyAuthorizationPolicyOptions' endpoint = ''.join([ams_rest_endpoint, path]) body = '{ \ "Name":"policy",\ "KeyDeliveryType":"' + key_delivery_type + '", \ "KeyDeliveryConfiguration":"", \ "Restrictions":[{ \ "Name":"' + name + '", \ "KeyRestrictionType":"' + key_restriction_type + '", \ "Requirements":null \ }] \ }' return do_ams_post(endpoint, path, body, access_token, "json_only")
Create Media Service Content Key Authorization Policy Options.

    Args:
        access_token (str): A valid Azure authentication token.
        key_delivery_type (str): A Media Service Content Key Authorization Policy Delivery Type.
        name (str): A Media Service Content Key Authorization Policy Name.
        key_restriction_type (str): A Media Service Content Key Restriction Type.

    Returns:
        HTTP response. JSON body.
juraj-google-style