Columns:
code: string, lengths 20 to 4.93k
docstring: string, lengths 33 to 1.27k
source: string, 3 classes
def _set_per_output_metric_attributes(self, metrics_dict, output_index): updated_metrics_dict = collections.OrderedDict() for metric_name, metric_fn in metrics_dict.items(): metric_name = self._add_unique_metric_name(metric_name, metric_fn, output_index) metric_fn._name = metric_name updated_metrics_dict[metric_name] = metric_fn self._compile_metric_functions.append(metric_fn) return updated_metrics_dict
Sets the metric attributes on the model for the given output. Args: metrics_dict: A dict with metric names as keys and metric fns as values. output_index: The index of the model output for which the metric attributes are added. Returns: Metrics dict updated with unique metric names as keys.
github-repos
def logical_switches(self): if (not self.__logical_switches): self.__logical_switches = LogicalSwitches(self.__connection) return self.__logical_switches
Gets the LogicalSwitches API client. Returns: LogicalSwitches: The LogicalSwitches API client.
codesearchnet
def IsBlockInNameSpace(nesting_state, is_forward_declaration): if is_forward_declaration: return len(nesting_state.stack) >= 1 and ( isinstance(nesting_state.stack[-1], _NamespaceInfo)) return (len(nesting_state.stack) > 1 and nesting_state.stack[-1].check_namespace_indentation and isinstance(nesting_state.stack[-2], _NamespaceInfo))
Checks that the new block is directly in a namespace. Args: nesting_state: The _NestingState object that contains info about our state. is_forward_declaration: If the class is a forward declared class. Returns: Whether or not the new block is directly in a namespace.
juraj-google-style
def _sub_entity_map(self, assignments, item, campaign): for assignment in assignments: placement = self._placement_dao.get(assignment, required=True) event_tag = self._event_tag_dao.get(assignment, required=True) creative = self._creative_dao.get(assignment, required=True) landing_page = None if assignment.get(FieldMap.AD_LANDING_PAGE_ID, '') != 'CAMPAIGN_DEFAULT': landing_page = self._landing_page_dao.get(assignment, required=True) if landing_page: assignment[FieldMap.AD_LANDING_PAGE_ID] = landing_page['id'] if item: assignment[FieldMap.AD_ID] = item['id'] assignment[FieldMap.AD_NAME] = item['name'] if campaign: assignment[FieldMap.CAMPAIGN_ID] = campaign['id'] assignment[FieldMap.CAMPAIGN_NAME] = campaign['name'] if placement: assignment[FieldMap.PLACEMENT_ID] = placement['id'] assignment[FieldMap.PLACEMENT_NAME] = placement['name'] if creative: assignment[FieldMap.CREATIVE_ID] = creative['id'] assignment[FieldMap.CREATIVE_NAME] = creative['name'] if event_tag: assignment[FieldMap.EVENT_TAG_ID] = event_tag['id'] assignment[FieldMap.EVENT_TAG_NAME] = event_tag['name']
Maps ids and names of sub entities so they can be updated in the Bulkdozer feed. When Bulkdozer is done processing an item, it writes back the updated names and ids of related objects; this method makes sure those are updated in the ad feed. Args: assignments: List of child feeds to map. item: The DCM ad object that was updated or created. campaign: The campaign object associated with the ad.
github-repos
def createLabel(self, name): if self.findLabel(name): raise exception.LabelException('Label exists') node = _node.Label() node.name = name self._labels[node.id] = node return node
Create a new label. Args: name (str): Label name. Returns: gkeepapi.node.Label: The new label. Raises: LabelException: If the label exists.
juraj-google-style
def slice_hidden(self, x): x_sliced = tf.reshape( x, shape=[-1, self.hparams.num_blocks, self.hparams.block_dim]) return x_sliced
Slice encoder hidden state into block_dim. Args: x: Encoder hidden state of shape [-1, hidden_size]. Returns: Sliced states of shape [-1, num_blocks, block_dim].
juraj-google-style
def load(self, email, master_token, android_id): self._email = email self._android_id = android_id self._master_token = master_token self.refresh() return True
Authenticate to Google with the provided master token. Args: email (str): The account to use. master_token (str): The master token. android_id (str): An identifier for this client. Raises: LoginException: If there was a problem logging in.
codesearchnet
def __type_check_attributes(self, node: yaml.Node, mapping: CommentedMap, argspec: inspect.FullArgSpec) -> None: logger.debug('Checking for extraneous attributes') logger.debug('Constructor arguments: {}, mapping: {}'.format(argspec.args, list(mapping.keys()))) for (key, value) in mapping.items(): if (not isinstance(key, str)): raise RecognitionError('{}{}YAtiML only supports strings for mapping keys'.format(node.start_mark, os.linesep)) if ((key not in argspec.args) and ('yatiml_extra' not in argspec.args)): raise RecognitionError('{}{}Found additional attributes and {} does not support those'.format(node.start_mark, os.linesep, self.class_.__name__)) if ((key in argspec.args) and (not self.__type_matches(value, argspec.annotations[key]))): raise RecognitionError('{}{}Expected attribute {} to be of type {} but it is a(n) {}'.format(node.start_mark, os.linesep, key, argspec.annotations[key], type(value)))
Ensure all attributes have a matching constructor argument. This checks that there is a constructor argument with a matching type for each existing attribute. If the class has a yatiml_extra attribute, then extra attributes are okay and no error will be raised if they exist. Args: node: The node we're processing mapping: The mapping with constructed subobjects argspec: The argument spec of the constructor, including self and yatiml_extra, if applicable
codesearchnet
def convert_positional_argument(self, index, arg_value): if self._has_self: if index == 0: return arg_value index -= 1 arg_name = self.arg_names[index] return self.convert_argument(arg_name, arg_value)
Convert and validate a positional argument. Args: index (int): The positional index of the argument arg_value (object): The value to convert and validate Returns: object: The converted value.
juraj-google-style
def scan_devices(self, subnet, timeout=None): max_range = { 16: 256, 24: 256, 25: 128, 27: 32, 28: 16, 29: 8, 30: 4, 31: 2 } if "/" not in subnet: mask = int(24) network = subnet else: network, mask = subnet.split("/") mask = int(mask) if mask not in max_range: raise RuntimeError("Cannot determine the subnet mask!") network = network.rpartition(".")[0] if mask == 16: for i in range(0, 1): network = network.rpartition(".")[0] if mask == 16: for seq1 in range(0, max_range[mask]): for seq2 in range(0, max_range[mask]): ipaddr = "{0}.{1}.{2}".format(network, seq1, seq2) thd = threading.Thread( target=self.__raw_scan, args=(ipaddr, timeout) ) thd.start() else: for seq1 in range(0, max_range[mask]): ipaddr = "{0}.{1}".format(network, seq1) thd = threading.Thread( target=self.__raw_scan, args=(ipaddr, timeout) ) thd.start() return self.amcrest_ips
Scan cameras in a range of IPs. Params: subnet - subnet, e.g. 192.168.1.0/24; if no mask is given, /24 is assumed timeout - timeout in seconds Returns: list of IP addresses of Amcrest cameras found (self.amcrest_ips)
juraj-google-style
def add_value(self, field_name: str, value: object=None, json_path: str=None, json_path_extraction: str=None, keep_empty: bool=False) -> None: def validate(v): if (v is not None): if isinstance(v, str): if ((v.strip() != '') or keep_empty): return True else: return False else: return True return False self.validate_field(field_name) if (field_name not in self._kg): self._kg[field_name] = [] if json_path: self._add_doc_value(field_name, json_path) if validate(value): if (not isinstance(value, list)): value = [value] all_valid = True invalid = [] for a_value in value: if isinstance(a_value, Extraction): valid = self._add_single_value(field_name, a_value.value, provenance_path=str(json_path_extraction), keep_empty=keep_empty) elif isinstance(a_value, Segment): valid = self._add_single_value(field_name, a_value.value, provenance_path=a_value.json_path, keep_empty=keep_empty) else: valid = self._add_single_value(field_name, a_value, provenance_path=json_path_extraction, reference_type='constant', keep_empty=keep_empty) all_valid = (all_valid and valid) if (not valid): invalid.append(((field_name + ':') + str(a_value))) if (not all_valid): print(('Some kg value type invalid according to schema:' + json.dumps(invalid))) if (len(self._kg[field_name]) == 0): self._kg.pop(field_name)
Add a value to the knowledge graph. Input can either be a value or a json_path. If the input is a json_path, the helper function _add_doc_value is called. If the input is a value, it is validated and added directly. Args: field_name: str, the field name in the knowledge graph value: the value to be added to the knowledge graph json_path: str, if json_path is provided, then get the value at this path in the doc json_path_extraction: str, provenance path recorded for extracted values keep_empty: bool, whether empty string values should be kept Returns: None
codesearchnet
def __init__(self, command_sequence, sess, dump_root=None): local_cli_wrapper.LocalCLIDebugWrapperSession.__init__(self, sess, dump_root=dump_root) self._command_sequence = command_sequence self._command_pointer = 0 self.observers = {'debug_dumps': [], 'tf_errors': [], 'run_start_cli_run_numbers': [], 'run_end_cli_run_numbers': [], 'print_feed_responses': [], 'profiler_py_graphs': [], 'profiler_run_metadata': []}
Constructor of the for-test subclass. Args: command_sequence: (list of list of str) A list of command arguments, including the command prefix, each element of the list is such as: ["run", "-n"], ["print_feed", "input:0"]. sess: See the doc string of LocalCLIDebugWrapperSession.__init__. dump_root: See the doc string of LocalCLIDebugWrapperSession.__init__.
github-repos
def token_network_connect(self, registry_address: PaymentNetworkID, token_address: TokenAddress, funds: TokenAmount, initial_channel_target: int=3, joinable_funds_target: float=0.4) -> None: if (not is_binary_address(registry_address)): raise InvalidAddress('registry_address must be a valid address in binary') if (not is_binary_address(token_address)): raise InvalidAddress('token_address must be a valid address in binary') token_network_identifier = views.get_token_network_identifier_by_token_address(chain_state=views.state_from_raiden(self.raiden), payment_network_id=registry_address, token_address=token_address) connection_manager = self.raiden.connection_manager_for_token_network(token_network_identifier) (has_enough_reserve, estimated_required_reserve) = has_enough_gas_reserve(raiden=self.raiden, channels_to_open=initial_channel_target) if (not has_enough_reserve): raise InsufficientGasReserve(f'The account balance is below the estimated amount necessary to finish the lifecycles of all active channels. A balance of at least {estimated_required_reserve} wei is required.') connection_manager.connect(funds=funds, initial_channel_target=initial_channel_target, joinable_funds_target=joinable_funds_target)
Automatically maintain channels open for the given token network. Args: registry_address: the payment network registry address. token_address: the ERC20 token network to connect to. funds: the amount of funds that can be used by the ConnectionManager. initial_channel_target: number of channels to open proactively. joinable_funds_target: fraction of the funds that will be used to join channels opened by other participants.
codesearchnet
def tdist95conf_level(df): df = int(round(df)) highest_table_df = len(_T_DIST_95_CONF_LEVELS) if df >= 200: return 1.960 if df >= 100: return 1.984 if df >= 80: return 1.990 if df >= 60: return 2.000 if df >= 50: return 2.009 if df >= 40: return 2.021 if df >= highest_table_df: return _T_DIST_95_CONF_LEVELS[highest_table_df - 1] return _T_DIST_95_CONF_LEVELS[df]
Approximate the 95% confidence interval for Student's T distribution. Given the degrees of freedom, returns an approximation to the 95% confidence interval for the Student's T distribution. Args: df: An integer, the number of degrees of freedom. Returns: A float.
juraj-google-style
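A quick way to see what the lookup table and the df-threshold shortcuts above approximate, assuming SciPy is available (it is not a dependency of the original code):

from scipy import stats

for df in (5, 30, 60, 200):
    # Two-sided 95% confidence -> 97.5th percentile of Student's t.
    print(df, round(stats.t.ppf(0.975, df), 3))
# 5 2.571, 30 2.042, 60 2.0, 200 1.972 -- the 1.960 shortcut for df >= 200
# is the normal-distribution limit.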
def download_decompress(url: str, download_path: [Path, str], extract_paths=None): file_name = Path(urlparse(url).path).name download_path = Path(download_path) if (extract_paths is None): extract_paths = [download_path] elif isinstance(extract_paths, list): extract_paths = [Path(path) for path in extract_paths] else: extract_paths = [Path(extract_paths)] cache_dir = os.getenv('DP_CACHE_DIR') extracted = False if cache_dir: cache_dir = Path(cache_dir) url_hash = md5(url.encode('utf8')).hexdigest()[:15] arch_file_path = (cache_dir / url_hash) extracted_path = (cache_dir / (url_hash + '_extracted')) extracted = extracted_path.exists() if ((not extracted) and (not arch_file_path.exists())): simple_download(url, arch_file_path) else: arch_file_path = (download_path / file_name) simple_download(url, arch_file_path) extracted_path = extract_paths.pop() if (not extracted): log.info('Extracting {} archive into {}'.format(arch_file_path, extracted_path)) extracted_path.mkdir(parents=True, exist_ok=True) if file_name.endswith('.tar.gz'): untar(arch_file_path, extracted_path) elif file_name.endswith('.gz'): ungzip(arch_file_path, (extracted_path / Path(file_name).with_suffix('').name)) elif file_name.endswith('.zip'): with zipfile.ZipFile(arch_file_path, 'r') as zip_ref: zip_ref.extractall(extracted_path) else: raise RuntimeError(f'Trying to extract an unknown type of archive {file_name}') if (not cache_dir): arch_file_path.unlink() for extract_path in extract_paths: for src in extracted_path.iterdir(): dest = (extract_path / src.name) if src.is_dir(): copytree(src, dest) else: extract_path.mkdir(parents=True, exist_ok=True) shutil.copy(str(src), str(dest))
Download and extract .tar.gz or .gz file to one or several target locations. The archive is deleted if extraction was successful. Args: url: URL for file downloading download_path: path to the directory where downloaded file will be stored until the end of extraction extract_paths: path or list of paths where contents of archive will be extracted
codesearchnet
def connection_made(self, transport): self.transport = transport self.transport.sendto(self.message) self.transport.close()
Create connection, use to send message and close. Args: transport (asyncio.DatagramTransport): Transport used for sending.
codesearchnet
def _DefaultValueConstructorForField(field): if _IsMapField(field): return _GetInitializeDefaultForMap(field) if field.label == _FieldDescriptor.LABEL_REPEATED: if field.has_default_value and field.default_value != []: raise ValueError('Repeated field default value not empty list: %s' % ( field.default_value)) if field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: message_type = field.message_type def MakeRepeatedMessageDefault(message): return containers.RepeatedCompositeFieldContainer( message._listener_for_children, field.message_type) return MakeRepeatedMessageDefault else: type_checker = type_checkers.GetTypeChecker(field) def MakeRepeatedScalarDefault(message): return containers.RepeatedScalarFieldContainer( message._listener_for_children, type_checker) return MakeRepeatedScalarDefault if field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE: message_type = field.message_type def MakeSubMessageDefault(message): result = message_type._concrete_class() result._SetListener( _OneofListener(message, field) if field.containing_oneof is not None else message._listener_for_children) return result return MakeSubMessageDefault def MakeScalarDefault(message): return field.default_value return MakeScalarDefault
Returns a function which returns a default value for a field. Args: field: FieldDescriptor object for this field. The returned function has one argument: message: Message instance containing this field, or a weakref proxy of same. That function in turn returns a default value for this field. The default value may refer back to |message| via a weak reference.
juraj-google-style
def allow_nan_stats(self): return self._allow_nan_stats
Python `bool` describing behavior when a stat is undefined. Stats return +/- infinity when it makes sense. E.g., the variance of a Cauchy distribution is infinity. However, sometimes the statistic is undefined, e.g., if a distribution's pdf does not achieve a maximum within the support of the distribution, the mode is undefined. If the mean is undefined, then by definition the variance is undefined. E.g. the mean for Student's T for df = 1 is undefined (no clear way to say it is either + or - infinity), so the variance = E[(X - mean)**2] is also undefined. Returns: allow_nan_stats: Python `bool`.
github-repos
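A minimal illustration of the policy this property controls, written as a toy class rather than the actual TF distributions code (the raise-versus-NaN behavior shown here is the conventional one and is an assumption):

import math

class ToyCauchyLike:
    def __init__(self, allow_nan_stats=True):
        self._allow_nan_stats = allow_nan_stats

    @property
    def allow_nan_stats(self):
        return self._allow_nan_stats

    def mean(self):
        # The mean of a Cauchy-like distribution is undefined.
        if self.allow_nan_stats:
            return math.nan
        raise ValueError('mean undefined and allow_nan_stats is False')

print(ToyCauchyLike().mean())                      # nan
# ToyCauchyLike(allow_nan_stats=False).mean() would raise ValueError.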
def is_point(self): if self.childCount() == 2: if self.child(0).valid_values == float and self.child(1).valid_values == float: return True else: return False
Figures out whether the item is a point, i.e. whether it has two child elements of type float. Args: self: Returns: True if the item is a point, otherwise False
juraj-google-style
def _flatten_obs(self, obs_dict, verbose=False): ob_lst = [] for key in obs_dict: if (key in self.keys): if verbose: print('adding key: {}'.format(key)) ob_lst.append(obs_dict[key]) return np.concatenate(ob_lst)
Filters keys of interest out and concatenates the information. Args: obs_dict: ordered dictionary of observations verbose: whether to print each key as it is added Returns: np.array with the concatenated observation values for the keys of interest
codesearchnet
def __init__(self, options): self.__options = options self.count_total = 0 self.items_queued = OrderedDict() self.items_in_progress = OrderedDict() self.items_finished = OrderedDict() self.items_cancelled = OrderedDict() self.items_errored = OrderedDict()
Constructs a Queue instance. Args: options (:class:`nyawc.Options`): The options to use.
juraj-google-style
def _ip_assigned(self): output = [] cmd = ['/sbin/ip', 'address', 'show', 'dev', self.config['interface'], 'to', self.ip_with_prefixlen] if self.ip_check_disabled: self.log.info('checking for IP assignment on interface %s is disabled', self.config['interface']) return True self.log.debug('running %s', ' '.join(cmd)) try: output = subprocess.check_output(cmd, universal_newlines=True, timeout=1) except subprocess.CalledProcessError as error: self.log.error('error checking IP-PREFIX %s: %s', cmd, error.output) return True except subprocess.TimeoutExpired: self.log.error('timeout running %s', ' '.join(cmd)) return True except ValueError as error: self.log.error('running %s raised ValueError exception:%s', ' '.join(cmd), error) return True else: if (self.ip_with_prefixlen in output): msg = '{i} assigned to loopback interface'.format(i=self.ip_with_prefixlen) self.log.debug(msg) return True else: msg = "{i} isn't assigned to {d} interface".format(i=self.ip_with_prefixlen, d=self.config['interface']) self.log.warning(msg) return False self.log.debug("I shouldn't land here!, it is a BUG") return False
Check if IP prefix is assigned to loopback interface. Returns: True if IP prefix found assigned otherwise False.
codesearchnet
def from_value(cls, value): return cls(value.shape, dtype=value.dtype, trainable=value.trainable)
Creates a `VariableSpec` from the given `Variable`. `value`'s shape, dtype, and trainable attributes will be used to create the new `VariableSpec`. Example: >>> v = tf.Variable([1., 2., 3.]) >>> VariableSpec.from_value(v) VariableSpec(shape=(3,), dtype=tf.float32, trainable=True, alias_id=None) Args: value: A Variable. Returns: A `VariableSpec` created from `value`.
github-repos
def _new_import(self, import_name): assert self.root_path is not None, \ '"import" statement can not be used if meta-model is ' \ 'loaded from string.' current_namespace = self._namespace_stack[-1] if '.' in current_namespace: root_namespace = current_namespace.rsplit('.', 1)[0] import_name = "%s.%s" % (root_namespace, import_name) import_file_name = "%s.tx" % os.path.join(self.root_path, *import_name.split(".")) if import_name not in self.namespaces: self._enter_namespace(import_name) if self.debug: self.dprint("*** IMPORTING FILE: %s" % import_file_name) metamodel_from_file(import_file_name, metamodel=self) self._leave_namespace() self._imported_namespaces[current_namespace].append( self.namespaces[import_name])
Starts a new import. Args: import_name(str): A relative import in the dot syntax (e.g. "first.second.expressions")
juraj-google-style
def delete_datastore(self): (success, result) = self._read_from_hdx('datastore', self.data['id'], 'resource_id', self.actions()['datastore_delete'], force=True) if (not success): logger.debug(result)
Delete a resource from the HDX datastore Returns: None
codesearchnet
def __init__(self, scopes=None, service_account_name='default', **kwds): self.__service_account_name = service_account_name cached_scopes = None cache_filename = kwds.get('cache_filename') if cache_filename: cached_scopes = self._CheckCacheFileForMatch( cache_filename, scopes) scopes = cached_scopes or self._ScopesFromMetadataServer(scopes) if cache_filename and not cached_scopes: self._WriteCacheFile(cache_filename, scopes) with warnings.catch_warnings(): warnings.simplefilter('ignore') super(GceAssertionCredentials, self).__init__(scope=scopes, **kwds)
Initializes the credentials instance. Args: scopes: The scopes to get. If None, whatever scopes that are available to the instance are used. service_account_name: The service account to retrieve the scopes from. **kwds: Additional keyword args.
juraj-google-style
def evaluate(self, instance, step, extra): chain = step.chain[1:] if self.strict and not chain: raise TypeError( "A ContainerAttribute in 'strict' mode can only be used " "within a SubFactory.") return self.function(instance, chain)
Evaluate the current ContainerAttribute. Args: obj (LazyStub): a lazy stub of the object being constructed, if needed. containers (list of LazyStub): a list of lazy stubs of factories being evaluated in a chain, each item being a future field of next one.
juraj-google-style
def decrease_exponent_to(self, new_exp): if (new_exp > self.exponent): raise ValueError(('New exponent %i should be more negative than old exponent %i' % (new_exp, self.exponent))) factor = pow(self.BASE, (self.exponent - new_exp)) new_enc = ((self.encoding * factor) % self.public_key.n) return self.__class__(self.public_key, new_enc, new_exp)
Return an `EncodedNumber` with same value but lower exponent. If we multiply the encoded value by :attr:`BASE` and decrement :attr:`exponent`, then the decoded value does not change. Thus we can almost arbitrarily ratchet down the exponent of an :class:`EncodedNumber` - we only run into trouble when the encoded integer overflows. There may not be a warning if this happens. This is necessary when adding :class:`EncodedNumber` instances, and can also be useful to hide information about the precision of numbers - e.g. a protocol can fix the exponent of all transmitted :class:`EncodedNumber` to some lower bound(s). Args: new_exp (int): the desired exponent. Returns: EncodedNumber: Instance with the same value and desired exponent. Raises: ValueError: You tried to increase the exponent, which can't be done without decryption.
codesearchnet
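A toy sketch of the exponent-ratcheting identity the docstring describes, using a small base and ignoring the Paillier modulus (so this shows the arithmetic idea only, not the library class):

BASE = 16

def decode(encoding, exponent):
    return encoding * BASE ** exponent

encoding, exponent = 3, -1                 # decodes to 3 * 16**-1 = 0.1875
new_exp = -3
factor = BASE ** (exponent - new_exp)      # 16**2 = 256
new_encoding = encoding * factor           # 768
assert decode(encoding, exponent) == decode(new_encoding, new_exp)  # both 0.1875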
def num_samples(self, dataset_split): return { problem.DatasetSplit.TRAIN: 1000000, problem.DatasetSplit.EVAL: 10000, problem.DatasetSplit.TEST: 10000 }[dataset_split]
Determine the dataset sized given a dataset_split. Args: dataset_split: A problem.DatasetSplit. Returns: The desired number of samples for this dataset_split.
juraj-google-style
def get_stream(self, error_callback=None, live=True): self.join() return Stream(self, error_callback=error_callback, live=live)
Get room stream to listen for messages. Kwargs: error_callback (func): Callback to call when an error occurred (parameters: exception) live (bool): If True, issue a live stream, otherwise an offline stream Returns: :class:`Stream`. Stream
codesearchnet
def get_filtered_vts_list(self, vts, vt_filter): if (not vt_filter): raise RequiredArgument('vt_filter: A valid filter is required.') filters = self.parse_filters(vt_filter) if (not filters): return None _vts_aux = vts.copy() for (_element, _oper, _filter_val) in filters: for vt_id in _vts_aux.copy(): if (not _vts_aux[vt_id].get(_element)): _vts_aux.pop(vt_id) continue _elem_val = _vts_aux[vt_id].get(_element) _val = self.format_filter_value(_element, _elem_val) if self.filter_operator[_oper](_val, _filter_val): continue else: _vts_aux.pop(vt_id) return _vts_aux
Gets a collection of vulnerability tests from the vts dictionary which match the filter. Arguments: vts (dictionary): The complete vts collection. vt_filter (string): Filter to apply to the vts collection. Returns: Dictionary with filtered vulnerability tests.
codesearchnet
def get_go_server(settings=None): if (not settings): settings = get_settings() return gocd.Server(settings.get('server'), user=settings.get('user'), password=settings.get('password'))
Returns a `gocd.Server` configured by the `settings` object. Args: settings: a `gocd_cli.settings.Settings` object. Default: if falsey calls `get_settings`. Returns: gocd.Server: a configured gocd.Server instance
codesearchnet
def _GetConfigValue(self, config_parser, section_name, value_name): try: return config_parser.get(section_name, value_name) except configparser.NoOptionError: return None
Retrieves a value from the config parser. Args: config_parser (ConfigParser): configuration parser. section_name (str): name of the section that contains the value. value_name (str): name of the value. Returns: object: configuration value or None if the value does not exist.
juraj-google-style
def vector(p1, p2): return np.subtract(p1[COLS.XYZ], p2[COLS.XYZ])
compute vector between two 3D points Args: p1, p2: indexable objects with indices 0, 1, 2 corresponding to 3D cartesian coordinates. Returns: 3-vector from p1 - p2
juraj-google-style
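A quick usage sketch, assuming COLS.XYZ is the slice over the first three columns (the x, y, z coordinates) of a morphology point row:

import numpy as np

XYZ = slice(0, 3)                          # stand-in for COLS.XYZ
p1 = np.array([1.0, 2.0, 3.0, 0.5])        # x, y, z, radius
p2 = np.array([0.0, 1.0, 1.0, 0.5])
print(np.subtract(p1[XYZ], p2[XYZ]))       # [1. 1. 2.]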
def is_alive(self) -> bool: return self._thread.is_alive()
Returns whether the thread is alive. This method returns True just before the run() method starts until just after the run() method terminates. Returns: True if the thread is alive, otherwise False.
github-repos
def _render_objects(self, items, attributes=None, datatype='object'): if (not items): return if (datatype == 'chartdata'): if (not attributes): attributes = [items['cols'][i]['label'] for i in range(0, len(items['cols']))] items = items['rows'] indices = {attributes[i]: i for i in range(0, len(attributes))} num_segments = len(self._segments) self._segments.append('<table>') first = True for o in items: if first: first = False if ((datatype == 'dict') and (not attributes)): attributes = list(o.keys()) if (attributes is not None): self._segments.append('<tr>') for attr in attributes: self._segments.append(('<th>%s</th>' % attr)) self._segments.append('</tr>') self._segments.append('<tr>') if (attributes is None): self._segments.append(('<td>%s</td>' % HtmlBuilder._format(o))) else: for attr in attributes: if (datatype == 'dict'): self._segments.append(('<td>%s</td>' % HtmlBuilder._format(o.get(attr, None), nbsp=True))) elif (datatype == 'chartdata'): self._segments.append(('<td>%s</td>' % HtmlBuilder._format(o['c'][indices[attr]]['v'], nbsp=True))) else: self._segments.append(('<td>%s</td>' % HtmlBuilder._format(o.__getattribute__(attr), nbsp=True))) self._segments.append('</tr>') self._segments.append('</table>') if first: self._segments = self._segments[:num_segments]
Renders an HTML table with the specified list of objects. Args: items: the iterable collection of objects to render. attributes: the optional list of properties or keys to render. datatype: the type of data; one of 'object' for Python objects, 'dict' for a list of dictionaries, or 'chartdata' for Google chart data.
codesearchnet
def _send_offset_commit_request(self, offsets): assert self.config['api_version'] >= (0, 8, 1), 'Unsupported Broker API' assert all(map(lambda k: isinstance(k, TopicPartition), offsets)) assert all(map(lambda v: isinstance(v, OffsetAndMetadata), offsets.values())) if not offsets: log.debug('No offsets to commit') return Future().success(None) node_id = self.coordinator() if node_id is None: return Future().failure(Errors.GroupCoordinatorNotAvailableError) offset_data = collections.defaultdict(dict) for tp, offset in six.iteritems(offsets): offset_data[tp.topic][tp.partition] = offset if self._subscription.partitions_auto_assigned(): generation = self.generation() else: generation = Generation.NO_GENERATION if self.config['api_version'] >= (0, 9) and generation is None: return Future().failure(Errors.CommitFailedError()) if self.config['api_version'] >= (0, 9): request = OffsetCommitRequest[2]( self.group_id, generation.generation_id, generation.member_id, OffsetCommitRequest[2].DEFAULT_RETENTION_TIME, [( topic, [( partition, offset.offset, offset.metadata ) for partition, offset in six.iteritems(partitions)] ) for topic, partitions in six.iteritems(offset_data)] ) elif self.config['api_version'] >= (0, 8, 2): request = OffsetCommitRequest[1]( self.group_id, -1, '', [( topic, [( partition, offset.offset, -1, offset.metadata ) for partition, offset in six.iteritems(partitions)] ) for topic, partitions in six.iteritems(offset_data)] ) elif self.config['api_version'] >= (0, 8, 1): request = OffsetCommitRequest[0]( self.group_id, [( topic, [( partition, offset.offset, offset.metadata ) for partition, offset in six.iteritems(partitions)] ) for topic, partitions in six.iteritems(offset_data)] ) log.debug("Sending offset-commit request with %s for group %s to %s", offsets, self.group_id, node_id) future = Future() _f = self._client.send(node_id, request) _f.add_callback(self._handle_offset_commit_response, offsets, future, time.time()) _f.add_errback(self._failed_request, node_id, request, future) return future
Commit offsets for the specified list of topics and partitions. This is a non-blocking call which returns a request future that can be polled in the case of a synchronous commit or ignored in the asynchronous case. Arguments: offsets (dict of {TopicPartition: OffsetAndMetadata}): what should be committed Returns: Future: indicating whether the commit was successful or not
juraj-google-style
def transition_scope(self, state: Sequence[tf.Tensor], action: Sequence[tf.Tensor]) -> Dict[str, TensorFluent]: scope = {} scope.update(self.non_fluents_scope()) scope.update(self.state_scope(state)) scope.update(self.action_scope(action)) return scope
Returns the complete transition fluent scope for the current `state` and `action` fluents. Args: state (Sequence[tf.Tensor]): The current state fluents. action (Sequence[tf.Tensor]): The action fluents. Returns: A mapping from fluent names to :obj:`rddl2tf.fluent.TensorFluent`.
juraj-google-style
def command(self, server_id, command, *args): server = self._storage[server_id] try: if args: result = getattr(server, command)(*args) else: result = getattr(server, command)() except AttributeError: raise ValueError("Cannot issue the command %r to server %s" % (command, server_id)) self._storage[server_id] = server return result
Run a command on the given server. Args: server_id - server identity command - the command to apply to the server *args - positional arguments passed to the command Returns: the result of the command
juraj-google-style
def __nonzero__(self): self._disallow_bool_casting()
Dummy method to prevent a tensor from being used as a Python `bool`. This is the Python 2.x counterpart to `__bool__()` above. Raises: `TypeError`.
github-repos
def prettyprint_cfg_node(node, decorate_after_node=0, full=False): if node.id <= decorate_after_node: return repr(node) + f' [{len(node.bindings)} bindings]' if full: name = lambda x: getattr(x, 'name', str(x)) else: name = str bindings = collections.defaultdict(list) for b in node.bindings: bindings[b.variable.id].append(name(b.data)) b = ', '.join(['%d:%s' % (k, '|'.join(v)) for k, v in sorted(bindings.items())]) return repr(node) + ' [' + b + ']'
A reasonably compact representation of all the bindings at a node. Args: node: The node to prettyprint. decorate_after_node: Don't print bindings unless node_id > this. full: Print the full string representation of a binding's data Returns: A prettyprinted node.
github-repos
def _read_output(self, stream, callback, output_file): if (((callback is None) and (output_file is None)) or stream.closed): return False line = stream.readline() if line: if (callback is not None): callback(line.decode(), self._data, self._store, self._signal, self._context) if (output_file is not None): output_file.write(line) return True else: return False
Read the output of the process, execute the callback and save the output. Args: stream: A file object pointing to the output stream that should be read. callback(callable, None): A callback function that is called for each new line of output. output_file: A file object to which the full output is written. Returns: bool: True if a line was read from the output, otherwise False.
codesearchnet
def merge_nodes(self, n1: str, n2: str, same_polarity: bool = True): for p in self.predecessors(n1): for st in self[p][n1]["InfluenceStatements"]: if not same_polarity: st.obj_delta["polarity"] = -st.obj_delta["polarity"] st.obj.db_refs["UN"][0] = (n2, st.obj.db_refs["UN"][0][1]) if not self.has_edge(p, n2): self.add_edge(p, n2) self[p][n2]["InfluenceStatements"] = self[p][n1][ "InfluenceStatements" ] else: self[p][n2]["InfluenceStatements"] += self[p][n1][ "InfluenceStatements" ] for s in self.successors(n1): for st in self.edges[n1, s]["InfluenceStatements"]: if not same_polarity: st.subj_delta["polarity"] = -st.subj_delta["polarity"] st.subj.db_refs["UN"][0] = (n2, st.subj.db_refs["UN"][0][1]) if not self.has_edge(n2, s): self.add_edge(n2, s) self[n2][s]["InfluenceStatements"] = self[n1][s][ "InfluenceStatements" ] else: self[n2][s]["InfluenceStatements"] += self[n1][s][ "InfluenceStatements" ] self.remove_node(n1)
Merge node n1 into node n2, with the option to specify relative polarity. Args: n1: The node to merge and remove. n2: The node to merge n1 into. same_polarity: Whether the transferred statements keep their original polarity (if False, polarities are flipped).
juraj-google-style
def unpack_wstring(self, offset, length): start = self._offset + offset end = self._offset + offset + 2 * length try: return bytes(self._buf[start:end]).decode("utf16") except AttributeError: return bytes(self._buf[start:end]).decode('utf16')
Returns a string from the relative offset with the given length, where each character is a wchar (2 bytes) Arguments: - `offset`: The relative offset from the start of the block. - `length`: The length of the string. Throws: - `UnicodeDecodeError`
juraj-google-style
def get_policy(self, name): address = _create_policy_address(name) policy_list_bytes = None try: policy_list_bytes = self._state_view.get(address=address) except KeyError: return None if (policy_list_bytes is not None): policy_list = _create_from_bytes(policy_list_bytes, identity_pb2.PolicyList) for policy in policy_list.policies: if (policy.name == name): return policy return None
Get a single Policy by name. Args: name (str): The name of the Policy. Returns: (:obj:`Policy`) The Policy that matches the name.
codesearchnet
def Deserialize(self, reader): self.Type = reader.ReadByte() self.Hashes = reader.ReadHashes()
Deserialize full object. Args: reader (neo.IO.BinaryReader):
juraj-google-style
def extract_sub_graph(graph_def, dest_nodes): if not isinstance(graph_def, graph_pb2.GraphDef): raise TypeError(f'graph_def must be a graph_pb2.GraphDef proto, but got type {type(graph_def)}.') if isinstance(dest_nodes, str): raise TypeError(f'dest_nodes must be an iterable of strings, but got type {type(dest_nodes)}.') name_to_input_name, name_to_node, name_to_seq_num = _extract_graph_summary(graph_def) _assert_nodes_are_present(name_to_node, dest_nodes) nodes_to_keep = _bfs_for_reachable_nodes(dest_nodes, name_to_input_name) nodes_to_keep_list = sorted(list(nodes_to_keep), key=lambda n: name_to_seq_num[n]) out = graph_pb2.GraphDef() for n in nodes_to_keep_list: out.node.extend([copy.deepcopy(name_to_node[n])]) out.library.CopyFrom(graph_def.library) out.versions.CopyFrom(graph_def.versions) return out
Extract the subgraph that can reach any of the nodes in 'dest_nodes'. Args: graph_def: A graph_pb2.GraphDef proto. dest_nodes: An iterable of strings specifying the destination node names. Returns: The GraphDef of the sub-graph. Raises: TypeError: If 'graph_def' is not a graph_pb2.GraphDef proto.
github-repos
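The helpers _extract_graph_summary and _bfs_for_reachable_nodes are not shown in this row; a minimal pure-Python sketch of the reachability step they perform (walk the name-to-inputs map backwards from the destination nodes and keep everything reached):

from collections import deque

def reachable_nodes(dest_nodes, name_to_input_name):
    keep, queue = set(), deque(dest_nodes)
    while queue:
        node = queue.popleft()
        if node in keep:
            continue
        keep.add(node)
        queue.extend(name_to_input_name.get(node, []))
    return keep

graph = {'loss': ['logits', 'labels'], 'logits': ['matmul'], 'matmul': ['x', 'w']}
print(sorted(reachable_nodes(['logits'], graph)))   # ['logits', 'matmul', 'w', 'x']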
def get(object_ids): if isinstance(object_ids, (tuple, np.ndarray)): return ray.get(list(object_ids)) elif isinstance(object_ids, dict): keys_to_get = [ k for k, v in object_ids.items() if isinstance(v, ray.ObjectID) ] ids_to_get = [ v for k, v in object_ids.items() if isinstance(v, ray.ObjectID) ] values = ray.get(ids_to_get) result = object_ids.copy() for key, value in zip(keys_to_get, values): result[key] = value return result else: return ray.get(object_ids)
Get a single or a collection of remote objects from the object store. This method is identical to `ray.get` except it adds support for tuples, ndarrays and dictionaries. Args: object_ids: Object ID of the object to get, a list, tuple, ndarray of object IDs to get or a dict of {key: object ID}. Returns: A Python object, a list of Python objects or a dict of {key: object}.
juraj-google-style
def execute(self, container: Container, test: TestCase, verbose: bool=False) -> TestOutcome: bug = self.__installation.bugs[container.bug] response = self.command(container, cmd=test.command, context=test.context, stderr=True, time_limit=test.time_limit, kill_after=test.kill_after, verbose=verbose) passed = test.oracle.check(response) return TestOutcome(response, passed)
Runs a specified test inside a given container. Returns: the outcome of the test execution.
codesearchnet
def _expand_variable_match(positional_vars, named_vars, match): positional = match.group('positional') name = match.group('name') if (name is not None): try: return six.text_type(named_vars[name]) except KeyError: raise ValueError("Named variable '{}' not specified and needed by template `{}` at position {}".format(name, match.string, match.start())) elif (positional is not None): try: return six.text_type(positional_vars.pop(0)) except IndexError: raise ValueError('Positional variable not specified and needed by template `{}` at position {}'.format(match.string, match.start())) else: raise ValueError('Unknown template expression {}'.format(match.group(0)))
Expand a matched variable with its value. Args: positional_vars (list): A list of positional variables. This list will be modified. named_vars (dict): A dictionary of named variables. match (re.Match): A regular expression match. Returns: str: The expanded variable to replace the match. Raises: ValueError: If a positional or named variable is required by the template but not specified or if an unexpected template expression is encountered.
codesearchnet
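A simplified sketch of the same expansion idea using re.sub with a callback; the pattern here ({name} placeholders and a positional *) is an assumption standing in for the library's real template grammar:

import re

_PATTERN = re.compile(r'\{(?P<name>\w+)\}|(?P<positional>\*)')

def expand(template, positional_vars, named_vars):
    positional_vars = list(positional_vars)
    def repl(match):
        if match.group('name') is not None:
            return str(named_vars[match.group('name')])
        return str(positional_vars.pop(0))
    return _PATTERN.sub(repl, template)

print(expand('v1/projects/*/topics/{topic}', ['my-proj'], {'topic': 'alerts'}))
# v1/projects/my-proj/topics/alerts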
def to_df(self, **kwargs): return pd.read_sql(sql=self.statement, con=self.session.bind, **kwargs)
Read the results of this query into a pandas DataFrame via pandas.read_sql. Args: **kwargs -- extra keyword arguments passed through to pandas.read_sql. Returns: pd.DataFrame -- the query results (or an iterator of DataFrames when a chunksize kwarg is given).
juraj-google-style
def start_prompt(self, message, text_input=False, cli_color=''): with self._cond: if self._prompt: raise MultiplePromptsError prompt_id = uuid.uuid4().hex _LOG.debug('Displaying prompt (%s): "%s"%s', prompt_id, message, ', Expects text input.' if text_input else '') self._response = None self._prompt = Prompt( id=prompt_id, message=message, text_input=text_input) if sys.stdin.isatty(): self._console_prompt = ConsolePrompt( message, functools.partial(self.respond, prompt_id), cli_color) self._console_prompt.start() self.notify_update() return prompt_id
Display a prompt. Args: message: A string to be presented to the user. text_input: A boolean indicating whether the user must respond with text. cli_color: An ANSI color code, or the empty string. Raises: MultiplePromptsError: There was already an existing prompt. Returns: A string uniquely identifying the prompt.
juraj-google-style
def get_keys(self, alias_name, key_format): uri = self.URI + "/keys/" + alias_name + "?format=" + key_format return self._client.get(uri)
Retrieves the contents of PKCS12 file in the format specified. This PKCS12 formatted file contains both the certificate as well as the key file data. Valid key formats are Base64 and PKCS12. Args: alias_name: Key pair associated with the RabbitMQ key_format: Valid key formats are Base64 and PKCS12. Returns: dict: RabbitMQ certificate
juraj-google-style
def torque_off(self): data = [] data.append(0x0A) data.append(self.servoid) data.append(RAM_WRITE_REQ) data.append(TORQUE_CONTROL_RAM) data.append(0x01) data.append(0x00) send_data(data)
Set the torque of the Herkulex servo to zero. In this mode, position control and velocity control will not work; enable torque before using them. The servo shaft is also freely movable. Args: none
juraj-google-style
def _update_sample_weight_modes(self, sample_weights=None): if not self._is_compiled: return if sample_weights and any((s is not None for s in sample_weights)): for endpoint in self._training_endpoints: endpoint.sample_weight_mode = endpoint.sample_weight_mode or 'samplewise' else: for endpoint in self._training_endpoints: endpoint.sample_weight_mode = None
Updates sample weight modes based on training/eval inputs. Sample weight placeholders will be created for all or no outputs based on whether sample_weight is provided for any output. If model contains `_sample_weight_modes` we check if the input `sample_weights` corresponds to the sample weight modes. 1. Set sample weight mode to be 'temporal' for output i, if `compile` sample_weight_mode was set to `temporal` and sample weight inputs are given for one or more outputs. 2. Set sample weight mode to be 'samplewise' for output i, if `compile` sample_weight_mode was not set and sample weight inputs are given for one or more outputs. 3. Reset sample weight mode to None for output i if sample weight mode was set but there is no sample weight input. Args: sample_weights: List of sample weights of the same length as model outputs or None.
github-repos
def noisy_wrap(__func: Callable) -> Callable: def wrapper(*args, **kwargs): DebugPrint.enable() try: __func(*args, **kwargs) finally: DebugPrint.disable() return wrapper
Decorator to enable DebugPrint for a given function. Args: __func: Function to wrap Returns: Wrapped function
juraj-google-style
def get_mask_from_raster(rasterfile, outmaskfile, keep_nodata=False): raster_r = RasterUtilClass.read_raster(rasterfile) xsize = raster_r.nCols ysize = raster_r.nRows nodata_value = raster_r.noDataValue srs = raster_r.srs x_min = raster_r.xMin y_max = raster_r.yMax dx = raster_r.dx data = raster_r.data if (not keep_nodata): i_min = (ysize - 1) i_max = 0 j_min = (xsize - 1) j_max = 0 for i in range(ysize): for j in range(xsize): if (abs((data[i][j] - nodata_value)) > DELTA): i_min = min(i, i_min) i_max = max(i, i_max) j_min = min(j, j_min) j_max = max(j, j_max) y_size_mask = ((i_max - i_min) + 1) x_size_mask = ((j_max - j_min) + 1) x_min_mask = (x_min + (j_min * dx)) y_max_mask = (y_max - (i_min * dx)) else: y_size_mask = ysize x_size_mask = xsize x_min_mask = x_min y_max_mask = y_max i_min = 0 j_min = 0 print(('%dx%d -> %dx%d' % (xsize, ysize, x_size_mask, y_size_mask))) mask = numpy.zeros((y_size_mask, x_size_mask)) for i in range(y_size_mask): for j in range(x_size_mask): if (abs((data[(i + i_min)][(j + j_min)] - nodata_value)) > DELTA): mask[i][j] = 1 else: mask[i][j] = DEFAULT_NODATA mask_geotrans = [x_min_mask, dx, 0, y_max_mask, 0, (- dx)] RasterUtilClass.write_gtiff_file(outmaskfile, y_size_mask, x_size_mask, mask, mask_geotrans, srs, DEFAULT_NODATA, GDT_Int32) return Raster(y_size_mask, x_size_mask, mask, DEFAULT_NODATA, mask_geotrans, srs)
Generate mask data from a given raster data. Args: rasterfile: raster file path. outmaskfile: output mask file path. keep_nodata: if False (default), crop the mask to the extent of valid data; if True, keep the full raster extent. Returns: Raster object of mask data.
codesearchnet
def delete(paths): if isinstance(paths, str): raise BeamIOError('Delete passed string argument instead of list: %s' % paths) if len(paths) == 0: return filesystem = FileSystems.get_filesystem(paths[0]) return filesystem.delete(paths)
Deletes files or directories at the provided paths. Directories will be deleted recursively. Args: paths: list of paths that give the file objects to be deleted Raises: ``BeamIOError``: if any of the delete operations fail
github-repos
def just_load_srno(srno, prm_filename=None): from cellpy import dbreader, filefinder print("just_load_srno: srno: %i" % srno) print("just_load_srno: making class and setting prms") d = CellpyData() print() print("just_load_srno: starting to load reader") reader = dbreader.Reader() print("------ok------") run_name = reader.get_cell_name(srno) print("just_load_srno: run_name:") print(run_name) m = reader.get_mass(srno) print("just_load_srno: mass: %f" % m) print() print("just_load_srno: getting file_names") raw_files, cellpy_file = filefinder.search_for_files(run_name) print("raw_files:", raw_files) print("cellpy_file:", cellpy_file) print("just_load_srno: running loadcell") d.loadcell(raw_files, cellpy_file, mass=m) print("------ok------") print("just_load_srno: getting step_numbers for charge") v = d.get_step_numbers("charge") print(v) print() print("just_load_srno: finding C-rates") d.find_C_rates(v, silent=False) print() print("just_load_srno: OK") return True
Simply load an dataset based on serial number (srno). This convenience function reads a dataset based on a serial number. This serial number (srno) must then be defined in your database. It is mainly used to check that things are set up correctly. Args: prm_filename: name of parameter file (optional). srno (int): serial number Example: >>> srno = 918 >>> just_load_srno(srno) srno: 918 read prms ....
juraj-google-style
def build_transcript(transcript_info, build='37'): try: transcript_id = transcript_info['ensembl_transcript_id'] except KeyError: raise KeyError('Transcript has to have ensembl id') build = build is_primary = transcript_info.get('is_primary', False) refseq_id = transcript_info.get('refseq_id') refseq_identifiers = transcript_info.get('refseq_identifiers') try: chrom = transcript_info['chrom'] except KeyError: raise KeyError('Transcript has to have a chromosome') try: start = int(transcript_info['transcript_start']) except KeyError: raise KeyError('Transcript has to have start') except TypeError: raise TypeError('Transcript start has to be integer') try: end = int(transcript_info['transcript_end']) except KeyError: raise KeyError('Transcript has to have end') except TypeError: raise TypeError('Transcript end has to be integer') try: hgnc_id = int(transcript_info['hgnc_id']) except KeyError: raise KeyError('Transcript has to have a hgnc id') except TypeError: raise TypeError('hgnc id has to be integer') transcript_obj = HgncTranscript(transcript_id=transcript_id, hgnc_id=hgnc_id, chrom=chrom, start=start, end=end, is_primary=is_primary, refseq_id=refseq_id, refseq_identifiers=refseq_identifiers, build=build) for key in list(transcript_obj): if (transcript_obj[key] is None): transcript_obj.pop(key) return transcript_obj
Build a hgnc_transcript object Args: transcript_info(dict): Transcript information Returns: transcript_obj(HgncTranscript) { transcript_id: str, required hgnc_id: int, required build: str, required refseq_id: str, chrom: str, required start: int, required end: int, required is_primary: bool }
codesearchnet
def _definition_from_example(example): assert isinstance(example, dict) def _has_simple_type(value): accepted = (str, int, float, bool) return isinstance(value, accepted) definition = {'type': 'object', 'properties': {}} for (key, value) in example.items(): if (not _has_simple_type(value)): raise Exception('Not implemented yet') ret_value = None if isinstance(value, str): ret_value = {'type': 'string'} elif isinstance(value, bool): ret_value = {'type': 'boolean'} elif isinstance(value, int): ret_value = {'type': 'integer', 'format': 'int64'} elif isinstance(value, float): ret_value = {'type': 'number', 'format': 'double'} else: raise Exception('Not implemented yet') definition['properties'][key] = ret_value return definition
Generates a swagger definition json from a given example Works only for simple types in the dict Args: example: The example for which we want a definition Type is DICT Returns: A dict that is the swagger definition json
codesearchnet
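A usage sketch, assuming the function above is in scope:

example = {'name': 'alpha', 'count': 3, 'score': 1.5}
definition = _definition_from_example(example)
print(definition['properties']['name'])    # {'type': 'string'}
print(definition['properties']['count'])   # {'type': 'integer', 'format': 'int64'}
print(definition['properties']['score'])   # {'type': 'number', 'format': 'double'}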
def get_cross_attention_token_mask(input_ids: List[int], image_token_id: int) -> List[List[int]]: image_token_locations = [i for i, token in enumerate(input_ids) if token == image_token_id] if len(image_token_locations) == 0: return [] if len(image_token_locations) == 1: return [[image_token_locations[0], -1]] vision_masks = [[loc1, loc2] for loc1, loc2 in zip(image_token_locations[:-1], image_token_locations[1:])] vision_masks.append([image_token_locations[-1], len(input_ids)]) last_mask_end = vision_masks[-1][1] for vision_mask in vision_masks[::-1]: if vision_mask[0] == vision_mask[1] - 1: vision_mask[1] = last_mask_end last_mask_end = vision_mask[1] return vision_masks
Generate a cross-attention token mask for image tokens in the input sequence. This function identifies the positions of image tokens in the input sequence and creates a mask that defines which subsequent tokens each image token should attend to. Args: input_ids (List[int]): A list of token ids representing the input sequence. image_token_id (int): The id of the token used to represent images in the sequence. Returns: List[List[int]]: A list of [start, end] pairs, where each pair represents the range of tokens an image token should attend to. Notes: - If no image tokens are present, an empty list is returned. - For a single image token, it attends to all subsequent tokens until the end of the sequence. - For multiple image tokens, each attends to tokens up to the next image token or the end of the sequence. - Consecutive image tokens are treated as a group and attend to all subsequent tokens together.
github-repos
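A usage sketch, assuming the function above is in scope and using 32000 as a hypothetical image token id:

IMAGE_TOKEN = 32000

# Two separate image tokens: each attends up to the next image token / end.
print(get_cross_attention_token_mask([1, IMAGE_TOKEN, 5, 6, IMAGE_TOKEN, 7, 8], IMAGE_TOKEN))
# [[1, 4], [4, 7]]

# Two consecutive image tokens: they are grouped and share the same end.
print(get_cross_attention_token_mask([IMAGE_TOKEN, IMAGE_TOKEN, 3, 4], IMAGE_TOKEN))
# [[0, 4], [1, 4]]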
def __init__(self, data_type=None): super(EventData, self).__init__() self.data_type = data_type self.offset = None self.query = None
Initializes an event data attribute container. Args: data_type (Optional[str]): event data type indicator.
juraj-google-style
def _ParseCredentialOptions(self, options): credentials = getattr(options, 'credentials', []) if not isinstance(credentials, list): raise errors.BadConfigOption('Unsupported credentials value.') for credential_string in credentials: credential_type, _, credential_data = credential_string.partition(':') if not credential_type or not credential_data: raise errors.BadConfigOption( 'Badly formatted credential: {0:s}.'.format(credential_string)) if credential_type not in self._SUPPORTED_CREDENTIAL_TYPES: raise errors.BadConfigOption( 'Unsupported credential type for: {0:s}.'.format( credential_string)) if credential_type in self._BINARY_DATA_CREDENTIAL_TYPES: try: credential_data = credential_data.decode('hex') except TypeError: raise errors.BadConfigOption( 'Unsupported credential data for: {0:s}.'.format( credential_string)) self._credentials.append((credential_type, credential_data))
Parses the credential options. Args: options (argparse.Namespace): command line arguments. Raises: BadConfigOption: if the options are invalid.
juraj-google-style
def uniprot_ec(uniprot_id): r = requests.post(('http: ec = r.content.decode('utf-8').splitlines()[1] if (len(ec) == 0): ec = None return ec
Retrieve the EC number annotation for a UniProt ID. Args: uniprot_id: Valid UniProt ID Returns: The EC number as a string, or None if no EC annotation is found.
codesearchnet
def validate_per_replica_inputs(distribution_strategy, x): per_replica_list = nest.flatten(x, expand_composites=True) x_values_list = [] for x in per_replica_list: x_values = distribution_strategy.unwrap(x) for value in x_values: if not tensor_util.is_tf_type(value): raise ValueError('Dataset input to the model should be tensors instead they are of type {}'.format(type(value))) if not context.executing_eagerly(): validate_all_tensor_shapes(x, x_values) validate_all_tensor_types(x, x_values) x_values_list.append(x_values[0]) return x_values_list
Validates PerReplica dataset input list. Args: distribution_strategy: The current DistributionStrategy used to call `fit`, `evaluate` and `predict`. x: A list of PerReplica objects that represent the input or target values. Returns: List containing the first element of each of the PerReplica objects in the input list. Raises: ValueError: If any of the objects in the `per_replica_list` is not a tensor.
github-repos
def extract_example_parser_configuration(parse_example_op, sess): if parse_example_op.type == 'ParseExample': return _extract_from_parse_example(parse_example_op, sess) elif parse_example_op.type == 'ParseExampleV2': return _extract_from_parse_example_v2(parse_example_op, sess) else: raise ValueError(f'Found unexpected type when parsing example. Expected `ParseExample` object. Received type: {parse_example_op.type}')
Returns an ExampleParserConfig proto. Args: parse_example_op: A ParseExample or ParseExampleV2 `Operation` sess: A tf.compat.v1.Session needed to obtain some configuration values. Returns: A ExampleParserConfig proto. Raises: ValueError: If attributes are inconsistent.
github-repos
def _get_or_create_eval_step(): graph = ops.get_default_graph() eval_steps = graph.get_collection(ops.GraphKeys.EVAL_STEP) if len(eval_steps) == 1: return eval_steps[0] elif len(eval_steps) > 1: raise ValueError('Multiple tensors added to tf.GraphKeys.EVAL_STEP') else: counter = variable_scope.get_variable('eval_step', shape=[], dtype=dtypes.int64, initializer=init_ops.zeros_initializer(), trainable=False, collections=[ops.GraphKeys.LOCAL_VARIABLES, ops.GraphKeys.EVAL_STEP]) return counter
Gets or creates the eval step `Tensor`. Returns: A `Tensor` representing a counter for the evaluation step. Raises: ValueError: If multiple `Tensors` have been added to the `tf.GraphKeys.EVAL_STEP` collection.
github-repos
def execute_wait(self, cmd, walltime=2, envs={}): stdin, stdout, stderr = self.ssh_client.exec_command( self.prepend_envs(cmd, envs), bufsize=-1, timeout=walltime ) exit_status = stdout.channel.recv_exit_status() return exit_status, stdout.read().decode("utf-8"), stderr.read().decode("utf-8")
Synchronously execute a commandline string on the shell. Args: - cmd (string) : Commandline string to execute - walltime (int) : walltime in seconds Kwargs: - envs (dict) : Dictionary of env variables Returns: - retcode : Return code from the execution, -1 on fail - stdout : stdout string - stderr : stderr string Raises: None.
juraj-google-style
def set_hostname(hostname): with salt.utils.winapi.Com(): conn = wmi.WMI() comp = conn.Win32_ComputerSystem()[0] return comp.Rename(Name=hostname)
Set the hostname of the windows minion, requires a restart before this will be updated. .. versionadded:: 2016.3.0 Args: hostname (str): The hostname to set Returns: bool: ``True`` if successful, otherwise ``False`` CLI Example: .. code-block:: bash salt 'minion-id' system.set_hostname newhostname
codesearchnet
def set_type(self, agent_type): type_str = SpawnAgentCommand.__type_keys[agent_type] self.add_string_parameters(type_str)
Set the type of agent to spawn in Holodeck. Currently accepted agents are: DiscreteSphereAgent, UAVAgent, and AndroidAgent. Args: agent_type (str): The type of agent to spawn.
juraj-google-style
def assert_raises_regex(expected_exception, expected_regex, extras=None, *args, **kwargs): context = _AssertRaisesContext(expected_exception, expected_regex, extras=extras) return context
Assert that an exception is raised when a function is called. If no exception is raised, the test fails. If an exception is raised but not of the expected type, the exception is let through. If an exception of the expected type is raised but the error message does not match the expected_regex, the test fails. This should only be used as a context manager: with assert_raises_regex(Exception, 'regex'): func() Args: expected_exception: An exception class that is expected to be raised. expected_regex: A regex pattern that the error message is expected to match. extras: An optional field for extra information to be included in test result.
github-repos
def data_group_association(self, xid): groups = [] group_data = None if (self.groups.get(xid) is not None): group_data = self.groups.get(xid) del self.groups[xid] elif (self.groups_shelf.get(xid) is not None): group_data = self.groups_shelf.get(xid) del self.groups_shelf[xid] if (group_data is not None): group_data = self.data_group_type(group_data) groups.append(group_data) for assoc_xid in group_data.get('associatedGroupXid', []): groups.extend(self.data_group_association(assoc_xid)) return groups
Return group dict array following all associations. Args: xid (str): The xid of the group to retrieve associations. Returns: list: A list of group dicts.
codesearchnet
def get_symbol(self, symbol): self._ensure_symbols_loaded() if type(symbol) is int: return self._symbols_by_index[symbol] else: return self._symbols_by_name[symbol]
Get a specific symbol by index or name. Args: symbol(int or str): The index or name of the symbol to return. Returns: ELF.Symbol: The symbol. Raises: KeyError: The requested symbol does not exist.
juraj-google-style
def update(self, uid: int, flag_set: Iterable[Flag], op: FlagOp = FlagOp.REPLACE) -> FrozenSet[Flag]: orig_set = self._flags.get(uid, frozenset()) new_flags = op.apply(orig_set, self & flag_set) if new_flags: self._flags[uid] = new_flags else: self._flags.pop(uid, None) return new_flags
Update the flags for the session, returning the resulting flags. Args: uid: The message UID value. flag_set: The set of flags for the update operation. op: The type of update.
juraj-google-style
def _UpdateEtag(self, response): etag = response.headers.get('etag', self.etag) etag_updated = (self.etag != etag) self.etag = etag return etag_updated
Update the etag from an API response. Args: response: HTTP response with a header field. Returns: bool, True if the etag in the response header changed and was updated.
codesearchnet
def ParseOptions(cls, options, configuration_object): if not isinstance(configuration_object, tools.CLITool): raise errors.BadConfigObject( 'Configuration object is not an instance of CLITool') number_of_extraction_workers = cls._ParseNumericOption( options, 'workers', default_value=0) if number_of_extraction_workers < 0: raise errors.BadConfigOption( 'Invalid number of extraction workers value cannot be negative.') worker_memory_limit = cls._ParseNumericOption( options, 'worker_memory_limit') if worker_memory_limit and worker_memory_limit < 0: raise errors.BadConfigOption( 'Invalid worker memory limit value cannot be negative.') setattr( configuration_object, '_number_of_extraction_workers', number_of_extraction_workers) setattr(configuration_object, '_worker_memory_limit', worker_memory_limit)
Parses and validates options. Args: options (argparse.Namespace): parser options. configuration_object (CLITool): object to be configured by the argument helper. Raises: BadConfigObject: when the configuration object is of the wrong type. BadConfigOption: when a configuration parameter fails validation.
juraj-google-style
def array_view(array, slicing=None, mapping=None): dtype = translate_dtype(array.dtype) sliced_array = (array[command_parser._parse_slices(slicing)] if slicing else array) if (np.isscalar(sliced_array) and (str(dtype) == 'string')): ndims = len(array.shape) slice_shape = [] for _ in range(ndims): sliced_array = [sliced_array] slice_shape.append(1) return (dtype, tuple(slice_shape), sliced_array) else: shape = sliced_array.shape if (mapping == 'image/png'): if (len(sliced_array.shape) == 2): return (dtype, shape, array_to_base64_png(sliced_array)) elif (len(sliced_array.shape) == 3): raise NotImplementedError('image/png mapping for 3D array has not been implemented') else: raise ValueError(('Invalid rank for image/png mapping: %d' % len(sliced_array.shape))) elif (mapping == 'health-pill'): health_pill = health_pill_calc.calc_health_pill(array) return (dtype, shape, health_pill) elif ((mapping is None) or (mapping == '') or (mapping.lower() == 'none')): return (dtype, shape, sliced_array.tolist()) else: raise ValueError(('Invalid mapping: %s' % mapping))
View a slice or the entirety of an ndarray. Args: array: The input array, as an numpy.ndarray. slicing: Optional slicing string, e.g., "[:, 1:3, :]". mapping: Optional mapping string. Supported mappings: `None` or case-insensitive `'None'`: Unmapped nested list. `'image/png'`: Image encoding of a 2D sliced array or 3D sliced array with 3 as the last dimension. If the sliced array is not 2D or 3D with 3 as the last dimension, a `ValueError` will be thrown. `health-pill`: A succinct summary of the numeric values of a tensor. See documentation in [`health_pill_calc.py`] for more details. Returns: 1. dtype as a `str`. 2. shape of the sliced array, as a tuple of `int`s. 3. the potentially sliced values, as a nested `list`.
codesearchnet
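A usage sketch for the slicing path above, assuming `array_view` is importable alongside the dtype and slice-parsing helpers it references and that NumPy is installed; the array contents are made up.

```python
# Minimal sketch, assuming array_view above is importable with its helpers.
import numpy as np

arr = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
dtype, shape, values = array_view(arr, slicing="[:, 1:3, :]")
print(dtype)   # dtype string, e.g. "float32"
print(shape)   # (2, 2, 4) for this slicing
print(values)  # the sliced values as nested Python lists
```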
def trace(self, graph_element_name): self._depth_count += 1 node_name = get_node_name(graph_element_name) if node_name == self._destination_node_name: raise GraphTracingReachedDestination() if node_name in self._skip_node_names: return if node_name in self._visited_nodes: return self._visited_nodes.append(node_name) for input_list in self._input_lists: if node_name not in input_list: continue for inp in input_list[node_name]: if get_node_name(inp) in self._visited_nodes: continue self._inputs.append(inp) self._depth_list.append(self._depth_count) self.trace(inp) self._depth_count -= 1
Trace inputs. Args: graph_element_name: Name of the node or an output tensor of the node, as a str. Raises: GraphTracingReachedDestination: if destination_node_name of this tracer object is not None and the specified node is reached.
github-repos
def pretty_dump(fn): @wraps(fn) def pretty_dump_wrapper(*args, **kwargs): response.content_type = "application/json; charset=utf-8" return json.dumps( fn(*args, **kwargs), indent=4, separators=(',', ': ') ) return pretty_dump_wrapper
Decorator used to output prettified JSON. ``response.content_type`` is set to ``application/json; charset=utf-8``. Args: fn (fn pointer): Function returning any basic python data structure. Returns: str: Data converted to prettified JSON.
juraj-google-style
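A usage sketch with a Bottle route; `pretty_dump` is assumed importable next to the decorator definition above, and the app, path, and payload are illustrative.

```python
# Sketch only: assumes pretty_dump above is importable; the route is hypothetical.
from bottle import Bottle

app = Bottle()

@app.get("/status")
@pretty_dump
def status():
    # The decorator serializes this dict to indented JSON and sets the content type.
    return {"ok": True, "services": ["db", "cache"]}
```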
def dispatch(self, event): if event.is_directory: return paths = [] if has_attribute(event, 'dest_path'): paths.append(os.path.realpath( unicode_paths.decode(event.dest_path))) if event.src_path: paths.append(os.path.realpath( unicode_paths.decode(event.src_path))) paths = [p for p in paths if not p.startswith(os.path.realpath(self.vcs.repository_dir())) and not self.vcs.path_is_ignored(p)] if len(paths) > 0: super(VcsEventHandler, self).dispatch(event)
Only dispatch if the event does not correspond to an ignored file. Args: event (watchdog.events.FileSystemEvent)
juraj-google-style
def psd(data, dt, ndivide=1, window=hanning, overlap_half=False): logger = getLogger('decode.utils.ndarray.psd') if overlap_half: step = int(len(data) / (ndivide + 1)) size = step * 2 else: step = int(len(data) / ndivide) size = step if bin(len(data)).count('1') != 1: logger.warning('warning: length of data is not power of 2: {}'.format(len(data))) size = int(len(data) / ndivide) if bin(size).count('1') != 1.: if overlap_half: logger.warning('warning: ((length of data) / (ndivide+1)) * 2 is not power of 2: {}'.format(size)) else: logger.warning('warning: (length of data) / ndivide is not power of 2: {}'.format(size)) psd = np.zeros(size) T = (size - 1) * dt vs = 1 / dt vk_ = fftfreq(size, dt) vk = vk_[np.where(vk_ >= 0)] for i in range(ndivide): d = data[i * step:i * step + size] if window is None: w = np.ones(size) corr = 1.0 else: w = window(size) corr = np.mean(w**2) psd = psd + 2 * (np.abs(fft(d * w)))**2 / size * dt / corr return vk, psd[:len(vk)] / ndivide
Calculate power spectrum density of data. Args: data (np.ndarray): Input data. dt (float): Time between each data. ndivide (int): Do averaging (split data into ndivide, get psd of each, and average them). window (callable): Window function applied to each segment (hanning by default). overlap_half (bool): Split data to half-overlapped regions. Returns: vk (np.ndarray): Frequency. psd (np.ndarray): PSD
juraj-google-style
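A small end-to-end sketch of the averaging behavior, assuming `psd` above is importable together with the fft, fftfreq, hanning, and logging helpers it references; the sampling setup is made up.

```python
# Minimal sketch, assuming psd above is importable with its module-level imports.
import numpy as np

dt = 1e-3                                # 1 kHz sampling
t = np.arange(4096) * dt                 # power-of-two length avoids the warnings
data = np.sin(2 * np.pi * 50 * t) + 0.1 * np.random.randn(t.size)

freq, power = psd(data, dt, ndivide=4)
print("dominant frequency ~ %.1f Hz" % freq[np.argmax(power)])  # close to 50 Hz
```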
def __init__(self, compression_method=None, parent=None, **kwargs): if not compression_method or not parent: raise ValueError('Missing compression method or parent value.') super(CompressedStreamPathSpec, self).__init__(parent=parent, **kwargs) self.compression_method = compression_method
Initializes a path specification. Note that the compressed stream path specification must have a parent. Args: compression_method (Optional[str]): method used to the compress the data. parent (Optional[PathSpec]): parent path specification. Raises: ValueError: when compression method or parent are not set.
juraj-google-style
def _from_string(cls, serialized): if ':' not in serialized: raise InvalidKeyError( "BlockTypeKeyV1 keys must contain ':' separating the block family from the block_type.", serialized) family, __, block_type = serialized.partition(':') return cls(family, block_type)
Return an instance of `cls` parsed from its `serialized` form. Args: cls: The :class:`OpaqueKey` subclass. serialized (unicode): A serialized :class:`OpaqueKey`, with namespace already removed. Raises: InvalidKeyError: Should be raised if `serialized` is not a valid serialized key understood by `cls`.
juraj-google-style
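An illustrative sketch of the parsing rule above; `BlockTypeKeyV1` stands for the class that owns `_from_string`, `InvalidKeyError` is assumed importable from the same opaque-key module, and the serialized strings are made up.

```python
# Sketch only: class, exception, and serialized values below are assumptions.
key = BlockTypeKeyV1._from_string("xblock.v1:problem")   # family "xblock.v1", type "problem"

try:
    BlockTypeKeyV1._from_string("problem")               # missing ':' separator
except InvalidKeyError:
    print("rejected: no family separator")
```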
def get_block_entity_data(self, pos_or_x, y=None, z=None): if (None not in (y, z)): pos_or_x = (pos_or_x, y, z) coord_tuple = tuple((int(floor(c)) for c in pos_or_x)) return self.block_entities.get(coord_tuple, None)
Access block entity data. Args: pos_or_x: Either an (x, y, z) coordinate tuple or the x coordinate. y, z: The y and z coordinates, required when pos_or_x is a single x value. Returns: BlockEntityData subclass instance or None if no block entity data is stored for that location.
codesearchnet
def feature_info(self): feature_list = self.prop('available-features-list', None) if (feature_list is None): raise ValueError(('Firmware features are not supported on CPC %s' % self.name)) return feature_list
Returns information about the features available for this CPC. Authorization requirements: * Object-access permission to this CPC. Returns: :term:`iterable`: An iterable where each item represents one feature that is available for this CPC. Each item is a dictionary with the following items: * `name` (:term:`unicode string`): Name of the feature. * `description` (:term:`unicode string`): Short description of the feature. * `state` (bool): Enablement state of the feature (`True` if enabled, `False` if disabled). Raises: :exc:`ValueError`: Features are not supported on the HMC. :exc:`~zhmcclient.HTTPError` :exc:`~zhmcclient.ParseError` :exc:`~zhmcclient.AuthError` :exc:`~zhmcclient.ConnectionError`
codesearchnet
def total_duration(utterances: List[Utterance]) -> int: return sum([duration(utter) for utter in utterances])
Get the duration of an entire list of utterances in milliseconds. Args: utterances: The list of utterances we are finding the duration of. Returns: The total duration of the utterances in milliseconds.
juraj-google-style
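A standalone arithmetic illustration of the summing behavior; it does not use the project's `Utterance` type or `duration` helper, which are assumed here to compute end minus start in milliseconds.

```python
# Standalone illustration only (not the project's Utterance/duration helpers):
# duration per utterance is end minus start in milliseconds, then summed.
utterances = [(0, 1500), (1500, 2250), (2250, 4000)]
total_ms = sum(end - start for start, end in utterances)
print(total_ms)  # 4000
```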
def get_input_info_dict(self, signature=None): return self._spec.get_input_info_dict(signature=signature, tags=self._tags)
Describes the inputs required by a signature. Args: signature: A string with the signature to get inputs information for. If None, the default signature is used if defined. Returns: The result of ModuleSpec.get_input_info_dict() for the given signature, and the graph variant selected by `tags` when this Module was initialized. Raises: KeyError: if there is no such signature.
juraj-google-style
def __call__(self, input_values, attention_mask=None, mask_time_indices=None, gumbel_temperature: int=1, deterministic: bool=True, output_attentions=None, output_hidden_states=None, freeze_feature_encoder=False, return_dict=None): return_dict = return_dict if return_dict is not None else self.config.use_return_dict outputs = self.wav2vec2(input_values, attention_mask=attention_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, mask_time_indices=mask_time_indices, deterministic=deterministic, freeze_feature_encoder=freeze_feature_encoder, return_dict=return_dict) transformer_features = self.project_hid(outputs[0]) extract_features = self.dropout_features(outputs[1], deterministic=deterministic) quantized_features, codevector_perplexity = self.quantizer(extract_features, mask_time_indices, deterministic=deterministic, temperature=gumbel_temperature) quantized_features = self.project_q(quantized_features) if not return_dict: return (transformer_features, quantized_features, codevector_perplexity) + outputs[2:] return FlaxWav2Vec2ForPreTrainingOutput(projected_states=transformer_features, projected_quantized_states=quantized_features, codevector_perplexity=codevector_perplexity, hidden_states=outputs.hidden_states, attentions=outputs.attentions)
Returns: `FlaxWav2Vec2ForPreTrainingOutput` when `return_dict=True`; otherwise a tuple of the projected states, projected quantized states, codevector perplexity, and any requested hidden states and attentions.
github-repos
def true_num_reactions(model, custom_spont_id=None): true_num = 0 for rxn in model.reactions: if (len(rxn.genes) == 0): continue if ((len(rxn.genes) == 1) and is_spontaneous(list(rxn.genes)[0], custom_id=custom_spont_id)): continue else: true_num += 1 return true_num
Return the number of reactions in the model that are associated with at least one non-spontaneous gene. Args: model (Model): custom_spont_id (str): Optional custom spontaneous ID if it does not match the regular expression ``[Ss](_|)0001`` Returns: int: Number of reactions associated with a non-spontaneous gene
codesearchnet
def _normalize_string(raw_str): return " ".join( token.strip() for token in tokenizer.encode(text_encoder.native_to_unicode(raw_str)))
Normalizes the string using tokenizer.encode. Args: raw_str: the input string Returns: A string which is ready to be tokenized using split()
juraj-google-style
def random_unitary_matrix(num_qubits): hermitian_matrix = random_hermitian_matrix(num_qubits) return tf.linalg.expm(-1j * hermitian_matrix)
Returns a random unitary matrix. Uses the property that e^{-iH} is unitary for any Hermitian matrix H. Args: num_qubits: Number of qubits on which the matrix acts.
github-repos
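A quick unitarity check sketch, assuming `random_unitary_matrix` and its Hermitian helper above are importable, that the matrix has dimension `2**num_qubits`, and that TensorFlow runs eagerly.

```python
# Sketch only: assumes the helpers above are importable and produce a 2**num_qubits matrix.
import tensorflow as tf

u = random_unitary_matrix(num_qubits=2)          # expected 4x4 complex matrix
identity = tf.eye(4, dtype=u.dtype)
deviation = tf.reduce_max(tf.abs(tf.matmul(u, u, adjoint_b=True) - identity))
print(float(deviation))                          # near zero if u is unitary
```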
def isUserCert(self, name): crtpath = self._getPathJoin('users', ('%s.crt' % name)) return os.path.isfile(crtpath)
Checks if a user certificate exists. Args: name (str): The name of the user keypair. Examples: Check if the user cert "myuser" exists: exists = cdir.isUserCert('myuser') Returns: bool: True if the certificate is present, False otherwise.
codesearchnet
def _module_to_paths(module): submodules = [] module_segments = module.split('.') for i in range(len(module_segments)): submodules.append('.'.join(module_segments[:i + 1])) paths = [] for submodule in submodules: if not submodule: paths.append('__init__.py') continue paths.append('%s/__init__.py' % submodule.replace('.', '/')) return paths
Get all API __init__.py file paths for the given module. Args: module: Module to get file paths for. Returns: List of paths for the given module. For example, module foo.bar requires 'foo/__init__.py' and 'foo/bar/__init__.py'.
github-repos
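A quick illustration of the expected output, assuming `_module_to_paths` above is importable from its API-generation script.

```python
# Sketch only: assumes _module_to_paths above is importable.
print(_module_to_paths("foo.bar"))
# ['foo/__init__.py', 'foo/bar/__init__.py']
print(_module_to_paths(""))
# ['__init__.py']
```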
def disease_term(self, disease_identifier): query = {} try: disease_identifier = int(disease_identifier) query['disease_nr'] = disease_identifier except ValueError: query['_id'] = disease_identifier return self.disease_term_collection.find_one(query)
Return a disease term. Checks if the identifier is a disease number or an id. Args: disease_identifier(str) Returns: disease_obj(dict)
juraj-google-style
def distance_similarity(a, b, p, T=CLOSE_DISTANCE_THRESHOLD): d = distance_to_line(a, b, p) r = (-1/float(T)) * abs(d) + 1 return r if r > 0 else 0
Computes the distance similarity between a line segment and a point Args: a ([float, float]): x and y coordinates. Line start b ([float, float]): x and y coordinates. Line end p ([float, float]): x and y coordinates. Point to compute the distance T (float): Distance threshold; at or beyond this distance the similarity is 0 Returns: float: between 0 and 1. Where 1 is very similar and 0 is completely different
juraj-google-style
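A usage sketch assuming `distance_similarity` above is importable together with `distance_to_line` (taken here to be the perpendicular point-to-line distance) and `CLOSE_DISTANCE_THRESHOLD`; the coordinates and threshold are made up.

```python
# Sketch only: assumes distance_to_line returns the perpendicular distance in the
# same units as T; the points and threshold below are illustrative.
a, b = [0.0, 0.0], [10.0, 0.0]
print(distance_similarity(a, b, [5.0, 0.0]))         # point on the line -> 1.0
print(distance_similarity(a, b, [5.0, 2.0], T=1.0))  # 2 units away with T=1 -> 0
```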
class QuantLinear(nn.Module): def __init__(self, in_features, out_features, bias=True, weight_bit=8, bias_bit=32, per_channel=False, quant_mode=False): super().__init__() self.in_features = in_features self.out_features = out_features self.weight = nn.Parameter(torch.zeros([out_features, in_features])) self.register_buffer('weight_integer', torch.zeros_like(self.weight)) self.register_buffer('fc_scaling_factor', torch.zeros(self.out_features)) if bias: self.bias = nn.Parameter(torch.zeros(out_features)) self.register_buffer('bias_integer', torch.zeros_like(self.bias)) self.weight_bit = weight_bit self.quant_mode = quant_mode self.per_channel = per_channel self.bias_bit = bias_bit self.quant_mode = quant_mode self.percentile_mode = False self.weight_function = SymmetricQuantFunction.apply def __repr__(self): s = super().__repr__() s = f'({s} weight_bit={self.weight_bit}, quant_mode={self.quant_mode})' return s def forward(self, x, prev_act_scaling_factor=None): if not self.quant_mode: return (nn.functional.linear(x, weight=self.weight, bias=self.bias), None) assert prev_act_scaling_factor is not None and prev_act_scaling_factor.shape == (1,), 'Input activation to the QuantLinear layer should be globally (non-channel-wise) quantized. Please add a QuantAct layer with `per_channel = True` before this QuantAct layer' w = self.weight w_transform = w.data.detach() if self.per_channel: w_min, _ = torch.min(w_transform, dim=1, out=None) w_max, _ = torch.max(w_transform, dim=1, out=None) else: w_min = w_transform.min().expand(1) w_max = w_transform.max().expand(1) self.fc_scaling_factor = symmetric_linear_quantization_params(self.weight_bit, w_min, w_max, self.per_channel) self.weight_integer = self.weight_function(self.weight, self.weight_bit, self.percentile_mode, self.fc_scaling_factor) bias_scaling_factor = self.fc_scaling_factor * prev_act_scaling_factor if self.bias is not None: self.bias_integer = self.weight_function(self.bias, self.bias_bit, False, bias_scaling_factor) prev_act_scaling_factor = prev_act_scaling_factor.view(1, -1) x_int = x / prev_act_scaling_factor return (nn.functional.linear(x_int, weight=self.weight_integer, bias=self.bias_integer) * bias_scaling_factor, bias_scaling_factor)
Quantized version of `torch.nn.Linear`. Adds quantization-specific arguments on top of `torch.nn.Linear`. Args: weight_bit (`int`, *optional*, defaults to `8`): Bitwidth for the quantized weight. bias_bit (`int`, *optional*, defaults to `32`): Bitwidth for the quantized bias. per_channel (`bool`, *optional*, defaults to `False`): Whether or not to use channel-wise quantization. quant_mode (`bool`, *optional*, defaults to `False`): Whether or not the layer is quantized.
github-repos
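A usage sketch of the float (non-quantized) path, assuming the class above is importable together with the quantization helpers it references (`SymmetricQuantFunction`, `symmetric_linear_quantization_params`); the shapes below are illustrative.

```python
# Sketch only: assumes QuantLinear and its quantization helpers are importable.
import torch

layer = QuantLinear(in_features=16, out_features=8, quant_mode=False)
x = torch.randn(2, 16)
out, scaling_factor = layer(x)        # float path: plain linear, no scaling factor
print(out.shape, scaling_factor)      # torch.Size([2, 8]) None
```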
def ParseIfaddrs(ifaddrs): precondition.AssertOptionalType(ifaddrs, ctypes.POINTER(Ifaddrs)) ifaces = {} for ifaddr in IterIfaddrs(ifaddrs): ifname = ctypes.string_at(ifaddr.ifa_name).decode("utf-8") iface = ifaces.setdefault(ifname, rdf_client_network.Interface()) iface.ifname = ifname if not ifaddr.ifa_addr: continue sockaddr = ctypes.cast(ifaddr.ifa_addr, ctypes.POINTER(Sockaddr)) iffamily = sockaddr.contents.sa_family if iffamily == AF_INET: sockaddrin = ctypes.cast(ifaddr.ifa_addr, ctypes.POINTER(Sockaddrin)) address = rdf_client_network.NetworkAddress() address.address_type = rdf_client_network.NetworkAddress.Family.INET address.packed_bytes = struct.pack("=L", sockaddrin.contents.sin_addr) iface.addresses.append(address) elif iffamily == AF_INET6: sockaddrin = ctypes.cast(ifaddr.ifa_addr, ctypes.POINTER(Sockaddrin6)) address = rdf_client_network.NetworkAddress() address.address_type = rdf_client_network.NetworkAddress.Family.INET6 address.packed_bytes = bytes(list(sockaddrin.contents.sin6_addr)) iface.addresses.append(address) elif iffamily == AF_LINK: sockaddrdl = ctypes.cast(ifaddr.ifa_addr, ctypes.POINTER(Sockaddrdl)) nlen = sockaddrdl.contents.sdl_nlen alen = sockaddrdl.contents.sdl_alen iface.mac_address = bytes(sockaddrdl.contents.sdl_data[nlen:nlen + alen]) else: raise ValueError("Unexpected socket address family: %s" % iffamily) return itervalues(ifaces)
Parses contents of the intrusive linked list of `ifaddrs`. Args: ifaddrs: A pointer to the first node of `ifaddrs` linked list. Can be NULL. Returns: An iterator over instances of `rdf_client_network.Interface`.
juraj-google-style
def _unverified_decode(token): token = _helpers.to_bytes(token) if (token.count(b'.') != 2): raise ValueError('Wrong number of segments in token: {0}'.format(token)) (encoded_header, encoded_payload, signature) = token.split(b'.') signed_section = ((encoded_header + b'.') + encoded_payload) signature = _helpers.padded_urlsafe_b64decode(signature) header = _decode_jwt_segment(encoded_header) payload = _decode_jwt_segment(encoded_payload) return (header, payload, signed_section, signature)
Decodes a token and does no verification. Args: token (Union[str, bytes]): The encoded JWT. Returns: Tuple[str, str, str, str]: header, payload, signed_section, and signature. Raises: ValueError: if there are an incorrect amount of segments in the token.
codesearchnet
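A standalone illustration of the `header.payload.signature` layout the function splits; this re-does the segmentation with the stdlib only and is not the library's own helper.

```python
# Standalone illustration only (stdlib), mirroring the three-segment split above.
import base64
import json

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "demo"}).encode())
token = header + b"." + payload + b"."            # empty signature segment

encoded_header, encoded_payload, signature = token.split(b".")
signed_section = encoded_header + b"." + encoded_payload
print(token.count(b"."))                          # 2, as the decoder requires
print(signed_section == header + b"." + payload)  # True
```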
def orient_undirected_graph(self, data, graph, **kwargs): self.arguments['{CITEST}'] = self.dir_CI_test[self.CI_test] self.arguments['{METHOD_INDEP}'] = self.dir_method_indep[self.method_indep] self.arguments['{DIRECTED}'] = 'TRUE' self.arguments['{ALPHA}'] = str(self.alpha) self.arguments['{NJOBS}'] = str(self.nb_jobs) self.arguments['{VERBOSE}'] = str(self.verbose).upper() fe = DataFrame(nx.adj_matrix(graph, weight=None).todense()) fg = DataFrame((1 - fe.values)) results = self._run_pc(data, fixedEdges=fe, fixedGaps=fg, verbose=self.verbose) return nx.relabel_nodes(nx.DiGraph(results), {idx: i for (idx, i) in enumerate(data.columns)})
Run PC on an undirected graph. Args: data (pandas.DataFrame): DataFrame containing the data graph (networkx.Graph): Skeleton of the graph to orient Returns: networkx.DiGraph: Solution given by PC on the given skeleton.
codesearchnet