file_name (stringlengths 3-137) | prefix (stringlengths 0-918k) | suffix (stringlengths 0-962k) | middle (stringlengths 0-812k)
---|---|---|---|
launch.py
|
r"""
`torch.distributed.launch` is a module that spawns up multiple distributed
training processes on each of the training nodes.
The utility can be used for single-node distributed training, in which one or
more processes per node will be spawned. The utility can be used for either
CPU training or GPU training. If the utility is used for GPU training,
each distributed process will be operating on a single GPU. This can achieve
well-improved single-node training performance. It can also be used in
multi-node distributed training, by spawning up multiple processes on each node
for well-improved multi-node distributed training performance as well.
This will especially be beneficial for systems with multiple Infiniband
interfaces that have direct-GPU support, since all of them can be utilized for
aggregated communication bandwidth.
In both cases of single-node distributed training or multi-node distributed
training, this utility will launch the given number of processes per node
(``--nproc_per_node``). If used for GPU training, this number needs to be less
or equal to the number of GPUs on the current system (``nproc_per_node``),
and each process will be operating on a single GPU from *GPU 0 to
GPU (nproc_per_node - 1)*.
**How to use this module:**
1. Single-Node multi-process distributed training
::
>>> python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE
YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3 and all other
arguments of your training script)
2. Multi-Node multi-process distributed training: (e.g. two nodes)
Node 1: *(IP: 192.168.1.1, and has a free port: 1234)*
::
>>> python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE
--nnodes=2 --node_rank=0 --master_addr="192.168.1.1"
--master_port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3
and all other arguments of your training script)
Node 2:
::
>>> python -m torch.distributed.launch --nproc_per_node=NUM_GPUS_YOU_HAVE
--nnodes=2 --node_rank=1 --master_addr="192.168.1.1"
--master_port=1234 YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3
and all other arguments of your training script)
3. To look up what optional arguments this module offers:
::
>>> python -m torch.distributed.launch --help
**Important Notices:**
1. This utility and multi-process distributed (single-node or
multi-node) GPU training currently only achieves the best performance using
the NCCL distributed backend. Thus NCCL backend is the recommended backend to
use for GPU training.
2. In your training program, you must parse the command-line argument:
``--local_rank=LOCAL_PROCESS_RANK``, which will be provided by this module.
If your training program uses GPUs, you should ensure that your code only
runs on the GPU device of LOCAL_PROCESS_RANK. This can be done by:
Parsing the local_rank argument
::
>>> import argparse
>>> parser = argparse.ArgumentParser()
>>> parser.add_argument("--local_rank", type=int)
>>> args = parser.parse_args()
Set your device to local rank using either
::
>>> torch.cuda.set_device(args.local_rank) # before your code runs
or
::
>>> with torch.cuda.device(args.local_rank):
>>> # your code to run
3. In your training program, you are supposed to call the following function
at the beginning to start the distributed backend. You need to make sure that
the init_method uses ``env://``, which is the only supported ``init_method``
by this module.
::
torch.distributed.init_process_group(backend='YOUR BACKEND',
init_method='env://')
4. In your training program, you can either use regular distributed functions
or use :func:`torch.nn.parallel.DistributedDataParallel` module. If your
training program uses GPUs for training and you would like to use
:func:`torch.nn.parallel.DistributedDataParallel` module,
here is how to configure it.
::
model = torch.nn.parallel.DistributedDataParallel(model,
device_ids=[args.local_rank],
output_device=args.local_rank)
Please ensure that the ``device_ids`` argument is set to be the only GPU device id
that your code will be operating on. This is generally the local rank of the
process. In other words, ``device_ids`` needs to be ``[args.local_rank]``,
and ``output_device`` needs to be ``args.local_rank`` in order to use this
utility.
5. Another way to pass ``local_rank`` to the subprocesses is via the environment
variable ``LOCAL_RANK``. This behavior is enabled when you launch the script
with the ``--use_env`` flag. You must adjust the subprocess example above to
replace ``args.local_rank`` with ``os.environ['LOCAL_RANK']``; the launcher
will not pass ``--local_rank`` when you specify this flag.
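For example, a minimal sketch of consuming it (``LOCAL_RANK`` is the
environment variable this launcher sets for each subprocess):
::
>>> import os
>>> local_rank = int(os.environ["LOCAL_RANK"])
>>> torch.cuda.set_device(local_rank)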
.. warning::
``local_rank`` is NOT globally unique: it is only unique per process
on a machine. Thus, don't use it to decide if you should, e.g.,
write to a networked filesystem. See
https://github.com/pytorch/pytorch/issues/12042 for an example of
how things can go wrong if you don't do this correctly.
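A safer pattern is to gate such work on the *global* rank instead, for
example (a minimal sketch using the ``RANK`` environment variable that this
launcher sets; ``save_checkpoint`` is a hypothetical helper):
::
>>> import os
>>> if int(os.environ["RANK"]) == 0:
>>>     save_checkpoint()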
"""
import sys
import subprocess
import os
from argparse import ArgumentParser, REMAINDER
def parse_args():
"""
Helper function parsing the command line options
@retval ArgumentParser
"""
parser = ArgumentParser(description="PyTorch distributed training launch "
"helper utilty that will spawn up "
"multiple distributed processes")
# Optional arguments for the launch helper
parser.add_argument("--nnodes", type=int, default=1,
help="The number of nodes to use for distributed "
"training")
parser.add_argument("--node_rank", type=int, default=0,
help="The rank of the node for multi-node distributed "
"training")
parser.add_argument("--nproc_per_node", type=int, default=1,
help="The number of processes to launch on each node, "
"for GPU training, this is recommended to be set "
"to the number of GPUs in your system so that "
"each process can be bound to a single GPU.")
parser.add_argument("--master_addr", default="127.0.0.1", type=str,
help="Master node (rank 0)'s address, should be either "
"the IP address or the hostname of node 0, for "
"single node multi-proc training, the "
"--master_addr can simply be 127.0.0.1")
parser.add_argument("--master_port", default=29500, type=int,
help="Master node (rank 0)'s free port that needs to "
"be used for communciation during distributed "
"training")
parser.add_argument("--use_env", default=False, action="store_true",
help="Use environment variable to pass "
"'local rank'. For legacy reasons, the default value is False. "
"If set to True, the script will not pass "
"--local_rank as argument, and will instead set LOCAL_RANK.")
# positional
parser.add_argument("training_script", type=str,
help="The full path to the single GPU training "
"program/script to be launched in parallel, "
"followed by all the arguments for the "
"training script")
# rest from the training program
parser.add_argument('training_script_args', nargs=REMAINDER)
return parser.parse_args()
def main():
args = parse_args()
# world size in terms of number of processes
dist_world_size = args.nproc_per_node * args.nnodes
# set PyTorch distributed related environmental variables
current_env = os.environ.copy()
current_env["MASTER_ADDR"] = args.master_addr
current_env["MASTER_PORT"] = str(args.master_port)
current_env["WORLD_SIZE"] = str(dist_world_size)
processes = []
for local_rank in range(0, args.nproc_per_node):
# each process's rank
dist_rank = args.nproc_per_node * args.node_rank + local_rank
current_env["RANK"] = str(dist_rank)
current_env["LOCAL_RANK"] = str(local_rank)
# spawn the processes
if args.use_env:
|
else:
cmd = [sys.executable,
"-u",
args.training_script,
"--local_rank={}".format(local_rank)] + args.training_script_args
process = subprocess.Popen(cmd, env=current_env)
processes.append(process)
for process in processes:
process.wait()
if process.returncode != 0:
raise subprocess.CalledProcessError(returncode=process.returncode,
cmd=cmd)
if __name__ == "__main__":
main()
|
cmd = [sys.executable, "-u",
args.training_script] + args.training_script_args
|
feature_factory.py
|
from datamart.joiners.join_feature.feature_classes import *
from functools import reduce
import numpy as np
class FeatureFactory:
subclasses = {
(DistributeType.CATEGORICAL, DataType.NUMBER): CategoricalNumberFeature,
(DistributeType.CATEGORICAL, DataType.STRING): CategoricalStringFeature,
(DistributeType.TOKEN_CATEGORICAL, DataType.STRING): CategoricalTokenFeature,
(DistributeType.NON_CATEGORICAL, DataType.NUMBER): NonCategoricalNumberFeature,
(DistributeType.NON_CATEGORICAL, DataType.STRING): NonCategoricalStringFeature
}
@classmethod
def create(cls, df: pd.DataFrame, indexes, df_metadata):
"""
TODO: dynamically generate subclass of FeatureBase, by profiled info, datatype etc.
"""
# set default values:
metadata = cls._get_feature_metadata(df_metadata, indexes) or {}
data_type = None
distribute_type = DistributeType.NON_CATEGORICAL
if len(indexes) > 1:
distribute_type = DistributeType.TOKEN_CATEGORICAL
if cls._try_pd_to_datetime(df, indexes):
data_type = DataType.DATETIME
else:
# single column, not datetime
idx = indexes[0]
profiles = metadata.get('dsbox_profiled', {})
# heuristic: treat the column as categorical when rows outnumber unique values by ~2x or more
if len(df.iloc[:, idx]) // len(df.iloc[:, idx].unique()) >= 1.5:
distribute_type = DistributeType.CATEGORICAL
elif profiles:
most_common_tokens = profiles.get('most_common_tokens')
if most_common_tokens and cls._get_greater_than(most_common_tokens) >= len(most_common_tokens)//2:
distribute_type = DistributeType.TOKEN_CATEGORICAL
dtype = df.iloc[:, idx].dtype
if dtype == np.int64 or dtype == np.float64:
data_type = DataType.NUMBER
else:
semantic_types = metadata.get('semantic_type')
profiles = metadata.get('dsbox_profiled', {})
data_type = cls._get_data_type_by_semantic_type(semantic_types) \
or cls._get_data_type_by_profile(profiles)
if not data_type and cls._try_pd_to_datetime(df, indexes):
data_type = DataType.DATETIME
return cls.get_instance(df, indexes, metadata, data_type or DataType.STRING, distribute_type)
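# A minimal usage sketch (the DataFrame, column indexes and profiled metadata
# here are hypothetical; the result is one of the FeatureBase subclasses
# registered in `subclasses` above, or DatetimeFeature):
#   feature = FeatureFactory.create(df, indexes=[0], df_metadata=profiled_metadata)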
@classmethod
def get_instance(cls, df, indices, metadata, data_type, distribute_type):
constructor = cls.get_constructor(data_type, distribute_type)
return constructor(df, indices, metadata, distribute_type, data_type)
@classmethod
def get_constructor(cls, data_type, distribute_type=None):
if data_type == DataType.DATETIME:
return DatetimeFeature
return cls.subclasses.get((distribute_type, data_type))
@staticmethod
def _get_feature_metadata(metadata, indices):
if metadata.get('variables') and indices and indices[0] < len(metadata.get('variables')):
return metadata['variables'][indices[0]]
|
@staticmethod
def _get_greater_than(list_of_dict, key='count', threshold=2, inclusive=True):
if inclusive:
return reduce(lambda x, y: x + 1 if float(y[key]) >= threshold else x, list_of_dict, 0)
return reduce(lambda x, y: x + 1 if float(y[key]) > threshold else x, list_of_dict, 0)
@staticmethod
def _get_data_type_by_semantic_type(semantic_types: list):
# TODO: it would be better if we had a closed set of used semantic_type values
#       and mapped them to either STRING, NUMBER or DATETIME
if semantic_types and len(semantic_types):
unique_types = set(t.rsplit('/', 1)[-1].lower() for t in semantic_types)
if 'time' in unique_types or 'date' in unique_types or 'datetime' in unique_types:
return DataType.DATETIME
if 'float' in unique_types or 'int' in unique_types or 'number' in unique_types:
return DataType.NUMBER
@staticmethod
def _get_data_type_by_profile(profiles):
numeric_ratio = profiles.get('ratio_of_numeric_values')
if numeric_ratio and numeric_ratio >= 0.99:
return DataType.NUMBER
@staticmethod
def _try_pd_to_datetime(df, indices):
try:
if len(indices) == 1:
_ = pd.to_datetime(df.iloc[[0, len(df) - 1], indices[0]])
else:
_ = pd.to_datetime(df.iloc[[0, len(df)-1], indices])
return True
except ValueError:
return False
|
@staticmethod
def _get_avg(list_of_dict, key='count'):
if len(list_of_dict):
return sum([_.get(key) for _ in list_of_dict])/len(list_of_dict)
|
from_str.rs
|
// This does practically the same thing that TryFrom<&str> does.
// Additionally, upon implementing FromStr, you can use the `parse` method
// on strings to generate an object of the implementor type.
// You can read more about it at https://doc.rust-lang.org/std/str/trait.FromStr.html
use std::str::FromStr;
#[derive(Debug)]
struct Person {
name: String,
age: usize,
}
// I AM DONE
// Steps:
// 1. If the length of the provided string is 0, then return an error
// 2. Split the given string on the commas present in it
// 3. Extract the first element from the split operation and use it as the name
// 4. If the name is empty, then return an error
// 5. Extract the other element from the split operation and parse it into a `usize` as the age
// with something like `"4".parse::<usize>()`.
// If while parsing the age, something goes wrong, then return an error
// Otherwise, then return a Result of a Person object
impl FromStr for Person {
type Err = String;
fn from_str(s: &str) -> Result<Person, Self::Err> {
match s.split(",").collect::<Vec<_>>()[..] {
[nameStr, ageStr] if nameStr.len() > 0 => {
match (nameStr.to_string(), ageStr.parse::<usize>()) {
(name, Ok(age)) => Ok(Person { name, age }),
_ => Err("".to_string()),
}
}
_ => Err("".to_string()),
}
}
}
fn main() {
let p = "Mark,20".parse::<Person>().unwrap();
println!("{:?}", p);
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn empty_input()
|
#[test]
fn good_input() {
let p = "John,32".parse::<Person>();
assert!(p.is_ok());
let p = p.unwrap();
assert_eq!(p.name, "John");
assert_eq!(p.age, 32);
}
#[test]
#[should_panic]
fn missing_age() {
"John,".parse::<Person>().unwrap();
}
#[test]
#[should_panic]
fn invalid_age() {
"John,twenty".parse::<Person>().unwrap();
}
#[test]
#[should_panic]
fn missing_comma_and_age() {
"John".parse::<Person>().unwrap();
}
#[test]
#[should_panic]
fn missing_name() {
",1".parse::<Person>().unwrap();
}
#[test]
#[should_panic]
fn missing_name_and_age() {
",".parse::<Person>().unwrap();
}
#[test]
#[should_panic]
fn missing_name_and_invalid_age() {
",one".parse::<Person>().unwrap();
}
}
|
{
assert!("".parse::<Person>().is_err());
}
|
emitter.test.py
|
__author__ = 'ziyasal'
from unittest import TestCase
import subprocess
import redis
from emitter import Emitter
class TestEmitter(TestCase):
@classmethod
def setUpClass(cls):
cls.redis_server = subprocess.Popen("redis-server", stdout=subprocess.PIPE, shell=True)
def setUp(self):
self.opts = dict(host='localhost', port=6379)
def test_In(self):
self.fail()
def test_To(self):
self.fail()
def test_Of(self):
self.fail()
def test_Emit(self):
io = Emitter(self.opts)
redis_cli = subprocess.Popen("redis-cli monitor", stdout=subprocess.PIPE,stderr=subprocess.PIPE, shell=True)
output = ""
while True:
chunk = redis_cli.stdout.read(1)
if chunk == '' and redis_cli.poll() is not None:
break
if chunk == '\n':
io.Emit('broadcast event', 'Hello from socket.io-emitter')
if chunk != '' and 'PUBLISH' not in output:
output += chunk
else:
redis_cli.kill()
break
self.assertTrue('PUBLISH' in output)
def test_Construct_Emitter_With_Client(self):
client = redis.StrictRedis(host=self.opts['host'], port=self.opts['port'])
io = Emitter({'client': client})
self.assertIsNotNone(io._client)
def test_Construct_Emitter_With_Options(self):
io = Emitter(self.opts)
self.assertIsNotNone(io._client)
def test_Construct_Emitter_With_Null_Client_And_Null_Options_Raises_Exception(self):
self.assertRaises(Exception, Emitter, {'client': None})
@classmethod
def
|
(cls):
if cls.redis_server is not None:
cls.redis_server.kill()
|
tearDownClass
|
lib.rs
|
mod utils;
use wasm_bindgen::prelude::*;
// When the `wee_alloc` feature is enabled, use `wee_alloc` as the global
// allocator.
#[cfg(feature = "wee_alloc")]
#[global_allocator]
|
#[wasm_bindgen]
extern {
fn alert(s: &str);
}
#[wasm_bindgen]
pub fn greet() {
alert("Hello, hello-rust-wasm!");
alert("Hello, hello-rust-wasm again!!");
}
|
static ALLOC: wee_alloc::WeeAlloc = wee_alloc::WeeAlloc::INIT;
|
FSANET_model.py
|
import sys
import logging
import numpy as np
import tensorflow as tf
from keras.models import Model
from keras.applications.resnet50 import ResNet50
from keras.layers import Input
from keras.layers import Dense
from keras.layers import Conv2D
from keras.layers import Layer
from keras.layers import Reshape
from keras.layers import Multiply
from keras.layers import Flatten
from keras.layers import Activation
from keras.layers import Concatenate
from keras.layers import MaxPooling2D
from keras.layers import SeparableConv2D
from keras.layers import AveragePooling2D
from keras.layers import BatchNormalization
from keras import backend as K
from .capsulelayers import CapsuleLayer
from .capsulelayers import MatMulLayer
|
sys.setrecursionlimit(2 ** 20)
np.random.seed(2 ** 10)
# Custom layers
# Note - Usage of Lambda layers prevents the conversion
# and the optimizations by the underlying math engine (tensorflow in this case)
@register_keras_custom_object
class SSRLayer(Layer):
def __init__(self, s1, s2, s3, lambda_d, **kwargs):
super(SSRLayer, self).__init__(**kwargs)
self.s1 = s1
self.s2 = s2
self.s3 = s3
self.lambda_d = lambda_d
self.trainable = False
def call(self, inputs):
#inputs shape: (?,3,39)
x = inputs
a = x[:, :, 0] * 0
b = x[:, :, 0] * 0
c = x[:, :, 0] * 0
s1 = 3
s2 = 9
s3 = 27
di = s1 // 2
dj = s2 // 2
dk = s3 // 2
V = 1
#s1 = 3
# i = 0, 1, 2 ~> i-di = -1, 0, 1
for i in range(0, s1):
a = a + (i - di) * x[:, :, i]
a = a / (s1//2)
#s2 = 9
# j - dj ~> [-4, 4]
for j in range(0, s2):
b = b + (j - dj) * x[:, :, j+3]
b = b / (s2//2)
#s3 = 27
for k in range(0, s3):
c = c + (k - dk) * x[:, :, k+12]
c = c / (s3//2)
pred = (a+b+c) / 3
return pred
def compute_output_shape(self, input_shape):
return (input_shape[0], 3)
def get_config(self):
config = {
's1': self.s1,
's2': self.s2,
's3': self.s3,
'lambda_d': self.lambda_d
}
base_config = super(SSRLayer, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
@register_keras_custom_object
class FeatSliceLayer(Layer):
def __init__(self, start_index, end_index, **kwargs):
super(FeatSliceLayer, self).__init__(**kwargs)
self.start_index = start_index
self.end_index = end_index
self.trainable = False
def call(self, inputs):
return inputs[:,self.start_index:self.end_index]
def compute_output_shape(self, input_shape):
return (input_shape[0], self.end_index - self.start_index)
def get_config(self):
config = {
'start_index': self.start_index,
'end_index': self.end_index
}
base_config = super(FeatSliceLayer, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
@register_keras_custom_object
class MomentsLayer(Layer):
def __init__(self, **kwargs):
super(MomentsLayer,self).__init__(**kwargs)
self.trainable = False
def call(self, inputs):
_, var = tf.nn.moments(inputs,axes=-1)
#var : (batch_size, feature_map_width, feature_map_height)
return var
def compute_output_shape(self, input_shape):
return (input_shape[0], input_shape[-1])
@register_keras_custom_object
class MatrixMultiplyLayer(Layer):
def __init__(self, **kwargs):
super(MatrixMultiplyLayer,self).__init__(**kwargs)
self.trainable = False
def call(self, inputs):
x1, x2 = inputs
# TODO: add some asserts on the inputs
# it is expected the shape of inputs are
# arranged to be able to perform the matrix multiplication
return tf.matmul(x1,x2)
def compute_output_shape(self, input_shapes):
return (input_shapes[0][0],input_shapes[0][1], input_shapes[1][-1])
@register_keras_custom_object
class MatrixNormLayer(Layer):
def __init__(self, tile_count, **kwargs):
super(MatrixNormLayer,self).__init__(**kwargs)
self.trainable = False
self.tile_count = tile_count
def call(self, input):
sum = K.sum(input,axis=-1,keepdims=True)
tiled = K.tile(sum,(1,1,self.tile_count))
return tiled
def compute_output_shape(self, input_shape):
return (input_shape[0], input_shape[1], self.tile_count)
def get_config(self):
config = {
'tile_count': self.tile_count
}
base_config = super(MatrixNormLayer, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
@register_keras_custom_object
class PrimCapsLayer(Layer):
def __init__(self, **kwargs):
super(PrimCapsLayer,self).__init__(**kwargs)
self.trainable = False
def call(self, inputs):
x1, x2, norm = inputs
return tf.matmul(x1,x2) / norm
def compute_output_shape(self, input_shapes):
return input_shapes[-1]
@register_keras_custom_object
class AggregatedFeatureExtractionLayer(Layer):
def __init__(self, num_capsule, **kwargs):
super(AggregatedFeatureExtractionLayer,self).__init__(**kwargs)
self.trainable = False
self.num_capsule = num_capsule
def call(self, input):
s1_a = 0
s1_b = self.num_capsule//3
# input[:, 0: 1, :]
feat_s1_div = input[:,s1_a:s1_b,:]
s2_a = self.num_capsule//3
s2_b = 2*self.num_capsule//3
# input[:, 1: 2, :]
feat_s2_div = input[:,s2_a:s2_b,:]
s3_a = 2*self.num_capsule//3
s3_b = self.num_capsule
# input[:, 2: 3, :]
feat_s3_div = input[:,s3_a:s3_b,:]
return [feat_s1_div, feat_s2_div, feat_s3_div]
def compute_output_shape(self, input_shape):
last_dim = input_shape[-1]
partition = self.num_capsule//3
return [(input_shape[0], partition, last_dim), (input_shape[0], partition, last_dim), (input_shape[0], partition, last_dim)]
def get_config(self):
config = {
'num_capsule': self.num_capsule
}
base_config = super(AggregatedFeatureExtractionLayer, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
class BaseFSANet(object):
def __init__(self, image_size,num_classes,stage_num,lambda_d, S_set):
'''
Args:
image_size : 64;
num_classes : 3; roll, pitch, yaw
stage_num : [3, 3, 3]; # of bins in each stage
lambda_d : 1.0; Control Delta
S_set : []; Parameters of Capsules
'''
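# A minimal instantiation sketch (hypothetical values, taken from the
# docstring above; the actual training pipeline lives outside this file):
#   net = FSA_net_Capsule(image_size=64, num_classes=3, stage_num=[3, 3, 3],
#                         lambda_d=1.0, S_set=[3, 16, 2, 7 * 3, 5])
#   model = net()  # __call__ builds and returns the Keras Model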
self._channel_axis = 3 if K.image_data_format() == 'channels_last' else 1
if self._channel_axis == 1:
logging.debug("image_dim_ordering = 'th'")
self._input_shape = (3, image_size, image_size)
else:
logging.debug("image_dim_ordering = 'tf'")
self._input_shape = (image_size, image_size, 3)
self.num_classes = num_classes
self.stage_num = stage_num
self.lambda_d = lambda_d
'''
num_capsule = 3
dim_capsule = 16
routings = 2
num_primcaps = 7*3 or 8*8*3
m_dim = 5
'''
self.num_capsule = S_set[0]
self.dim_capsule = S_set[1]
self.routings = S_set[2]
self.num_primcaps = S_set[3]
self.m_dim = S_set[4]
# ? F_shape = 16
self.F_shape = int(self.num_capsule / 3) * self.dim_capsule
# ? map_xy_size = 8
self.map_xy_size = int(8 * image_size / 64)
# is_fc_model
self.is_fc_model = False
self.is_noS_model = False
self.is_varS_model = False
def ssr_build_resnet(self, input_size):
resnet = ResNet50(include_top=False, weights=None, input_tensor=None, input_shape=input_size, pooling=None, classes=1000)
model = Model(inputs=resnet.input, outputs=[AveragePooling2D((2,2))(Conv2D(64,(1,1))(resnet.get_layer('activation_10').output)),
Conv2D(64,(1,1))(resnet.get_layer('activation_16').output),
Conv2D(64,(1,1))(resnet.get_layer('activation_22').output)], name='ssr_backbone')
return model
def _convBlock(self, x, num_filters, activation, kernel_size=(3,3)):
x = SeparableConv2D(num_filters,kernel_size,padding='same')(x)
x = BatchNormalization(axis=-1)(x)
x = Activation(activation)(x)
return x
def ssr_F_model_build(self, feat_dim, name_F, vec_order):
input_s1_pre = Input((feat_dim,))
input_s2_pre = Input((feat_dim,))
input_s3_pre = Input((feat_dim,))
def _process_input(stage_index, stage_num, num_classes, input_s_pre):
# input_s_pre : (None, 16)
bins_num = stage_num ** stage_index
units_num = 3 * bins_num
assert units_num in [9, 27, 81]
prob_all_bins = Reshape((3, bins_num))(Dense(units=units_num,
activation='sigmoid',
name='all_bins_{}'.format(stage_index))(input_s_pre))
# delta_s : (None, 3)
# local_s : (None, 3)
# pred_s : (None, 3, 3)
# return delta_s, local_s, pred_s
return prob_all_bins
###########################################################################################
# delta_s1 : (None, 3)
# local_s1 : (None, 3)
# pred_s1 : (None, 3, 3)
# prob_s1: [None, 3, 3]
# prob_s2: [None, 3, 9]
# prob_s3: [None, 3, 27]
prob_s1 = _process_input(1, self.stage_num[0], self.num_classes, input_s1_pre)
prob_s2 = _process_input(2, self.stage_num[1], self.num_classes, input_s2_pre)
prob_s3 = _process_input(3, self.stage_num[2], self.num_classes, input_s3_pre)
# prob_merge: (None, 3, 39)
prob_merge = Concatenate(axis=-1)([prob_s1, prob_s2, prob_s3])
# return Model(inputs=[input_s1_pre,input_s2_pre,input_s3_pre],outputs=[pred_s1,pred_s2,pred_s3,delta_s1,delta_s2,delta_s3,local_s1,local_s2,local_s3], name=name_F + f'_{vec_order}')
return Model(inputs=[input_s1_pre, input_s2_pre, input_s3_pre],
outputs=prob_merge,
name=name_F + '_{}'.format(vec_order))
def ssr_FC_model_build(self, feat_dim, name_F):
input_s1_pre = Input((feat_dim,))
input_s2_pre = Input((feat_dim,))
input_s3_pre = Input((feat_dim,))
def _process_input(stage_index, stage_num, num_classes, input_s_pre):
feat_delta_s = Dense(2 * num_classes, activation='tanh')(input_s_pre)
delta_s = Dense(num_classes, activation='tanh', name='delta_s{}'.format(stage_index))(feat_delta_s)
feat_local_s = Dense(2 * num_classes, activation='tanh')(input_s_pre)
local_s = Dense(units=num_classes, activation='tanh', name='local_delta_stage{}'.format(stage_index))(feat_local_s)
feat_pred_s = Dense(stage_num * num_classes,activation='relu')(input_s_pre)
pred_s = Reshape((num_classes,stage_num))(feat_pred_s)
return delta_s, local_s, pred_s
delta_s1, local_s1, pred_s1 = _process_input(1, self.stage_num[0], self.num_classes, input_s1_pre)
delta_s2, local_s2, pred_s2 = _process_input(2, self.stage_num[1], self.num_classes, input_s2_pre)
delta_s3, local_s3, pred_s3 = _process_input(3, self.stage_num[2], self.num_classes, input_s3_pre)
return Model(inputs=[input_s1_pre,input_s2_pre,input_s3_pre],outputs=[pred_s1,pred_s2,pred_s3,delta_s1,delta_s2,delta_s3,local_s1,local_s2,local_s3], name=name_F)
def ssr_feat_S_model_build(self, m_dim):
input_preS = Input((self.map_xy_size,self.map_xy_size,64))
# is_varS_model: compute the variance
if self.is_varS_model:
feat_preS = MomentsLayer()(input_preS)
else:
feat_preS = Conv2D(1,(1,1),padding='same',activation='sigmoid')(input_preS)
feat_preS = Reshape((-1,))(feat_preS)
SR_matrix = Dense(m_dim*(self.map_xy_size*self.map_xy_size*3),activation='sigmoid')(feat_preS)
SR_matrix = Reshape((m_dim,(self.map_xy_size*self.map_xy_size*3)))(SR_matrix)
return Model(inputs=input_preS,outputs=[SR_matrix,feat_preS],name='feat_S_model')
def ssr_S_model_build(self, num_primcaps, m_dim, vec_order):
# Input: (8, 8, 64)
# s1: means stage 1?
input_s1_preS = Input((self.map_xy_size,self.map_xy_size,64))
input_s2_preS = Input((self.map_xy_size,self.map_xy_size,64))
input_s3_preS = Input((self.map_xy_size,self.map_xy_size,64))
# There are two choices here:
# use is_varS_model to decide whether to compute the variance
feat_S_model = self.ssr_feat_S_model_build(m_dim)
SR_matrix_s1,feat_s1_preS = feat_S_model(input_s1_preS)
SR_matrix_s2,feat_s2_preS = feat_S_model(input_s2_preS)
SR_matrix_s3,feat_s3_preS = feat_S_model(input_s3_preS)
# by default, axis=-1
# keep the size of the feature map the same, concatenate the channels
feat_pre_concat = Concatenate()([feat_s1_preS,feat_s2_preS,feat_s3_preS])
# int(num_primcaps / 3) == 7 or 8*8
# m_dim == 5
SL_matrix = Dense(int(num_primcaps / 3) * m_dim,activation='sigmoid')(feat_pre_concat)
SL_matrix = Reshape((int(num_primcaps/3),m_dim))(SL_matrix)
S_matrix_s1 = MatrixMultiplyLayer(name="S_matrix_s1")([SL_matrix,SR_matrix_s1])
S_matrix_s2 = MatrixMultiplyLayer(name='S_matrix_s2')([SL_matrix,SR_matrix_s2])
S_matrix_s3 = MatrixMultiplyLayer(name='S_matrix_s3')([SL_matrix,SR_matrix_s3])
# Very important!!! Without this training won't converge.
# norm_S_s1 = Lambda(lambda x: K.tile(K.sum(x,axis=-1,keepdims=True),(1,1,64)))(S_matrix_s1)
norm_S_s1 = MatrixNormLayer(tile_count=64)(S_matrix_s1)
norm_S_s2 = MatrixNormLayer(tile_count=64)(S_matrix_s2)
norm_S_s3 = MatrixNormLayer(tile_count=64)(S_matrix_s3)
# map_xy_size == 8
# feat_sk_pre : (8, 8, 64)
feat_s1_pre = Reshape((self.map_xy_size*self.map_xy_size,64))(input_s1_preS)
feat_s2_pre = Reshape((self.map_xy_size*self.map_xy_size,64))(input_s2_preS)
feat_s3_pre = Reshape((self.map_xy_size*self.map_xy_size,64))(input_s3_preS)
# feat_pre_concat : (8, 24, 64)
feat_pre_concat = Concatenate(axis=1)([feat_s1_pre, feat_s2_pre, feat_s3_pre])
# Warning: don't use Keras's 'K.dot'. It behaves unexpectedly with high-dimensional tensors.
# https://github.com/keras-team/keras/issues/9779
# Make sure 'tf.matmul' is used
# primcaps = Lambda(lambda x: tf.matmul(x[0],x[1])/x[2])([S_matrix,feat_pre_concat, norm_S])
primcaps_s1 = PrimCapsLayer()([S_matrix_s1,feat_pre_concat, norm_S_s1])
primcaps_s2 = PrimCapsLayer()([S_matrix_s2,feat_pre_concat, norm_S_s2])
primcaps_s3 = PrimCapsLayer()([S_matrix_s3,feat_pre_concat, norm_S_s3])
primcaps = Concatenate(axis=1)([primcaps_s1,primcaps_s2,primcaps_s3])
return Model(inputs=[input_s1_preS, input_s2_preS, input_s3_preS],outputs=primcaps, name='ssr_S_model_{}'.format(vec_order))
def ssr_noS_model_build(self, vec_order, **kwargs):
input_s1_preS = Input((self.map_xy_size,self.map_xy_size,64))
input_s2_preS = Input((self.map_xy_size,self.map_xy_size,64))
input_s3_preS = Input((self.map_xy_size,self.map_xy_size,64))
primcaps_s1 = Reshape((self.map_xy_size*self.map_xy_size,64))(input_s1_preS)
primcaps_s2 = Reshape((self.map_xy_size*self.map_xy_size,64))(input_s2_preS)
primcaps_s3 = Reshape((self.map_xy_size*self.map_xy_size,64))(input_s3_preS)
primcaps = Concatenate(axis=1)([primcaps_s1,primcaps_s2,primcaps_s3])
print(vec_order)
return Model(inputs=[input_s1_preS, input_s2_preS, input_s3_preS],outputs=primcaps, name='ssr_S_model_{}'.format(vec_order))
def __call__(self):
logging.debug("Creating model...")
img_inputs = Input(self._input_shape)
# Build various models
# Two-stream structure for extracting the features.
ssr_G_model = self.ssr_build_resnet(self._input_shape)
if self.is_noS_model:
ssr_S_model_0 = self.ssr_noS_model_build(vec_order=0)
ssr_S_model_1 = self.ssr_noS_model_build(vec_order=1)
ssr_S_model_2 = self.ssr_noS_model_build(vec_order=2)
else:
ssr_S_model_0 = self.ssr_S_model_build(num_primcaps=self.num_primcaps,m_dim=self.m_dim, vec_order=0)
ssr_S_model_1 = self.ssr_S_model_build(num_primcaps=self.num_primcaps,m_dim=self.m_dim, vec_order=1)
ssr_S_model_2 = self.ssr_S_model_build(num_primcaps=self.num_primcaps,m_dim=self.m_dim, vec_order=2)
ssr_aggregation_model_0 = self.ssr_aggregation_model_build((self.num_primcaps,64), vec_order=0)
ssr_aggregation_model_1 = self.ssr_aggregation_model_build((self.num_primcaps,64), vec_order=1)
ssr_aggregation_model_2 = self.ssr_aggregation_model_build((self.num_primcaps,64), vec_order=2)
if self.is_fc_model:
ssr_F_Cap_model = self.ssr_FC_model_build(self.F_shape,'ssr_FC_Cap_model')
else:
ssr_F_Cap_model_0 = self.ssr_F_model_build(self.F_shape,'ssr_NoFC_Cap_model', vec_order=0)
ssr_F_Cap_model_1 = self.ssr_F_model_build(self.F_shape,'ssr_NoFC_Cap_model', vec_order=1)
ssr_F_Cap_model_2 = self.ssr_F_model_build(self.F_shape,'ssr_NoFC_Cap_model', vec_order=2)
# Wire them up
# ssr_G_list: [(batch_size, 8, 8, 64), (batch_size, 8, 8, 64), (batch_size, 8, 8, 64)]
# Two-stream structure for extracting the features.
ssr_G_list = ssr_G_model(img_inputs)
# ssr_primcaps: (batch_size, 21, 64)
# Generate the fine-grained structure mapping from different scoring functions.
# Apply the mapping onto the features and generate primary capsules.
ssr_primcaps_0 = ssr_S_model_0(ssr_G_list)
ssr_primcaps_1 = ssr_S_model_1(ssr_G_list)
ssr_primcaps_2 = ssr_S_model_2(ssr_G_list)
# ssr_Cap_list: [(None, None), (None, None), (None, None)]
# Feed the primary capsules into the capsule layer to produce the final aggregated capsule features, then divide them into 3 stages.
ssr_Cap_list_0 = ssr_aggregation_model_0(ssr_primcaps_0)
ssr_Cap_list_1 = ssr_aggregation_model_1(ssr_primcaps_1)
ssr_Cap_list_2 = ssr_aggregation_model_2(ssr_primcaps_2)
print('*'*50)
print('ssr_Cap_list_0[0]: ', ssr_Cap_list_0[0].shape)
print('*'*50)
# ssr_F_Cap_list: [(batch_size, 3, 3), (batch_size, 3, 3), (batch_size, 3, 3), ~> p
# (batch_size, 3), (batch_size, 3), (batch_size, 3), ~> delta
# (batch_size, 3), (batch_size, 3), (batch_size, 3)] ~> eta
# Take the previous 3 stages' features into the Soft Stagewise Regression (SSR) module.
# Each stage further splits into three parts: prediction, dynamic index shifting, and dynamic scaling.
# See '[IJCAI18] SSR-Net' for a more detailed explanation.
# ssr_F_Cap_list_0 : (None, 3, 39)
ssr_F_Cap_list_0 = ssr_F_Cap_model_0(ssr_Cap_list_0)
ssr_F_Cap_list_1 = ssr_F_Cap_model_1(ssr_Cap_list_1)
ssr_F_Cap_list_2 = ssr_F_Cap_model_2(ssr_Cap_list_2)
print('*'*50)
print('ssr_F_Cap_list_0', ssr_F_Cap_list_0.shape)
print('*'*50)
# pred_pose_l : (None, 3)
# Take the prediction, dynamic index shifting, and dynamic scaling to form the final regression output. In this case, there are three outputs (yaw, pitch, roll).
pred_vec_0 = SSRLayer(s1=self.stage_num[0], s2=self.stage_num[1], s3=self.stage_num[2], lambda_d=self.lambda_d, name="pred_pose_0")(ssr_F_Cap_list_0)
pred_vec_1 = SSRLayer(s1=self.stage_num[0], s2=self.stage_num[1], s3=self.stage_num[2], lambda_d=self.lambda_d, name="pred_pose_1")(ssr_F_Cap_list_1)
pred_vec_2 = SSRLayer(s1=self.stage_num[0], s2=self.stage_num[1], s3=self.stage_num[2], lambda_d=self.lambda_d, name="pred_pose_2")(ssr_F_Cap_list_2)
print('*'*50)
print('pred_vec_0: ', pred_vec_0.shape)
print('*'*50)
pred_vecs = Concatenate(axis=-1)([pred_vec_0, pred_vec_1, pred_vec_2])
print('*'*50)
print('pred_vecs: ', pred_vecs.shape)
print('*'*50)
return Model(inputs=img_inputs, outputs=[pred_vecs, pred_vecs])
# return Model(inputs=img_inputs, outputs=pred_pose)
# Capsule FSANetworks
class BaseCapsuleFSANet(BaseFSANet):
def __init__(self, image_size,num_classes,stage_num,lambda_d, S_set):
super(BaseCapsuleFSANet, self).__init__(image_size,num_classes,stage_num,lambda_d, S_set)
def ssr_aggregation_model_build(self, shape_primcaps, vec_order):
input_primcaps = Input(shape_primcaps)
capsule = CapsuleLayer(self.num_capsule, self.dim_capsule, routings=self.routings, name='caps')(input_primcaps)
feat_s1_div, feat_s2_div, feat_s3_div = AggregatedFeatureExtractionLayer(num_capsule=self.num_capsule)(capsule)
feat_s1_div = Reshape((-1,))(feat_s1_div)
feat_s2_div = Reshape((-1,))(feat_s2_div)
feat_s3_div = Reshape((-1,))(feat_s3_div)
return Model(inputs=input_primcaps,outputs=[feat_s1_div,feat_s2_div,feat_s3_div], name='ssr_Cap_model_{}'.format(vec_order))
class FSA_net_Capsule(BaseCapsuleFSANet):
def __init__(self, image_size,num_classes,stage_num,lambda_d, S_set):
super(FSA_net_Capsule, self).__init__(image_size,num_classes,stage_num,lambda_d, S_set)
self.is_varS_model = False
class FSA_net_Var_Capsule(BaseCapsuleFSANet):
def __init__(self, image_size,num_classes,stage_num,lambda_d, S_set):
super(FSA_net_Var_Capsule, self).__init__(image_size,num_classes,stage_num,lambda_d, S_set)
self.is_varS_model = True
class FSA_net_noS_Capsule(BaseCapsuleFSANet):
def __init__(self, image_size,num_classes,stage_num,lambda_d, S_set):
super(FSA_net_noS_Capsule, self).__init__(image_size,num_classes,stage_num,lambda_d, S_set)
self.is_noS_model = True
class FSA_net_Capsule_FC(FSA_net_Capsule):
def __init__(self, image_size,num_classes,stage_num,lambda_d, S_set):
super(FSA_net_Capsule_FC, self).__init__(image_size,num_classes,stage_num,lambda_d, S_set)
self.is_fc_model = True
class FSA_net_Var_Capsule_FC(FSA_net_Var_Capsule):
def __init__(self, image_size,num_classes,stage_num,lambda_d, S_set):
super(FSA_net_Var_Capsule_FC, self).__init__(image_size,num_classes,stage_num,lambda_d, S_set)
self.is_fc_model = True
class FSA_net_noS_Capsule_FC(FSA_net_noS_Capsule):
def __init__(self, image_size,num_classes,stage_num,lambda_d, S_set):
super(FSA_net_noS_Capsule_FC, self).__init__(image_size,num_classes,stage_num,lambda_d, S_set)
self.is_fc_model = True
# Metric models
class BaseMetricFSANet(BaseFSANet):
def __init__(self, image_size,num_classes,stage_num,lambda_d, S_set):
super(BaseMetricFSANet, self).__init__(image_size,num_classes,stage_num,lambda_d, S_set)
def ssr_aggregation_model_build(self, shape_primcaps, vec_order):
input_primcaps = Input(shape_primcaps)
metric_feat = MatMulLayer(16,type=1)(input_primcaps)
metric_feat = MatMulLayer(3,type=2)(metric_feat)
feat_s1_div, feat_s2_div, feat_s3_div = AggregatedFeatureExtractionLayer(num_capsule=self.num_capsule)(metric_feat)
feat_s1_div = Reshape((-1,))(feat_s1_div)
feat_s2_div = Reshape((-1,))(feat_s2_div)
feat_s3_div = Reshape((-1,))(feat_s3_div)
return Model(inputs=input_primcaps,outputs=[feat_s1_div,feat_s2_div,feat_s3_div], name='ssr_Metric_model_{}'.format(vec_order))
class FSA_net_Metric(BaseMetricFSANet):
def __init__(self, image_size,num_classes,stage_num,lambda_d, S_set):
super(FSA_net_Metric, self).__init__(image_size,num_classes,stage_num,lambda_d, S_set)
self.is_varS_model = False
class FSA_net_Var_Metric(BaseMetricFSANet):
def __init__(self, image_size,num_classes,stage_num,lambda_d, S_set):
super(FSA_net_Var_Metric, self).__init__(image_size,num_classes,stage_num,lambda_d, S_set)
self.is_varS_model = True
class FSA_net_noS_Metric(BaseMetricFSANet):
def __init__(self, image_size,num_classes,stage_num,lambda_d, S_set):
super(FSA_net_noS_Metric, self).__init__(image_size,num_classes,stage_num,lambda_d, S_set)
self.is_noS_model = True
|
from .utils import register_keras_custom_object
|
msd.rs
|
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Event {}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct InterfaceSubClass {
pub subclass: u8,
pub protocol: u8,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct ClassSpecificDescriptor;
impl ClassSpecificDescriptor {
pub fn parse(response: &[u8]) -> anyhow::Result<(&[u8], Self)> {
Ok((&response[..response[0].into()], Self))
}
}
pub struct
|
;
impl super::Endpoint for MsdEndpoint {
fn update(
&mut self,
_timestamp: f64,
_transaction: super::protocol::Transaction,
) -> Option<anyhow::Result<super::DeviceEvent>> {
None
}
}
|
MsdEndpoint
|
capture.py
|
# RedisEdge realtime video analytics video capture script
import argparse
import cv2
import redis
import time
from urllib.parse import urlparse
class SimpleMovingAverage(object):
''' Simple moving average '''
def __init__(self, value=0.0, count=7):
self.count = int(count)
self.current = float(value)
self.samples = [self.current] * self.count
def __str__(self):
return str(round(self.current, 3))
def add(self, value):
v = float(value)
self.samples.insert(0, v)
o = self.samples.pop()
# incremental update: shift the running mean by (newest - oldest) / count
self.current = self.current + (v-o)/self.count
class Video:
def __init__(self, infile=0, fps=30.0):
self.isFile = not str(infile).isdecimal()
self.ts = time.time()
self.infile = infile
self.cam = cv2.VideoCapture(self.infile)
if not self.isFile:
self.cam.set(cv2.CAP_PROP_FPS, fps)
self.fps = fps
# TODO: some cameras don't respect the fps directive
self.cam.set(cv2.CAP_PROP_FRAME_WIDTH, 800)
self.cam.set(cv2.CAP_PROP_FRAME_HEIGHT, 600)
else:
self.fps = self.cam.get(cv2.CAP_PROP_FPS)
self.sma = SimpleMovingAverage(value=0.1, count=19)
def __iter__(self):
self.count = -1
return self
def __next__(self):
|
def __len__(self):
return 0
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('infile', help='Input file (leave empty to use webcam)', nargs='?', type=str, default=None)
parser.add_argument('-o', '--output', help='Output stream key name', type=str, default='camera:0')
parser.add_argument('-u', '--url', help='Redis URL', type=str, default='redis://127.0.0.1:6379')
parser.add_argument('-w', '--webcam', help='Webcam device number', type=int, default=0)
parser.add_argument('-v', '--verbose', help='Verbose output', type=bool, default=False)
parser.add_argument('--count', help='Count of frames to capture', type=int, default=None)
parser.add_argument('--fmt', help='Frame storage format', type=str, default='.jpg')
parser.add_argument('--fps', help='Frames per second (webcam)', type=float, default=15.0)
parser.add_argument('--maxlen', help='Maximum length of output stream', type=int, default=10000)
args = parser.parse_args()
# Set up Redis connection
url = urlparse(args.url)
conn = redis.Redis(host=url.hostname, port=url.port)
if not conn.ping():
raise Exception('Redis unavailable')
# Choose video source
if args.infile is None:
loader = Video(infile=args.webcam, fps=args.fps) # Default to webcam
else:
loader = Video(infile=args.infile, fps=args.fps) # Unless an input file (image or video) was specified
for (count, img) in loader:
_, data = cv2.imencode(args.fmt, img)
msg = {
'count': count,
'image': data.tobytes()
}
_id = conn.xadd(args.output, msg, maxlen=args.maxlen)
if args.verbose:
print('frame: {} id: {}'.format(count, _id))
if args.count is not None and count+1 == args.count:
print('Stopping after {} frames.'.format(count))
break
|
self.count += 1
# Respect FPS for files
if self.isFile:
delta = time.time() - self.ts
self.sma.add(delta)
time.sleep(max(0,(1 - self.sma.current*self.fps)/self.fps))
self.ts = time.time()
# Read image
ret_val, img0 = self.cam.read()
if not ret_val and self.isFile:
self.cam.set(cv2.CAP_PROP_POS_FRAMES, 0)
ret_val, img0 = self.cam.read()
assert ret_val, 'Video Error'
# Preprocess
img = img0
if not self.isFile:
img = cv2.flip(img, 1)
return self.count, img
|
test_datetime.py
|
import numpy
import numpy as np
import datetime
import pytest
from numpy.testing import (
assert_, assert_equal, assert_raises, assert_warns, suppress_warnings,
assert_raises_regex,
)
from numpy.compat import pickle
# Use pytz to test out various time zones if available
try:
from pytz import timezone as tz
_has_pytz = True
except ImportError:
_has_pytz = False
try:
RecursionError
except NameError:
RecursionError = RuntimeError # python < 3.5
class TestDateTime:
def test_datetime_dtype_creation(self):
for unit in ['Y', 'M', 'W', 'D',
'h', 'm', 's', 'ms', 'us',
'μs', # alias for us
'ns', 'ps', 'fs', 'as']:
dt1 = np.dtype('M8[750%s]' % unit)
assert_(dt1 == np.dtype('datetime64[750%s]' % unit))
dt2 = np.dtype('m8[%s]' % unit)
assert_(dt2 == np.dtype('timedelta64[%s]' % unit))
# Generic units shouldn't add [] to the end
assert_equal(str(np.dtype("M8")), "datetime64")
# Should be possible to specify the endianness
assert_equal(np.dtype("=M8"), np.dtype("M8"))
assert_equal(np.dtype("=M8[s]"), np.dtype("M8[s]"))
assert_(np.dtype(">M8") == np.dtype("M8") or
np.dtype("<M8") == np.dtype("M8"))
assert_(np.dtype(">M8[D]") == np.dtype("M8[D]") or
np.dtype("<M8[D]") == np.dtype("M8[D]"))
assert_(np.dtype(">M8") != np.dtype("<M8"))
assert_equal(np.dtype("=m8"), np.dtype("m8"))
assert_equal(np.dtype("=m8[s]"), np.dtype("m8[s]"))
assert_(np.dtype(">m8") == np.dtype("m8") or
np.dtype("<m8") == np.dtype("m8"))
assert_(np.dtype(">m8[D]") == np.dtype("m8[D]") or
np.dtype("<m8[D]") == np.dtype("m8[D]"))
assert_(np.dtype(">m8") != np.dtype("<m8"))
# Check that the parser rejects bad datetime types
assert_raises(TypeError, np.dtype, 'M8[badunit]')
assert_raises(TypeError, np.dtype, 'm8[badunit]')
assert_raises(TypeError, np.dtype, 'M8[YY]')
assert_raises(TypeError, np.dtype, 'm8[YY]')
assert_raises(TypeError, np.dtype, 'm4')
assert_raises(TypeError, np.dtype, 'M7')
assert_raises(TypeError, np.dtype, 'm7')
assert_raises(TypeError, np.dtype, 'M16')
assert_raises(TypeError, np.dtype, 'm16')
def test_datetime_casting_rules(self):
# Cannot cast safely/same_kind between timedelta and datetime
assert_(not np.can_cast('m8', 'M8', casting='same_kind'))
assert_(not np.can_cast('M8', 'm8', casting='same_kind'))
assert_(not np.can_cast('m8', 'M8', casting='safe'))
assert_(not np.can_cast('M8', 'm8', casting='safe'))
# Can cast safely/same_kind from integer to timedelta
assert_(np.can_cast('i8', 'm8', casting='same_kind'))
assert_(np.can_cast('i8', 'm8', casting='safe'))
assert_(np.can_cast('i4', 'm8', casting='same_kind'))
assert_(np.can_cast('i4', 'm8', casting='safe'))
assert_(np.can_cast('u4', 'm8', casting='same_kind'))
assert_(np.can_cast('u4', 'm8', casting='safe'))
# Cannot cast safely from unsigned integer of the same size, which
# could overflow
assert_(np.can_cast('u8', 'm8', casting='same_kind'))
assert_(not np.can_cast('u8', 'm8', casting='safe'))
# Cannot cast safely/same_kind from float to timedelta
assert_(not np.can_cast('f4', 'm8', casting='same_kind'))
assert_(not np.can_cast('f4', 'm8', casting='safe'))
# Cannot cast safely/same_kind from integer to datetime
assert_(not np.can_cast('i8', 'M8', casting='same_kind'))
assert_(not np.can_cast('i8', 'M8', casting='safe'))
# Cannot cast safely/same_kind from bool to datetime
assert_(not np.can_cast('b1', 'M8', casting='same_kind'))
assert_(not np.can_cast('b1', 'M8', casting='safe'))
# Can cast safely/same_kind from bool to timedelta
assert_(np.can_cast('b1', 'm8', casting='same_kind'))
assert_(np.can_cast('b1', 'm8', casting='safe'))
# Can cast datetime safely from months/years to days
assert_(np.can_cast('M8[M]', 'M8[D]', casting='safe'))
assert_(np.can_cast('M8[Y]', 'M8[D]', casting='safe'))
# Cannot cast timedelta safely from months/years to days
assert_(not np.can_cast('m8[M]', 'm8[D]', casting='safe'))
assert_(not np.can_cast('m8[Y]', 'm8[D]', casting='safe'))
# Can cast datetime same_kind from months/years to days
assert_(np.can_cast('M8[M]', 'M8[D]', casting='same_kind'))
assert_(np.can_cast('M8[Y]', 'M8[D]', casting='same_kind'))
# Can't cast timedelta same_kind from months/years to days
assert_(not np.can_cast('m8[M]', 'm8[D]', casting='same_kind'))
assert_(not np.can_cast('m8[Y]', 'm8[D]', casting='same_kind'))
# Can cast datetime same_kind across the date/time boundary
assert_(np.can_cast('M8[D]', 'M8[h]', casting='same_kind'))
# Can cast timedelta same_kind across the date/time boundary
assert_(np.can_cast('m8[D]', 'm8[h]', casting='same_kind'))
assert_(np.can_cast('m8[h]', 'm8[D]', casting='same_kind'))
# Cannot cast safely if the integer multiplier doesn't divide
assert_(not np.can_cast('M8[7h]', 'M8[3h]', casting='safe'))
assert_(not np.can_cast('M8[3h]', 'M8[6h]', casting='safe'))
# But can cast same_kind
assert_(np.can_cast('M8[7h]', 'M8[3h]', casting='same_kind'))
# Can cast safely if the integer multiplier does divide
assert_(np.can_cast('M8[6h]', 'M8[3h]', casting='safe'))
# We can always cast types with generic units (corresponding to NaT) to
# more specific types
assert_(np.can_cast('m8', 'm8[h]', casting='same_kind'))
assert_(np.can_cast('m8', 'm8[h]', casting='safe'))
assert_(np.can_cast('M8', 'M8[h]', casting='same_kind'))
assert_(np.can_cast('M8', 'M8[h]', casting='safe'))
# but not the other way around
assert_(not np.can_cast('m8[h]', 'm8', casting='same_kind'))
assert_(not np.can_cast('m8[h]', 'm8', casting='safe'))
assert_(not np.can_cast('M8[h]', 'M8', casting='same_kind'))
assert_(not np.can_cast('M8[h]', 'M8', casting='safe'))
def test_compare_generic_nat(self):
# regression tests for gh-6452
assert_(np.datetime64('NaT') !=
np.datetime64('2000') + np.timedelta64('NaT'))
assert_(np.datetime64('NaT') != np.datetime64('NaT', 'us'))
assert_(np.datetime64('NaT', 'us') != np.datetime64('NaT'))
@pytest.mark.parametrize("size", [
3, 21, 217, 1000])
def test_datetime_nat_argsort_stability(self, size):
# NaT < NaT should be False internally for
# sort stability
expected = np.arange(size)
arr = np.tile(np.datetime64('NaT'), size)
assert_equal(np.argsort(arr, kind='mergesort'), expected)
@pytest.mark.parametrize("size", [
3, 21, 217, 1000])
def test_timedelta_nat_argsort_stability(self, size):
# NaT < NaT should be False internally for
# sort stability
expected = np.arange(size)
arr = np.tile(np.timedelta64('NaT'), size)
assert_equal(np.argsort(arr, kind='mergesort'), expected)
@pytest.mark.parametrize("arr, expected", [
# the example provided in gh-12629
(['NaT', 1, 2, 3],
[1, 2, 3, 'NaT']),
# multiple NaTs
(['NaT', 9, 'NaT', -707],
[-707, 9, 'NaT', 'NaT']),
# this sort explores another code path for NaT
([1, -2, 3, 'NaT'],
[-2, 1, 3, 'NaT']),
# 2-D array
([[51, -220, 'NaT'],
[-17, 'NaT', -90]],
[[-220, 51, 'NaT'],
[-90, -17, 'NaT']]),
])
@pytest.mark.parametrize("dtype", [
'M8[ns]', 'M8[us]',
'm8[ns]', 'm8[us]'])
def test_datetime_timedelta_sort_nat(self, arr, expected, dtype):
# fix for gh-12629 and gh-15063; NaT sorting to end of array
arr = np.array(arr, dtype=dtype)
expected = np.array(expected, dtype=dtype)
arr.sort()
assert_equal(arr, expected)
def test_datetime_scalar_construction(self):
# Construct with different units
assert_equal(np.datetime64('1950-03-12', 'D'),
np.datetime64('1950-03-12'))
assert_equal(np.datetime64('1950-03-12T13', 's'),
np.datetime64('1950-03-12T13', 'm'))
# Default construction means NaT
assert_equal(np.datetime64(), np.datetime64('NaT'))
# Some basic strings and repr
assert_equal(str(np.datetime64('NaT')), 'NaT')
assert_equal(repr(np.datetime64('NaT')),
"numpy.datetime64('NaT')")
assert_equal(str(np.datetime64('2011-02')), '2011-02')
assert_equal(repr(np.datetime64('2011-02')),
"numpy.datetime64('2011-02')")
# None gets constructed as NaT
assert_equal(np.datetime64(None), np.datetime64('NaT'))
# Default construction of NaT is in generic units
assert_equal(np.datetime64().dtype, np.dtype('M8'))
assert_equal(np.datetime64('NaT').dtype, np.dtype('M8'))
# Construction from integers requires a specified unit
assert_raises(ValueError, np.datetime64, 17)
# When constructing from a scalar or zero-dimensional array,
# it either keeps the units or you can override them.
a = np.datetime64('2000-03-18T16', 'h')
b = np.array('2000-03-18T16', dtype='M8[h]')
assert_equal(a.dtype, np.dtype('M8[h]'))
assert_equal(b.dtype, np.dtype('M8[h]'))
assert_equal(np.datetime64(a), a)
assert_equal(np.datetime64(a).dtype, np.dtype('M8[h]'))
assert_equal(np.datetime64(b), a)
assert_equal(np.datetime64(b).dtype, np.dtype('M8[h]'))
assert_equal(np.datetime64(a, 's'), a)
assert_equal(np.datetime64(a, 's').dtype, np.dtype('M8[s]'))
assert_equal(np.datetime64(b, 's'), a)
assert_equal(np.datetime64(b, 's').dtype, np.dtype('M8[s]'))
# Construction from datetime.date
assert_equal(np.datetime64('1945-03-25'),
np.datetime64(datetime.date(1945, 3, 25)))
assert_equal(np.datetime64('2045-03-25', 'D'),
np.datetime64(datetime.date(2045, 3, 25), 'D'))
# Construction from datetime.datetime
assert_equal(np.datetime64('1980-01-25T14:36:22.5'),
np.datetime64(datetime.datetime(1980, 1, 25,
14, 36, 22, 500000)))
# Construction with time units from a date is okay
assert_equal(np.datetime64('1920-03-13', 'h'),
np.datetime64('1920-03-13T00'))
assert_equal(np.datetime64('1920-03', 'm'),
np.datetime64('1920-03-01T00:00'))
assert_equal(np.datetime64('1920', 's'),
np.datetime64('1920-01-01T00:00:00'))
assert_equal(np.datetime64(datetime.date(2045, 3, 25), 'ms'),
np.datetime64('2045-03-25T00:00:00.000'))
# Construction with date units from a datetime is also okay
assert_equal(np.datetime64('1920-03-13T18', 'D'),
np.datetime64('1920-03-13'))
assert_equal(np.datetime64('1920-03-13T18:33:12', 'M'),
np.datetime64('1920-03'))
assert_equal(np.datetime64('1920-03-13T18:33:12.5', 'Y'),
np.datetime64('1920'))
def test_datetime_scalar_construction_timezone(self):
# verify that supplying an explicit timezone works, but is deprecated
with assert_warns(DeprecationWarning):
assert_equal(np.datetime64('2000-01-01T00Z'),
np.datetime64('2000-01-01T00'))
with assert_warns(DeprecationWarning):
assert_equal(np.datetime64('2000-01-01T00-08'),
np.datetime64('2000-01-01T08'))
def test_datetime_array_find_type(self):
dt = np.datetime64('1970-01-01', 'M')
arr = np.array([dt])
assert_equal(arr.dtype, np.dtype('M8[M]'))
# at the moment, we don't automatically convert these to datetime64
dt = datetime.date(1970, 1, 1)
arr = np.array([dt])
assert_equal(arr.dtype, np.dtype('O'))
dt = datetime.datetime(1970, 1, 1, 12, 30, 40)
arr = np.array([dt])
assert_equal(arr.dtype, np.dtype('O'))
# find "supertype" for non-dates and dates
b = np.bool_(True)
dm = np.datetime64('1970-01-01', 'M')
d = datetime.date(1970, 1, 1)
dt = datetime.datetime(1970, 1, 1, 12, 30, 40)
arr = np.array([b, dm])
assert_equal(arr.dtype, np.dtype('O'))
arr = np.array([b, d])
assert_equal(arr.dtype, np.dtype('O'))
arr = np.array([b, dt])
assert_equal(arr.dtype, np.dtype('O'))
arr = np.array([d, d]).astype('datetime64')
assert_equal(arr.dtype, np.dtype('M8[D]'))
arr = np.array([dt, dt]).astype('datetime64')
assert_equal(arr.dtype, np.dtype('M8[us]'))
@pytest.mark.parametrize("unit", [
# test all date / time units and use
# "generic" to select generic unit
("Y"), ("M"), ("W"), ("D"), ("h"), ("m"),
("s"), ("ms"), ("us"), ("ns"), ("ps"),
("fs"), ("as"), ("generic") ])
def test_timedelta_np_int_construction(self, unit):
# regression test for gh-7617
if unit != "generic":
assert_equal(np.timedelta64(np.int64(123), unit),
np.timedelta64(123, unit))
else:
assert_equal(np.timedelta64(np.int64(123)),
np.timedelta64(123))
def test_timedelta_scalar_construction(self):
# Construct with different units
assert_equal(np.timedelta64(7, 'D'),
np.timedelta64(1, 'W'))
assert_equal(np.timedelta64(120, 's'),
np.timedelta64(2, 'm'))
# Default construction means 0
assert_equal(np.timedelta64(), np.timedelta64(0))
# None gets constructed as NaT
assert_equal(np.timedelta64(None), np.timedelta64('NaT'))
# Some basic strings and repr
assert_equal(str(np.timedelta64('NaT')), 'NaT')
assert_equal(repr(np.timedelta64('NaT')),
"numpy.timedelta64('NaT')")
assert_equal(str(np.timedelta64(3, 's')), '3 seconds')
assert_equal(repr(np.timedelta64(-3, 's')),
"numpy.timedelta64(-3,'s')")
assert_equal(repr(np.timedelta64(12)),
"numpy.timedelta64(12)")
# Construction from an integer produces generic units
assert_equal(np.timedelta64(12).dtype, np.dtype('m8'))
# When constructing from a scalar or zero-dimensional array,
# it either keeps the units or you can override them.
a = np.timedelta64(2, 'h')
b = np.array(2, dtype='m8[h]')
assert_equal(a.dtype, np.dtype('m8[h]'))
assert_equal(b.dtype, np.dtype('m8[h]'))
assert_equal(np.timedelta64(a), a)
assert_equal(np.timedelta64(a).dtype, np.dtype('m8[h]'))
assert_equal(np.timedelta64(b), a)
assert_equal(np.timedelta64(b).dtype, np.dtype('m8[h]'))
assert_equal(np.timedelta64(a, 's'), a)
assert_equal(np.timedelta64(a, 's').dtype, np.dtype('m8[s]'))
assert_equal(np.timedelta64(b, 's'), a)
assert_equal(np.timedelta64(b, 's').dtype, np.dtype('m8[s]'))
# Construction from datetime.timedelta
assert_equal(np.timedelta64(5, 'D'),
np.timedelta64(datetime.timedelta(days=5)))
assert_equal(np.timedelta64(102347621, 's'),
np.timedelta64(datetime.timedelta(seconds=102347621)))
assert_equal(np.timedelta64(-10234760000, 'us'),
np.timedelta64(datetime.timedelta(
microseconds=-10234760000)))
assert_equal(np.timedelta64(10234760000, 'us'),
np.timedelta64(datetime.timedelta(
microseconds=10234760000)))
assert_equal(np.timedelta64(1023476, 'ms'),
np.timedelta64(datetime.timedelta(milliseconds=1023476)))
assert_equal(np.timedelta64(10, 'm'),
np.timedelta64(datetime.timedelta(minutes=10)))
assert_equal(np.timedelta64(281, 'h'),
np.timedelta64(datetime.timedelta(hours=281)))
assert_equal(np.timedelta64(28, 'W'),
np.timedelta64(datetime.timedelta(weeks=28)))
# Cannot construct across nonlinear time unit boundaries
a = np.timedelta64(3, 's')
assert_raises(TypeError, np.timedelta64, a, 'M')
assert_raises(TypeError, np.timedelta64, a, 'Y')
a = np.timedelta64(6, 'M')
assert_raises(TypeError, np.timedelta64, a, 'D')
assert_raises(TypeError, np.timedelta64, a, 'h')
a = np.timedelta64(1, 'Y')
assert_raises(TypeError, np.timedelta64, a, 'D')
assert_raises(TypeError, np.timedelta64, a, 'm')
a = datetime.timedelta(seconds=3)
assert_raises(TypeError, np.timedelta64, a, 'M')
assert_raises(TypeError, np.timedelta64, a, 'Y')
a = datetime.timedelta(weeks=3)
assert_raises(TypeError, np.timedelta64, a, 'M')
assert_raises(TypeError, np.timedelta64, a, 'Y')
a = datetime.timedelta()
assert_raises(TypeError, np.timedelta64, a, 'M')
assert_raises(TypeError, np.timedelta64, a, 'Y')
def test_timedelta_object_array_conversion(self):
# Regression test for gh-11096
inputs = [datetime.timedelta(28),
datetime.timedelta(30),
datetime.timedelta(31)]
expected = np.array([28, 30, 31], dtype='timedelta64[D]')
actual = np.array(inputs, dtype='timedelta64[D]')
assert_equal(expected, actual)
def test_timedelta_0_dim_object_array_conversion(self):
# Regression test for gh-11151
test = np.array(datetime.timedelta(seconds=20))
actual = test.astype(np.timedelta64)
# expected value from the array constructor workaround
# described in above issue
expected = np.array(datetime.timedelta(seconds=20),
np.timedelta64)
assert_equal(actual, expected)
def test_timedelta_scalar_construction_units(self):
# String construction detecting units
assert_equal(np.datetime64('2010').dtype,
np.dtype('M8[Y]'))
assert_equal(np.datetime64('2010-03').dtype,
np.dtype('M8[M]'))
assert_equal(np.datetime64('2010-03-12').dtype,
np.dtype('M8[D]'))
assert_equal(np.datetime64('2010-03-12T17').dtype,
np.dtype('M8[h]'))
assert_equal(np.datetime64('2010-03-12T17:15').dtype,
np.dtype('M8[m]'))
assert_equal(np.datetime64('2010-03-12T17:15:08').dtype,
np.dtype('M8[s]'))
assert_equal(np.datetime64('2010-03-12T17:15:08.1').dtype,
np.dtype('M8[ms]'))
assert_equal(np.datetime64('2010-03-12T17:15:08.12').dtype,
np.dtype('M8[ms]'))
assert_equal(np.datetime64('2010-03-12T17:15:08.123').dtype,
np.dtype('M8[ms]'))
assert_equal(np.datetime64('2010-03-12T17:15:08.1234').dtype,
np.dtype('M8[us]'))
assert_equal(np.datetime64('2010-03-12T17:15:08.12345').dtype,
np.dtype('M8[us]'))
assert_equal(np.datetime64('2010-03-12T17:15:08.123456').dtype,
np.dtype('M8[us]'))
assert_equal(np.datetime64('1970-01-01T00:00:02.1234567').dtype,
np.dtype('M8[ns]'))
assert_equal(np.datetime64('1970-01-01T00:00:02.12345678').dtype,
np.dtype('M8[ns]'))
assert_equal(np.datetime64('1970-01-01T00:00:02.123456789').dtype,
np.dtype('M8[ns]'))
assert_equal(np.datetime64('1970-01-01T00:00:02.1234567890').dtype,
np.dtype('M8[ps]'))
assert_equal(np.datetime64('1970-01-01T00:00:02.12345678901').dtype,
np.dtype('M8[ps]'))
assert_equal(np.datetime64('1970-01-01T00:00:02.123456789012').dtype,
np.dtype('M8[ps]'))
assert_equal(np.datetime64(
'1970-01-01T00:00:02.1234567890123').dtype,
np.dtype('M8[fs]'))
assert_equal(np.datetime64(
'1970-01-01T00:00:02.12345678901234').dtype,
np.dtype('M8[fs]'))
assert_equal(np.datetime64(
'1970-01-01T00:00:02.123456789012345').dtype,
np.dtype('M8[fs]'))
assert_equal(np.datetime64(
'1970-01-01T00:00:02.1234567890123456').dtype,
np.dtype('M8[as]'))
assert_equal(np.datetime64(
'1970-01-01T00:00:02.12345678901234567').dtype,
np.dtype('M8[as]'))
assert_equal(np.datetime64(
'1970-01-01T00:00:02.123456789012345678').dtype,
np.dtype('M8[as]'))
# Python date object
assert_equal(np.datetime64(datetime.date(2010, 4, 16)).dtype,
np.dtype('M8[D]'))
# Python datetime object
assert_equal(np.datetime64(
datetime.datetime(2010, 4, 16, 13, 45, 18)).dtype,
np.dtype('M8[us]'))
# 'today' special value
assert_equal(np.datetime64('today').dtype,
np.dtype('M8[D]'))
# 'now' special value
assert_equal(np.datetime64('now').dtype,
np.dtype('M8[s]'))
def test_datetime_nat_casting(self):
a = np.array('NaT', dtype='M8[D]')
b = np.datetime64('NaT', '[D]')
# Arrays
assert_equal(a.astype('M8[s]'), np.array('NaT', dtype='M8[s]'))
assert_equal(a.astype('M8[ms]'), np.array('NaT', dtype='M8[ms]'))
assert_equal(a.astype('M8[M]'), np.array('NaT', dtype='M8[M]'))
assert_equal(a.astype('M8[Y]'), np.array('NaT', dtype='M8[Y]'))
assert_equal(a.astype('M8[W]'), np.array('NaT', dtype='M8[W]'))
# Scalars -> Scalars
assert_equal(np.datetime64(b, '[s]'), np.datetime64('NaT', '[s]'))
assert_equal(np.datetime64(b, '[ms]'), np.datetime64('NaT', '[ms]'))
assert_equal(np.datetime64(b, '[M]'), np.datetime64('NaT', '[M]'))
assert_equal(np.datetime64(b, '[Y]'), np.datetime64('NaT', '[Y]'))
assert_equal(np.datetime64(b, '[W]'), np.datetime64('NaT', '[W]'))
# Arrays -> Scalars
assert_equal(np.datetime64(a, '[s]'), np.datetime64('NaT', '[s]'))
assert_equal(np.datetime64(a, '[ms]'), np.datetime64('NaT', '[ms]'))
assert_equal(np.datetime64(a, '[M]'), np.datetime64('NaT', '[M]'))
assert_equal(np.datetime64(a, '[Y]'), np.datetime64('NaT', '[Y]'))
assert_equal(np.datetime64(a, '[W]'), np.datetime64('NaT', '[W]'))
# NaN -> NaT
nan = np.array([np.nan] * 8)
fnan = nan.astype('f')
lnan = nan.astype('g')
cnan = nan.astype('D')
cfnan = nan.astype('F')
clnan = nan.astype('G')
nat = np.array([np.datetime64('NaT')] * 8)
assert_equal(nan.astype('M8[ns]'), nat)
assert_equal(fnan.astype('M8[ns]'), nat)
assert_equal(lnan.astype('M8[ns]'), nat)
assert_equal(cnan.astype('M8[ns]'), nat)
assert_equal(cfnan.astype('M8[ns]'), nat)
assert_equal(clnan.astype('M8[ns]'), nat)
nat = np.array([np.timedelta64('NaT')] * 8)
assert_equal(nan.astype('timedelta64[ns]'), nat)
assert_equal(fnan.astype('timedelta64[ns]'), nat)
assert_equal(lnan.astype('timedelta64[ns]'), nat)
assert_equal(cnan.astype('timedelta64[ns]'), nat)
assert_equal(cfnan.astype('timedelta64[ns]'), nat)
assert_equal(clnan.astype('timedelta64[ns]'), nat)
def test_days_creation(self):
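        # datetime64[D] stores days relative to the 1970-01-01 epoch, so each
        # expected value below is (years * 365) plus leap-day corrections: the
        # //4 terms count leap days, with explicit +/-3 adjustments for the
        # century rule around 1600 and 2400.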
        assert_equal(np.array('1599', dtype='M8[D]').astype('i8'),
                     (1600-1970)*365 - (1972-1600)//4 + 3 - 365)
        assert_equal(np.array('1600', dtype='M8[D]').astype('i8'),
                     (1600-1970)*365 - (1972-1600)//4 + 3)
        assert_equal(np.array('1601', dtype='M8[D]').astype('i8'),
                     (1600-1970)*365 - (1972-1600)//4 + 3 + 366)
assert_equal(np.array('1900', dtype='M8[D]').astype('i8'),
(1900-1970)*365 - (1970-1900)//4)
assert_equal(np.array('1901', dtype='M8[D]').astype('i8'),
(1900-1970)*365 - (1970-1900)//4 + 365)
assert_equal(np.array('1967', dtype='M8[D]').astype('i8'), -3*365 - 1)
assert_equal(np.array('1968', dtype='M8[D]').astype('i8'), -2*365 - 1)
assert_equal(np.array('1969', dtype='M8[D]').astype('i8'), -1*365)
assert_equal(np.array('1970', dtype='M8[D]').astype('i8'), 0*365)
assert_equal(np.array('1971', dtype='M8[D]').astype('i8'), 1*365)
assert_equal(np.array('1972', dtype='M8[D]').astype('i8'), 2*365)
assert_equal(np.array('1973', dtype='M8[D]').astype('i8'), 3*365 + 1)
assert_equal(np.array('1974', dtype='M8[D]').astype('i8'), 4*365 + 1)
assert_equal(np.array('2000', dtype='M8[D]').astype('i8'),
(2000 - 1970)*365 + (2000 - 1972)//4)
assert_equal(np.array('2001', dtype='M8[D]').astype('i8'),
(2000 - 1970)*365 + (2000 - 1972)//4 + 366)
assert_equal(np.array('2400', dtype='M8[D]').astype('i8'),
(2400 - 1970)*365 + (2400 - 1972)//4 - 3)
assert_equal(np.array('2401', dtype='M8[D]').astype('i8'),
(2400 - 1970)*365 + (2400 - 1972)//4 - 3 + 366)
assert_equal(np.array('1600-02-29', dtype='M8[D]').astype('i8'),
(1600-1970)*365 - (1972-1600)//4 + 3 + 31 + 28)
assert_equal(np.array('1600-03-01', dtype='M8[D]').astype('i8'),
(1600-1970)*365 - (1972-1600)//4 + 3 + 31 + 29)
assert_equal(np.array('2000-02-29', dtype='M8[D]').astype('i8'),
(2000 - 1970)*365 + (2000 - 1972)//4 + 31 + 28)
assert_equal(np.array('2000-03-01', dtype='M8[D]').astype('i8'),
(2000 - 1970)*365 + (2000 - 1972)//4 + 31 + 29)
assert_equal(np.array('2001-03-22', dtype='M8[D]').astype('i8'),
(2000 - 1970)*365 + (2000 - 1972)//4 + 366 + 31 + 28 + 21)
def test_days_to_pydate(self):
assert_equal(np.array('1599', dtype='M8[D]').astype('O'),
datetime.date(1599, 1, 1))
assert_equal(np.array('1600', dtype='M8[D]').astype('O'),
datetime.date(1600, 1, 1))
assert_equal(np.array('1601', dtype='M8[D]').astype('O'),
datetime.date(1601, 1, 1))
assert_equal(np.array('1900', dtype='M8[D]').astype('O'),
datetime.date(1900, 1, 1))
assert_equal(np.array('1901', dtype='M8[D]').astype('O'),
datetime.date(1901, 1, 1))
assert_equal(np.array('2000', dtype='M8[D]').astype('O'),
datetime.date(2000, 1, 1))
assert_equal(np.array('2001', dtype='M8[D]').astype('O'),
datetime.date(2001, 1, 1))
assert_equal(np.array('1600-02-29', dtype='M8[D]').astype('O'),
datetime.date(1600, 2, 29))
assert_equal(np.array('1600-03-01', dtype='M8[D]').astype('O'),
datetime.date(1600, 3, 1))
assert_equal(np.array('2001-03-22', dtype='M8[D]').astype('O'),
datetime.date(2001, 3, 22))
def test_dtype_comparison(self):
assert_(not (np.dtype('M8[us]') == np.dtype('M8[ms]')))
assert_(np.dtype('M8[us]') != np.dtype('M8[ms]'))
assert_(np.dtype('M8[2D]') != np.dtype('M8[D]'))
assert_(np.dtype('M8[D]') != np.dtype('M8[2D]'))
def test_pydatetime_creation(self):
a = np.array(['1960-03-12', datetime.date(1960, 3, 12)], dtype='M8[D]')
assert_equal(a[0], a[1])
a = np.array(['1999-12-31', datetime.date(1999, 12, 31)], dtype='M8[D]')
assert_equal(a[0], a[1])
a = np.array(['2000-01-01', datetime.date(2000, 1, 1)], dtype='M8[D]')
assert_equal(a[0], a[1])
        # Will fail if the date changes at exactly the wrong moment
a = np.array(['today', datetime.date.today()], dtype='M8[D]')
assert_equal(a[0], a[1])
# datetime.datetime.now() returns local time, not UTC
#a = np.array(['now', datetime.datetime.now()], dtype='M8[s]')
#assert_equal(a[0], a[1])
# we can give a datetime.date time units
assert_equal(np.array(datetime.date(1960, 3, 12), dtype='M8[s]'),
np.array(np.datetime64('1960-03-12T00:00:00')))
def test_datetime_string_conversion(self):
a = ['2011-03-16', '1920-01-01', '2013-05-19']
str_a = np.array(a, dtype='S')
uni_a = np.array(a, dtype='U')
dt_a = np.array(a, dtype='M')
# String to datetime
assert_equal(dt_a, str_a.astype('M'))
assert_equal(dt_a.dtype, str_a.astype('M').dtype)
dt_b = np.empty_like(dt_a)
dt_b[...] = str_a
assert_equal(dt_a, dt_b)
# Datetime to string
assert_equal(str_a, dt_a.astype('S0'))
str_b = np.empty_like(str_a)
str_b[...] = dt_a
assert_equal(str_a, str_b)
# Unicode to datetime
assert_equal(dt_a, uni_a.astype('M'))
assert_equal(dt_a.dtype, uni_a.astype('M').dtype)
dt_b = np.empty_like(dt_a)
dt_b[...] = uni_a
assert_equal(dt_a, dt_b)
# Datetime to unicode
assert_equal(uni_a, dt_a.astype('U'))
uni_b = np.empty_like(uni_a)
uni_b[...] = dt_a
assert_equal(uni_a, uni_b)
# Datetime to long string - gh-9712
assert_equal(str_a, dt_a.astype((np.string_, 128)))
str_b = np.empty(str_a.shape, dtype=(np.string_, 128))
str_b[...] = dt_a
assert_equal(str_a, str_b)
def test_datetime_array_str(self):
a = np.array(['2011-03-16', '1920-01-01', '2013-05-19'], dtype='M')
assert_equal(str(a), "['2011-03-16' '1920-01-01' '2013-05-19']")
a = np.array(['2011-03-16T13:55', '1920-01-01T03:12'], dtype='M')
assert_equal(np.array2string(a, separator=', ',
formatter={'datetime': lambda x:
"'%s'" % np.datetime_as_string(x, timezone='UTC')}),
"['2011-03-16T13:55Z', '1920-01-01T03:12Z']")
# Check that one NaT doesn't corrupt subsequent entries
a = np.array(['2010', 'NaT', '2030']).astype('M')
assert_equal(str(a), "['2010' 'NaT' '2030']")
def test_timedelta_array_str(self):
a = np.array([-1, 0, 100], dtype='m')
assert_equal(str(a), "[ -1 0 100]")
a = np.array(['NaT', 'NaT'], dtype='m')
assert_equal(str(a), "['NaT' 'NaT']")
# Check right-alignment with NaTs
a = np.array([-1, 'NaT', 0], dtype='m')
assert_equal(str(a), "[ -1 'NaT' 0]")
a = np.array([-1, 'NaT', 1234567], dtype='m')
assert_equal(str(a), "[ -1 'NaT' 1234567]")
# Test with other byteorder:
a = np.array([-1, 'NaT', 1234567], dtype='>m')
assert_equal(str(a), "[ -1 'NaT' 1234567]")
a = np.array([-1, 'NaT', 1234567], dtype='<m')
assert_equal(str(a), "[ -1 'NaT' 1234567]")
def test_pickle(self):
# Check that pickle roundtripping works
for proto in range(2, pickle.HIGHEST_PROTOCOL + 1):
dt = np.dtype('M8[7D]')
assert_equal(pickle.loads(pickle.dumps(dt, protocol=proto)), dt)
dt = np.dtype('M8[W]')
assert_equal(pickle.loads(pickle.dumps(dt, protocol=proto)), dt)
scalar = np.datetime64('2016-01-01T00:00:00.000000000')
assert_equal(pickle.loads(pickle.dumps(scalar, protocol=proto)),
scalar)
delta = scalar - np.datetime64('2015-01-01T00:00:00.000000000')
assert_equal(pickle.loads(pickle.dumps(delta, protocol=proto)),
delta)
# Check that loading pickles from 1.6 works
pkl = b"cnumpy\ndtype\np0\n(S'M8'\np1\nI0\nI1\ntp2\nRp3\n" + \
b"(I4\nS'<'\np4\nNNNI-1\nI-1\nI0\n((dp5\n(S'D'\np6\n" + \
b"I7\nI1\nI1\ntp7\ntp8\ntp9\nb."
assert_equal(pickle.loads(pkl), np.dtype('<M8[7D]'))
pkl = b"cnumpy\ndtype\np0\n(S'M8'\np1\nI0\nI1\ntp2\nRp3\n" + \
b"(I4\nS'<'\np4\nNNNI-1\nI-1\nI0\n((dp5\n(S'W'\np6\n" + \
b"I1\nI1\nI1\ntp7\ntp8\ntp9\nb."
assert_equal(pickle.loads(pkl), np.dtype('<M8[W]'))
pkl = b"cnumpy\ndtype\np0\n(S'M8'\np1\nI0\nI1\ntp2\nRp3\n" + \
b"(I4\nS'>'\np4\nNNNI-1\nI-1\nI0\n((dp5\n(S'us'\np6\n" + \
b"I1\nI1\nI1\ntp7\ntp8\ntp9\nb."
assert_equal(pickle.loads(pkl), np.dtype('>M8[us]'))
def test_setstate(self):
"Verify that datetime dtype __setstate__ can handle bad arguments"
dt = np.dtype('>M8[us]')
assert_raises(ValueError, dt.__setstate__, (4, '>', None, None, None, -1, -1, 0, 1))
assert_(dt.__reduce__()[2] == np.dtype('>M8[us]').__reduce__()[2])
assert_raises(TypeError, dt.__setstate__, (4, '>', None, None, None, -1, -1, 0, ({}, 'xxx')))
assert_(dt.__reduce__()[2] == np.dtype('>M8[us]').__reduce__()[2])
def test_dtype_promotion(self):
# datetime <op> datetime computes the metadata gcd
# timedelta <op> timedelta computes the metadata gcd
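        # e.g. gcd(13 weeks, 49 s) = gcd(7862400 s, 49 s) = 7 s, which is the
        # last case checked below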
for mM in ['m', 'M']:
assert_equal(
np.promote_types(np.dtype(mM+'8[2Y]'), np.dtype(mM+'8[2Y]')),
np.dtype(mM+'8[2Y]'))
assert_equal(
np.promote_types(np.dtype(mM+'8[12Y]'), np.dtype(mM+'8[15Y]')),
np.dtype(mM+'8[3Y]'))
assert_equal(
np.promote_types(np.dtype(mM+'8[62M]'), np.dtype(mM+'8[24M]')),
np.dtype(mM+'8[2M]'))
assert_equal(
np.promote_types(np.dtype(mM+'8[1W]'), np.dtype(mM+'8[2D]')),
np.dtype(mM+'8[1D]'))
assert_equal(
np.promote_types(np.dtype(mM+'8[W]'), np.dtype(mM+'8[13s]')),
np.dtype(mM+'8[s]'))
assert_equal(
np.promote_types(np.dtype(mM+'8[13W]'), np.dtype(mM+'8[49s]')),
np.dtype(mM+'8[7s]'))
# timedelta <op> timedelta raises when there is no reasonable gcd
assert_raises(TypeError, np.promote_types,
np.dtype('m8[Y]'), np.dtype('m8[D]'))
assert_raises(TypeError, np.promote_types,
np.dtype('m8[M]'), np.dtype('m8[W]'))
# timedelta and float cannot be safely cast with each other
assert_raises(TypeError, np.promote_types, "float32", "m8")
assert_raises(TypeError, np.promote_types, "m8", "float32")
assert_raises(TypeError, np.promote_types, "uint64", "m8")
assert_raises(TypeError, np.promote_types, "m8", "uint64")
# timedelta <op> timedelta may overflow with big unit ranges
assert_raises(OverflowError, np.promote_types,
np.dtype('m8[W]'), np.dtype('m8[fs]'))
assert_raises(OverflowError, np.promote_types,
np.dtype('m8[s]'), np.dtype('m8[as]'))
def test_cast_overflow(self):
# gh-4486
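        # The 15-digit fraction forces femtosecond resolution; one year is
        # roughly 3.2e22 fs, far beyond the int64 range, so the value cannot
        # be represented and an OverflowError is raised.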
def cast():
numpy.datetime64("1971-01-01 00:00:00.000000000000000").astype("<M8[D]")
assert_raises(OverflowError, cast)
def cast2():
numpy.datetime64("2014").astype("<M8[fs]")
assert_raises(OverflowError, cast2)
def test_pyobject_roundtrip(self):
# All datetime types should be able to roundtrip through object
a = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0,
-1020040340, -2942398, -1, 0, 1, 234523453, 1199164176],
dtype=np.int64)
# With date units
for unit in ['M8[D]', 'M8[W]', 'M8[M]', 'M8[Y]']:
b = a.copy().view(dtype=unit)
b[0] = '-0001-01-01'
b[1] = '-0001-12-31'
b[2] = '0000-01-01'
b[3] = '0001-01-01'
b[4] = '1969-12-31'
b[5] = '1970-01-01'
b[6] = '9999-12-31'
b[7] = '10000-01-01'
b[8] = 'NaT'
assert_equal(b.astype(object).astype(unit), b,
"Error roundtripping unit %s" % unit)
# With time units
for unit in ['M8[as]', 'M8[16fs]', 'M8[ps]', 'M8[us]',
'M8[300as]', 'M8[20us]']:
b = a.copy().view(dtype=unit)
b[0] = '-0001-01-01T00'
b[1] = '-0001-12-31T00'
b[2] = '0000-01-01T00'
b[3] = '0001-01-01T00'
b[4] = '1969-12-31T23:59:59.999999'
b[5] = '1970-01-01T00'
b[6] = '9999-12-31T23:59:59.999999'
b[7] = '10000-01-01T00'
b[8] = 'NaT'
assert_equal(b.astype(object).astype(unit), b,
"Error roundtripping unit %s" % unit)
def test_month_truncation(self):
        # Make sure that months are truncated correctly
assert_equal(np.array('1945-03-01', dtype='M8[M]'),
np.array('1945-03-31', dtype='M8[M]'))
assert_equal(np.array('1969-11-01', dtype='M8[M]'),
np.array('1969-11-30T23:59:59.99999', dtype='M').astype('M8[M]'))
assert_equal(np.array('1969-12-01', dtype='M8[M]'),
np.array('1969-12-31T23:59:59.99999', dtype='M').astype('M8[M]'))
assert_equal(np.array('1970-01-01', dtype='M8[M]'),
np.array('1970-01-31T23:59:59.99999', dtype='M').astype('M8[M]'))
assert_equal(np.array('1980-02-01', dtype='M8[M]'),
np.array('1980-02-29T23:59:59.99999', dtype='M').astype('M8[M]'))
def test_different_unit_comparison(self):
# Check some years with date units
for unit1 in ['Y', 'M', 'D']:
dt1 = np.dtype('M8[%s]' % unit1)
for unit2 in ['Y', 'M', 'D']:
dt2 = np.dtype('M8[%s]' % unit2)
assert_equal(np.array('1945', dtype=dt1),
np.array('1945', dtype=dt2))
assert_equal(np.array('1970', dtype=dt1),
np.array('1970', dtype=dt2))
assert_equal(np.array('9999', dtype=dt1),
np.array('9999', dtype=dt2))
assert_equal(np.array('10000', dtype=dt1),
np.array('10000-01-01', dtype=dt2))
assert_equal(np.datetime64('1945', unit1),
np.datetime64('1945', unit2))
assert_equal(np.datetime64('1970', unit1),
np.datetime64('1970', unit2))
assert_equal(np.datetime64('9999', unit1),
np.datetime64('9999', unit2))
assert_equal(np.datetime64('10000', unit1),
np.datetime64('10000-01-01', unit2))
# Check some datetimes with time units
for unit1 in ['6h', 'h', 'm', 's', '10ms', 'ms', 'us']:
dt1 = np.dtype('M8[%s]' % unit1)
for unit2 in ['h', 'm', 's', 'ms', 'us']:
dt2 = np.dtype('M8[%s]' % unit2)
assert_equal(np.array('1945-03-12T18', dtype=dt1),
np.array('1945-03-12T18', dtype=dt2))
assert_equal(np.array('1970-03-12T18', dtype=dt1),
np.array('1970-03-12T18', dtype=dt2))
assert_equal(np.array('9999-03-12T18', dtype=dt1),
np.array('9999-03-12T18', dtype=dt2))
assert_equal(np.array('10000-01-01T00', dtype=dt1),
np.array('10000-01-01T00', dtype=dt2))
assert_equal(np.datetime64('1945-03-12T18', unit1),
np.datetime64('1945-03-12T18', unit2))
assert_equal(np.datetime64('1970-03-12T18', unit1),
np.datetime64('1970-03-12T18', unit2))
assert_equal(np.datetime64('9999-03-12T18', unit1),
np.datetime64('9999-03-12T18', unit2))
assert_equal(np.datetime64('10000-01-01T00', unit1),
np.datetime64('10000-01-01T00', unit2))
# Check some days with units that won't overflow
for unit1 in ['D', '12h', 'h', 'm', 's', '4s', 'ms', 'us']:
dt1 = np.dtype('M8[%s]' % unit1)
for unit2 in ['D', 'h', 'm', 's', 'ms', 'us']:
dt2 = np.dtype('M8[%s]' % unit2)
assert_(np.equal(np.array('1932-02-17', dtype='M').astype(dt1),
np.array('1932-02-17T00:00:00', dtype='M').astype(dt2),
casting='unsafe'))
assert_(np.equal(np.array('10000-04-27', dtype='M').astype(dt1),
np.array('10000-04-27T00:00:00', dtype='M').astype(dt2),
casting='unsafe'))
# Shouldn't be able to compare datetime and timedelta
# TODO: Changing to 'same_kind' or 'safe' casting in the ufuncs by
# default is needed to properly catch this kind of thing...
a = np.array('2012-12-21', dtype='M8[D]')
b = np.array(3, dtype='m8[D]')
#assert_raises(TypeError, np.less, a, b)
assert_raises(TypeError, np.less, a, b, casting='same_kind')
def test_datetime_like(self):
a = np.array([3], dtype='m8[4D]')
b = np.array(['2012-12-21'], dtype='M8[D]')
assert_equal(np.ones_like(a).dtype, a.dtype)
assert_equal(np.zeros_like(a).dtype, a.dtype)
assert_equal(np.empty_like(a).dtype, a.dtype)
assert_equal(np.ones_like(b).dtype, b.dtype)
assert_equal(np.zeros_like(b).dtype, b.dtype)
assert_equal(np.empty_like(b).dtype, b.dtype)
def test_datetime_unary(self):
for tda, tdb, tdzero, tdone, tdmone in \
[
# One-dimensional arrays
(np.array([3], dtype='m8[D]'),
np.array([-3], dtype='m8[D]'),
np.array([0], dtype='m8[D]'),
np.array([1], dtype='m8[D]'),
np.array([-1], dtype='m8[D]')),
# NumPy scalars
(np.timedelta64(3, '[D]'),
np.timedelta64(-3, '[D]'),
np.timedelta64(0, '[D]'),
np.timedelta64(1, '[D]'),
np.timedelta64(-1, '[D]'))]:
# negative ufunc
assert_equal(-tdb, tda)
assert_equal((-tdb).dtype, tda.dtype)
assert_equal(np.negative(tdb), tda)
assert_equal(np.negative(tdb).dtype, tda.dtype)
# positive ufunc
assert_equal(np.positive(tda), tda)
assert_equal(np.positive(tda).dtype, tda.dtype)
assert_equal(np.positive(tdb), tdb)
assert_equal(np.positive(tdb).dtype, tdb.dtype)
# absolute ufunc
assert_equal(np.absolute(tdb), tda)
assert_equal(np.absolute(tdb).dtype, tda.dtype)
# sign ufunc
assert_equal(np.sign(tda), tdone)
assert_equal(np.sign(tdb), tdmone)
assert_equal(np.sign(tdzero), tdzero)
assert_equal(np.sign(tda).dtype, tda.dtype)
            # The ufuncs always produce native-endian results
            assert_(np.negative(tdb).dtype.isnative)
            assert_(np.absolute(tdb).dtype.isnative)
def test_datetime_add(self):
for dta, dtb, dtc, dtnat, tda, tdb, tdc in \
[
# One-dimensional arrays
(np.array(['2012-12-21'], dtype='M8[D]'),
np.array(['2012-12-24'], dtype='M8[D]'),
np.array(['2012-12-21T11'], dtype='M8[h]'),
np.array(['NaT'], dtype='M8[D]'),
np.array([3], dtype='m8[D]'),
np.array([11], dtype='m8[h]'),
np.array([3*24 + 11], dtype='m8[h]')),
# NumPy scalars
(np.datetime64('2012-12-21', '[D]'),
np.datetime64('2012-12-24', '[D]'),
np.datetime64('2012-12-21T11', '[h]'),
np.datetime64('NaT', '[D]'),
np.timedelta64(3, '[D]'),
np.timedelta64(11, '[h]'),
np.timedelta64(3*24 + 11, '[h]'))]:
# m8 + m8
assert_equal(tda + tdb, tdc)
assert_equal((tda + tdb).dtype, np.dtype('m8[h]'))
# m8 + bool
assert_equal(tdb + True, tdb + 1)
assert_equal((tdb + True).dtype, np.dtype('m8[h]'))
# m8 + int
assert_equal(tdb + 3*24, tdc)
assert_equal((tdb + 3*24).dtype, np.dtype('m8[h]'))
# bool + m8
assert_equal(False + tdb, tdb)
assert_equal((False + tdb).dtype, np.dtype('m8[h]'))
# int + m8
assert_equal(3*24 + tdb, tdc)
assert_equal((3*24 + tdb).dtype, np.dtype('m8[h]'))
# M8 + bool
assert_equal(dta + True, dta + 1)
assert_equal(dtnat + True, dtnat)
assert_equal((dta + True).dtype, np.dtype('M8[D]'))
# M8 + int
assert_equal(dta + 3, dtb)
assert_equal(dtnat + 3, dtnat)
assert_equal((dta + 3).dtype, np.dtype('M8[D]'))
# bool + M8
assert_equal(False + dta, dta)
assert_equal(False + dtnat, dtnat)
assert_equal((False + dta).dtype, np.dtype('M8[D]'))
# int + M8
assert_equal(3 + dta, dtb)
assert_equal(3 + dtnat, dtnat)
assert_equal((3 + dta).dtype, np.dtype('M8[D]'))
# M8 + m8
assert_equal(dta + tda, dtb)
assert_equal(dtnat + tda, dtnat)
assert_equal((dta + tda).dtype, np.dtype('M8[D]'))
# m8 + M8
assert_equal(tda + dta, dtb)
assert_equal(tda + dtnat, dtnat)
assert_equal((tda + dta).dtype, np.dtype('M8[D]'))
# In M8 + m8, the result goes to higher precision
assert_equal(np.add(dta, tdb, casting='unsafe'), dtc)
assert_equal(np.add(dta, tdb, casting='unsafe').dtype,
np.dtype('M8[h]'))
assert_equal(np.add(tdb, dta, casting='unsafe'), dtc)
assert_equal(np.add(tdb, dta, casting='unsafe').dtype,
np.dtype('M8[h]'))
# M8 + M8
assert_raises(TypeError, np.add, dta, dtb)
def test_datetime_subtract(self):
for dta, dtb, dtc, dtd, dte, dtnat, tda, tdb, tdc in \
[
# One-dimensional arrays
(np.array(['2012-12-21'], dtype='M8[D]'),
np.array(['2012-12-24'], dtype='M8[D]'),
np.array(['1940-12-24'], dtype='M8[D]'),
np.array(['1940-12-24T00'], dtype='M8[h]'),
np.array(['1940-12-23T13'], dtype='M8[h]'),
np.array(['NaT'], dtype='M8[D]'),
np.array([3], dtype='m8[D]'),
np.array([11], dtype='m8[h]'),
np.array([3*24 - 11], dtype='m8[h]')),
# NumPy scalars
(np.datetime64('2012-12-21', '[D]'),
np.datetime64('2012-12-24', '[D]'),
np.datetime64('1940-12-24', '[D]'),
np.datetime64('1940-12-24T00', '[h]'),
np.datetime64('1940-12-23T13', '[h]'),
np.datetime64('NaT', '[D]'),
np.timedelta64(3, '[D]'),
np.timedelta64(11, '[h]'),
np.timedelta64(3*24 - 11, '[h]'))]:
# m8 - m8
assert_equal(tda - tdb, tdc)
assert_equal((tda - tdb).dtype, np.dtype('m8[h]'))
assert_equal(tdb - tda, -tdc)
assert_equal((tdb - tda).dtype, np.dtype('m8[h]'))
# m8 - bool
assert_equal(tdc - True, tdc - 1)
assert_equal((tdc - True).dtype, np.dtype('m8[h]'))
# m8 - int
assert_equal(tdc - 3*24, -tdb)
assert_equal((tdc - 3*24).dtype, np.dtype('m8[h]'))
# int - m8
assert_equal(False - tdb, -tdb)
assert_equal((False - tdb).dtype, np.dtype('m8[h]'))
# int - m8
assert_equal(3*24 - tdb, tdc)
assert_equal((3*24 - tdb).dtype, np.dtype('m8[h]'))
# M8 - bool
assert_equal(dtb - True, dtb - 1)
assert_equal(dtnat - True, dtnat)
assert_equal((dtb - True).dtype, np.dtype('M8[D]'))
# M8 - int
assert_equal(dtb - 3, dta)
assert_equal(dtnat - 3, dtnat)
assert_equal((dtb - 3).dtype, np.dtype('M8[D]'))
# M8 - m8
assert_equal(dtb - tda, dta)
assert_equal(dtnat - tda, dtnat)
assert_equal((dtb - tda).dtype, np.dtype('M8[D]'))
# In M8 - m8, the result goes to higher precision
assert_equal(np.subtract(dtc, tdb, casting='unsafe'), dte)
assert_equal(np.subtract(dtc, tdb, casting='unsafe').dtype,
np.dtype('M8[h]'))
            # M8 - M8 with different units goes to higher precision
assert_equal(np.subtract(dtc, dtd, casting='unsafe'),
np.timedelta64(0, 'h'))
assert_equal(np.subtract(dtc, dtd, casting='unsafe').dtype,
np.dtype('m8[h]'))
assert_equal(np.subtract(dtd, dtc, casting='unsafe'),
np.timedelta64(0, 'h'))
assert_equal(np.subtract(dtd, dtc, casting='unsafe').dtype,
np.dtype('m8[h]'))
# m8 - M8
assert_raises(TypeError, np.subtract, tda, dta)
# bool - M8
assert_raises(TypeError, np.subtract, False, dta)
# int - M8
assert_raises(TypeError, np.subtract, 3, dta)
def test_datetime_multiply(self):
for dta, tda, tdb, tdc in \
[
# One-dimensional arrays
(np.array(['2012-12-21'], dtype='M8[D]'),
np.array([6], dtype='m8[h]'),
np.array([9], dtype='m8[h]'),
np.array([12], dtype='m8[h]')),
# NumPy scalars
(np.datetime64('2012-12-21', '[D]'),
np.timedelta64(6, '[h]'),
np.timedelta64(9, '[h]'),
np.timedelta64(12, '[h]'))]:
# m8 * int
assert_equal(tda * 2, tdc)
assert_equal((tda * 2).dtype, np.dtype('m8[h]'))
# int * m8
assert_equal(2 * tda, tdc)
assert_equal((2 * tda).dtype, np.dtype('m8[h]'))
# m8 * float
assert_equal(tda * 1.5, tdb)
assert_equal((tda * 1.5).dtype, np.dtype('m8[h]'))
# float * m8
assert_equal(1.5 * tda, tdb)
assert_equal((1.5 * tda).dtype, np.dtype('m8[h]'))
# m8 * m8
assert_raises(TypeError, np.multiply, tda, tdb)
# m8 * M8
assert_raises(TypeError, np.multiply, dta, tda)
# M8 * m8
assert_raises(TypeError, np.multiply, tda, dta)
# M8 * int
assert_raises(TypeError, np.multiply, dta, 2)
# int * M8
assert_raises(TypeError, np.multiply, 2, dta)
# M8 * float
assert_raises(TypeError, np.multiply, dta, 1.5)
# float * M8
assert_raises(TypeError, np.multiply, 1.5, dta)
# NaTs
with suppress_warnings() as sup:
sup.filter(RuntimeWarning, "invalid value encountered in multiply")
nat = np.timedelta64('NaT')
def check(a, b, res):
assert_equal(a * b, res)
assert_equal(b * a, res)
for tp in (int, float):
check(nat, tp(2), nat)
check(nat, tp(0), nat)
for f in (float('inf'), float('nan')):
check(np.timedelta64(1), f, nat)
check(np.timedelta64(0), f, nat)
check(nat, f, nat)
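    # The floor-division cases below follow Python's floor semantics (round
    # toward negative infinity), and operands with different units are brought
    # to a common unit first, e.g. 1 m // 31 s == 60 s // 31 s == 1.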
@pytest.mark.parametrize("op1, op2, exp", [
# m8 same units round down
(np.timedelta64(7, 's'),
np.timedelta64(4, 's'),
1),
# m8 same units round down with negative
(np.timedelta64(7, 's'),
np.timedelta64(-4, 's'),
-2),
# m8 same units negative no round down
(np.timedelta64(8, 's'),
np.timedelta64(-4, 's'),
-2),
# m8 different units
(np.timedelta64(1, 'm'),
np.timedelta64(31, 's'),
1),
# m8 generic units
(np.timedelta64(1890),
np.timedelta64(31),
60),
# Y // M works
(np.timedelta64(2, 'Y'),
np.timedelta64('13', 'M'),
1),
# handle 1D arrays
(np.array([1, 2, 3], dtype='m8'),
np.array([2], dtype='m8'),
np.array([0, 1, 1], dtype=np.int64)),
])
def test_timedelta_floor_divide(self, op1, op2, exp):
assert_equal(op1 // op2, exp)
@pytest.mark.parametrize("op1, op2", [
# div by 0
(np.timedelta64(10, 'us'),
np.timedelta64(0, 'us')),
# div with NaT
(np.timedelta64('NaT'),
np.timedelta64(50, 'us')),
# special case for int64 min
# in integer floor division
(np.timedelta64(np.iinfo(np.int64).min),
np.timedelta64(-1)),
])
def test_timedelta_floor_div_warnings(self, op1, op2):
with assert_warns(RuntimeWarning):
actual = op1 // op2
assert_equal(actual, 0)
assert_equal(actual.dtype, np.int64)
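    # 9007199254740993 == 2**53 + 1, the first positive integer that a float64
    # cannot represent exactly, so these cases would lose precision if the
    # division went through double.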
@pytest.mark.parametrize("val1, val2", [
# the smallest integer that can't be represented
# exactly in a double should be preserved if we avoid
# casting to double in floordiv operation
(9007199254740993, 1),
# stress the alternate floordiv code path where
# operand signs don't match and remainder isn't 0
(9007199254740999, -2),
])
def test_timedelta_floor_div_precision(self, val1, val2):
op1 = np.timedelta64(val1)
op2 = np.timedelta64(val2)
actual = op1 // op2
# Python reference integer floor
expected = val1 // val2
assert_equal(actual, expected)
@pytest.mark.parametrize("val1, val2", [
# years and months sometimes can't be unambiguously
# divided for floor division operation
(np.timedelta64(7, 'Y'),
np.timedelta64(3, 's')),
(np.timedelta64(7, 'M'),
np.timedelta64(1, 'D')),
])
def test_timedelta_floor_div_error(self, val1, val2):
with assert_raises_regex(TypeError, "common metadata divisor"):
val1 // val2
@pytest.mark.parametrize("op1, op2", [
# reuse the test cases from floordiv
(np.timedelta64(7, 's'),
np.timedelta64(4, 's')),
# m8 same units round down with negative
(np.timedelta64(7, 's'),
np.timedelta64(-4, 's')),
# m8 same units negative no round down
(np.timedelta64(8, 's'),
np.timedelta64(-4, 's')),
# m8 different units
(np.timedelta64(1, 'm'),
np.timedelta64(31, 's')),
# m8 generic units
(np.timedelta64(1890),
np.timedelta64(31)),
# Y // M works
(np.timedelta64(2, 'Y'),
np.timedelta64('13', 'M')),
# handle 1D arrays
(np.array([1, 2, 3], dtype='m8'),
np.array([2], dtype='m8')),
])
def test_timedelta_divmod(self, op1, op2):
expected = (op1 // op2, op1 % op2)
assert_equal(divmod(op1, op2), expected)
@pytest.mark.parametrize("op1, op2", [
# reuse cases from floordiv
# div by 0
(np.timedelta64(10, 'us'),
np.timedelta64(0, 'us')),
# div with NaT
(np.timedelta64('NaT'),
np.timedelta64(50, 'us')),
# special case for int64 min
# in integer floor division
(np.timedelta64(np.iinfo(np.int64).min),
np.timedelta64(-1)),
])
def test_timedelta_divmod_warnings(self, op1, op2):
with assert_warns(RuntimeWarning):
expected = (op1 // op2, op1 % op2)
with assert_warns(RuntimeWarning):
actual = divmod(op1, op2)
assert_equal(actual, expected)
def test_datetime_divide(self):
for dta, tda, tdb, tdc, tdd in \
[
# One-dimensional arrays
(np.array(['2012-12-21'], dtype='M8[D]'),
np.array([6], dtype='m8[h]'),
np.array([9], dtype='m8[h]'),
np.array([12], dtype='m8[h]'),
np.array([6], dtype='m8[m]')),
# NumPy scalars
(np.datetime64('2012-12-21', '[D]'),
np.timedelta64(6, '[h]'),
np.timedelta64(9, '[h]'),
np.timedelta64(12, '[h]'),
np.timedelta64(6, '[m]'))]:
# m8 / int
assert_equal(tdc / 2, tda)
assert_equal((tdc / 2).dtype, np.dtype('m8[h]'))
# m8 / float
assert_equal(tda / 0.5, tdc)
assert_equal((tda / 0.5).dtype, np.dtype('m8[h]'))
# m8 / m8
assert_equal(tda / tdb, 6.0 / 9.0)
assert_equal(np.divide(tda, tdb), 6.0 / 9.0)
assert_equal(np.true_divide(tda, tdb), 6.0 / 9.0)
assert_equal(tdb / tda, 9.0 / 6.0)
assert_equal((tda / tdb).dtype, np.dtype('f8'))
assert_equal(tda / tdd, 60.0)
assert_equal(tdd / tda, 1.0 / 60.0)
# int / m8
assert_raises(TypeError, np.divide, 2, tdb)
# float / m8
assert_raises(TypeError, np.divide, 0.5, tdb)
# m8 / M8
assert_raises(TypeError, np.divide, dta, tda)
# M8 / m8
assert_raises(TypeError, np.divide, tda, dta)
# M8 / int
assert_raises(TypeError, np.divide, dta, 2)
# int / M8
assert_raises(TypeError, np.divide, 2, dta)
# M8 / float
assert_raises(TypeError, np.divide, dta, 1.5)
# float / M8
assert_raises(TypeError, np.divide, 1.5, dta)
# NaTs
with suppress_warnings() as sup:
sup.filter(RuntimeWarning, r".*encountered in true\_divide")
nat = np.timedelta64('NaT')
for tp in (int, float):
assert_equal(np.timedelta64(1) / tp(0), nat)
assert_equal(np.timedelta64(0) / tp(0), nat)
assert_equal(nat / tp(0), nat)
assert_equal(nat / tp(2), nat)
# Division by inf
assert_equal(np.timedelta64(1) / float('inf'), np.timedelta64(0))
assert_equal(np.timedelta64(0) / float('inf'), np.timedelta64(0))
assert_equal(nat / float('inf'), nat)
# Division by nan
assert_equal(np.timedelta64(1) / float('nan'), nat)
assert_equal(np.timedelta64(0) / float('nan'), nat)
assert_equal(nat / float('nan'), nat)
def test_datetime_compare(self):
# Test all the comparison operators
a = np.datetime64('2000-03-12T18:00:00.000000')
b = np.array(['2000-03-12T18:00:00.000000',
'2000-03-12T17:59:59.999999',
'2000-03-12T18:00:00.000001',
'1970-01-11T12:00:00.909090',
'2016-01-11T12:00:00.909090'],
dtype='datetime64[us]')
assert_equal(np.equal(a, b), [1, 0, 0, 0, 0])
assert_equal(np.not_equal(a, b), [0, 1, 1, 1, 1])
assert_equal(np.less(a, b), [0, 0, 1, 0, 1])
assert_equal(np.less_equal(a, b), [1, 0, 1, 0, 1])
assert_equal(np.greater(a, b), [0, 1, 0, 1, 0])
assert_equal(np.greater_equal(a, b), [1, 1, 0, 1, 0])
def test_datetime_compare_nat(self):
dt_nat = np.datetime64('NaT', 'D')
dt_other = np.datetime64('2000-01-01')
td_nat = np.timedelta64('NaT', 'h')
td_other = np.timedelta64(1, 'h')
for op in [np.equal, np.less, np.less_equal,
np.greater, np.greater_equal]:
assert_(not op(dt_nat, dt_nat))
assert_(not op(dt_nat, dt_other))
assert_(not op(dt_other, dt_nat))
assert_(not op(td_nat, td_nat))
assert_(not op(td_nat, td_other))
assert_(not op(td_other, td_nat))
assert_(np.not_equal(dt_nat, dt_nat))
assert_(np.not_equal(dt_nat, dt_other))
assert_(np.not_equal(dt_other, dt_nat))
assert_(np.not_equal(td_nat, td_nat))
assert_(np.not_equal(td_nat, td_other))
assert_(np.not_equal(td_other, td_nat))
def test_datetime_minmax(self):
# The metadata of the result should become the GCD
# of the operand metadata
a = np.array('1999-03-12T13', dtype='M8[2m]')
b = np.array('1999-03-12T12', dtype='M8[s]')
assert_equal(np.minimum(a, b), b)
assert_equal(np.minimum(a, b).dtype, np.dtype('M8[s]'))
assert_equal(np.fmin(a, b), b)
assert_equal(np.fmin(a, b).dtype, np.dtype('M8[s]'))
assert_equal(np.maximum(a, b), a)
assert_equal(np.maximum(a, b).dtype, np.dtype('M8[s]'))
assert_equal(np.fmax(a, b), a)
assert_equal(np.fmax(a, b).dtype, np.dtype('M8[s]'))
# Viewed as integers, the comparison is opposite because
# of the units chosen
assert_equal(np.minimum(a.view('i8'), b.view('i8')), a.view('i8'))
# Interaction with NaT
a = np.array('1999-03-12T13', dtype='M8[2m]')
dtnat = np.array('NaT', dtype='M8[h]')
assert_equal(np.minimum(a, dtnat), dtnat)
assert_equal(np.minimum(dtnat, a), dtnat)
assert_equal(np.maximum(a, dtnat), dtnat)
assert_equal(np.maximum(dtnat, a), dtnat)
assert_equal(np.fmin(dtnat, a), a)
assert_equal(np.fmin(a, dtnat), a)
assert_equal(np.fmax(dtnat, a), a)
assert_equal(np.fmax(a, dtnat), a)
# Also do timedelta
a = np.array(3, dtype='m8[h]')
b = np.array(3*3600 - 3, dtype='m8[s]')
assert_equal(np.minimum(a, b), b)
assert_equal(np.minimum(a, b).dtype, np.dtype('m8[s]'))
assert_equal(np.fmin(a, b), b)
assert_equal(np.fmin(a, b).dtype, np.dtype('m8[s]'))
assert_equal(np.maximum(a, b), a)
assert_equal(np.maximum(a, b).dtype, np.dtype('m8[s]'))
assert_equal(np.fmax(a, b), a)
assert_equal(np.fmax(a, b).dtype, np.dtype('m8[s]'))
# Viewed as integers, the comparison is opposite because
# of the units chosen
assert_equal(np.minimum(a.view('i8'), b.view('i8')), a.view('i8'))
# should raise between datetime and timedelta
#
# TODO: Allowing unsafe casting by
# default in ufuncs strikes again... :(
a = np.array(3, dtype='m8[h]')
b = np.array('1999-03-12T12', dtype='M8[s]')
#assert_raises(TypeError, np.minimum, a, b)
#assert_raises(TypeError, np.maximum, a, b)
#assert_raises(TypeError, np.fmin, a, b)
#assert_raises(TypeError, np.fmax, a, b)
assert_raises(TypeError, np.minimum, a, b, casting='same_kind')
assert_raises(TypeError, np.maximum, a, b, casting='same_kind')
assert_raises(TypeError, np.fmin, a, b, casting='same_kind')
assert_raises(TypeError, np.fmax, a, b, casting='same_kind')
def test_hours(self):
t = np.ones(3, dtype='M8[s]')
t[0] = 60*60*24 + 60*60*10
assert_(t[0].item().hour == 10)
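    # The 'M8[unit/N]' divisor syntax splits a unit into N equal pieces and the
    # dtype is normalized to an exact finer-grained unit, e.g. a quarter of a
    # year becomes 3 months.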
def test_divisor_conversion_year(self):
assert_(np.dtype('M8[Y/4]') == np.dtype('M8[3M]'))
assert_(np.dtype('M8[Y/13]') == np.dtype('M8[4W]'))
assert_(np.dtype('M8[3Y/73]') == np.dtype('M8[15D]'))
def test_divisor_conversion_month(self):
assert_(np.dtype('M8[M/2]') == np.dtype('M8[2W]'))
assert_(np.dtype('M8[M/15]') == np.dtype('M8[2D]'))
assert_(np.dtype('M8[3M/40]') == np.dtype('M8[54h]'))
def test_divisor_conversion_week(self):
assert_(np.dtype('m8[W/7]') == np.dtype('m8[D]'))
assert_(np.dtype('m8[3W/14]') == np.dtype('m8[36h]'))
assert_(np.dtype('m8[5W/140]') == np.dtype('m8[360m]'))
def test_divisor_conversion_day(self):
assert_(np.dtype('M8[D/12]') == np.dtype('M8[2h]'))
assert_(np.dtype('M8[D/120]') == np.dtype('M8[12m]'))
assert_(np.dtype('M8[3D/960]') == np.dtype('M8[270s]'))
def test_divisor_conversion_hour(self):
assert_(np.dtype('m8[h/30]') == np.dtype('m8[2m]'))
assert_(np.dtype('m8[3h/300]') == np.dtype('m8[36s]'))
def test_divisor_conversion_minute(self):
assert_(np.dtype('m8[m/30]') == np.dtype('m8[2s]'))
assert_(np.dtype('m8[3m/300]') == np.dtype('m8[600ms]'))
    def test_divisor_conversion_second(self):
assert_(np.dtype('m8[s/100]') == np.dtype('m8[10ms]'))
assert_(np.dtype('m8[3s/10000]') == np.dtype('m8[300us]'))
def test_divisor_conversion_fs(self):
assert_(np.dtype('M8[fs/100]') == np.dtype('M8[10as]'))
assert_raises(ValueError, lambda: np.dtype('M8[3fs/10000]'))
def test_divisor_conversion_as(self):
assert_raises(ValueError, lambda: np.dtype('M8[as/10]'))
def test_string_parser_variants(self):
# Allow space instead of 'T' between date and time
assert_equal(np.array(['1980-02-29T01:02:03'], np.dtype('M8[s]')),
np.array(['1980-02-29 01:02:03'], np.dtype('M8[s]')))
# Allow positive years
assert_equal(np.array(['+1980-02-29T01:02:03'], np.dtype('M8[s]')),
np.array(['+1980-02-29 01:02:03'], np.dtype('M8[s]')))
# Allow negative years
assert_equal(np.array(['-1980-02-29T01:02:03'], np.dtype('M8[s]')),
np.array(['-1980-02-29 01:02:03'], np.dtype('M8[s]')))
# UTC specifier
with assert_warns(DeprecationWarning):
assert_equal(
np.array(['+1980-02-29T01:02:03'], np.dtype('M8[s]')),
np.array(['+1980-02-29 01:02:03Z'], np.dtype('M8[s]')))
with assert_warns(DeprecationWarning):
assert_equal(
np.array(['-1980-02-29T01:02:03'], np.dtype('M8[s]')),
np.array(['-1980-02-29 01:02:03Z'], np.dtype('M8[s]')))
# Time zone offset
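        # Offsets are folded into the stored UTC value, e.g. 00:32:03 at
        # UTC-01:30 is 02:02:03 UTC.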
with assert_warns(DeprecationWarning):
assert_equal(
np.array(['1980-02-29T02:02:03'], np.dtype('M8[s]')),
np.array(['1980-02-29 00:32:03-0130'], np.dtype('M8[s]')))
with assert_warns(DeprecationWarning):
assert_equal(
np.array(['1980-02-28T22:32:03'], np.dtype('M8[s]')),
np.array(['1980-02-29 00:02:03+01:30'], np.dtype('M8[s]')))
with assert_warns(DeprecationWarning):
assert_equal(
np.array(['1980-02-29T02:32:03.506'], np.dtype('M8[s]')),
np.array(['1980-02-29 00:32:03.506-02'], np.dtype('M8[s]')))
with assert_warns(DeprecationWarning):
assert_equal(np.datetime64('1977-03-02T12:30-0230'),
np.datetime64('1977-03-02T15:00'))
def test_string_parser_error_check(self):
# Arbitrary bad string
assert_raises(ValueError, np.array, ['badvalue'], np.dtype('M8[us]'))
# Character after year must be '-'
assert_raises(ValueError, np.array, ['1980X'], np.dtype('M8[us]'))
# Cannot have trailing '-'
assert_raises(ValueError, np.array, ['1980-'], np.dtype('M8[us]'))
# Month must be in range [1,12]
assert_raises(ValueError, np.array, ['1980-00'], np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-13'], np.dtype('M8[us]'))
# Month must have two digits
assert_raises(ValueError, np.array, ['1980-1'], np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-1-02'], np.dtype('M8[us]'))
# 'Mor' is not a valid month
assert_raises(ValueError, np.array, ['1980-Mor'], np.dtype('M8[us]'))
# Cannot have trailing '-'
assert_raises(ValueError, np.array, ['1980-01-'], np.dtype('M8[us]'))
# Day must be in range [1,len(month)]
assert_raises(ValueError, np.array, ['1980-01-0'], np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-01-00'], np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-01-32'], np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1979-02-29'], np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-02-30'], np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-03-32'], np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-04-31'], np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-05-32'], np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-06-31'], np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-07-32'], np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-08-32'], np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-09-31'], np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-10-32'], np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-11-31'], np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-12-32'], np.dtype('M8[us]'))
# Cannot have trailing characters
assert_raises(ValueError, np.array, ['1980-02-03%'],
np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-02-03 q'],
np.dtype('M8[us]'))
# Hours must be in range [0, 23]
assert_raises(ValueError, np.array, ['1980-02-03 25'],
np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-02-03T25'],
np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-02-03 24:01'],
np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-02-03T24:01'],
np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-02-03 -1'],
np.dtype('M8[us]'))
# No trailing ':'
assert_raises(ValueError, np.array, ['1980-02-03 01:'],
np.dtype('M8[us]'))
# Minutes must be in range [0, 59]
assert_raises(ValueError, np.array, ['1980-02-03 01:-1'],
np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-02-03 01:60'],
np.dtype('M8[us]'))
# No trailing ':'
assert_raises(ValueError, np.array, ['1980-02-03 01:60:'],
np.dtype('M8[us]'))
# Seconds must be in range [0, 59]
assert_raises(ValueError, np.array, ['1980-02-03 01:10:-1'],
np.dtype('M8[us]'))
assert_raises(ValueError, np.array, ['1980-02-03 01:01:60'],
np.dtype('M8[us]'))
        # Timezone offset must be within a reasonable range
with assert_warns(DeprecationWarning):
assert_raises(ValueError, np.array, ['1980-02-03 01:01:00+0661'],
np.dtype('M8[us]'))
with assert_warns(DeprecationWarning):
assert_raises(ValueError, np.array, ['1980-02-03 01:01:00+2500'],
np.dtype('M8[us]'))
with assert_warns(DeprecationWarning):
assert_raises(ValueError, np.array, ['1980-02-03 01:01:00-0070'],
np.dtype('M8[us]'))
with assert_warns(DeprecationWarning):
assert_raises(ValueError, np.array, ['1980-02-03 01:01:00-3000'],
np.dtype('M8[us]'))
with assert_warns(DeprecationWarning):
assert_raises(ValueError, np.array, ['1980-02-03 01:01:00-25:00'],
np.dtype('M8[us]'))
def test_creation_overflow(self):
date = '1980-03-23 20:00:00'
timesteps = np.array([date], dtype='datetime64[s]')[0].astype(np.int64)
for unit in ['ms', 'us', 'ns']:
timesteps *= 1000
x = np.array([date], dtype='datetime64[%s]' % unit)
assert_equal(timesteps, x[0].astype(np.int64),
err_msg='Datetime conversion error for unit %s' % unit)
assert_equal(x[0].astype(np.int64), 322689600000000000)
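        # 1980-03-23T20:00:00 is 3734 days + 20 h after the epoch, i.e.
        # 322,689,600 s, which in nanoseconds is the value checked above.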
# gh-13062
with pytest.raises(OverflowError):
np.datetime64(2**64, 'D')
with pytest.raises(OverflowError):
np.timedelta64(2**64, 'D')
def test_datetime_as_string(self):
# Check all the units with default string conversion
date = '1959-10-13'
datetime = '1959-10-13T12:34:56.789012345678901234'
assert_equal(np.datetime_as_string(np.datetime64(date, 'Y')),
'1959')
assert_equal(np.datetime_as_string(np.datetime64(date, 'M')),
'1959-10')
assert_equal(np.datetime_as_string(np.datetime64(date, 'D')),
'1959-10-13')
assert_equal(np.datetime_as_string(np.datetime64(datetime, 'h')),
'1959-10-13T12')
assert_equal(np.datetime_as_string(np.datetime64(datetime, 'm')),
'1959-10-13T12:34')
assert_equal(np.datetime_as_string(np.datetime64(datetime, 's')),
'1959-10-13T12:34:56')
assert_equal(np.datetime_as_string(np.datetime64(datetime, 'ms')),
'1959-10-13T12:34:56.789')
assert_equal(np.datetime_as_string(np.datetime64(datetime, 'us')),
'1959-10-13T12:34:56.789012')
datetime = '1969-12-31T23:34:56.789012345678901234'
assert_equal(np.datetime_as_string(np.datetime64(datetime, 'ns')),
'1969-12-31T23:34:56.789012345')
assert_equal(np.datetime_as_string(np.datetime64(datetime, 'ps')),
'1969-12-31T23:34:56.789012345678')
assert_equal(np.datetime_as_string(np.datetime64(datetime, 'fs')),
'1969-12-31T23:34:56.789012345678901')
datetime = '1969-12-31T23:59:57.789012345678901234'
assert_equal(np.datetime_as_string(np.datetime64(datetime, 'as')),
datetime)
datetime = '1970-01-01T00:34:56.789012345678901234'
assert_equal(np.datetime_as_string(np.datetime64(datetime, 'ns')),
'1970-01-01T00:34:56.789012345')
assert_equal(np.datetime_as_string(np.datetime64(datetime, 'ps')),
'1970-01-01T00:34:56.789012345678')
assert_equal(np.datetime_as_string(np.datetime64(datetime, 'fs')),
'1970-01-01T00:34:56.789012345678901')
datetime = '1970-01-01T00:00:05.789012345678901234'
assert_equal(np.datetime_as_string(np.datetime64(datetime, 'as')),
datetime)
# String conversion with the unit= parameter
a = np.datetime64('2032-07-18T12:23:34.123456', 'us')
assert_equal(np.datetime_as_string(a, unit='Y', casting='unsafe'),
'2032')
assert_equal(np.datetime_as_string(a, unit='M', casting='unsafe'),
'2032-07')
assert_equal(np.datetime_as_string(a, unit='W', casting='unsafe'),
'2032-07-18')
assert_equal(np.datetime_as_string(a, unit='D', casting='unsafe'),
'2032-07-18')
assert_equal(np.datetime_as_string(a, unit='h'), '2032-07-18T12')
assert_equal(np.datetime_as_string(a, unit='m'),
'2032-07-18T12:23')
assert_equal(np.datetime_as_string(a, unit='s'),
'2032-07-18T12:23:34')
assert_equal(np.datetime_as_string(a, unit='ms'),
'2032-07-18T12:23:34.123')
assert_equal(np.datetime_as_string(a, unit='us'),
'2032-07-18T12:23:34.123456')
assert_equal(np.datetime_as_string(a, unit='ns'),
'2032-07-18T12:23:34.123456000')
assert_equal(np.datetime_as_string(a, unit='ps'),
'2032-07-18T12:23:34.123456000000')
assert_equal(np.datetime_as_string(a, unit='fs'),
'2032-07-18T12:23:34.123456000000000')
assert_equal(np.datetime_as_string(a, unit='as'),
'2032-07-18T12:23:34.123456000000000000')
# unit='auto' parameter
assert_equal(np.datetime_as_string(
np.datetime64('2032-07-18T12:23:34.123456', 'us'), unit='auto'),
'2032-07-18T12:23:34.123456')
assert_equal(np.datetime_as_string(
np.datetime64('2032-07-18T12:23:34.12', 'us'), unit='auto'),
'2032-07-18T12:23:34.120')
assert_equal(np.datetime_as_string(
np.datetime64('2032-07-18T12:23:34', 'us'), unit='auto'),
'2032-07-18T12:23:34')
assert_equal(np.datetime_as_string(
np.datetime64('2032-07-18T12:23:00', 'us'), unit='auto'),
'2032-07-18T12:23')
# 'auto' doesn't split up hour and minute
assert_equal(np.datetime_as_string(
np.datetime64('2032-07-18T12:00:00', 'us'), unit='auto'),
'2032-07-18T12:00')
assert_equal(np.datetime_as_string(
np.datetime64('2032-07-18T00:00:00', 'us'), unit='auto'),
'2032-07-18')
# 'auto' doesn't split up the date
assert_equal(np.datetime_as_string(
np.datetime64('2032-07-01T00:00:00', 'us'), unit='auto'),
'2032-07-01')
assert_equal(np.datetime_as_string(
np.datetime64('2032-01-01T00:00:00', 'us'), unit='auto'),
'2032-01-01')
@pytest.mark.skipif(not _has_pytz, reason="The pytz module is not available.")
def test_datetime_as_string_timezone(self):
# timezone='local' vs 'UTC'
a = np.datetime64('2010-03-15T06:30', 'm')
assert_equal(np.datetime_as_string(a),
'2010-03-15T06:30')
assert_equal(np.datetime_as_string(a, timezone='naive'),
'2010-03-15T06:30')
assert_equal(np.datetime_as_string(a, timezone='UTC'),
'2010-03-15T06:30Z')
assert_(np.datetime_as_string(a, timezone='local') !=
'2010-03-15T06:30')
b = np.datetime64('2010-02-15T06:30', 'm')
assert_equal(np.datetime_as_string(a, timezone=tz('US/Central')),
'2010-03-15T01:30-0500')
assert_equal(np.datetime_as_string(a, timezone=tz('US/Eastern')),
'2010-03-15T02:30-0400')
assert_equal(np.datetime_as_string(a, timezone=tz('US/Pacific')),
'2010-03-14T23:30-0700')
assert_equal(np.datetime_as_string(b, timezone=tz('US/Central')),
'2010-02-15T00:30-0600')
assert_equal(np.datetime_as_string(b, timezone=tz('US/Eastern')),
'2010-02-15T01:30-0500')
assert_equal(np.datetime_as_string(b, timezone=tz('US/Pacific')),
'2010-02-14T22:30-0800')
        # Converting dates to strings with a timezone attached is disabled by default
assert_raises(TypeError, np.datetime_as_string, a, unit='D',
timezone=tz('US/Pacific'))
# Check that we can print out the date in the specified time zone
assert_equal(np.datetime_as_string(a, unit='D',
timezone=tz('US/Pacific'), casting='unsafe'),
'2010-03-14')
assert_equal(np.datetime_as_string(b, unit='D',
timezone=tz('US/Central'), casting='unsafe'),
'2010-02-15')
def test_datetime_arange(self):
# With two datetimes provided as strings
a = np.arange('2010-01-05', '2010-01-10', dtype='M8[D]')
assert_equal(a.dtype, np.dtype('M8[D]'))
assert_equal(a,
np.array(['2010-01-05', '2010-01-06', '2010-01-07',
'2010-01-08', '2010-01-09'], dtype='M8[D]'))
a = np.arange('1950-02-10', '1950-02-06', -1, dtype='M8[D]')
assert_equal(a.dtype, np.dtype('M8[D]'))
assert_equal(a,
np.array(['1950-02-10', '1950-02-09', '1950-02-08',
'1950-02-07'], dtype='M8[D]'))
# Unit should be detected as months here
a = np.arange('1969-05', '1970-05', 2, dtype='M8')
assert_equal(a.dtype, np.dtype('M8[M]'))
assert_equal(a,
np.datetime64('1969-05') + np.arange(12, step=2))
        # A (datetime, integer|timedelta) pair works as well;
        # it produces arange(start, start + stop) in this case
a = np.arange('1969', 18, 3, dtype='M8')
assert_equal(a.dtype, np.dtype('M8[Y]'))
assert_equal(a,
np.datetime64('1969') + np.arange(18, step=3))
a = np.arange('1969-12-19', 22, np.timedelta64(2), dtype='M8')
assert_equal(a.dtype, np.dtype('M8[D]'))
assert_equal(a,
np.datetime64('1969-12-19') + np.arange(22, step=2))
# Step of 0 is disallowed
assert_raises(ValueError, np.arange, np.datetime64('today'),
np.datetime64('today') + 3, 0)
# Promotion across nonlinear unit boundaries is disallowed
assert_raises(TypeError, np.arange, np.datetime64('2011-03-01', 'D'),
np.timedelta64(5, 'M'))
assert_raises(TypeError, np.arange,
np.datetime64('2012-02-03T14', 's'),
np.timedelta64(5, 'Y'))
def test_datetime_arange_no_dtype(self):
d = np.array('2010-01-04', dtype="M8[D]")
assert_equal(np.arange(d, d + 1), d)
assert_raises(ValueError, np.arange, d)
def test_timedelta_arange(self):
a = np.arange(3, 10, dtype='m8')
assert_equal(a.dtype, np.dtype('m8'))
assert_equal(a, np.timedelta64(0) + np.arange(3, 10))
a = np.arange(np.timedelta64(3, 's'), 10, 2, dtype='m8')
assert_equal(a.dtype, np.dtype('m8[s]'))
assert_equal(a, np.timedelta64(0, 's') + np.arange(3, 10, 2))
# Step of 0 is disallowed
assert_raises(ValueError, np.arange, np.timedelta64(0),
np.timedelta64(5), 0)
# Promotion across nonlinear unit boundaries is disallowed
assert_raises(TypeError, np.arange, np.timedelta64(0, 'D'),
np.timedelta64(5, 'M'))
assert_raises(TypeError, np.arange, np.timedelta64(0, 'Y'),
np.timedelta64(5, 'D'))
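    # The modulus cases below mirror Python's % sign convention: the result
    # takes the sign of the divisor, e.g. 3 s % -2 s == -1 s.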
@pytest.mark.parametrize("val1, val2, expected", [
# case from gh-12092
(np.timedelta64(7, 's'),
np.timedelta64(3, 's'),
np.timedelta64(1, 's')),
# negative value cases
(np.timedelta64(3, 's'),
np.timedelta64(-2, 's'),
np.timedelta64(-1, 's')),
(np.timedelta64(-3, 's'),
np.timedelta64(2, 's'),
np.timedelta64(1, 's')),
# larger value cases
(np.timedelta64(17, 's'),
np.timedelta64(22, 's'),
np.timedelta64(17, 's')),
(np.timedelta64(22, 's'),
np.timedelta64(17, 's'),
np.timedelta64(5, 's')),
# different units
(np.timedelta64(1, 'm'),
np.timedelta64(57, 's'),
np.timedelta64(3, 's')),
(np.timedelta64(1, 'us'),
np.timedelta64(727, 'ns'),
np.timedelta64(273, 'ns')),
# NaT is propagated
(np.timedelta64('NaT'),
np.timedelta64(50, 'ns'),
np.timedelta64('NaT')),
# Y % M works
(np.timedelta64(2, 'Y'),
np.timedelta64(22, 'M'),
np.timedelta64(2, 'M')),
])
def test_timedelta_modulus(self, val1, val2, expected):
assert_equal(val1 % val2, expected)
@pytest.mark.parametrize("val1, val2", [
# years and months sometimes can't be unambiguously
# divided for modulus operation
(np.timedelta64(7, 'Y'),
np.timedelta64(3, 's')),
(np.timedelta64(7, 'M'),
np.timedelta64(1, 'D')),
])
def test_timedelta_modulus_error(self, val1, val2):
with assert_raises_regex(TypeError, "common metadata divisor"):
val1 % val2
def test_timedelta_modulus_div_by_zero(self):
with assert_warns(RuntimeWarning):
actual = np.timedelta64(10, 's') % np.timedelta64(0, 's')
assert_equal(actual, np.timedelta64('NaT'))
@pytest.mark.parametrize("val1, val2", [
# cases where one operand is not
# timedelta64
(np.timedelta64(7, 'Y'),
15,),
(7.5,
np.timedelta64(1, 'D')),
])
def test_timedelta_modulus_type_resolution(self, val1, val2):
# NOTE: some of the operations may be supported
# in the future
with assert_raises_regex(TypeError,
"'remainder' cannot use operands with types"):
val1 % val2
def test_timedelta_arange_no_dtype(self):
d = np.array(5, dtype="m8[D]")
assert_equal(np.arange(d, d + 1), d)
assert_equal(np.arange(d), np.arange(0, d))
def test_datetime_maximum_reduce(self):
a = np.array(['2010-01-02', '1999-03-14', '1833-03'], dtype='M8[D]')
assert_equal(np.maximum.reduce(a).dtype, np.dtype('M8[D]'))
assert_equal(np.maximum.reduce(a),
np.datetime64('2010-01-02'))
a = np.array([1, 4, 0, 7, 2], dtype='m8[s]')
assert_equal(np.maximum.reduce(a).dtype, np.dtype('m8[s]'))
assert_equal(np.maximum.reduce(a),
np.timedelta64(7, 's'))
def test_datetime_busday_offset(self):
# First Monday in June
assert_equal(
np.busday_offset('2011-06', 0, roll='forward', weekmask='Mon'),
np.datetime64('2011-06-06'))
# Last Monday in June
assert_equal(
np.busday_offset('2011-07', -1, roll='forward', weekmask='Mon'),
np.datetime64('2011-06-27'))
# Default M-F business days, different roll modes
assert_equal(np.busday_offset('2010-08', 0, roll='backward'),
np.datetime64('2010-07-30'))
assert_equal(np.busday_offset('2010-08', 0, roll='preceding'),
np.datetime64('2010-07-30'))
assert_equal(np.busday_offset('2010-08', 0, roll='modifiedpreceding'),
np.datetime64('2010-08-02'))
assert_equal(np.busday_offset('2010-08', 0, roll='modifiedfollowing'),
np.datetime64('2010-08-02'))
assert_equal(np.busday_offset('2010-08', 0, roll='forward'),
np.datetime64('2010-08-02'))
assert_equal(np.busday_offset('2010-08', 0, roll='following'),
np.datetime64('2010-08-02'))
assert_equal(np.busday_offset('2010-10-30', 0, roll='following'),
np.datetime64('2010-11-01'))
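        # The 'modified*' rolls do not cross a month boundary: following from
        # Saturday 2010-10-30 would land in November, so 'modifiedfollowing'
        # rolls back to Friday 2010-10-29 instead.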
assert_equal(
np.busday_offset('2010-10-30', 0, roll='modifiedfollowing'),
np.datetime64('2010-10-29'))
assert_equal(
np.busday_offset('2010-10-30', 0, roll='modifiedpreceding'),
np.datetime64('2010-10-29'))
assert_equal(
np.busday_offset('2010-10-16', 0, roll='modifiedfollowing'),
np.datetime64('2010-10-18'))
assert_equal(
np.busday_offset('2010-10-16', 0, roll='modifiedpreceding'),
np.datetime64('2010-10-15'))
# roll='raise' by default
assert_raises(ValueError, np.busday_offset, '2011-06-04', 0)
# Bigger offset values
assert_equal(np.busday_offset('2006-02-01', 25),
np.datetime64('2006-03-08'))
assert_equal(np.busday_offset('2006-03-08', -25),
np.datetime64('2006-02-01'))
assert_equal(np.busday_offset('2007-02-25', 11, weekmask='SatSun'),
np.datetime64('2007-04-07'))
assert_equal(np.busday_offset('2007-04-07', -11, weekmask='SatSun'),
np.datetime64('2007-02-25'))
# NaT values when roll is not raise
assert_equal(np.busday_offset(np.datetime64('NaT'), 1, roll='nat'),
np.datetime64('NaT'))
assert_equal(np.busday_offset(np.datetime64('NaT'), 1, roll='following'),
np.datetime64('NaT'))
assert_equal(np.busday_offset(np.datetime64('NaT'), 1, roll='preceding'),
np.datetime64('NaT'))
def test_datetime_busdaycalendar(self):
# Check that it removes NaT, duplicates, and weekends
# and sorts the result.
bdd = np.busdaycalendar(
holidays=['NaT', '2011-01-17', '2011-03-06', 'NaT',
'2011-12-26', '2011-05-30', '2011-01-17'])
assert_equal(bdd.holidays,
np.array(['2011-01-17', '2011-05-30', '2011-12-26'], dtype='M8'))
# Default M-F weekmask
assert_equal(bdd.weekmask, np.array([1, 1, 1, 1, 1, 0, 0], dtype='?'))
# Check string weekmask with varying whitespace.
bdd = np.busdaycalendar(weekmask="Sun TueWed Thu\tFri")
assert_equal(bdd.weekmask, np.array([0, 1, 1, 1, 1, 0, 1], dtype='?'))
# Check length 7 0/1 string
bdd = np.busdaycalendar(weekmask="0011001")
assert_equal(bdd.weekmask, np.array([0, 0, 1, 1, 0, 0, 1], dtype='?'))
        # Check abbreviated day-name weekmask.
bdd = np.busdaycalendar(weekmask="Mon Tue")
assert_equal(bdd.weekmask, np.array([1, 1, 0, 0, 0, 0, 0], dtype='?'))
# All-zeros weekmask should raise
assert_raises(ValueError, np.busdaycalendar, weekmask=[0, 0, 0, 0, 0, 0, 0])
# weekday names must be correct case
assert_raises(ValueError, np.busdaycalendar, weekmask="satsun")
        # An empty weekmask should raise
assert_raises(ValueError, np.busdaycalendar, weekmask="")
# Invalid weekday name codes should raise
assert_raises(ValueError, np.busdaycalendar, weekmask="Mon Tue We")
assert_raises(ValueError, np.busdaycalendar, weekmask="Max")
assert_raises(ValueError, np.busdaycalendar, weekmask="Monday Tue")
def test_datetime_busday_holidays_offset(self):
# With exactly one holiday
assert_equal(
np.busday_offset('2011-11-10', 1, holidays=['2011-11-11']),
np.datetime64('2011-11-14'))
assert_equal(
np.busday_offset('2011-11-04', 5, holidays=['2011-11-11']),
np.datetime64('2011-11-14'))
assert_equal(
np.busday_offset('2011-11-10', 5, holidays=['2011-11-11']),
np.datetime64('2011-11-18'))
assert_equal(
np.busday_offset('2011-11-14', -1, holidays=['2011-11-11']),
np.datetime64('2011-11-10'))
assert_equal(
np.busday_offset('2011-11-18', -5, holidays=['2011-11-11']),
np.datetime64('2011-11-10'))
assert_equal(
np.busday_offset('2011-11-14', -5, holidays=['2011-11-11']),
np.datetime64('2011-11-04'))
# With the holiday appearing twice
assert_equal(
np.busday_offset('2011-11-10', 1,
holidays=['2011-11-11', '2011-11-11']),
np.datetime64('2011-11-14'))
assert_equal(
np.busday_offset('2011-11-14', -1,
holidays=['2011-11-11', '2011-11-11']),
np.datetime64('2011-11-10'))
# With a NaT holiday
assert_equal(
np.busday_offset('2011-11-10', 1,
holidays=['2011-11-11', 'NaT']),
np.datetime64('2011-11-14'))
assert_equal(
np.busday_offset('2011-11-14', -1,
holidays=['NaT', '2011-11-11']),
np.datetime64('2011-11-10'))
# With another holiday after
assert_equal(
np.busday_offset('2011-11-10', 1,
holidays=['2011-11-11', '2011-11-24']),
np.datetime64('2011-11-14'))
assert_equal(
np.busday_offset('2011-11-14', -1,
holidays=['2011-11-11', '2011-11-24']),
np.datetime64('2011-11-10'))
# With another holiday before
assert_equal(
np.busday_offset('2011-11-10', 1,
holidays=['2011-10-10', '2011-11-11']),
np.datetime64('2011-11-14'))
assert_equal(
np.busday_offset('2011-11-14', -1,
holidays=['2011-10-10', '2011-11-11']),
np.datetime64('2011-11-10'))
# With another holiday before and after
assert_equal(
np.busday_offset('2011-11-10', 1,
holidays=['2011-10-10', '2011-11-11', '2011-11-24']),
np.datetime64('2011-11-14'))
assert_equal(
np.busday_offset('2011-11-14', -1,
holidays=['2011-10-10', '2011-11-11', '2011-11-24']),
np.datetime64('2011-11-10'))
# A bigger forward jump across more than one week/holiday
holidays = ['2011-10-10', '2011-11-11', '2011-11-24',
'2011-12-25', '2011-05-30', '2011-02-21',
'2011-12-26', '2012-01-02']
bdd = np.busdaycalendar(weekmask='1111100', holidays=holidays)
assert_equal(
np.busday_offset('2011-10-03', 4, holidays=holidays),
np.busday_offset('2011-10-03', 4))
assert_equal(
np.busday_offset('2011-10-03', 5, holidays=holidays),
np.busday_offset('2011-10-03', 5 + 1))
assert_equal(
np.busday_offset('2011-10-03', 27, holidays=holidays),
np.busday_offset('2011-10-03', 27 + 1))
assert_equal(
np.busday_offset('2011-10-03', 28, holidays=holidays),
np.busday_offset('2011-10-03', 28 + 2))
assert_equal(
np.busday_offset('2011-10-03', 35, holidays=holidays),
np.busday_offset('2011-10-03', 35 + 2))
assert_equal(
np.busday_offset('2011-10-03', 36, holidays=holidays),
np.busday_offset('2011-10-03', 36 + 3))
assert_equal(
np.busday_offset('2011-10-03', 56, holidays=holidays),
np.busday_offset('2011-10-03', 56 + 3))
assert_equal(
np.busday_offset('2011-10-03', 57, holidays=holidays),
np.busday_offset('2011-10-03', 57 + 4))
assert_equal(
np.busday_offset('2011-10-03', 60, holidays=holidays),
np.busday_offset('2011-10-03', 60 + 4))
assert_equal(
np.busday_offset('2011-10-03', 61, holidays=holidays),
np.busday_offset('2011-10-03', 61 + 5))
assert_equal(
np.busday_offset('2011-10-03', 61, busdaycal=bdd),
np.busday_offset('2011-10-03', 61 + 5))
# A bigger backward jump across more than one week/holiday
assert_equal(
np.busday_offset('2012-01-03', -1, holidays=holidays),
np.busday_offset('2012-01-03', -1 - 1))
assert_equal(
np.busday_offset('2012-01-03', -4, holidays=holidays),
np.busday_offset('2012-01-03', -4 - 1))
assert_equal(
np.busday_offset('2012-01-03', -5, holidays=holidays),
np.busday_offset('2012-01-03', -5 - 2))
assert_equal(
np.busday_offset('2012-01-03', -25, holidays=holidays),
np.busday_offset('2012-01-03', -25 - 2))
assert_equal(
np.busday_offset('2012-01-03', -26, holidays=holidays),
np.busday_offset('2012-01-03', -26 - 3))
assert_equal(
np.busday_offset('2012-01-03', -33, holidays=holidays),
np.busday_offset('2012-01-03', -33 - 3))
assert_equal(
np.busday_offset('2012-01-03', -34, holidays=holidays),
np.busday_offset('2012-01-03', -34 - 4))
assert_equal(
np.busday_offset('2012-01-03', -56, holidays=holidays),
np.busday_offset('2012-01-03', -56 - 4))
assert_equal(
np.busday_offset('2012-01-03', -57, holidays=holidays),
np.busday_offset('2012-01-03', -57 - 5))
assert_equal(
np.busday_offset('2012-01-03', -57, busdaycal=bdd),
np.busday_offset('2012-01-03', -57 - 5))
# Can't supply both a weekmask/holidays and busdaycal
assert_raises(ValueError, np.busday_offset, '2012-01-03', -15,
weekmask='1111100', busdaycal=bdd)
assert_raises(ValueError, np.busday_offset, '2012-01-03', -15,
holidays=holidays, busdaycal=bdd)
# Roll with the holidays
assert_equal(
np.busday_offset('2011-12-25', 0,
roll='forward', holidays=holidays),
np.datetime64('2011-12-27'))
assert_equal(
np.busday_offset('2011-12-26', 0,
roll='forward', holidays=holidays),
np.datetime64('2011-12-27'))
assert_equal(
np.busday_offset('2011-12-26', 0,
roll='backward', holidays=holidays),
np.datetime64('2011-12-23'))
assert_equal(
np.busday_offset('2012-02-27', 0,
roll='modifiedfollowing',
holidays=['2012-02-27', '2012-02-26', '2012-02-28',
'2012-03-01', '2012-02-29']),
np.datetime64('2012-02-24'))
assert_equal(
np.busday_offset('2012-03-06', 0,
roll='modifiedpreceding',
holidays=['2012-03-02', '2012-03-03', '2012-03-01',
'2012-03-05', '2012-03-07', '2012-03-06']),
np.datetime64('2012-03-08'))
def test_datetime_busday_holidays_count(self):
holidays = ['2011-01-01', '2011-10-10', '2011-11-11', '2011-11-24',
'2011-12-25', '2011-05-30', '2011-02-21', '2011-01-17',
'2011-12-26', '2012-01-02', '2011-02-21', '2011-05-30',
'2011-07-01', '2011-07-04', '2011-09-05', '2011-10-10']
bdd = np.busdaycalendar(weekmask='1111100', holidays=holidays)
# Validate against busday_offset broadcast against
# a range of offsets
dates = np.busday_offset('2011-01-01', np.arange(366),
roll='forward', busdaycal=bdd)
assert_equal(np.busday_count('2011-01-01', dates, busdaycal=bdd),
np.arange(366))
# Returns negative value when reversed
assert_equal(np.busday_count(dates, '2011-01-01', busdaycal=bdd),
-np.arange(366))
dates = np.busday_offset('2011-12-31', -np.arange(366),
roll='forward', busdaycal=bdd)
assert_equal(np.busday_count(dates, '2011-12-31', busdaycal=bdd),
np.arange(366))
# Returns negative value when reversed
assert_equal(np.busday_count('2011-12-31', dates, busdaycal=bdd),
-np.arange(366))
# Can't supply both a weekmask/holidays and busdaycal
assert_raises(ValueError, np.busday_offset, '2012-01-03', '2012-02-03',
weekmask='1111100', busdaycal=bdd)
assert_raises(ValueError, np.busday_offset, '2012-01-03', '2012-02-03',
holidays=holidays, busdaycal=bdd)
# Number of Mondays in March 2011
assert_equal(np.busday_count('2011-03', '2011-04', weekmask='Mon'), 4)
# Returns negative value when reversed
assert_equal(np.busday_count('2011-04', '2011-03', weekmask='Mon'), -4)
def test_datetime_is_busday(self):
holidays = ['2011-01-01', '2011-10-10', '2011-11-11', '2011-11-24',
'2011-12-25', '2011-05-30', '2011-02-21', '2011-01-17',
'2011-12-26', '2012-01-02', '2011-02-21', '2011-05-30',
'2011-07-01', '2011-07-04', '2011-09-05', '2011-10-10',
'NaT']
bdd = np.busdaycalendar(weekmask='1111100', holidays=holidays)
# Weekend/weekday tests
assert_equal(np.is_busday('2011-01-01'), False)
assert_equal(np.is_busday('2011-01-02'), False)
assert_equal(np.is_busday('2011-01-03'), True)
        # None of the holidays are business days
assert_equal(np.is_busday(holidays, busdaycal=bdd),
np.zeros(len(holidays), dtype='?'))
def test_datetime_y2038(self):
# Test parsing on either side of the Y2038 boundary
a = np.datetime64('2038-01-19T03:14:07')
assert_equal(a.view(np.int64), 2**31 - 1)
a = np.datetime64('2038-01-19T03:14:08')
assert_equal(a.view(np.int64), 2**31)
# Test parsing on either side of the Y2038 boundary with
# a manually specified timezone offset
with assert_warns(DeprecationWarning):
a = np.datetime64('2038-01-19T04:14:07+0100')
assert_equal(a.view(np.int64), 2**31 - 1)
with assert_warns(DeprecationWarning):
a = np.datetime64('2038-01-19T04:14:08+0100')
assert_equal(a.view(np.int64), 2**31)
# Test parsing a date after Y2038
a = np.datetime64('2038-01-20T13:21:14')
assert_equal(str(a), '2038-01-20T13:21:14')
def test_isnat(self):
assert_(np.isnat(np.datetime64('NaT', 'ms')))
assert_(np.isnat(np.datetime64('NaT', 'ns')))
assert_(not np.isnat(np.datetime64('2038-01-19T03:14:07')))
assert_(np.isnat(np.timedelta64('NaT', "ms")))
assert_(not np.isnat(np.timedelta64(34, "ms")))
res = np.array([False, False, True])
for unit in ['Y', 'M', 'W', 'D',
'h', 'm', 's', 'ms', 'us',
'ns', 'ps', 'fs', 'as']:
arr = np.array([123, -321, "NaT"], dtype='<datetime64[%s]' % unit)
assert_equal(np.isnat(arr), res)
arr = np.array([123, -321, "NaT"], dtype='>datetime64[%s]' % unit)
assert_equal(np.isnat(arr), res)
arr = np.array([123, -321, "NaT"], dtype='<timedelta64[%s]' % unit)
assert_equal(np.isnat(arr), res)
arr = np.array([123, -321, "NaT"], dtype='>timedelta64[%s]' % unit)
assert_equal(np.isnat(arr), res)
def test_isnat_error(self):
# Test that only datetime dtype arrays are accepted
for t in np.typecodes["All"]:
if t in np.typecodes["Datetime"]:
continue
assert_raises(TypeError, np.isnat, np.zeros(10, t))
def test_isfinite_scalar(self):
assert_(not np.isfinite(np.datetime64('NaT', 'ms')))
assert_(not np.isfinite(np.datetime64('NaT', 'ns')))
assert_(np.isfinite(np.datetime64('2038-01-19T03:14:07')))
assert_(not np.isfinite(np.timedelta64('NaT', "ms")))
assert_(np.isfinite(np.timedelta64(34, "ms")))
@pytest.mark.parametrize('unit', ['Y', 'M', 'W', 'D', 'h', 'm', 's', 'ms',
'us', 'ns', 'ps', 'fs', 'as'])
@pytest.mark.parametrize('dstr', ['<datetime64[%s]', '>datetime64[%s]',
'<timedelta64[%s]', '>timedelta64[%s]'])
def test_isfinite_isinf_isnan_units(self, unit, dstr):
'''check isfinite, isinf, isnan for all units of <M, >M, <m, >m dtypes
'''
arr_val = [123, -321, "NaT"]
        arr = np.array(arr_val, dtype=dstr % unit)
pos = np.array([True, True, False])
neg = np.array([False, False, True])
false = np.array([False, False, False])
assert_equal(np.isfinite(arr), pos)
assert_equal(np.isinf(arr), false)
assert_equal(np.isnan(arr), neg)
def test_assert_equal(self):
assert_raises(AssertionError, assert_equal,
np.datetime64('nat'), np.timedelta64('nat'))
def test_corecursive_input(self):
# construct a co-recursive list
a, b = [], []
a.append(b)
b.append(a)
obj_arr = np.array([None])
obj_arr[0] = a
# At some point this caused a stack overflow (gh-11154). Now raises
# ValueError since the nested list cannot be converted to a datetime.
assert_raises(ValueError, obj_arr.astype, 'M8')
assert_raises(ValueError, obj_arr.astype, 'm8')
@pytest.mark.parametrize("shape", [(), (1,)])
def test_discovery_from_object_array(self, shape):
arr = np.array("2020-10-10", dtype=object).reshape(shape)
res = np.array("2020-10-10", dtype="M8").reshape(shape)
assert res.dtype == np.dtype("M8[D]")
assert_equal(arr.astype("M8"), res)
arr[...] = np.bytes_("2020-10-10") # try a numpy string type
assert_equal(arr.astype("M8"), res)
arr = arr.astype("S")
assert_equal(arr.astype("S").astype("M8"), res)
@pytest.mark.parametrize("time_unit", [
"Y", "M", "W", "D", "h", "m", "s", "ms", "us", "ns", "ps", "fs", "as",
# compound units
"10D", "2M",
])
def test_limit_symmetry(self, time_unit):
"""
Dates should have symmetric limits around the unix epoch at +/-np.int64
"""
epoch = np.datetime64(0, time_unit)
latest = np.datetime64(np.iinfo(np.int64).max, time_unit)
earliest = np.datetime64(-np.iinfo(np.int64).max, time_unit)
# above should not have overflowed
assert earliest < epoch < latest
@pytest.mark.parametrize("time_unit", [
"Y", "M",
pytest.param("W", marks=pytest.mark.xfail(reason="gh-13197")),
"D", "h", "m",
"s", "ms", "us", "ns", "ps", "fs", "as",
pytest.param("10D", marks=pytest.mark.xfail(reason="similar to gh-13197")),
])
@pytest.mark.parametrize("sign", [-1, 1])
def test_limit_str_roundtrip(self, time_unit, sign):
"""
Limits should roundtrip when converted to strings.
This tests the conversion to and from npy_datetimestruct.
"""
# TODO: add absolute (gold standard) time span limit strings
limit = np.datetime64(np.iinfo(np.int64).max * sign, time_unit)
# Convert to string and back. Explicit unit needed since the day and
# week reprs are not distinguishable.
limit_via_str = np.datetime64(str(limit), time_unit)
assert limit_via_str == limit
class TestDateTimeData:
def test_basic(self):
a = np.array(['1980-03-23'], dtype=np.datetime64)
assert_equal(np.datetime_data(a.dtype), ('D', 1))
def test_bytes(self):
# byte units are converted to unicode
dt = np.datetime64('2000', (b'ms', 5))
assert np.datetime_data(dt.dtype) == ('ms', 5)
dt = np.datetime64('2000', b'5ms')
assert np.datetime_data(dt.dtype) == ('ms', 5)
def test_non_ascii(self):
        # μs is normalized to us
dt = np.datetime64('2000', ('μs', 5))
assert np.datetime_data(dt.dtype) == ('us', 5)
dt = np.datetime64('2000', '5μs')
assert np.datetime_data(dt.dtype) == ('us', 5)
|
est_divisor_conversion_second(
|
macos.rs
|
mod readmem;
mod vmmap;
mod writemem;
use crate::target::thread::Thread;
use crate::CrabResult;
use libc::pid_t;
use mach::{
kern_return, mach_types, mach_types::ipc_space_t, message::mach_msg_type_number_t, port,
port::mach_port_name_t, port::mach_port_t, traps, traps::current_task, vm, vm_types::*,
};
use nix::{
sys::signal::{self, Signal},
unistd,
unistd::Pid,
};
use security_framework_sys::authorization::*;
use std::{
error::Error,
ffi::CStr,
ffi::CString,
io,
marker::PhantomData,
mem::{self, MaybeUninit},
ptr,
};
pub use readmem::ReadMemory;
pub use writemem::WriteMemory;
// Undocumented flag to disable address space layout randomization.
// For more information about ASLR, you can refer to https://en.wikipedia.org/wiki/Address_space_layout_randomization
const _POSIX_SPAWN_DISABLE_ASLR: i32 = 0x0100;
// Max number of characters to read from a thread name.
const MAX_THREAD_NAME: usize = 100;
struct OSXThread {
port: mach_port_name_t,
pthread_id: Option<usize>,
task_port: ipc_space_t,
}
impl Drop for OSXThread {
fn drop(&mut self) {
let result = unsafe { mach::mach_port::mach_port_deallocate(self.task_port, self.port) };
if result != kern_return::KERN_SUCCESS {
panic!("Failed to deallocate port!");
}
}
}
extern "C" {
// FIXME: Use libc > 0.2.74 when available
pub fn pthread_from_mach_thread_np(port: libc::c_uint) -> libc::pthread_t;
}
impl Thread for OSXThread {
type ThreadId = mach_port_t;
fn name(&self) -> CrabResult<Option<String>> {
if let Some(pt_id) = self.pthread_id {
let mut name = [0 as libc::c_char; MAX_THREAD_NAME];
let name_ptr = &mut name as *mut [libc::c_char] as *mut libc::c_char;
let get_name = unsafe { libc::pthread_getname_np(pt_id, name_ptr, MAX_THREAD_NAME) };
if get_name == 0 {
let name = unsafe { CStr::from_ptr(name_ptr) }.to_str()?.to_owned();
Ok(Some(name))
} else {
Err(format!(
"Failure to read pthread {} name. Error: {}",
pt_id, get_name
)
.into())
}
} else {
Ok(None)
}
}
fn thread_id(&self) -> Self::ThreadId {
self.port
}
}
pub struct Target {
/// Port for a target task
port: port::mach_port_name_t,
pid: Pid,
}
impl Target {
/// Launch a new debuggee process.
/// Returns an opaque target handle which you can use to control the debuggee.
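    ///
    /// A minimal usage sketch (the path below is a hypothetical placeholder; launching
    /// requires the `system.privilege.taskport` right, see `request_authorization`):
    ///
    /// ```ignore
    /// let target = Target::launch("/path/to/debuggee")?;
    /// ```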
pub fn launch(path: &str) -> CrabResult<Target> {
request_authorization()?;
let path = CString::new(path)?;
let child = unsafe {
let mut pid: pid_t = 0;
let mut attr = MaybeUninit::<libc::posix_spawnattr_t>::uninit();
let res = libc::posix_spawnattr_init(attr.as_mut_ptr());
if res != 0 {
// TODO: properly wrap error types
return Err(Box::new(io::Error::last_os_error()));
}
let mut attr = attr.assume_init();
let res = libc::posix_spawnattr_setflags(
&mut attr,
(libc::POSIX_SPAWN_START_SUSPENDED | _POSIX_SPAWN_DISABLE_ASLR) as i16,
);
if res != 0 {
// TODO: properly wrap error types
return Err(Box::new(io::Error::last_os_error()));
}
let res = libc::posix_spawn(
&mut pid,
path.as_ptr(),
ptr::null(),
&attr,
ptr::null(),
ptr::null(),
);
if res != 0 {
// TODO: properly wrap error types
return Err(Box::new(io::Error::last_os_error()));
}
pid
};
let target_port = unsafe {
let self_port = traps::mach_task_self();
let mut target_port = 0;
let res = traps::task_for_pid(self_port, child, &mut target_port);
if res != kern_return::KERN_SUCCESS {
// TODO: properly wrap return errors
return Err(Box::new(io::Error::new(
io::ErrorKind::Other,
"Could not obtain task port for a process. This might be caused by insufficient permissions.",
)));
}
target_port
};
Ok(Target {
port: target_port,
pid: Pid::from_raw(child),
})
}
    /// Prints the mapped regions of the debuggee's virtual address space.
pub fn get_addr_range(&self) -> CrabResult<usize> {
let regs = vmmap::macosx_debug_regions(self.pid, self.port);
for r in regs {
println!(
"{:x} -> {:x}, exec: {}, read: {}, write: {} [{:?}]",
r.address,
r.end(),
r.is_exec(),
r.is_read(),
r.is_write(),
r
);
}
Ok(0)
}
/// Reads memory from a debuggee process.
pub fn read(&self) -> ReadMemory {
ReadMemory::new(self.port)
}
/// Uses this process as a debuggee.
pub fn me() -> Target {
let port = unsafe { current_task() };
let pid = unistd::getpid();
Target { port, pid }
}
/// Returns the current snapshot view of this debuggee process threads.
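    ///
    /// A minimal sketch (not part of the original sources): enumerate the threads of the
    /// current process via `Target::me` and print their names.
    ///
    /// ```ignore
    /// let me = Target::me();
    /// for thread in me.threads()? {
    ///     println!("{:?} -> {:?}", thread.thread_id(), thread.name()?);
    /// }
    /// ```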
pub fn threads(&self) -> CrabResult<Vec<Box<dyn Thread<ThreadId = mach_port_t>>>> {
let mut threads: mach_types::thread_act_array_t = std::ptr::null_mut();
let mut tcount: mach_msg_type_number_t = 0;
let result = unsafe { mach::task::task_threads(self.port, &mut threads, &mut tcount) };
if result == kern_return::KERN_SUCCESS {
let tcount = tcount as usize;
let mut osx_threads = Vec::with_capacity(tcount);
for i in 0..tcount {
let port = unsafe { *threads.add(i) };
let pthread_id = match unsafe { pthread_from_mach_thread_np(port) } {
0 => None,
id => Some(id),
};
let task_port = self.port;
let thread = Box::new(OSXThread {
port,
pthread_id,
task_port,
}) as Box<dyn Thread<ThreadId = mach_port_t>>;
osx_threads.push(thread);
}
Ok(osx_threads)
} else {
Err(format!(
"Failure to read task {} threads. Error: {}",
self.port, result
)
.into())
}
}
}
/// Requests task_for_pid privilege for this process.
fn request_authorization() -> CrabResult<()> {
    // TODO: rewrite this code once AuthorizationCopyRights is available in security_framework
let name = CString::new("system.privilege.taskport:")?;
let auth_items = [AuthorizationItem {
name: name.as_ptr(),
valueLength: 0,
value: ptr::null_mut(),
flags: 0,
}];
let auth_item_set = AuthorizationRights {
count: 1,
items: auth_items.as_ptr() as *mut _,
};
let auth_flags = kAuthorizationFlagExtendRights
| kAuthorizationFlagPreAuthorize
| kAuthorizationFlagInteractionAllowed
| (1 << 5);
let mut auth_ref = MaybeUninit::<AuthorizationRef>::uninit();
let res =
unsafe { AuthorizationCreate(ptr::null(), ptr::null(), auth_flags, auth_ref.as_mut_ptr()) };
if res != errAuthorizationSuccess {
return Err(Box::new(io::Error::new(
io::ErrorKind::Other,
"AuthorizationCreate",
)));
}
let auth_ref = unsafe { auth_ref.assume_init() };
let mut target_rights = MaybeUninit::<AuthorizationRights>::uninit();
let res = unsafe {
|
auth_flags,
target_rights.as_mut_ptr() as *mut *mut _,
)
};
if res != errAuthorizationSuccess {
return Err(Box::new(io::Error::new(
io::ErrorKind::Other,
"AuthorizationCopyRights",
)));
}
Ok(())
}
#[cfg(test)]
mod tests {
use super::ReadMemory;
use super::*;
use mach::traps::mach_task_self;
use std::sync::{Arc, Barrier};
use std::thread;
#[test]
fn read_memory() {
let var: usize = 52;
let var2: u8 = 128;
let mut read_var_op: usize = 0;
let mut read_var2_op: u8 = 0;
        unsafe {
            ReadMemory::new(mach_task_self())
                .read(&mut read_var_op, &var as *const _ as usize)
                .read(&mut read_var2_op, &var2 as *const _ as usize)
                .apply()
                .expect("Failed to apply memop");
        }
        assert_eq!(read_var2_op, var2);
        assert_eq!(read_var_op, var);
}
#[test]
fn read_threads() -> CrabResult<()> {
let start_barrier = Arc::new(Barrier::new(2));
let end_barrier = Arc::new(Barrier::new(2));
let t1_start = start_barrier.clone();
let t1_end = end_barrier.clone();
let thread_name = "thread-name";
let t1_handle = thread::Builder::new()
.name(thread_name.to_string())
.spawn(move || {
t1_start.wait();
t1_end.wait();
})
.unwrap();
start_barrier.wait();
let proc = Target::me();
let threads = proc.threads()?;
let threads: Vec<_> = threads
.iter()
.map(|t| {
let name = t.name().unwrap().unwrap_or_else(String::new);
let id = t.thread_id();
(name, id)
})
.collect();
assert!(
threads.len() >= 2,
"Expected at least 2 threads in {:?}",
threads
);
assert!(
threads.iter().any(|(name, _)| name == thread_name),
"Expected to find thread name={} in {:?}",
thread_name,
threads
);
end_barrier.wait();
t1_handle.join().unwrap();
Ok(())
}
}
|
AuthorizationCopyRights(
auth_ref,
&auth_item_set,
ptr::null(),
|
lambda_function.py
|
# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: MIT-0
import boto3
import json
import logging
import os
import pymssql
logger = logging.getLogger()
logger.setLevel(logging.INFO)
def lambda_handler(event, context):
"""Secrets Manager RDS SQL Server Handler
This handler uses the master-user rotation scheme to rotate an RDS SQL Server user credential. During the first rotation, this
scheme logs into the database as the master user, creates a new user (appending _clone to the username), and grants the
new user all of the permissions from the user being rotated. Once the secret is in this state, every subsequent rotation
simply creates a new secret with the AWSPREVIOUS user credentials, adds any missing permissions that are in the current
secret, changes that user's password, and then marks the latest secret as AWSCURRENT.
The Secret SecretString is expected to be a JSON string with the following format:
{
'engine': <required: must be set to 'sqlserver'>,
'host': <required: instance host name>,
'username': <required: username>,
'password': <required: password>,
'dbname': <optional: database name, default to 'master'>,
'port': <optional: if not specified, default port 1433 will be used>,
'masterarn': <required: the arn of the master secret which will be used to create users/change passwords>
}
Args:
event (dict): Lambda dictionary of event parameters. These keys must include the following:
- SecretId: The secret ARN or identifier
- ClientRequestToken: The ClientRequestToken of the secret version
- Step: The rotation step (one of createSecret, setSecret, testSecret, or finishSecret)
context (LambdaContext): The Lambda runtime information
Raises:
ResourceNotFoundException: If the secret with the specified arn and stage does not exist
ValueError: If the secret is not properly configured for rotation
KeyError: If the secret json does not contain the expected keys
"""
arn = event['SecretId']
token = event['ClientRequestToken']
step = event['Step']
# Setup the client
service_client = boto3.client('secretsmanager', endpoint_url=os.environ['SECRETS_MANAGER_ENDPOINT'])
# Make sure the version is staged correctly
metadata = service_client.describe_secret(SecretId=arn)
if "RotationEnabled" in metadata and not metadata['RotationEnabled']:
logger.error("Secret %s is not enabled for rotation" % arn)
raise ValueError("Secret %s is not enabled for rotation" % arn)
versions = metadata['VersionIdsToStages']
if token not in versions:
logger.error("Secret version %s has no stage for rotation of secret %s." % (token, arn))
raise ValueError("Secret version %s has no stage for rotation of secret %s." % (token, arn))
if "AWSCURRENT" in versions[token]:
logger.info("Secret version %s already set as AWSCURRENT for secret %s." % (token, arn))
return
elif "AWSPENDING" not in versions[token]:
logger.error("Secret version %s not set as AWSPENDING for rotation of secret %s." % (token, arn))
raise ValueError("Secret version %s not set as AWSPENDING for rotation of secret %s." % (token, arn))
# Call the appropriate step
if step == "createSecret":
create_secret(service_client, arn, token)
elif step == "setSecret":
set_secret(service_client, arn, token)
elif step == "testSecret":
test_secret(service_client, arn, token)
elif step == "finishSecret":
finish_secret(service_client, arn, token)
else:
logger.error("lambda_handler: Invalid step parameter %s for secret %s" % (step, arn))
raise ValueError("Invalid step parameter %s for secret %s" % (step, arn))
def create_secret(service_client, arn, token):
"""Generate a new secret
This method first checks for the existence of a secret for the passed in token. If one does not exist, it will generate a
new secret and put it with the passed in token.
Args:
service_client (client): The secrets manager service client
arn (string): The secret ARN or other identifier
token (string): The ClientRequestToken associated with the secret version
Raises:
ValueError: If the current secret is not valid JSON
KeyError: If the secret json does not contain the expected keys
"""
# Make sure the current secret exists
current_dict = get_secret_dict(service_client, arn, "AWSCURRENT")
# Now try to get the secret version, if that fails, put a new secret
try:
get_secret_dict(service_client, arn, "AWSPENDING", token)
logger.info("createSecret: Successfully retrieved secret for %s." % arn)
except service_client.exceptions.ResourceNotFoundException:
# Get the alternate username swapping between the original user and the user with _clone appended to it
current_dict['username'] = get_alt_username(current_dict['username'])
# Get exclude characters from environment variable
exclude_characters = os.environ['EXCLUDE_CHARACTERS'] if 'EXCLUDE_CHARACTERS' in os.environ else '/@"\'\\'
# Generate a random password
passwd = service_client.get_random_password(ExcludeCharacters=exclude_characters, PasswordLength=30)
current_dict['password'] = passwd['RandomPassword']
# Put the secret
service_client.put_secret_value(SecretId=arn, ClientRequestToken=token, SecretString=json.dumps(current_dict), VersionStages=['AWSPENDING'])
logger.info("createSecret: Successfully put secret for ARN %s and version %s." % (arn, token))
def set_secret(service_client, arn, token):
"""Set the pending secret in the database
This method tries to login to the database with the AWSPENDING secret and returns on success. If that fails, it
tries to login with the master credentials from the masterarn in the current secret. If this succeeds, it adds all
grants for AWSCURRENT user to the AWSPENDING user, creating the user and/or setting the password in the process.
Else, it throws a ValueError.
Args:
service_client (client): The secrets manager service client
arn (string): The secret ARN or other identifier
token (string): The ClientRequestToken associated with the secret version
Raises:
ResourceNotFoundException: If the secret with the specified arn and stage does not exist
ValueError: If the secret is not valid JSON or master credentials could not be used to login to DB
KeyError: If the secret json does not contain the expected keys
"""
# First try to login with the pending secret, if it succeeds, return
pending_dict = get_secret_dict(service_client, arn, "AWSPENDING", token)
conn = get_connection(pending_dict)
if conn:
conn.close()
logger.info("setSecret: AWSPENDING secret is already set as password in SQL Server DB for secret arn %s." % arn)
return
# Before we do anything with the secret, make sure the AWSCURRENT secret is valid by logging in to the db
# This ensures that the credential we are rotating is valid to protect against a confused deputy attack
current_dict = get_secret_dict(service_client, arn, "AWSCURRENT")
conn = get_connection(current_dict)
if not conn:
logger.error("setSecret: Unable to log into database using current credentials for secret %s" % arn)
raise ValueError("Unable to log into database using current credentials for secret %s" % arn)
conn.close()
# Now get the master arn from the current secret
master_arn = current_dict['masterarn']
master_dict = get_secret_dict(service_client, master_arn, "AWSCURRENT")
if current_dict['host'] != master_dict['host']:
        logger.warning("setSecret: Master database host %s is not the same host as current %s" % (master_dict['host'], current_dict['host']))
# Now log into the database with the master credentials
conn = get_connection(master_dict)
if not conn:
logger.error("setSecret: Unable to log into database using credentials in master secret %s" % master_arn)
raise ValueError("Unable to log into database using credentials in master secret %s" % master_arn)
# Now set the password to the pending password
try:
with conn.cursor(as_dict=True) as cursor:
# Get the current version and db
cursor.execute("SELECT @@VERSION AS version")
version = cursor.fetchall()[0]['version']
cursor.execute("SELECT DB_NAME() AS name")
current_db = cursor.fetchall()[0]['name']
# Determine if we are in a contained DB
containment = 0
if not version.startswith("Microsoft SQL Server 2008"): # SQL Server 2008 does not support contained databases
cursor.execute("SELECT containment FROM sys.databases WHERE name = %s", current_db)
containment = cursor.fetchall()[0]['containment']
# Set the user or login password (depending on database containment)
if containment == 0:
set_password_for_login(cursor, current_db, current_dict['username'], pending_dict)
else:
set_password_for_user(cursor, current_dict['username'], pending_dict)
conn.commit()
logger.info("setSecret: Successfully created user %s in SQL Server DB for secret arn %s." % (pending_dict['username'], arn))
finally:
conn.close()
def test_secret(service_client, arn, token):
"""Test the pending secret against the database
This method tries to log into the database with the secrets staged with AWSPENDING and runs
a permissions check to ensure the user has the correct permissions.
Args:
service_client (client): The secrets manager service client
arn (string): The secret ARN or other identifier
token (string): The ClientRequestToken associated with the secret version
Raises:
ResourceNotFoundException: If the secret with the specified arn and stage does not exist
ValueError: If the secret is not valid JSON or pending credentials could not be used to login to the database
KeyError: If the secret json does not contain the expected keys
"""
# Try to login with the pending secret, if it succeeds, return
conn = get_connection(get_secret_dict(service_client, arn, "AWSPENDING", token))
if conn:
        # This is where the lambda validates the user's permissions. Modify the lines below to
        # tailor these validations to your needs
try:
with conn.cursor() as cur:
cur.execute("SELECT @@VERSION AS version")
finally:
conn.close()
logger.info("testSecret: Successfully signed into SQL Server DB with AWSPENDING secret in %s." % arn)
return
else:
logger.error("testSecret: Unable to log into database with pending secret of secret ARN %s" % arn)
raise ValueError("Unable to log into database with pending secret of secret ARN %s" % arn)
def finish_secret(service_client, arn, token):
|
def get_connection(secret_dict):
"""Gets a connection to SQL Server DB from a secret dictionary
This helper function tries to connect to the database grabbing connection info
from the secret dictionary. If successful, it returns the connection, else None
Args:
secret_dict (dict): The Secret Dictionary
Returns:
Connection: The pymssql.Connection object if successful. None otherwise
Raises:
KeyError: If the secret json does not contain the expected keys
"""
# Parse and validate the secret JSON string
port = str(secret_dict['port']) if 'port' in secret_dict else '1433'
dbname = secret_dict['dbname'] if 'dbname' in secret_dict else 'master'
# Try to obtain a connection to the db
try:
conn = pymssql.connect(server=secret_dict['host'],
user=secret_dict['username'],
password=secret_dict['password'],
database=dbname,
port=port,
login_timeout=5,
as_dict=True)
return conn
except pymssql.OperationalError:
return None
def get_secret_dict(service_client, arn, stage, token=None):
"""Gets the secret dictionary corresponding for the secret arn, stage, and token
This helper function gets credentials for the arn and stage passed in and returns the dictionary by parsing the JSON string
Args:
service_client (client): The secrets manager service client
arn (string): The secret ARN or other identifier
token (string): The ClientRequestToken associated with the secret version, or None if no validation is desired
stage (string): The stage identifying the secret version
Returns:
SecretDictionary: Secret dictionary
Raises:
ResourceNotFoundException: If the secret with the specified arn and stage does not exist
ValueError: If the secret is not valid JSON
KeyError: If the secret json does not contain the expected keys
"""
required_fields = ['host', 'username', 'password']
# Only do VersionId validation against the stage if a token is passed in
if token:
secret = service_client.get_secret_value(SecretId=arn, VersionId=token, VersionStage=stage)
else:
secret = service_client.get_secret_value(SecretId=arn, VersionStage=stage)
plaintext = secret['SecretString']
secret_dict = json.loads(plaintext)
# Run validations against the secret
if 'engine' not in secret_dict or secret_dict['engine'] != 'sqlserver':
raise KeyError("Database engine must be set to 'sqlserver' in order to use this rotation lambda")
for field in required_fields:
if field not in secret_dict:
raise KeyError("%s key is missing from secret JSON" % field)
# Parse and return the secret JSON string
return secret_dict
def get_alt_username(current_username):
"""Gets the alternate username for the current_username passed in
This helper function gets the username for the alternate user based on the passed in current username.
Args:
        current_username (string): The current username
Returns:
AlternateUsername: Alternate username
Raises:
ValueError: If the new username length would exceed the maximum allowed
"""
clone_suffix = "_clone"
if current_username.endswith(clone_suffix):
return current_username[:(len(clone_suffix) * -1)]
else:
new_username = current_username + clone_suffix
if len(new_username) > 128:
raise ValueError("Unable to clone user, username length with _clone appended would exceed 128 characters")
return new_username
def set_password_for_login(cursor, current_db, current_login, pending_dict):
"""Runs various SQL statements in order to set the login password to that of the pending secret dictionary
This helper function runs SQL statements in order to set the login password to that of the pending secret dictionary
Args:
cursor (pymssql.Cursor): The pymssql Cursor object
current_db (string): The current database that we are connected to
current_login (string): The current user login
pending_dict (dict): The Secret Dictionary for the pending secret
Raises:
pymssql.OperationalError: If there are any errors running the SQL statements
"""
# Check if the login exists, if not create it and grant it all permissions from the current user
# If the user exists, just update the password
cursor.execute("SELECT name FROM sys.server_principals WHERE name = %s", pending_dict['username'])
if len(cursor.fetchall()) == 0:
# Create the new login
create_login = "CREATE LOGIN %s" % pending_dict['username']
cursor.execute(create_login + " WITH PASSWORD = %s", pending_dict['password'])
            # Only handle server level permissions if we are connected to the master DB
if current_db == 'master':
# Loop through the types of server permissions and grant them to the new login
query = "SELECT state_desc, permission_name FROM sys.server_permissions perm "\
"JOIN sys.server_principals prin ON perm.grantee_principal_id = prin.principal_id "\
"WHERE prin.name = '%s'" % current_login
cursor.execute(query)
for row in cursor.fetchall():
if row['state_desc'] == 'GRANT_WITH_GRANT_OPTION':
cursor.execute("GRANT %s TO %s WITH GRANT OPTION" % (row['permission_name'], pending_dict['username']))
else:
cursor.execute("%s %s TO %s" % (row['state_desc'], row['permission_name'], pending_dict['username']))
# We do not create user objects in the master database
else:
# Get the user for the current login and generate the alt user
cursor.execute("SELECT dbprin.name FROM sys.database_principals dbprin JOIN sys.server_principals sprin ON dbprin.sid = sprin.sid WHERE sprin.name = %s", current_login)
cur_user = cursor.fetchall()[0]['name']
alt_user = get_alt_username(cur_user)
# Check if the user exists. If not, create it
cursor.execute("SELECT name FROM sys.database_principals WHERE name = %s", alt_user)
if len(cursor.fetchall()) == 0:
cursor.execute("CREATE USER %s FOR LOGIN %s" % (alt_user, pending_dict['username']))
apply_database_permissions(cursor, cur_user, pending_dict['username'])
else:
alter_stmt = "ALTER LOGIN %s" % pending_dict['username']
cursor.execute(alter_stmt + " WITH PASSWORD = %s", pending_dict['password'])
def set_password_for_user(cursor, current_user, pending_dict):
"""Runs various SQL statements in order to set the user password to that of the pending secret dictionary
This helper function runs SQL statements in order to set the user password to that of the pending secret dictionary
Args:
cursor (pymssql.Cursor): The pymssql Cursor object
current_user (string): The current username
pending_dict (dict): The Secret Dictionary for the pending secret
Raises:
pymssql.OperationalError: If there are any errors running the SQL statements
"""
# Check if the user exists, if not create it and grant it all permissions from the current user
# If the user exists, just update the password
cursor.execute("SELECT name FROM sys.database_principals WHERE name = %s", pending_dict['username'])
if len(cursor.fetchall()) == 0:
# Create the new user
create_login = "CREATE USER %s" % pending_dict['username']
cursor.execute(create_login + " WITH PASSWORD = %s", pending_dict['password'])
apply_database_permissions(cursor, current_user, pending_dict['username'])
else:
alter_stmt = "ALTER USER %s" % pending_dict['username']
cursor.execute(alter_stmt + " WITH PASSWORD = %s", pending_dict['password'])
def apply_database_permissions(cursor, current_user, pending_user):
"""Runs various SQL statements to apply the database permissions from current_user to pending_user
This helper function runs SQL statements to apply the database permissions from current_user to pending_user
Args:
cursor (pymssql.Cursor): The pymssql Cursor object
current_user (string): The current username
pending_user (string): The pending username
Raises:
pymssql.OperationalError: If there are any errors running the SQL statements
ValueError: If any database values were unexpected/invalid
"""
# Get the roles assigned to the current user and assign it to the pending user
query = "SELECT roleprin.name FROM sys.database_role_members rolemems "\
"JOIN sys.database_principals roleprin ON roleprin.principal_id = rolemems.role_principal_id "\
"JOIN sys.database_principals userprin ON userprin.principal_id = rolemems.member_principal_id "\
"WHERE userprin.name = '%s'" % current_user
cursor.execute(query)
    for row in cursor.fetchall():
        sql_stmt = "ALTER ROLE %s ADD MEMBER %s" % (row['name'], pending_user)
        cursor.execute(sql_stmt)
# Loop through the database permissions and grant them to the user
query = "SELECT "\
"class = perm.class, "\
"state_desc = perm.state_desc, "\
"perm_name = perm.permission_name, "\
"schema_name = permschem.name, "\
"obj_name = obj.name, "\
"obj_schema_name = objschem.name, "\
"col_name = col.name, "\
"imp_name = imp.name, "\
"imp_type = imp.type, "\
"assembly_name = assembly.name, "\
"type_name = types.name, "\
"type_schema = typeschem.name, "\
"schema_coll_name = schema_coll.name, "\
"xml_schema = xmlschem.name, "\
"msg_type_name = msg_type.name, "\
"contract_name = contract.name, "\
"svc_name = svc.name, "\
"binding_name = binding.name, "\
"route_name = route.name, "\
"catalog_name = catalog.name, "\
"symkey_name = symkey.name, "\
"cert_name = cert.name, "\
"asymkey_name = asymkey.name "\
"FROM sys.database_permissions perm "\
"JOIN sys.database_principals prin ON perm.grantee_principal_id = prin.principal_id "\
"LEFT JOIN sys.schemas permschem ON permschem.schema_id = perm.major_id "\
"LEFT JOIN sys.objects obj ON obj.object_id = perm.major_id "\
"LEFT JOIN sys.schemas objschem ON objschem.schema_id = obj.schema_id "\
"LEFT JOIN sys.columns col ON col.object_id = perm.major_id AND col.column_id = perm.minor_id "\
"LEFT JOIN sys.database_principals imp ON imp.principal_id = perm.major_id "\
"LEFT JOIN sys.assemblies assembly ON assembly.assembly_id = perm.major_id "\
"LEFT JOIN sys.types types ON types.user_type_id = perm.major_id "\
"LEFT JOIN sys.schemas typeschem ON typeschem.schema_id = types.schema_id "\
"LEFT JOIN sys.xml_schema_collections schema_coll ON schema_coll.xml_collection_id = perm.major_id "\
"LEFT JOIN sys.schemas xmlschem ON xmlschem.schema_id = schema_coll.schema_id "\
"LEFT JOIN sys.service_message_types msg_type ON msg_type.message_type_id = perm.major_id "\
"LEFT JOIN sys.service_contracts contract ON contract.service_contract_id = perm.major_id "\
"LEFT JOIN sys.services svc ON svc.service_id = perm.major_id "\
"LEFT JOIN sys.remote_service_bindings binding ON binding.remote_service_binding_id = perm.major_id "\
"LEFT JOIN sys.routes route ON route.route_id = perm.major_id "\
"LEFT JOIN sys.fulltext_catalogs catalog ON catalog.fulltext_catalog_id = perm.major_id "\
"LEFT JOIN sys.symmetric_keys symkey ON symkey.symmetric_key_id = perm.major_id "\
"LEFT JOIN sys.certificates cert ON cert.certificate_id = perm.major_id "\
"LEFT JOIN sys.asymmetric_keys asymkey ON asymkey.asymmetric_key_id = perm.major_id "\
"WHERE prin.name = '%s'" % current_user
cursor.execute(query)
for row in cursor.fetchall():
# Determine which type of permission this is and create the sql statement accordingly
if row['class'] == 0: # Database permission
permission = row['perm_name']
elif row['class'] == 1: # Object or Column
permission = "%s ON OBJECT::%s.%s" % (row['perm_name'], row['obj_schema_name'], row['obj_name'])
if row['col_name']:
permission = "%s (%s) " % (permission, row['col_name'])
elif row['class'] == 3: # Schema
permission = "%s ON SCHEMA::%s" % (row['perm_name'], row['schema_name'])
elif row['class'] == 4: # Impersonation (Database Principal)
if row['imp_type'] == 'S': # SQL User
permission = "%s ON USER::%s" % (row['perm_name'], row['imp_name'])
elif row['imp_type'] == 'R': # Role
permission = "%s ON ROLE::%s" % (row['perm_name'], row['imp_name'])
elif row['imp_type'] == 'A': # Application Role
permission = "%s ON APPLICATION ROLE::%s" % (row['perm_name'], row['imp_name'])
else:
raise ValueError("Invalid database principal permission type %s" % row['imp_type'])
elif row['class'] == 5: # Assembly
permission = "%s ON ASSEMBLY::%s" % (row['perm_name'], row['assembly_name'])
elif row['class'] == 6: # Type
permission = "%s ON TYPE::%s.%s" % (row['perm_name'], row['type_schema'], row['type_name'])
elif row['class'] == 10: # XML Schema Collection
permission = "%s ON XML SCHEMA COLLECTION::%s.%s" % (row['perm_name'], row['xml_schema'], row['schema_coll_name'])
elif row['class'] == 15: # Message Type
permission = "%s ON MESSAGE TYPE::%s" % (row['perm_name'], row['msg_type_name'])
elif row['class'] == 16: # Service Contract
permission = "%s ON CONTRACT::%s" % (row['perm_name'], row['contract_name'])
elif row['class'] == 17: # Service
permission = "%s ON SERVICE::%s" % (row['perm_name'], row['svc_name'])
elif row['class'] == 18: # Remote Service Binding
permission = "%s ON REMOTE SERVICE BINDING::%s" % (row['perm_name'], row['binding_name'])
elif row['class'] == 19: # Route
permission = "%s ON ROUTE::%s" % (row['perm_name'], row['route_name'])
elif row['class'] == 23: # Full-Text Catalog
permission = "%s ON FULLTEXT CATALOG::%s" % (row['perm_name'], row['catalog_name'])
elif row['class'] == 24: # Symmetric Key
permission = "%s ON SYMMETRIC KEY::%s" % (row['perm_name'], row['symkey_name'])
elif row['class'] == 25: # Certificate
permission = "%s ON CERTIFICATE::%s" % (row['perm_name'], row['cert_name'])
elif row['class'] == 26: # Asymmetric Key
permission = "%s ON ASYMMETRIC KEY::%s" % (row['perm_name'], row['asymkey_name'])
else:
raise ValueError("Invalid database permission class %s" % row['class'])
# Add the state to the statement
if row['state_desc'] == 'GRANT_WITH_GRANT_OPTION':
sql_stmt = "GRANT %s TO %s WITH GRANT OPTION" % (permission, pending_user)
else:
sql_stmt = "%s %s TO %s" % (row['state_desc'], permission, pending_user)
# Execute the sql
cursor.execute(sql_stmt)
|
"""Finish the rotation by marking the pending secret as current
This method moves the secret from the AWSPENDING stage to the AWSCURRENT stage.
Args:
service_client (client): The secrets manager service client
arn (string): The secret ARN or other identifier
token (string): The ClientRequestToken associated with the secret version
Raises:
ResourceNotFoundException: If the secret with the specified arn does not exist
"""
# First describe the secret to get the current version
metadata = service_client.describe_secret(SecretId=arn)
current_version = None
for version in metadata["VersionIdsToStages"]:
if "AWSCURRENT" in metadata["VersionIdsToStages"][version]:
if version == token:
# The correct version is already marked as current, return
logger.info("finishSecret: Version %s already marked as AWSCURRENT for %s" % (version, arn))
return
current_version = version
break
# Finalize by staging the secret version current
service_client.update_secret_version_stage(SecretId=arn, VersionStage="AWSCURRENT", MoveToVersionId=token, RemoveFromVersionId=current_version)
logger.info("finishSecret: Successfully set AWSCURRENT stage to version %s for secret %s." % (token, arn))
|
keystore.go
|
// Copyright 2016 Yahoo Inc.
// Licensed under the terms of the Apache version 2.0 license. See LICENSE file for terms.
package zmssvctoken
import (
"bytes"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"sync"
"time"
)
type keySource struct {
domain string
name string
keyVersion string
}
type validatorMeta struct {
pubKey []byte
validator TokenValidator
expiry time.Time
}
func (k keySource) String() string {
return fmt.Sprintf("[Domain: '%s', Name: '%s', Key version: '%s']", k.domain, k.name, k.keyVersion)
}
type keyStore struct {
sync.RWMutex
cache map[keySource]*validatorMeta
config *ValidationConfig
}
func newKeyStore(cfg *ValidationConfig) *keyStore {
return &keyStore{
config: cfg,
cache: make(map[keySource]*validatorMeta),
}
|
}
func (k *keyStore) loadKey(src keySource) ([]byte, error) {
client := &http.Client{
Timeout: k.config.PublicKeyFetchTimeout,
}
url := fmt.Sprintf("%s/domain/%s/service/%s/publickey/%s", k.config.ZTSBaseUrl, src.domain, src.name, src.keyVersion)
res, err := client.Get(url)
if err != nil {
return nil, err
}
if res.StatusCode != 200 {
return nil, fmt.Errorf("ZTS returned status %d", res.StatusCode)
}
b, err := ioutil.ReadAll(res.Body)
res.Body.Close()
if err != nil {
return nil, err
}
var data struct {
Key string
}
err = json.Unmarshal(b, &data)
if err != nil {
return nil, err
}
s, err := new(yBase64).DecodeString(data.Key)
if err != nil {
return nil, err
}
return s, nil
}
func (k *keyStore) getValidator(src keySource) (TokenValidator, error) {
var (
oldKey []byte // caches the previous seen key to avoid reloading
oldValidator TokenValidator // caches the previous seen validator, ditto
)
k.RLock()
meta, ok := k.cache[src]
if ok {
oldKey = meta.pubKey
oldValidator = meta.validator
if meta.expiry.Before(time.Now()) { // dead
meta = nil
}
}
k.RUnlock()
// return from cache if valid entry
if meta != nil {
return meta.validator, nil
}
	// otherwise, fetch the key from ZTS and rebuild the validator
key, err := k.loadKey(src)
if err != nil {
return nil, fmt.Errorf("Unable to get public key from ZTS for %v, err: %v", src, err)
}
var v TokenValidator
if oldKey != nil && bytes.Equal(key, oldKey) { // no changes to key, use old validator
v = oldValidator
} else {
v, err = NewPubKeyTokenValidator(key)
if err != nil {
return nil, err
}
}
meta = &validatorMeta{
pubKey: key,
validator: v,
expiry: time.Now().Add(k.config.CacheTTL),
}
// add to cache
k.Lock()
k.cache[src] = meta
k.Unlock()
return v, nil
}
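// A minimal usage sketch (hypothetical domain/service values; keyStore is package-internal
// and is built from a ValidationConfig supplying ZTSBaseUrl, CacheTTL and the fetch timeout):
//
//	ks := newKeyStore(cfg)
//	validator, err := ks.getValidator(keySource{domain: "media.news", name: "api", keyVersion: "0"})
//	if err != nil {
//		// handle ZTS lookup or key parsing failure
//	}
//	_ = validator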
| |
shoot_test.go
|
// Copyright (c) 2020 SAP SE or an SAP affiliate company. All rights reserved. This file is licensed under the Apache Software License, v. 2 except as noted otherwise in the LICENSE file
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package validation_test
import (
. "github.com/gardener/gardener-extension-provider-azure/pkg/apis/azure/validation"
"github.com/gardener/gardener/pkg/apis/core"
. "github.com/onsi/ginkgo"
. "github.com/onsi/gomega"
. "github.com/onsi/gomega/gstruct"
"k8s.io/apimachinery/pkg/util/validation/field"
"k8s.io/utils/pointer"
)
var _ = Describe("Shoot validation", func() {
Describe("#ValidateNetworking", func() {
var networkingPath = field.NewPath("spec", "networking")
It("should return no error because nodes CIDR was provided", func() {
networking := core.Networking{
Nodes: pointer.StringPtr("1.2.3.4/5"),
}
errorList := ValidateNetworking(networking, networkingPath)
Expect(errorList).To(BeEmpty())
})
It("should return an error because no nodes CIDR was provided", func() {
networking := core.Networking{}
errorList := ValidateNetworking(networking, networkingPath)
Expect(errorList).To(ConsistOf(
PointTo(MatchFields(IgnoreExtras, Fields{
"Type": Equal(field.ErrorTypeRequired),
"Field": Equal("spec.networking.nodes"),
})),
))
})
})
Describe("#ValidateWorkerConfig", func() {
var (
workers []core.Worker
zoned bool
)
BeforeEach(func() {
workers = []core.Worker{
{
Name: "worker1",
Volume: &core.Volume{
Type: pointer.StringPtr("Volume"),
VolumeSize: "30G",
},
},
{
Name: "worker2",
Volume: &core.Volume{
Type: pointer.StringPtr("Volume"),
VolumeSize: "20G",
},
},
}
})
Describe("#ValidateWorkers", func() {
Context("Non zoned cluster", func() {
BeforeEach(func() {
zoned = false
})
It("should pass because workers are configured correctly", func() {
errorList := ValidateWorkers(workers, zoned, field.NewPath(""))
Expect(errorList).To(BeEmpty())
})
It("should forbid because zones are configured", func() {
workers[0].Zones = []string{"1", "2"}
errorList := ValidateWorkers(workers, zoned, field.NewPath("workers"))
Expect(errorList).To(ConsistOf(
PointTo(MatchFields(IgnoreExtras, Fields{
"Type": Equal(field.ErrorTypeRequired),
"Field": Equal("workers[0].zones"),
})),
))
})
})
Context("Zoned cluster", func() {
BeforeEach(func() {
zoned = true
workers[0].Zones = []string{"1", "2"}
workers[1].Zones = []string{"1", "2"}
})
It("should pass because workers are configured correctly", func() {
errorList := ValidateWorkers(workers, zoned, field.NewPath(""))
Expect(errorList).To(BeEmpty())
})
It("should forbid because volume is not configured", func() {
workers[1].Volume = nil
errorList := ValidateWorkers(workers, zoned, field.NewPath("workers"))
Expect(errorList).To(ConsistOf(
PointTo(MatchFields(IgnoreExtras, Fields{
"Type": Equal(field.ErrorTypeRequired),
"Field": Equal("workers[1].volume"),
})),
))
})
It("should forbid because volume type and size are not configured", func() {
workers[0].Volume.Type = nil
workers[0].Volume.VolumeSize = ""
errorList := ValidateWorkers(workers, zoned, field.NewPath("workers"))
Expect(errorList).To(ConsistOf(
PointTo(MatchFields(IgnoreExtras, Fields{
"Type": Equal(field.ErrorTypeRequired),
"Field": Equal("workers[0].volume.type"),
})),
PointTo(MatchFields(IgnoreExtras, Fields{
"Type": Equal(field.ErrorTypeRequired),
"Field": Equal("workers[0].volume.size"),
})),
))
})
It("should forbid because worker does not specify a zone", func() {
workers[0].Zones = nil
errorList := ValidateWorkers(workers, zoned, field.NewPath("workers"))
Expect(errorList).To(ConsistOf(
PointTo(MatchFields(IgnoreExtras, Fields{
"Type": Equal(field.ErrorTypeRequired),
"Field": Equal("workers[0].zones"),
})),
))
})
It("should forbid because worker use zone twice", func() {
workers[0].Zones[1] = workers[0].Zones[0]
errorList := ValidateWorkers(workers, zoned, field.NewPath("workers"))
Expect(errorList).To(ConsistOf(
PointTo(MatchFields(IgnoreExtras, Fields{
"Type": Equal(field.ErrorTypeInvalid),
"Field": Equal("workers[0].zones[1]"),
})),
))
})
})
})
Describe("#ValidateWorkersUpdate", func() {
Context("Zoned cluster", func() {
BeforeEach(func() {
workers[0].Zones = []string{"1", "2"}
workers[1].Zones = []string{"1", "2"}
})
It("should pass because workers are unchanged", func() {
newWorkers := copyWorkers(workers)
errorList := ValidateWorkersUpdate(workers, newWorkers, field.NewPath("workers"))
Expect(errorList).To(BeEmpty())
})
It("should allow adding workers", func() {
newWorkers := append(workers[:0:0], workers...)
workers = workers[:1]
errorList := ValidateWorkersUpdate(workers, newWorkers, field.NewPath("workers"))
Expect(errorList).To(BeEmpty())
})
It("should allow adding a zone to a worker", func() {
newWorkers := copyWorkers(workers)
newWorkers[0].Zones = append(newWorkers[0].Zones, "another-zone")
errorList := ValidateWorkersUpdate(workers, newWorkers, field.NewPath("workers"))
Expect(errorList).To(BeEmpty())
})
It("should forbid removing a zone from a worker", func() {
newWorkers := copyWorkers(workers)
newWorkers[1].Zones = newWorkers[1].Zones[1:]
errorList := ValidateWorkersUpdate(workers, newWorkers, field.NewPath("workers"))
Expect(errorList).To(ConsistOf(
PointTo(MatchFields(IgnoreExtras, Fields{
"Type": Equal(field.ErrorTypeInvalid),
"Field": Equal("workers[1].zones"),
})),
))
})
It("should forbid changing the zone order", func() {
newWorkers := copyWorkers(workers)
newWorkers[0].Zones[0] = workers[0].Zones[1]
newWorkers[0].Zones[1] = workers[0].Zones[0]
newWorkers[1].Zones[0] = workers[1].Zones[1]
newWorkers[1].Zones[1] = workers[1].Zones[0]
errorList := ValidateWorkersUpdate(workers, newWorkers, field.NewPath("workers"))
Expect(errorList).To(ConsistOf(
PointTo(MatchFields(IgnoreExtras, Fields{
"Type": Equal(field.ErrorTypeInvalid),
"Field": Equal("workers[0].zones"),
})),
PointTo(MatchFields(IgnoreExtras, Fields{
"Type": Equal(field.ErrorTypeInvalid),
"Field": Equal("workers[1].zones"),
})),
))
})
It("should forbid adding a zone while changing an existing one", func() {
newWorkers := copyWorkers(workers)
newWorkers = append(newWorkers, core.Worker{Name: "worker3", Zones: []string{"zone1"}})
newWorkers[1].Zones[0] = workers[1].Zones[1]
errorList := ValidateWorkersUpdate(workers, newWorkers, field.NewPath("workers"))
Expect(errorList).To(ConsistOf(
PointTo(MatchFields(IgnoreExtras, Fields{
"Type": Equal(field.ErrorTypeInvalid),
"Field": Equal("workers[1].zones"),
})),
))
})
})
})
})
})
func
|
(workers []core.Worker) []core.Worker {
copy := append(workers[:0:0], workers...)
for i := range copy {
copy[i].Zones = append(workers[i].Zones[:0:0], workers[i].Zones...)
}
return copy
}
|
copyWorkers
|
index.js
|
/**
* Copyright (c) Facebook, Inc. and its affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
import React, { useState, useCallback, useEffect, useRef } from "react";
import clsx from "clsx";
import useDocusaurusContext from "@docusaurus/useDocusaurusContext";
import useUserPreferencesContext from "@theme/hooks/useUserPreferencesContext";
import useLockBodyScroll from "@theme/hooks/useLockBodyScroll";
import useWindowSize, { windowSizes } from "@theme/hooks/useWindowSize";
import useScrollPosition from "@theme/hooks/useScrollPosition";
import Link from "@docusaurus/Link";
import isInternalUrl from "@docusaurus/isInternalUrl";
import styles from "./styles.module.css";
const MOBILE_TOGGLE_SIZE = 24;
function usePrevious(value) {
const ref = useRef(value);
useEffect(() => {
ref.current = value;
}, [value]);
return ref.current;
}
// Compare the 2 paths, ignoring trailing /
const isSamePath = (path1, path2) => {
const normalize = (str) => (str.endsWith("/") ? str : `${str}/`);
return normalize(path1) === normalize(path2);
};
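// Illustrative example (hypothetical paths): isSamePath("/docs/intro", "/docs/intro/") === true.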
const isActiveSidebarItem = (item, activePath) => {
if (item.type === "link") {
return isSamePath(item.href, activePath);
}
if (item.type === "category") {
return item.items.some((subItem) =>
isActiveSidebarItem(subItem, activePath)
);
}
return false;
};
function DocSidebarItemCategory({
item,
onItemClick,
collapsible,
activePath,
...props
}) {
const { items, label } = item;
const isActive = isActiveSidebarItem(item, activePath);
const wasActive = usePrevious(isActive);
// active categories are always initialized as expanded
// the default (item.collapsed) is only used for non-active categories
const [collapsed, setCollapsed] = useState(() => {
if (!collapsible) {
return false;
}
return isActive ? false : item.collapsed;
});
// If we navigate to a category, it should automatically expand itself
useEffect(() => {
const justBecameActive = isActive && !wasActive;
if (justBecameActive && collapsed) {
setCollapsed(false);
}
}, [isActive, wasActive, collapsed]);
const handleItemClick = useCallback(
(e) => {
e.preventDefault();
setCollapsed((state) => !state);
},
[setCollapsed]
);
if (items.length === 0) {
return null;
}
return (
<li
className={clsx("menu__list-item", {
"menu__list-item--collapsed": collapsed,
})}
key={label}
>
<a
className={clsx("menu__link", {
"menu__link--sublist": collapsible,
"menu__link--active": collapsible && isActive,
[styles.menuLinkText]: !collapsible,
})}
onClick={collapsible ? handleItemClick : undefined}
href={collapsible ? "#!" : undefined}
{...props}
>
{label}
</a>
<ul className="menu__list">
{items.map((childItem) => (
<DocSidebarItem
tabIndex={collapsed ? "-1" : "0"}
key={childItem.label}
item={childItem}
onItemClick={onItemClick}
collapsible={collapsible}
activePath={activePath}
/>
))}
</ul>
</li>
);
}
function
|
({
item,
onItemClick,
activePath,
collapsible: _collapsible,
...props
}) {
const { href, label, deprecated } = item;
const isActive = isActiveSidebarItem(item, activePath);
return (
<li className="menu__list-item" key={label}>
<Link
className={clsx("menu__link", {
"menu__link--active": isActive,
"menu__link--deprecated": deprecated,
})}
style={{ justifyContent: "start" }}
to={href}
{...(isInternalUrl(href)
? {
isNavLink: true,
exact: true,
onClick: onItemClick,
}
: {
target: "_blank",
rel: "noreferrer noopener",
})}
{...props}
>
{deprecated && (
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 24 24"
style={{
alignSelf: "center",
flexShrink: 0,
marginRight:
"calc(var(--ifm-menu-link-padding-horizontal) / 1.5)",
}}
fill="currentColor"
width="18px"
height="18px"
>
<path d="M0 0h24v24H0z" fill="none" />
<path d="M12 2C6.48 2 2 6.48 2 12s4.48 10 10 10 10-4.48 10-10S17.52 2 12 2zm0 18c-4.42 0-8-3.58-8-8 0-1.85.63-3.55 1.69-4.9L16.9 18.31C15.55 19.37 13.85 20 12 20zm6.31-3.1L7.1 5.69C8.45 4.63 10.15 4 12 4c4.42 0 8 3.58 8 8 0 1.85-.63 3.55-1.69 4.9z" />
</svg>
)}
{label}
</Link>
</li>
);
}
function DocSidebarItem(props) {
switch (props.item.type) {
case "category":
return <DocSidebarItemCategory {...props} />;
case "link":
default:
return <DocSidebarItemLink {...props} />;
}
}
function DocSidebar(props) {
const [showResponsiveSidebar, setShowResponsiveSidebar] = useState(false);
const {
siteConfig: {
themeConfig: { navbar: { title, hideOnScroll = false } = {} },
} = {},
isClient,
} = useDocusaurusContext();
const { isAnnouncementBarClosed } = useUserPreferencesContext();
const { scrollY } = useScrollPosition();
const {
docsSidebars,
path,
sidebar: currentSidebar,
sidebarCollapsible,
} = props;
useLockBodyScroll(showResponsiveSidebar);
const windowSize = useWindowSize();
useEffect(() => {
if (windowSize === windowSizes.desktop) {
setShowResponsiveSidebar(false);
}
}, [windowSize]);
if (!currentSidebar) {
return null;
}
const sidebarData = docsSidebars[currentSidebar];
if (!sidebarData) {
throw new Error(
`Cannot find the sidebar "${currentSidebar}" in the sidebar config!`
);
}
return (
<div
className={clsx(styles.sidebar, {
[styles.sidebarWithHideableNavbar]: hideOnScroll,
})}
>
<div
className={clsx("menu", "menu--responsive", styles.menu, {
"menu--show": showResponsiveSidebar,
[styles.menuWithAnnouncementBar]:
!isAnnouncementBarClosed && scrollY === 0,
})}
>
<button
aria-label={showResponsiveSidebar ? "Close Menu" : "Open Menu"}
aria-haspopup="true"
className="button button--secondary button--sm menu__button"
type="button"
onClick={() => {
setShowResponsiveSidebar(!showResponsiveSidebar);
}}
>
{showResponsiveSidebar ? (
<span
className={clsx(
styles.sidebarMenuIcon,
styles.sidebarMenuCloseIcon
)}
>
×
</span>
) : (
<svg
aria-label="Menu"
className={styles.sidebarMenuIcon}
xmlns="http://www.w3.org/2000/svg"
height={MOBILE_TOGGLE_SIZE}
width={MOBILE_TOGGLE_SIZE}
viewBox="0 0 32 32"
role="img"
focusable="false"
>
<title>Menu</title>
<path
stroke="currentColor"
strokeLinecap="round"
strokeMiterlimit="10"
strokeWidth="2"
d="M4 7h22M4 15h22M4 23h22"
/>
</svg>
)}
</button>
<ul className="menu__list">
{sidebarData.map((item, idx) => (
<DocSidebarItem
key={idx}
item={item}
onItemClick={(e) => {
e.target.blur();
setShowResponsiveSidebar(false);
}}
collapsible={sidebarCollapsible}
activePath={path}
/>
))}
</ul>
</div>
</div>
);
}
export default DocSidebar;
|
DocSidebarItemLink
|
type.ts
|
/** spell-checker: disable */
export interface IOmniClientInfo {
omnicoreversion_int: number;
omnicoreversion: string;
mastercoreversion: string;
bitcoincoreversion: string;
commitinfo: string;
block: number;
blocktime: number;
blocktransactions: number;
totaltransactions: number;
alerts: Array<{
    alerttypeint: number;
    alerttype: string;
alertexpiry: string;
alertmessage: string;
}>;
}
export interface IOmniTxInfo {
txid: string;
fee: string;
// from
sendingaddress: string;
// to
referenceaddress: string;
// in rpc wallet?
ismine: boolean;
version: number;
type_int: number;
type: string;
valid?: boolean;
invalidreason?: string;
block: number;
confirmations: number;
propertyid: number;
propertyname: string;
divisible: boolean;
amount: string;
blockhash: string;
blocktime: number;
positioninblock: number;
[key: string]: any;
// Crowdsale Purchase
// purchasedpropertyid?: number;
// purchasedpropertyname?: string;
// purchasedpropertydivisible?: boolean;
// purchasedtokens?: string;
// issuertokens?: string;
// ...
}
export interface IOmniPropertyInfo {
propertyid: number; // (number) the identifier
name: string; // (string) the name of the tokens
category: string; // (string) the category used for the tokens
subcategory: string; // (string) the subcategory used for the tokens
data: string; // (string) additional information or a description
  url: string; // (string) a URI, for example pointing to a website
divisible: boolean; // (boolean) whether the tokens are divisible
issuer: string; // (string) the Bitcoin address of the issuer on record
creationtxid: string; // (string) the hex-encoded creation transaction hash
|
}
export interface IOmniPropertyBalance {
balance: string;
reserved: string;
frozen: string;
}
|
fixedissuance: boolean; // (boolean) whether the token supply is fixed
managedissuance: boolean; // (boolean) whether the token supply is managed by the issuer
freezingenabled: boolean; // (boolean) whether freezing is enabled for the property (managed properties only)
totaltokens: string; // (string) the total number of tokens in existence
|
policyDefinitionAtManagementGroup.go
|
// *** WARNING: this file was generated by the Pulumi SDK Generator. ***
// *** Do not edit by hand unless you're certain you know what you are doing! ***
package v20190601
import (
"reflect"
"github.com/pkg/errors"
"github.com/pulumi/pulumi/sdk/v2/go/pulumi"
)
// The policy definition.
type PolicyDefinitionAtManagementGroup struct {
pulumi.CustomResourceState
// The policy definition description.
Description pulumi.StringPtrOutput `pulumi:"description"`
// The display name of the policy definition.
DisplayName pulumi.StringPtrOutput `pulumi:"displayName"`
// The policy definition metadata.
Metadata pulumi.AnyOutput `pulumi:"metadata"`
// The policy definition mode. Some examples are All, Indexed, Microsoft.KeyVault.Data.
Mode pulumi.StringPtrOutput `pulumi:"mode"`
// The name of the policy definition.
Name pulumi.StringOutput `pulumi:"name"`
// Required if a parameter is used in policy rule.
Parameters pulumi.AnyOutput `pulumi:"parameters"`
// The policy rule.
PolicyRule pulumi.AnyOutput `pulumi:"policyRule"`
// The type of policy definition. Possible values are NotSpecified, BuiltIn, and Custom.
PolicyType pulumi.StringPtrOutput `pulumi:"policyType"`
// The type of the resource (Microsoft.Authorization/policyDefinitions).
Type pulumi.StringOutput `pulumi:"type"`
}
// NewPolicyDefinitionAtManagementGroup registers a new resource with the given unique name, arguments, and options.
func NewPolicyDefinitionAtManagementGroup(ctx *pulumi.Context,
name string, args *PolicyDefinitionAtManagementGroupArgs, opts ...pulumi.ResourceOption) (*PolicyDefinitionAtManagementGroup, error) {
if args == nil || args.ManagementGroupId == nil
|
if args == nil || args.PolicyDefinitionName == nil {
return nil, errors.New("missing required argument 'PolicyDefinitionName'")
}
if args == nil {
args = &PolicyDefinitionAtManagementGroupArgs{}
}
aliases := pulumi.Aliases([]pulumi.Alias{
{
Type: pulumi.String("azure-nextgen:management/latest:PolicyDefinitionAtManagementGroup"),
},
{
Type: pulumi.String("azure-nextgen:management/v20161201:PolicyDefinitionAtManagementGroup"),
},
{
Type: pulumi.String("azure-nextgen:management/v20180301:PolicyDefinitionAtManagementGroup"),
},
{
Type: pulumi.String("azure-nextgen:management/v20180501:PolicyDefinitionAtManagementGroup"),
},
{
Type: pulumi.String("azure-nextgen:management/v20190101:PolicyDefinitionAtManagementGroup"),
},
{
Type: pulumi.String("azure-nextgen:management/v20190901:PolicyDefinitionAtManagementGroup"),
},
{
Type: pulumi.String("azure-nextgen:management/v20200301:PolicyDefinitionAtManagementGroup"),
},
})
opts = append(opts, aliases)
var resource PolicyDefinitionAtManagementGroup
err := ctx.RegisterResource("azure-nextgen:management/v20190601:PolicyDefinitionAtManagementGroup", name, args, &resource, opts...)
if err != nil {
return nil, err
}
return &resource, nil
}
// GetPolicyDefinitionAtManagementGroup gets an existing PolicyDefinitionAtManagementGroup resource's state with the given name, ID, and optional
// state properties that are used to uniquely qualify the lookup (nil if not required).
func GetPolicyDefinitionAtManagementGroup(ctx *pulumi.Context,
name string, id pulumi.IDInput, state *PolicyDefinitionAtManagementGroupState, opts ...pulumi.ResourceOption) (*PolicyDefinitionAtManagementGroup, error) {
var resource PolicyDefinitionAtManagementGroup
err := ctx.ReadResource("azure-nextgen:management/v20190601:PolicyDefinitionAtManagementGroup", name, id, state, &resource, opts...)
if err != nil {
return nil, err
}
return &resource, nil
}
// Input properties used for looking up and filtering PolicyDefinitionAtManagementGroup resources.
type policyDefinitionAtManagementGroupState struct {
// The policy definition description.
Description *string `pulumi:"description"`
// The display name of the policy definition.
DisplayName *string `pulumi:"displayName"`
// The policy definition metadata.
Metadata interface{} `pulumi:"metadata"`
// The policy definition mode. Some examples are All, Indexed, Microsoft.KeyVault.Data.
Mode *string `pulumi:"mode"`
// The name of the policy definition.
Name *string `pulumi:"name"`
// Required if a parameter is used in policy rule.
Parameters interface{} `pulumi:"parameters"`
// The policy rule.
PolicyRule interface{} `pulumi:"policyRule"`
// The type of policy definition. Possible values are NotSpecified, BuiltIn, and Custom.
PolicyType *string `pulumi:"policyType"`
// The type of the resource (Microsoft.Authorization/policyDefinitions).
Type *string `pulumi:"type"`
}
type PolicyDefinitionAtManagementGroupState struct {
// The policy definition description.
Description pulumi.StringPtrInput
// The display name of the policy definition.
DisplayName pulumi.StringPtrInput
// The policy definition metadata.
Metadata pulumi.Input
// The policy definition mode. Some examples are All, Indexed, Microsoft.KeyVault.Data.
Mode pulumi.StringPtrInput
// The name of the policy definition.
Name pulumi.StringPtrInput
// Required if a parameter is used in policy rule.
Parameters pulumi.Input
// The policy rule.
PolicyRule pulumi.Input
// The type of policy definition. Possible values are NotSpecified, BuiltIn, and Custom.
PolicyType pulumi.StringPtrInput
// The type of the resource (Microsoft.Authorization/policyDefinitions).
Type pulumi.StringPtrInput
}
func (PolicyDefinitionAtManagementGroupState) ElementType() reflect.Type {
return reflect.TypeOf((*policyDefinitionAtManagementGroupState)(nil)).Elem()
}
type policyDefinitionAtManagementGroupArgs struct {
// The policy definition description.
Description *string `pulumi:"description"`
// The display name of the policy definition.
DisplayName *string `pulumi:"displayName"`
// The ID of the management group.
ManagementGroupId string `pulumi:"managementGroupId"`
// The policy definition metadata.
Metadata interface{} `pulumi:"metadata"`
// The policy definition mode. Some examples are All, Indexed, Microsoft.KeyVault.Data.
Mode *string `pulumi:"mode"`
// Required if a parameter is used in policy rule.
Parameters interface{} `pulumi:"parameters"`
// The name of the policy definition to create.
PolicyDefinitionName string `pulumi:"policyDefinitionName"`
// The policy rule.
PolicyRule interface{} `pulumi:"policyRule"`
// The type of policy definition. Possible values are NotSpecified, BuiltIn, and Custom.
PolicyType *string `pulumi:"policyType"`
}
// The set of arguments for constructing a PolicyDefinitionAtManagementGroup resource.
type PolicyDefinitionAtManagementGroupArgs struct {
// The policy definition description.
Description pulumi.StringPtrInput
// The display name of the policy definition.
DisplayName pulumi.StringPtrInput
// The ID of the management group.
ManagementGroupId pulumi.StringInput
// The policy definition metadata.
Metadata pulumi.Input
// The policy definition mode. Some examples are All, Indexed, Microsoft.KeyVault.Data.
Mode pulumi.StringPtrInput
// Required if a parameter is used in policy rule.
Parameters pulumi.Input
// The name of the policy definition to create.
PolicyDefinitionName pulumi.StringInput
// The policy rule.
PolicyRule pulumi.Input
// The type of policy definition. Possible values are NotSpecified, BuiltIn, and Custom.
PolicyType pulumi.StringPtrInput
}
func (PolicyDefinitionAtManagementGroupArgs) ElementType() reflect.Type {
return reflect.TypeOf((*policyDefinitionAtManagementGroupArgs)(nil)).Elem()
}
|
{
return nil, errors.New("missing required argument 'ManagementGroupId'")
}
|
print.ts
|
import chalk from 'chalk';
import { colors, Colors } from './colors';
import { timestamp, isFunction, isString } from './utils';
import { format } from 'util';
export interface PrintOptions {
timePrefix?: boolean;
prefix?: string | (() => string);
silent?: boolean;
}
export enum Levels {
fancy,
info,
debug,
warn,
error,
succuss
}
export class Print {
constructor(private options: PrintOptions = {}) {}
get chalk(): chalk.Chalk {
return chalk;
}
get colors(): Colors {
return colors;
}
/**
* Prints text without theme.
*
   * Use this when you're writing output outside the toolkit's
   * printing scheme. Hint: rarely.
*
* @param message The message to write.
*/
fancy(message: string, ...optionalParams: any[]) {
const msg = this.format(Levels.fancy, message, ...optionalParams);
if (!this.options.silent) {
console.log(msg);
}
}
debug(message: string) {
const msg = this.format(Levels.debug, message);
if (!this.options.silent) {
console.log(msg);
}
}
info(message: string, ...optionalParams: any[]) {
const msg = this.format(Levels.info, message, ...optionalParams);
if (!this.options.silent) {
console.log(msg);
}
}
warn(message: string, ...optionalParams: any[]) {
const msg = this.format(Levels.warn, message, ...optionalParams);
if (!this.options.silent) {
console.log(msg);
}
}
error(message: string | Error, ...optionalParams: any[]) {
if (isString(message)) {
const msg = this.format(Levels.error, message, ...optionalParams);
if (!this.options.silent) {
console.log(msg);
}
} else {
console.error(message);
}
}
succuss(message: string, ...optionalParams: any[]) {
const msg = this.format(Levels.succuss, message, ...optionalParams);
if (!this.options.silent) {
console.log(msg);
}
}
format(level: Levels, message: string, ...optionalParams: any[]) {
let msg = format(message, ...optionalParams);
switch (level) {
case Levels.info:
msg = chalk.blue(msg);
break;
case Levels.debug:
msg = chalk.cyan(msg);
break;
case Levels.warn:
msg = chalk.yellow(msg);
break;
case Levels.error:
msg = chalk.red(msg);
break;
case Levels.succuss:
msg = chalk.green(msg);
break;
}
let prefix: string;
if (this.options.prefix) {
prefix = isFunction(this.options.prefix) ? this.options.prefix() : this.options.prefix;
} else if (this.options.timePrefix) {
prefix = timestamp('HH:mm:ss');
}
if (prefix) {
msg = `[${chalk.gray(prefix)}] ${msg}`;
|
return msg;
}
}
|
}
|
test_mpris2widget.py
|
# Copyright (c) 2021 elParaguayo
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# Widget specific tests
import sys
from importlib import reload
from types import ModuleType
import pytest
from libqtile.bar import Bar
def no_op(*args, **kwargs):
pass
async def mock_signal_receiver(*args, **kwargs):
return True
def fake_timer(interval, func, *args, **kwargs):
class TimerObj:
def cancel(self):
pass
@property
def _scheduled(self):
return False
return TimerObj()
class MockConstants(ModuleType):
class MessageType:
SIGNAL = 1
class MockMessage:
def __init__(self, is_signal=True, body=None):
self.message_type = 1 if is_signal else 0
self.body = body
# dbus_next message data is stored in variants. The widget extracts the
# information via the `value` attribute so we just need to mock that here.
class obj: # noqa: N801
def __init__(self, value):
self.value = value
# Creates a mock message body containing both metadata and playback status
def metadata_and_status(status):
return MockMessage(body=(
"",
{
'Metadata': obj(
{
'mpris:trackid': obj(1),
'xesam:url': obj("/path/to/rickroll.mp3"),
'xesam:title': obj("Never Gonna Give You Up"),
'xesam:artist': obj(["Rick Astley"]),
'xesam:album': obj("Whenever You Need Somebody"),
'mpris:length': obj(200000000)
}
),
'PlaybackStatus': obj(status)
},
[])
)
# Creates a mock message body containing just playback status
def playback_status(status, signal=True):
return MockMessage(is_signal=signal, body=(
"",
{
'PlaybackStatus': obj(status)
},
[])
)
METADATA_PLAYING = metadata_and_status("Playing")
METADATA_PAUSED = metadata_and_status("Paused")
STATUS_PLAYING = playback_status("Playing")
STATUS_PAUSED = playback_status("Paused")
STATUS_STOPPED = playback_status("Stopped")
NON_SIGNAL = playback_status("Paused", False)
@pytest.fixture
def patched_module(monkeypatch):
# Remove dbus_next.constants entry from modules. If it's not there, don't raise error
monkeypatch.delitem(sys.modules, "dbus_next.constants", raising=False)
monkeypatch.setitem(sys.modules, "dbus_next.constants", MockConstants("dbus_next.constants"))
from libqtile.widget import mpris2widget
# Need to force reload of the module to ensure patched module is loaded
# This may only be needed if dbus_next is installed on testing system so helpful for
# local tests.
reload(mpris2widget)
monkeypatch.setattr("libqtile.widget.mpris2widget.add_signal_receiver", mock_signal_receiver)
return mpris2widget
def test_mpris2_signal_handling(fake_qtile, patched_module, fake_window):
mp = patched_module.Mpris2(scroll_chars=20, scroll_wait_intervals=5)
fakebar = Bar([mp], 24)
fakebar.window = fake_window
fakebar.width = 10
fakebar.height = 10
fakebar.draw = no_op
mp.timeout_add = fake_timer
mp._configure(fake_qtile, fakebar)
assert mp.displaytext == ""
# No text will be displayed if widget is not configured
mp.message(METADATA_PLAYING)
assert mp.displaytext == ""
# Set configured flag, create a message with the metadata and playback status
mp.configured = True
mp.message(METADATA_PLAYING)
assert mp.displaytext == "Never Gonna Give You Up - Whenever You Need Somebody - Rick Astley"
assert mp.text == ""
# Text is displayed after first run of scroll_text
mp.scroll_text()
assert mp.text == "Never Gonna Give You Up - Whenever You Need Somebody - Rick Astley"[:mp.scroll_chars]
    # Text is scrolled 1 character after `scroll_wait_intervals` runs of scroll_text
for _ in range(mp.scroll_wait_intervals):
mp.scroll_text()
assert mp.text == "Never Gonna Give You Up - Whenever You Need Somebody - Rick Astley"[1:mp.scroll_chars + 1]
# Non-signal type message will be ignored
mp.message(NON_SIGNAL)
assert mp.text == "Never Gonna Give You Up - Whenever You Need Somebody - Rick Astley"[1:mp.scroll_chars + 1]
# If widget receives "paused" signal with no metadata then default message is "Paused"
mp.message(STATUS_PAUSED)
assert mp.displaytext == "Paused"
# If widget receives "stopped" signal with no metadata then widget is blank
mp.message(STATUS_STOPPED)
assert mp.displaytext == ""
# Reset to playing + metadata
mp.message(METADATA_PLAYING)
mp.scroll_text()
assert mp.text == "Never Gonna Give You Up - Whenever You Need Somebody - Rick Astley"[:mp.scroll_chars]
# If widget receives "paused" signal with metadata then message is "Paused: {metadata}"
mp.message(METADATA_PAUSED)
mp.scroll_text()
assert mp.text == "Paused: Never Gonna Give You Up - Whenever You Need Somebody - Rick Astley"[:mp.scroll_chars]
# If widget now receives "playing" signal with no metadata, "paused" word is removed
mp.message(STATUS_PLAYING)
mp.scroll_text()
assert mp.text == "Never Gonna Give You Up - Whenever You Need Somebody - Rick Astley"[:mp.scroll_chars]
info = mp.cmd_info()
assert info["displaytext"] == "Never Gonna Give You Up - Whenever You Need Somebody - Rick Astley"
assert info["isplaying"]
def test_mpris2_custom_stop_text(fake_qtile, patched_module, fake_window):
mp = patched_module.Mpris2(stop_pause_text="Test Paused")
fakebar = Bar([mp], 24)
fakebar.window = fake_window
fakebar.width = 10
fakebar.height = 10
fakebar.draw = no_op
mp.timeout_add = fake_timer
mp._configure(fake_qtile, fakebar)
mp.configured = True
mp.message(METADATA_PLAYING)
assert mp.displaytext == "Never Gonna Give You Up - Whenever You Need Somebody - Rick Astley"
assert mp.text == ""
mp.scroll_text()
# Check our custom paused wording is shown
mp.message(STATUS_PAUSED)
|
def test_mpris2_no_metadata(fake_qtile, patched_module, fake_window):
mp = patched_module.Mpris2(stop_pause_text="Test Paused")
fakebar = Bar([mp], 24)
fakebar.window = fake_window
fakebar.width = 10
fakebar.height = 10
fakebar.draw = no_op
mp.timeout_add = fake_timer
mp._configure(fake_qtile, fakebar)
mp.configured = True
mp.message(STATUS_PLAYING)
assert mp.displaytext == "No metadata for current track"
def test_mpris2_no_scroll(fake_qtile, patched_module, fake_window):
# If no scrolling, then the update function creates the text to display
# and draws the bar.
mp = patched_module.Mpris2(scroll_chars=None)
fakebar = Bar([mp], 24)
fakebar.window = fake_window
fakebar.width = 10
fakebar.height = 10
fakebar.draw = no_op
mp.timeout_add = fake_timer
mp._configure(fake_qtile, fakebar)
mp.configured = True
mp.message(METADATA_PLAYING)
assert mp.text == "Never Gonna Give You Up - Whenever You Need Somebody - Rick Astley"
mp.message(METADATA_PAUSED)
assert mp.text == "Paused: Never Gonna Give You Up - Whenever You Need Somebody - Rick Astley"
def test_mpris2_clear_after_scroll(fake_qtile, patched_module, fake_window):
mp = patched_module.Mpris2(scroll_chars=60, scroll_wait_intervals=2)
fakebar = Bar([mp], 24)
fakebar.window = fake_window
fakebar.width = 10
fakebar.height = 10
fakebar.draw = no_op
mp.timeout_add = fake_timer
mp._configure(fake_qtile, fakebar)
mp.configured = True
mp.message(METADATA_PLAYING)
# After 10 loops, text should be cleared as scroll reaches end of text.
# 2 loops before starting scroll
    # 6 loops to scroll over the remaining text in the display
# 1 additional loop at end of text (so total 2 loops on that display)
# 1 loop to clear.
for i in range(10):
mp.scroll_text()
assert mp.text == ""
    # TODO: untested lines
# 85-86: Logging when unable to subscribe to dbus signal. Needs `caplog`
|
assert mp.displaytext == "Test Paused"
|
registryPolicy.go
|
// Code generated by the Pulumi SDK Generator DO NOT EDIT.
// *** WARNING: Do not edit by hand unless you're certain you know what you are doing! ***
package eventschemas
import (
"context"
"reflect"
"github.com/pkg/errors"
"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)
// Resource Type definition for AWS::EventSchemas::RegistryPolicy
type RegistryPolicy struct {
pulumi.CustomResourceState
Policy pulumi.AnyOutput `pulumi:"policy"`
RegistryName pulumi.StringOutput `pulumi:"registryName"`
RevisionId pulumi.StringPtrOutput `pulumi:"revisionId"`
}
// NewRegistryPolicy registers a new resource with the given unique name, arguments, and options.
func NewRegistryPolicy(ctx *pulumi.Context,
name string, args *RegistryPolicyArgs, opts ...pulumi.ResourceOption) (*RegistryPolicy, error) {
if args == nil {
return nil, errors.New("missing one or more required arguments")
}
if args.Policy == nil {
return nil, errors.New("invalid value for required argument 'Policy'")
}
if args.RegistryName == nil {
return nil, errors.New("invalid value for required argument 'RegistryName'")
}
var resource RegistryPolicy
err := ctx.RegisterResource("aws-native:eventschemas:RegistryPolicy", name, args, &resource, opts...)
if err != nil {
return nil, err
}
return &resource, nil
}
// GetRegistryPolicy gets an existing RegistryPolicy resource's state with the given name, ID, and optional
// state properties that are used to uniquely qualify the lookup (nil if not required).
func GetRegistryPolicy(ctx *pulumi.Context,
name string, id pulumi.IDInput, state *RegistryPolicyState, opts ...pulumi.ResourceOption) (*RegistryPolicy, error) {
var resource RegistryPolicy
err := ctx.ReadResource("aws-native:eventschemas:RegistryPolicy", name, id, state, &resource, opts...)
if err != nil {
return nil, err
}
return &resource, nil
}
|
// Input properties used for looking up and filtering RegistryPolicy resources.
type registryPolicyState struct {
}
type RegistryPolicyState struct {
}
func (RegistryPolicyState) ElementType() reflect.Type {
return reflect.TypeOf((*registryPolicyState)(nil)).Elem()
}
type registryPolicyArgs struct {
Policy interface{} `pulumi:"policy"`
RegistryName string `pulumi:"registryName"`
RevisionId *string `pulumi:"revisionId"`
}
// The set of arguments for constructing a RegistryPolicy resource.
type RegistryPolicyArgs struct {
Policy pulumi.Input
RegistryName pulumi.StringInput
RevisionId pulumi.StringPtrInput
}
func (RegistryPolicyArgs) ElementType() reflect.Type {
return reflect.TypeOf((*registryPolicyArgs)(nil)).Elem()
}
type RegistryPolicyInput interface {
pulumi.Input
ToRegistryPolicyOutput() RegistryPolicyOutput
ToRegistryPolicyOutputWithContext(ctx context.Context) RegistryPolicyOutput
}
func (*RegistryPolicy) ElementType() reflect.Type {
return reflect.TypeOf((**RegistryPolicy)(nil)).Elem()
}
func (i *RegistryPolicy) ToRegistryPolicyOutput() RegistryPolicyOutput {
return i.ToRegistryPolicyOutputWithContext(context.Background())
}
func (i *RegistryPolicy) ToRegistryPolicyOutputWithContext(ctx context.Context) RegistryPolicyOutput {
return pulumi.ToOutputWithContext(ctx, i).(RegistryPolicyOutput)
}
type RegistryPolicyOutput struct{ *pulumi.OutputState }
func (RegistryPolicyOutput) ElementType() reflect.Type {
return reflect.TypeOf((**RegistryPolicy)(nil)).Elem()
}
func (o RegistryPolicyOutput) ToRegistryPolicyOutput() RegistryPolicyOutput {
return o
}
func (o RegistryPolicyOutput) ToRegistryPolicyOutputWithContext(ctx context.Context) RegistryPolicyOutput {
return o
}
func (o RegistryPolicyOutput) Policy() pulumi.AnyOutput {
return o.ApplyT(func(v *RegistryPolicy) pulumi.AnyOutput { return v.Policy }).(pulumi.AnyOutput)
}
func (o RegistryPolicyOutput) RegistryName() pulumi.StringOutput {
return o.ApplyT(func(v *RegistryPolicy) pulumi.StringOutput { return v.RegistryName }).(pulumi.StringOutput)
}
func (o RegistryPolicyOutput) RevisionId() pulumi.StringPtrOutput {
return o.ApplyT(func(v *RegistryPolicy) pulumi.StringPtrOutput { return v.RevisionId }).(pulumi.StringPtrOutput)
}
func init() {
pulumi.RegisterInputType(reflect.TypeOf((*RegistryPolicyInput)(nil)).Elem(), &RegistryPolicy{})
pulumi.RegisterOutputType(RegistryPolicyOutput{})
}
| |
test_asl.py
|
# -*- coding: utf-8 -*-
# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
# vi: set ft=python sts=4 ts=4 sw=4 et:
import pytest
from ....testing import example_data
from ...niftyreg import get_custom_path
from ..asl import FitAsl
from ...niftyreg.tests.test_regutils import no_nifty_tool
@pytest.mark.skipif(
no_nifty_tool(cmd='fit_asl'), reason="niftyfit is not installed")
def
|
():
""" Testing FitAsl interface."""
# Create the test node
fit_asl = FitAsl()
# Check if the command is properly defined
cmd = get_custom_path('fit_asl', env_dir='NIFTYFIT_DIR')
assert fit_asl.cmd == cmd
# test raising error with mandatory args absent
with pytest.raises(ValueError):
fit_asl.run()
# Tests on the interface:
# Runs cbf fitting assuming all tissue is GM!
in_file = example_data('asl.nii.gz')
fit_asl.inputs.source_file = in_file
cmd_tmp = '{cmd} -source {in_file} -cbf {cbf} -error {error} -syn {syn}'
expected_cmd = cmd_tmp.format(
cmd=cmd,
in_file=in_file,
cbf='asl_cbf.nii.gz',
error='asl_error.nii.gz',
syn='asl_syn.nii.gz',
)
assert fit_asl.cmdline == expected_cmd
# Runs cbf fitting using IR/SR T1 data to estimate the local T1 and uses
# the segmentation data to fit tissue specific blood flow parameters
# (lambda,transit times,T1)
fit_asl2 = FitAsl(sig=True)
in_file = example_data('asl.nii.gz')
t1map = example_data('T1map.nii.gz')
seg = example_data('segmentation0.nii.gz')
fit_asl2.inputs.source_file = in_file
fit_asl2.inputs.t1map = t1map
fit_asl2.inputs.seg = seg
cmd_tmp = '{cmd} -source {in_file} -cbf {cbf} -error {error} \
-seg {seg} -sig -syn {syn} -t1map {t1map}'
expected_cmd = cmd_tmp.format(
cmd=cmd,
in_file=in_file,
t1map=t1map,
seg=seg,
cbf='asl_cbf.nii.gz',
error='asl_error.nii.gz',
syn='asl_syn.nii.gz',
)
assert fit_asl2.cmdline == expected_cmd
|
test_fit_asl
|
object.go
|
package networkdelay
import (
"fmt"
"sync"
"github.com/iotaledger/goshimmer/packages/tangle/payload"
"github.com/iotaledger/hive.go/marshalutil"
"github.com/iotaledger/hive.go/stringify"
"github.com/mr-tron/base58"
)
const (
// ObjectName defines the name of the networkdelay object.
ObjectName = "networkdelay"
)
// ID represents a 32 byte ID of a network delay object.
type ID [32]byte
// String returns a human-friendly representation of the ID.
func (id ID) String() string {
return base58.Encode(id[:])
}
// Object represents the network delay object type.
type Object struct {
id ID
sentTime int64
bytes []byte
bytesMutex sync.RWMutex
}
// NewObject creates a new network delay object.
func NewObject(id ID, sentTime int64) *Object {
return &Object{
id: id,
sentTime: sentTime,
}
}
// FromBytes parses the marshaled version of an Object into a Go object.
// It returns the parsed Object and the number of bytes that were consumed.
func FromBytes(bytes []byte) (result *Object, consumedBytes int, err error) {
marshalUtil := marshalutil.New(bytes)
result, err = Parse(marshalUtil)
consumedBytes = marshalUtil.ReadOffset()
return
}
// Parse unmarshals an Object using the given marshalUtil (for easier marshaling/unmarshaling).
func Parse(marshalUtil *marshalutil.MarshalUtil) (result *Object, err error) {
// read information that are required to identify the object from the outside
if _, err = marshalUtil.ReadUint32(); err != nil {
err = fmt.Errorf("failed to parse payload size of networkdelay object: %w", err)
return
}
if _, err = marshalUtil.ReadUint32(); err != nil {
err = fmt.Errorf("failed to parse payload type of networkdelay object: %w", err)
return
}
// parse id
result = &Object{}
id, err := marshalUtil.ReadBytes(32)
if err != nil {
err = fmt.Errorf("failed to parse id of networkdelay object: %w", err)
return
}
copy(result.id[:], id)
// parse sent time
if result.sentTime, err = marshalUtil.ReadInt64(); err != nil {
err = fmt.Errorf("failed to parse sent time of networkdelay object: %w", err)
return
}
    // store bytes, so we don't have to marshal manually
    consumedBytes := marshalUtil.ReadOffset()
    result.bytes = make([]byte, consumedBytes)
    copy(result.bytes, marshalUtil.Bytes()[:consumedBytes])
return
}
// Bytes returns a marshaled version of this Object.
|
o.bytesMutex.RLock()
// return if bytes have been determined already
if bytes = o.bytes; bytes != nil {
o.bytesMutex.RUnlock()
return
}
// switch to write lock
o.bytesMutex.RUnlock()
o.bytesMutex.Lock()
defer o.bytesMutex.Unlock()
// return if bytes have been determined in the mean time
if bytes = o.bytes; bytes != nil {
return
}
objectLength := len(o.id) + marshalutil.Int64Size
// initialize helper
marshalUtil := marshalutil.New(marshalutil.Uint32Size + marshalutil.Uint32Size + objectLength)
// marshal the payload specific information
marshalUtil.WriteUint32(uint32(objectLength))
marshalUtil.WriteBytes(Type.Bytes())
marshalUtil.WriteBytes(o.id[:])
marshalUtil.WriteInt64(o.sentTime)
bytes = marshalUtil.Bytes()
return
}
// String returns a human-friendly representation of the Object.
func (o *Object) String() string {
return stringify.Struct("NetworkDelayObject",
stringify.StructField("id", o.id),
stringify.StructField("sentTime", uint64(o.sentTime)),
)
}
// region Payload implementation ///////////////////////////////////////////////////////////////////////////////////////
// Type represents the identifier which addresses the network delay Object type.
var Type = payload.NewType(189, ObjectName, func(data []byte) (payload payload.Payload, err error) {
payload, _, err = FromBytes(data)
return
})
// Type returns the type of the Object.
func (o *Object) Type() payload.Type {
return Type
}
// // endregion ///////////////////////////////////////////////////////////////////////////////////////////////////////////
|
func (o *Object) Bytes() (bytes []byte) {
// acquire lock for reading bytes
|
resolvers.go
|
package pub
import (
"fmt"
"github.com/go-fed/activity/streams"
"github.com/go-fed/activity/vocab"
"net/url"
)
// ToPubObject transforms a json-deserialized ActivityStream object into a
// PubObject for use with the pub library. Note that for an object to be an
// ActivityPub object, it must have an 'id' and at least one 'type'.
func ToPubObject(m map[string]interface{}) (t []PubObject, e error) {
r := &streams.Resolver{
AnyObjectCallback: func(i vocab.ObjectType) error {
if !i.HasId() {
return fmt.Errorf("object type does not have an id: %q", i)
} else if i.TypeLen() == 0 {
return fmt.Errorf("object type does not have a type: %q", i)
}
t = append(t, i)
return nil
},
AnyLinkCallback: func(i vocab.LinkType) error {
if !i.HasId() {
return fmt.Errorf("link type does not have an id: %q", i)
} else if i.TypeLen() == 0 {
return fmt.Errorf("link type does not have a type: %q", i)
}
t = append(t, i)
return nil
},
}
e = r.Deserialize(m)
return t, e
}
func getActorObject(m map[string]interface{}) (actorObject, error) {
var a actorObject
err := toActorObjectResolver(&a).Deserialize(m)
return a, err
}
func toActorObjectResolver(a *actorObject) *streams.Resolver {
return &streams.Resolver{
AnyObjectCallback: func(i vocab.ObjectType) error {
if o, ok := i.(actorObject); ok {
*a = o
|
},
}
}
func toActorResolver(a *actor) *streams.Resolver {
return &streams.Resolver{
AnyObjectCallback: func(i vocab.ObjectType) error {
if o, ok := i.(actor); ok {
*a = o
}
return nil
},
}
}
func toActorCollectionResolver(a *actor, c **streams.Collection, oc **streams.OrderedCollection, cp **streams.CollectionPage, ocp **streams.OrderedCollectionPage) *streams.Resolver {
r := toActorResolver(a)
r.CollectionCallback = func(i *streams.Collection) error {
*c = i
return nil
}
r.OrderedCollectionCallback = func(i *streams.OrderedCollection) error {
*oc = i
return nil
}
r.CollectionPageCallback = func(i *streams.CollectionPage) error {
*cp = i
return nil
}
r.OrderedCollectionPageCallback = func(i *streams.OrderedCollectionPage) error {
*ocp = i
return nil
}
return r
}
func toIdResolver(ok *bool, u **url.URL) *streams.Resolver {
return &streams.Resolver{
AnyObjectCallback: func(i vocab.ObjectType) error {
*ok = i.HasId()
if *ok {
*u = i.GetId()
}
return nil
},
}
}
func toCollectionPage(m map[string]interface{}) (c *streams.CollectionPage, err error) {
r := &streams.Resolver{
CollectionPageCallback: func(i *streams.CollectionPage) error {
c = i
return nil
},
}
err = r.Deserialize(m)
return
}
func toOrderedCollectionPage(m map[string]interface{}) (c *streams.OrderedCollectionPage, err error) {
r := &streams.Resolver{
OrderedCollectionPageCallback: func(i *streams.OrderedCollectionPage) error {
c = i
return nil
},
}
err = r.Deserialize(m)
return
}
func toTypeIder(m map[string]interface{}) (tid typeIder, err error) {
var t []typeIder
r := &streams.Resolver{
AnyObjectCallback: func(i vocab.ObjectType) error {
t = append(t, i)
return nil
},
AnyLinkCallback: func(i vocab.LinkType) error {
t = append(t, i)
return nil
},
}
err = r.Deserialize(m)
if err != nil {
return
}
// This should not be more than 1 as clients are not permitted to send
// an array of objects/links.
if len(t) != 1 {
err = fmt.Errorf("too many object/links: %d", len(t))
return
}
tid = t[0]
return
}
func toAnyActivity(m map[string]interface{}) (o vocab.ActivityType, err error) {
r := &streams.Resolver{
AnyActivityCallback: func(i vocab.ActivityType) error {
o = i
return nil
},
}
err = r.Deserialize(m)
return
}
func toAnyObject(m map[string]interface{}) (o vocab.ObjectType, err error) {
r := &streams.Resolver{
AnyObjectCallback: func(i vocab.ObjectType) error {
o = i
return nil
},
}
err = r.Deserialize(m)
return
}
|
}
return nil
|
constants.js
|
/*
* Copyright 2021 EPAM Systems
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
|
export const RP_CLUSTER_LAST_RUN = 'rp.cluster.lastRun';
|
|
api_video.py
|
# -*- coding: utf-8 -*-
import copy
import json
import os
import re
import shutil
import subprocess
import time
|
from . import config
def download_video(
self,
media_id,
filename=None,
media=False,
folder="videos"
):
video_urls = []
if not media:
self.media_info(media_id)
media = self.last_json["items"][0]
filename = (
"{}_{}.mp4".format(media["user"]["username"], media_id)
if not filename
else "{}.mp4".format(filename)
)
try:
clips = media["video_versions"]
video_urls.append(clips[0]["url"])
except KeyError:
carousels = media.get("carousel_media", [])
for carousel in carousels:
video_urls.append(carousel["video_versions"][0]["url"])
except Exception:
return False
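    # Download the clips one by one; the function returns the path of the
    # first clip that already exists on disk or is fetched successfully.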
for counter, video_url in enumerate(video_urls):
fname = os.path.join(folder, "{}_{}".format(counter, filename))
if os.path.exists(fname):
            print('File %s already exists, returning it' % fname)
return os.path.abspath(fname)
response = self.session.get(video_url, stream=True)
if response.status_code == 200:
with open(fname, "wb") as f:
response.raw.decode_content = True
shutil.copyfileobj(response.raw, f)
return os.path.abspath(fname)
# leaving here function used by old upload_video, no more used now
def get_video_info(filename):
res = {}
try:
terminalResult = subprocess.Popen(
["ffprobe", filename],
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT
)
for x in terminalResult.stdout.readlines():
# Duration: 00:00:59.51, start: 0.000000, bitrate: 435 kb/s
m = re.search(
r"duration: (\d\d:\d\d:\d\d\.\d\d),",
str(x),
flags=re.IGNORECASE
)
if m is not None:
res["duration"] = m.group(1)
# Video: h264 (Constrained Baseline)
# (avc1 / 0x31637661), yuv420p, 480x268
m = re.search(
r"video:\s.*\s(\d+)x(\d+)\s",
str(x),
flags=re.IGNORECASE
)
if m is not None:
res["width"] = m.group(1)
res["height"] = m.group(2)
finally:
if "width" not in res:
print(
"ERROR: 'ffprobe' not found, please install "
"'ffprobe' with one of following methods:"
)
print(" sudo apt-get install ffmpeg")
print("or sudo apt-get install -y libav-tools")
return res
def upload_video(
self,
video,
caption=None,
upload_id=None,
thumbnail=None,
options={}
):
"""Upload video to Instagram
@param video Path to video file (String)
@param caption Media description (String)
@param upload_id Unique upload_id (String). When None, then generate
automatically
@param thumbnail Path to thumbnail for video (String). When None, then
        thumbnail is generated automatically
    @param options Object with different options, e.g. configure_timeout,
rename_thumbnail, rename (Dict)
Designed to reduce the number of function arguments!
This is the simplest request object.
@return Object with state of uploading to Instagram (or False)
"""
options = dict(
{"configure_timeout": 15, "rename_thumbnail": True, "rename": True},
**(options or {})
)
if upload_id is None:
upload_id = str(int(time.time() * 1000))
video, thumbnail, width, height, duration = resize_video(video, thumbnail)
data = {
"upload_id": upload_id,
"_csrftoken": self.token,
"media_type": "2",
"_uuid": self.uuid,
}
m = MultipartEncoder(data, boundary=self.uuid)
self.session.headers.update(
{
"X-IG-Capabilities": "3Q4=",
"X-IG-Connection-Type": "WIFI",
"Host": "i.instagram.com",
"Cookie2": "$Version=1",
"Accept-Language": "en-US",
"Accept-Encoding": "gzip, deflate",
"Content-type": m.content_type,
"Connection": "keep-alive",
"User-Agent": self.user_agent,
}
)
response = self.session.post(
config.API_URL + "upload/video/", data=m.to_string()
)
if response.status_code == 200:
body = json.loads(response.text)
upload_url = body["video_upload_urls"][3]["url"]
upload_job = body["video_upload_urls"][3]["job"]
with open(video, "rb") as video_bytes:
video_data = video_bytes.read()
# solve issue #85 TypeError:
# slice indices must be integers or None or have an __index__ method
request_size = len(video_data) // 4
last_request_extra = len(video_data) - 3 * request_size
headers = copy.deepcopy(self.session.headers)
self.session.headers.update(
{
"X-IG-Capabilities": "3Q4=",
"X-IG-Connection-Type": "WIFI",
"Cookie2": "$Version=1",
"Accept-Language": "en-US",
"Accept-Encoding": "gzip, deflate",
"Content-type": "application/octet-stream",
"Session-ID": upload_id,
"Connection": "keep-alive",
"Content-Disposition": 'attachment; filename="video.mov"',
"job": upload_job,
"Host": "upload.instagram.com",
"User-Agent": self.user_agent,
}
)
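        # Upload the video in four sequential chunks; the last chunk also
        # carries the remainder bytes, and each request advertises its slice
        # via the Content-Length and Content-Range headers.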
for i in range(4):
start = i * request_size
if i == 3:
end = i * request_size + last_request_extra
else:
end = (i + 1) * request_size
length = last_request_extra if i == 3 else request_size
content_range = "bytes {start}-{end}/{len_video}".format(
start=start, end=end - 1, len_video=len(video_data)
).encode("utf-8")
self.session.headers.update(
{
"Content-Length": str(end - start),
"Content-Range": content_range
}
)
response = self.session.post(
upload_url, data=video_data[start: start + length]
)
self.session.headers = headers
configure_timeout = options.get("configure_timeout")
if response.status_code == 200:
for attempt in range(4):
if configure_timeout:
time.sleep(configure_timeout)
if self.configure_video(
upload_id,
video,
thumbnail,
width,
height,
duration,
caption,
options=options,
):
media = self.last_json.get("media")
self.expose()
if options.get("rename"):
from os import rename
rename(video, "{}.REMOVE_ME".format(video))
return media
return False
def configure_video(
self,
upload_id,
video,
thumbnail,
width,
height,
duration,
caption="",
options={}
):
"""Post Configure Video (send caption, thumbnail and more to Instagram)
@param upload_id Unique upload_id (String). Received from "upload_video"
@param video Path to video file (String)
@param thumbnail Path to thumbnail for video (String). When None,
        then thumbnail is generated automatically
@param width Width in px (Integer)
@param height Height in px (Integer)
@param duration Duration in seconds (Integer)
@param caption Media description (String)
    @param options Object with different options, e.g. configure_timeout,
rename_thumbnail, rename (Dict)
Designed to reduce the number of function arguments!
This is the simplest request object.
"""
# clipInfo = get_video_info(video)
options = {"rename": options.get("rename_thumbnail", True)}
self.upload_photo(
photo=thumbnail,
caption=caption,
upload_id=upload_id,
from_video=True,
options=options,
)
data = self.json_data(
{
"upload_id": upload_id,
"source_type": 3,
"poster_frame_index": 0,
"length": 0.00,
"audio_muted": False,
"filter_type": 0,
"video_result": "deprecated",
"clips": {
"length": duration,
"source_type": "3",
"camera_position": "back",
},
"extra": {"source_width": width, "source_height": height},
"device": self.device_settings,
"caption": caption,
}
)
return self.send_request("media/configure/?video=1", data)
def resize_video(fname, thumbnail=None):
from math import ceil
try:
import moviepy.editor as mp
except ImportError as e:
print("ERROR: {}".format(e))
print(
"Required module `moviepy` not installed\n"
"Install with `pip install moviepy` and retry.\n\n"
"You may need also:\n"
"pip install --upgrade setuptools\n"
"pip install numpy --upgrade --ignore-installed"
)
return False
print("Analizing `{}`".format(fname))
h_lim = {"w": 90.0, "h": 47.0}
v_lim = {"w": 4.0, "h": 5.0}
d_lim = 60
vid = mp.VideoFileClip(fname)
(w, h) = vid.size
deg = vid.rotation
ratio = w * 1.0 / h * 1.0
print(
"FOUND w:{w}, h:{h}, rotation={d}, ratio={r}".format(
w=w,
h=h,
r=ratio,
d=deg
)
)
if w > h:
print("Horizontal video")
if ratio > (h_lim["w"] / h_lim["h"]):
print("Cropping video")
cut = int(ceil((w - h * h_lim["w"] / h_lim["h"]) / 2))
left = cut
right = w - cut
top = 0
bottom = h
vid = vid.crop(x1=left, y1=top, x2=right, y2=bottom)
(w, h) = vid.size
if w > 1081:
print("Resizing video")
vid = vid.resize(width=1080)
elif w < h:
print("Vertical video")
if ratio < (v_lim["w"] / v_lim["h"]):
print("Cropping video")
cut = int(ceil((h - w * v_lim["h"] / v_lim["w"]) / 2))
left = 0
right = w
top = cut
bottom = h - cut
vid = vid.crop(x1=left, y1=top, x2=right, y2=bottom)
(w, h) = vid.size
if h > 1081:
print("Resizing video")
vid = vid.resize(height=1080)
else:
print("Square video")
if w > 1081:
print("Resizing video")
vid = vid.resize(width=1080)
(w, h) = vid.size
if vid.duration > d_lim:
print("Cutting video to {} sec from start".format(d_lim))
vid = vid.subclip(0, d_lim)
new_fname = "{}.CONVERTED.mp4".format(fname)
print(
"Saving new video w:{w} h:{h} to `{f}`".format(
w=w,
h=h,
f=new_fname
)
)
vid.write_videofile(new_fname, codec="libx264", audio_codec="aac")
if not thumbnail:
print("Generating thumbnail...")
thumbnail = "{}.jpg".format(fname)
vid.save_frame(thumbnail, t=(vid.duration / 2))
return new_fname, thumbnail, w, h, vid.duration
|
from requests_toolbelt import MultipartEncoder
|
main.go
|
// Copyright © 2016 Abcum Ltd
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package vat
import (
"bytes"
"io/ioutil"
"net/http"
"strings"
"encoding/xml"
"github.com/abcum/orbit"
"golang.org/x/net/context/ctxhttp"
)
func i
|
) {
orbit.Add("check/vat", New)
}
func New(orb *orbit.Orbit) interface{} {
return &Module{
orb: orb,
}
}
type Module struct {
orb *orbit.Orbit
}
// Check validates the format and existence of a
// European VAT number, for all European member states.
//
//	var vat = require('check/vat');
// vat.Check("GB 982 1503 23");
//
func (this *Module) Check(number string) bool {
if ok := this.checkFormat(number); !ok {
return false
}
if ok := this.checkNumber(number); !ok {
return false
}
return true
}
func (this *Module) clean(value string) string {
value = strings.ToUpper(value)
value = strings.Replace(value, "-", "", -1)
value = strings.Replace(value, " ", "", -1)
return value
}
func (this *Module) checkFormat(value string) bool {
if len(value) < 3 {
return false
}
value = this.clean(value)
if regex, ok := patterns[value[0:2]]; ok {
return regex.MatchString(value)
}
return false
}
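// checkNumber verifies that the VAT number is actually registered by posting
// a SOAP request (presumably to the EU VIES service); endpoint, envelope,
// headtype, patterns and response are assumed to be defined elsewhere in this
// package.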
func (this *Module) checkNumber(value string) bool {
var err error
var bdy []byte
var ret *response
var res *http.Response
if len(value) < 3 {
return false
}
value = this.clean(value)
body := envelope
body = strings.Replace(body, "{{.country}}", value[:2], 1)
body = strings.Replace(body, "{{.vnumber}}", value[2:], 1)
request := new(http.Client)
content := bytes.NewBufferString(body)
res, err = ctxhttp.Post(this.orb.Context(), request, endpoint, headtype, content)
if err != nil {
return false
}
defer res.Body.Close()
bdy, err = ioutil.ReadAll(res.Body)
if err != nil {
return false
}
if bytes.Contains(bdy, []byte("INVALID_INPUT")) {
return false
}
err = xml.Unmarshal(bdy, &ret)
if err != nil {
return false
}
return ret.Soap.Soap.Valid
}
|
nit(
|
semantics_utils.py
|
from typing import Optional, Any, Dict
import numpy as np
import pandas as pd
from more_itertools import first
from networkx import Graph, to_numpy_matrix
import matplotlib.pyplot as plt
import seaborn as sb
from adam.semantics import Concept, KindConcept, ObjectConcept, ActionConcept
class SemanticsManager:
def __init__(self, semantics_graph: Graph) -> None:
|
def object_concept_embedding(self, concept: str) -> Any:
# Get a numpy array weighted adjacency embedding of the concept from the graph
return self.semantics_matrix[self.nodes.index(concept)]
def kind_concept_embedding(self, concept: str) -> Any:
# Get a numpy array weighted adjacency embedding averaging the members of a kind concept in the graph
member_embeddings = np.vstack(
[
self.object_concept_embedding(member)
for member in self.semantics_graph.neighbors(concept)
]
)
return np.mean(member_embeddings, axis=0)
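    # Kind membership is scored as the cosine similarity between the word's
    # adjacency embedding and the averaged embedding of the kind's members;
    # unknown nodes score 0.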
def evaluate_kind_membership(self, word: str, kind: str) -> float:
word_node = self.concept_as_str_node(ObjectConcept(word))
kind_node = self.concept_as_str_node(KindConcept(kind))
if kind_node not in self.nodes or word_node not in self.nodes:
return 0
return cos_sim(
self.object_concept_embedding(word_node),
self.kind_concept_embedding(kind_node),
)
@staticmethod
def concept_as_str_node(concept: Concept, syntactic_position="") -> str:
if syntactic_position:
return f"{concept.debug_string}_{str(type(concept))}_{syntactic_position}"
else:
return f"{concept.debug_string}_{str(type(concept))}"
def get_concept_node_from_graph(
identifier: str, semantics_graph: Graph
) -> Optional[Concept]:
return first([n for n in semantics_graph.nodes if n.debug_string == identifier], None)
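# Cosine similarity between two 1-D embedding vectors.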
def cos_sim(a, b) -> float:
dot = np.dot(a.reshape(1, -1), b.reshape(-1, 1))
norma = np.linalg.norm(a.reshape(1, -1))
normb = np.linalg.norm(b.reshape(1, -1))
return dot / (norma * normb)
def generate_heatmap(nodes_to_embeddings: Dict[Concept, Any], filename: str):
if not nodes_to_embeddings:
return
similarity_matrix = np.zeros((len(nodes_to_embeddings), len(nodes_to_embeddings)))
for i, (_, embedding_1) in enumerate(nodes_to_embeddings.items()):
for j, (_, embedding_2) in enumerate(nodes_to_embeddings.items()):
similarity_matrix[i][j] = cos_sim(embedding_1, embedding_2)
names = [n.debug_string for n in nodes_to_embeddings.keys()]
df = pd.DataFrame(data=similarity_matrix, index=names, columns=names)
plt.rcParams["figure.figsize"] = (20.0, 20.0)
plt.rcParams["font.family"] = "serif"
sb.clustermap(df, row_cluster=True, col_cluster=True)
plt.savefig(f"plots/{filename}.png")
plt.close()
|
self.semantics_graph: Graph = Graph()
# Create a new type of edge for each edge in the original semantics graph
# If any of the nodes is an action concept, we want to make a distinct new node to track syntax
for u, v, data in semantics_graph.edges(data=True):
syntactic_position = data["slot"]
new_u = (
self.concept_as_str_node(u, syntactic_position)
if isinstance(u, ActionConcept)
else self.concept_as_str_node(u)
)
new_v = (
self.concept_as_str_node(v, syntactic_position)
if isinstance(v, ActionConcept)
else self.concept_as_str_node(v)
)
self.semantics_graph.add_edge(new_u, new_v, weight=data["weight"])
self.nodes = list(self.semantics_graph.nodes)
self.semantics_matrix = to_numpy_matrix(self.semantics_graph)
|
config.rs
|
// Copyright 2017 TiKV Project Authors. Licensed under Apache-2.0.
use engine_traits::{perf_level_serde, PerfLevel};
use online_config::{ConfigChange, ConfigManager, OnlineConfig};
use serde::{Deserialize, Serialize};
use tikv_util::{box_err, config::ReadableSize, worker::Scheduler};
use super::Result;
use crate::store::SplitCheckTask;
#[derive(Clone, Debug, Serialize, Deserialize, PartialEq, OnlineConfig)]
#[serde(default)]
#[serde(rename_all = "kebab-case")]
pub struct Config {
/// When it is true, it will try to split a region with table prefix if
/// that region crosses tables.
pub split_region_on_table: bool,
/// For once split check, there are several split_key produced for batch.
/// batch_split_limit limits the number of produced split-key for one batch.
pub batch_split_limit: u64,
/// When region [a,e) size meets region_max_size, it will be split into
/// several regions [a,b), [b,c), [c,d), [d,e). And the size of [a,b),
/// [b,c), [c,d) will be region_split_size (maybe a little larger).
    /// by default, region_max_size = region_split_size * 3 / 2.
pub region_max_size: Option<ReadableSize>,
pub region_split_size: ReadableSize,
    /// When the number of keys in region [a,e) meets region_max_keys,
    /// it will be split into several regions [a,b), [b,c), [c,d), [d,e).
    /// And the number of keys in [a,b), [b,c), [c,d) will be region_split_keys.
    /// by default, region_max_keys = region_split_keys * 3 / 2.
pub region_max_keys: Option<u64>,
pub region_split_keys: Option<u64>,
    /// ConsistencyCheckMethod can not be changed dynamically.
#[online_config(skip)]
pub consistency_check_method: ConsistencyCheckMethod,
// Deprecated. Perf level is not applicable to the raftstore coprocessor.
// It was mistakenly used to refer to the perf level of the TiKV coprocessor
// and should be replaced with `server.end-point-perf-level`.
#[serde(with = "perf_level_serde", skip_serializing)]
#[online_config(skip)]
pub perf_level: PerfLevel,
// enable subsplit ranges (aka bucket) within the region
pub enable_region_bucket: bool,
pub region_bucket_size: ReadableSize,
// region size threshold for using approximate size instead of scan
pub region_size_threshold_for_approximate: ReadableSize,
// ratio of region_bucket_size. (0, 0.5)
// The region_bucket_merge_size_ratio * region_bucket_size is threshold to merge with its left neighbor bucket
pub region_bucket_merge_size_ratio: f64,
}
#[derive(Copy, Clone, Debug, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
pub enum ConsistencyCheckMethod {
/// Does consistency check for regions based on raw data. Only used when
/// raw APIs are enabled and MVCC-GC is disabled.
Raw = 0,
/// Does consistency check for regions based on MVCC.
Mvcc = 1,
}
/// Default region split size.
pub const SPLIT_SIZE_MB: u64 = 96;
/// Default batch split limit.
pub const BATCH_SPLIT_LIMIT: u64 = 10;
pub const DEFAULT_BUCKET_SIZE: ReadableSize = ReadableSize::mb(96);
pub const DEFAULT_REGION_BUCKET_MERGE_SIZE_RATIO: f64 = 0.33;
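// With the default 96 MiB split size, region_max_size resolves to 144 MiB,
// region_split_keys to 960000 and region_max_keys to 1440000 (see the
// accessors on Config below).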
impl Default for Config {
fn default() -> Config {
let split_size = ReadableSize::mb(SPLIT_SIZE_MB);
Config {
split_region_on_table: false,
batch_split_limit: BATCH_SPLIT_LIMIT,
region_split_size: split_size,
region_max_size: None,
region_split_keys: None,
region_max_keys: None,
consistency_check_method: ConsistencyCheckMethod::Mvcc,
perf_level: PerfLevel::Uninitialized,
enable_region_bucket: false,
region_bucket_size: DEFAULT_BUCKET_SIZE,
region_size_threshold_for_approximate: DEFAULT_BUCKET_SIZE * 4,
region_bucket_merge_size_ratio: DEFAULT_REGION_BUCKET_MERGE_SIZE_RATIO,
}
}
}
impl Config {
pub fn
|
(&self) -> u64 {
let default_split_keys = self.region_split_size.as_mb_f64() * 10000.0;
self.region_max_keys
.unwrap_or(default_split_keys as u64 / 2 * 3)
}
pub fn region_max_size(&self) -> ReadableSize {
self.region_max_size
.unwrap_or(self.region_split_size / 2 * 3)
}
pub fn region_split_keys(&self) -> u64 {
// Assume the average size of KVs is 100B.
self.region_split_keys
.unwrap_or((self.region_split_size.as_mb_f64() * 10000.0) as u64)
}
pub fn validate(&mut self) -> Result<()> {
if self.region_split_keys.is_none() {
self.region_split_keys = Some((self.region_split_size.as_mb_f64() * 10000.0) as u64);
}
match self.region_max_size {
Some(region_max_size) => {
if region_max_size.0 < self.region_split_size.0 {
return Err(box_err!(
"region max size {} must >= split size {}",
region_max_size.0,
self.region_split_size.0
));
}
}
None => self.region_max_size = Some(self.region_split_size / 2 * 3),
}
match self.region_max_keys {
Some(region_max_keys) => {
if region_max_keys < self.region_split_keys() {
return Err(box_err!(
"region max keys {} must >= split keys {}",
region_max_keys,
self.region_split_keys()
));
}
}
None => self.region_max_keys = Some(self.region_split_keys() / 2 * 3),
}
if self.enable_region_bucket {
if self.region_split_size.0 < self.region_bucket_size.0 {
return Err(box_err!(
"region split size {} must >= region bucket size {}",
self.region_split_size.0,
self.region_bucket_size.0
));
}
if self.region_size_threshold_for_approximate.0 < self.region_bucket_size.0 {
return Err(box_err!(
"large region threshold size {} must >= region bucket size {}",
self.region_size_threshold_for_approximate.0,
self.region_bucket_size.0
));
}
if self.region_bucket_size.0 == 0 {
return Err(box_err!("region_bucket size cannot be 0."));
}
if self.region_bucket_merge_size_ratio <= 0.0
|| self.region_bucket_merge_size_ratio >= 0.5
{
return Err(box_err!(
"region-bucket-merge-size-ratio should be 0 to 0.5 (not include both ends)."
));
}
}
Ok(())
}
}
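// A minimal sketch, not part of the original source, showing how the derived
// defaults relate to `region_split_size`; it assumes the `Config` and
// `ReadableSize` definitions above.
#[cfg(test)]
mod derived_defaults_sketch {
    use super::*;

    #[test]
    fn derived_defaults_follow_split_size() {
        let cfg = Config::default();
        // With no explicit max size, region_max_size() is 1.5x the split size
        // (96 MiB -> 144 MiB), mirroring the `/ 2 * 3` arithmetic above.
        assert_eq!(
            cfg.region_max_size(),
            ReadableSize::mb(SPLIT_SIZE_MB) / 2 * 3
        );
        // region_split_keys() assumes ~100 B per key, i.e. 10_000 keys per MiB.
        assert_eq!(cfg.region_split_keys(), SPLIT_SIZE_MB * 10_000);
    }
}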
pub struct SplitCheckConfigManager(pub Scheduler<SplitCheckTask>);
impl ConfigManager for SplitCheckConfigManager {
fn dispatch(
&mut self,
change: ConfigChange,
) -> std::result::Result<(), Box<dyn std::error::Error>> {
self.0.schedule(SplitCheckTask::ChangeConfig(change))?;
Ok(())
}
}
impl std::ops::Deref for SplitCheckConfigManager {
type Target = Scheduler<SplitCheckTask>;
fn deref(&self) -> &Self::Target {
&self.0
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_config_validate() {
let mut cfg = Config::default();
cfg.validate().unwrap();
cfg = Config::default();
cfg.region_max_size = Some(ReadableSize(10));
cfg.region_split_size = ReadableSize(20);
assert!(cfg.validate().is_err());
cfg = Config::default();
cfg.region_max_size = None;
cfg.region_split_size = ReadableSize(20);
assert!(cfg.validate().is_ok());
assert_eq!(cfg.region_max_size, Some(ReadableSize(30)));
cfg = Config::default();
cfg.region_max_keys = Some(10);
cfg.region_split_keys = Some(20);
assert!(cfg.validate().is_err());
cfg = Config::default();
cfg.region_max_keys = None;
cfg.region_split_keys = Some(20);
assert!(cfg.validate().is_ok());
assert_eq!(cfg.region_max_keys, Some(30));
cfg = Config::default();
cfg.enable_region_bucket = false;
cfg.region_split_size = ReadableSize(20);
cfg.region_bucket_size = ReadableSize(30);
assert!(cfg.validate().is_ok());
cfg = Config::default();
cfg.region_split_size = ReadableSize::mb(20);
assert!(cfg.validate().is_ok());
assert_eq!(cfg.region_split_keys, Some(200000));
}
}
|
region_max_keys
|
test_nba_py_shotchart.py
|
from nba_py import shotchart
from nba_py.player import get_player
def test():
|
pid = get_player('Kevin', 'Durant')
assert shotchart.ShotChart(pid)
|
|
test_driver_gaussian_log.py
|
# This code is part of Qiskit.
#
# (C) Copyright IBM 2020, 2021.
#
# This code is licensed under the Apache License, Version 2.0. You may
# obtain a copy of this license in the LICENSE.txt file in the root directory
# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
#
# Any modifications or derivative works of this code must retain this
# copyright notice, and modified files need to carry a notice indicating
# that they have been altered from the originals.
""" Test Gaussian Log Driver """
import unittest
from test import QiskitNatureTestCase
from qiskit_nature.drivers import GaussianLogDriver, GaussianLogResult
from qiskit_nature import QiskitNatureError
class TestDriverGaussianLog(QiskitNatureTestCase):
"""Gaussian Log Driver tests."""
def setUp(self):
super().setUp()
self.logfile = self.get_resource_path(
"test_driver_gaussian_log.txt", "drivers/second_quantization/gaussiand"
)
def test_log_driver(self):
"""Test the driver itself creates log and we can get a result"""
try:
driver = GaussianLogDriver(
[
"#p B3LYP/6-31g Freq=(Anharm) Int=Ultrafine SCF=VeryTight",
"",
"CO2 geometry optimization B3LYP/cc-pVTZ",
"",
"0 1",
"C -0.848629 2.067624 0.160992",
"O 0.098816 2.655801 -0.159738",
"O -1.796073 1.479446 0.481721",
"",
"",
]
)
result = driver.run()
qfc = result.quadratic_force_constants
expected = [
("1", "1", 1409.20235, 1.17003, 0.07515),
("2", "2", 2526.46159, 3.76076, 0.24156),
("3a", "3a", 462.61566, 0.12609, 0.0081),
("3b", "3b", 462.61566, 0.12609, 0.0081),
]
self.assertListEqual(qfc, expected)
except QiskitNatureError:
self.skipTest("GAUSSIAN driver does not appear to be installed")
# These tests check the gaussian log result and the parsing from a partial log file that is
# located with the tests so that this aspect of the code can be tested independent of
# Gaussian 16 being installed.
def test_gaussian_log_result_file(self):
"""Test result from file"""
result = GaussianLogResult(self.logfile)
with open(self.logfile, "r", encoding="utf8") as file:
lines = file.read().split("\n")
with self.subTest("Check list of lines"):
self.assertListEqual(result.log, lines)
with self.subTest("Check as string"):
line = "\n".join(lines)
self.assertEqual(str(result), line)
|
with open(self.logfile, "r", encoding="utf8") as file:
lines = file.read().split("\n")
result = GaussianLogResult(lines)
self.assertListEqual(result.log, lines)
def test_gaussian_log_result_string(self):
"""Test result from string"""
with open(self.logfile, "r", encoding="utf8") as file:
line = file.read()
result = GaussianLogResult(line)
self.assertListEqual(result.log, line.split("\n"))
def test_quadratic_force_constants(self):
"""Test quadratic force constants"""
result = GaussianLogResult(self.logfile)
qfc = result.quadratic_force_constants
expected = [
("1", "1", 1409.20235, 1.17003, 0.07515),
("2", "2", 2526.46159, 3.76076, 0.24156),
("3a", "3a", 462.61566, 0.12609, 0.0081),
("3b", "3b", 462.61566, 0.12609, 0.0081),
]
self.assertListEqual(qfc, expected)
def test_cubic_force_constants(self):
"""Test cubic force constants"""
result = GaussianLogResult(self.logfile)
cfc = result.cubic_force_constants
expected = [
("1", "1", "1", -260.36071, -1.39757, -0.0475),
("2", "2", "1", -498.9444, -4.80163, -0.1632),
("3a", "3a", "1", 239.87769, 0.4227, 0.01437),
("3a", "3b", "1", 74.25095, 0.13084, 0.00445),
("3b", "3b", "1", 12.93985, 0.0228, 0.00078),
]
self.assertListEqual(cfc, expected)
def test_quartic_force_constants(self):
"""Test quartic force constants"""
result = GaussianLogResult(self.logfile)
qfc = result.quartic_force_constants
expected = [
("1", "1", "1", "1", 40.39063, 1.40169, 0.02521),
("2", "2", "1", "1", 79.08068, 4.92017, 0.0885),
("2", "2", "2", "2", 154.78015, 17.26491, 0.31053),
("3a", "3a", "1", "1", -67.10879, -0.76453, -0.01375),
("3b", "3b", "1", "1", -67.10879, -0.76453, -0.01375),
("3a", "3a", "2", "2", -163.29426, -3.33524, -0.05999),
("3b", "3b", "2", "2", -163.29426, -3.33524, -0.05999),
("3a", "3a", "3a", "3a", 220.54851, 0.82484, 0.01484),
("3a", "3a", "3a", "3b", 66.77089, 0.24972, 0.00449),
("3a", "3a", "3b", "3b", 117.26759, 0.43857, 0.00789),
("3a", "3b", "3b", "3b", -66.77088, -0.24972, -0.00449),
("3b", "3b", "3b", "3b", 220.54851, 0.82484, 0.01484),
]
self.assertListEqual(qfc, expected)
def test_watson_hamiltonian(self):
"""Test the watson hamiltonian"""
result = GaussianLogResult(self.logfile)
watson = result.get_watson_hamiltonian()
expected = [
[352.3005875, 2, 2],
[-352.3005875, -2, -2],
[631.6153975, 1, 1],
[-631.6153975, -1, -1],
[115.653915, 4, 4],
[-115.653915, -4, -4],
[115.653915, 3, 3],
[-115.653915, -3, -3],
[-15.341901966295344, 2, 2, 2],
[-88.2017421687633, 1, 1, 2],
[42.40478531359112, 4, 4, 2],
[26.25167512727164, 4, 3, 2],
[2.2874639206341865, 3, 3, 2],
[0.4207357291666667, 2, 2, 2, 2],
[4.9425425, 1, 1, 2, 2],
[1.6122932291666665, 1, 1, 1, 1],
[-4.194299375, 4, 4, 2, 2],
[-4.194299375, 3, 3, 2, 2],
[-10.20589125, 4, 4, 1, 1],
[-10.20589125, 3, 3, 1, 1],
[2.2973803125, 4, 4, 4, 4],
[2.7821204166666664, 4, 4, 4, 3],
[7.329224375, 4, 4, 3, 3],
[-2.7821200000000004, 4, 3, 3, 3],
[2.2973803125, 3, 3, 3, 3],
]
for i, entry in enumerate(watson.data):
msg = "mode[{}]={} does not match expected {}".format(i, entry, expected[i])
self.assertAlmostEqual(entry[0], expected[i][0], msg=msg)
self.assertListEqual(entry[1:], expected[i][1:], msg=msg)
if __name__ == "__main__":
unittest.main()
|
def test_gaussian_log_result_list(self):
"""Test result from list of strings"""
|
verilog.rs
|
//! SystemVerilog backend for the Calyx compiler.
//!
//! Transforms an [`ir::Context`](crate::ir::Context) into a formatted string that represents a
//! valid SystemVerilog program.
use crate::backend::traits::Backend;
use calyx::{
errors::{CalyxResult, Error},
ir,
utils::OutputFile,
};
use ir::{Control, Group, Guard, RRC};
use itertools::Itertools;
use std::fs::File;
use std::io;
use std::{collections::HashMap, rc::Rc};
use vast::v17::ast as v;
/// Implements a simple Verilog backend. The backend only accepts Calyx programs with no control
/// and no groups.
#[derive(Default)]
pub struct VerilogBackend;
/// Checks to make sure that there are no holes being
/// used in a guard.
fn validate_guard(guard: &ir::Guard) -> bool {
match guard {
Guard::Or(left, right) | Guard::And(left, right) => {
validate_guard(left) && validate_guard(right)
}
Guard::CompOp(_, left, right) => {
!left.borrow().is_hole() && !right.borrow().is_hole()
}
Guard::Not(inner) => validate_guard(inner),
Guard::Port(port) => !port.borrow().is_hole(),
Guard::True => true,
}
}
/// Returns `Ok` if none of the assignments in the given groups use holes
/// (which in practice means no groups are defined).
fn validate_structure<'a, I>(groups: I) -> CalyxResult<()>
where
I: Iterator<Item = &'a RRC<Group>>,
{
for group in groups {
for asgn in &group.borrow().assignments {
let port = asgn.dst.borrow();
// check if port is a hole
if port.is_hole() {
return Err(Error::MalformedStructure(
"Groups / Holes can not be turned into Verilog".to_string(),
));
}
// validate guard
if !validate_guard(&asgn.guard) {
return Err(Error::MalformedStructure(
"Groups / Holes can not be turned into Verilog".to_string(),
));
};
}
}
Ok(())
}
/// Returns `Ok` if the control for `comp` is empty.
fn validate_control(ctrl: &ir::Control) -> CalyxResult<()> {
match ctrl {
Control::Empty(_) => Ok(()),
_ => Err(Error::MalformedControl("Control must be empty".to_string())),
}
}
impl Backend for VerilogBackend {
fn name(&self) -> &'static str {
"verilog"
}
fn validate(ctx: &ir::Context) -> CalyxResult<()> {
for component in &ctx.components {
validate_structure(component.groups.iter())?;
validate_control(&component.control.borrow())?;
}
Ok(())
}
/// Generate a "fat" library by copy-pasting all of the extern files.
    /// A possible alternative in the future is to use the SystemVerilog
    /// `include` statement.
fn link_externs(
ctx: &ir::Context,
file: &mut OutputFile,
) -> CalyxResult<()> {
for extern_path in ctx.lib.extern_paths() {
// The extern file is guaranteed to exist by the frontend.
let mut ext = File::open(extern_path).unwrap();
io::copy(&mut ext, &mut file.get_write()).map_err(|err| {
let std::io::Error { .. } = err;
Error::WriteError(format!(
"File not found: {}",
file.as_path_string()
))
})?;
}
Ok(())
}
fn emit(ctx: &ir::Context, file: &mut OutputFile) -> CalyxResult<()> {
let modules = &ctx
.components
.iter()
.map(|comp| {
emit_component(
comp,
ctx.bc.synthesis_mode,
ctx.bc.enable_verification,
ctx.bc.initialize_inputs,
)
.to_string()
})
.collect::<Vec<_>>();
write!(file.get_write(), "{}", modules.join("\n")).map_err(|err| {
let std::io::Error { .. } = err;
Error::WriteError(format!(
"File not found: {}",
file.as_path_string()
))
})?;
Ok(())
}
}
fn emit_component(
comp: &ir::Component,
synthesis_mode: bool,
enable_verification: bool,
initialize_inputs: bool,
) -> v::Module {
let mut module = v::Module::new(comp.name.as_ref());
let sig = comp.signature.borrow();
for port_ref in &sig.ports {
let port = port_ref.borrow();
// NOTE: The signature port definitions are reversed inside the component.
match port.direction {
ir::Direction::Input => {
module.add_output(port.name.as_ref(), port.width);
}
ir::Direction::Output => {
module.add_input(port.name.as_ref(), port.width);
}
ir::Direction::Inout => {
panic!("Unexpected Inout port on Component: {}", port.name)
}
}
}
// Add memory initial and final blocks
if !synthesis_mode {
memory_read_write(comp).into_iter().for_each(|stmt| {
module.add_stmt(stmt);
});
}
let wires = comp
.cells
.iter()
.flat_map(|cell| wire_decls(&cell.borrow()))
.collect_vec();
// structure wire declarations
wires.iter().for_each(|(name, width, _)| {
module.add_decl(v::Decl::new_logic(name, *width));
});
if initialize_inputs {
let mut initial = v::ParallelProcess::new_initial();
wires.iter().for_each(|(name, width, dir)| {
if *dir == ir::Direction::Input {
// HACK: this is not the right way to reset
// registers. we should have real reset ports.
let value = String::from("0");
initial.add_seq(v::Sequential::new_blk_assign(
v::Expr::new_ref(name),
v::Expr::new_ulit_dec(*width as u32, &value),
));
}
});
module.add_process(initial);
}
// cell instances
comp.cells
.iter()
.filter_map(|cell| cell_instance(&cell.borrow()))
.for_each(|instance| {
module.add_instance(instance);
});
// gather assignments keyed by destination
let mut map: HashMap<_, (RRC<ir::Port>, Vec<_>)> = HashMap::new();
for asgn in &comp.continuous_assignments {
map.entry(asgn.dst.borrow().canonical())
.and_modify(|(_, v)| v.push(asgn))
.or_insert((Rc::clone(&asgn.dst), vec![asgn]));
}
// Build a top-level always block to contain verilator checks for assignments
let mut checks = v::ParallelProcess::new_always_comb();
map.values()
.sorted_by_key(|(port, _)| port.borrow().canonical())
.for_each(|asgns| {
module.add_stmt(v::Stmt::new_parallel(emit_assignment(asgns)));
// If verification generation is enabled, emit disjointness check.
if enable_verification {
if let Some(check) = emit_guard_disjoint_check(asgns) {
checks.add_seq(check);
};
}
});
if !synthesis_mode {
module.add_process(checks);
}
module
}
fn wire_decls(cell: &ir::Cell) -> Vec<(String, u64, ir::Direction)> {
cell.ports
.iter()
.filter_map(|port| match &port.borrow().parent {
ir::PortParent::Cell(cell) => {
let parent_ref = cell.upgrade();
let parent = parent_ref.borrow();
match parent.prototype {
ir::CellType::Component { .. }
| ir::CellType::Primitive { .. } => Some((
format!(
"{}_{}",
parent.name().as_ref(),
port.borrow().name.as_ref()
),
port.borrow().width,
port.borrow().direction.clone(),
)),
_ => None,
}
}
ir::PortParent::Group(_) => unreachable!(),
})
.collect()
}
fn cell_instance(cell: &ir::Cell) -> Option<v::Instance> {
match cell.type_name() {
Some(ty_name) => {
let mut inst =
v::Instance::new(cell.name().as_ref(), ty_name.as_ref());
if let ir::CellType::Primitive { param_binding, .. } =
&cell.prototype
{
param_binding.iter().for_each(|(name, width)| {
inst.add_param(
name.as_ref(),
v::Expr::new_int(*width as i32),
)
})
}
for port in &cell.ports {
inst.connect(
port.borrow().name.as_ref(),
port_to_ref(Rc::clone(port)),
);
}
Some(inst)
}
None => None,
}
}
/// Generates an always block that checks whether the guards are disjoint when the
/// length of assignments is greater than 1:
/// ```verilog
/// always_ff @(posedge clk) begin
/// if (!$onehot0({fsm_out < 1'd1 & go, fsm_out < 1'd1 & go})) begin
/// $error("Multiple assignments to r_in");
/// end
/// end
/// ```
fn emit_guard_disjoint_check(
(dst_ref, assignments): &(RRC<ir::Port>, Vec<&ir::Assignment>),
) -> Option<v::Sequential> {
if assignments.len() < 2 {
return None;
}
// Construct concat with all guards.
let mut concat = v::ExprConcat::default();
assignments.iter().for_each(|assign| {
concat.add_expr(guard_to_expr(&assign.guard));
});
let onehot0 = v::Expr::new_call("$onehot0", vec![v::Expr::Concat(concat)]);
let not_onehot0 = v::Expr::new_not(onehot0);
let mut check = v::SequentialIfElse::new(not_onehot0);
// Generated error message
let (cell, port) = dst_ref.borrow().canonical();
let err = v::Sequential::new_error(&format!(
"Multiple assignment to port `{}.{}'.",
cell, port
));
check.add_seq(err);
Some(v::Sequential::If(check))
}
/// Generates an assign statement that uses ternaries to select the correct
/// assignment to enable and adds a default assignment to 0 when none of the
/// guards are active.
///
/// Example:
/// ```
/// // Input Calyx code
/// a.in = foo ? 2'd0;
/// a.in = bar ? 2'd1;
/// ```
/// Into:
/// ```
/// assign a_in = foo ? 2'd0 : bar ? 2'd1 : 2'd0;
/// ```
fn emit_assignment(
(dst_ref, assignments): &(RRC<ir::Port>, Vec<&ir::Assignment>),
) -> v::Parallel {
let dst = dst_ref.borrow();
let init = v::Expr::new_ulit_dec(dst.width as u32, &0.to_string());
let rhs = assignments.iter().rfold(init, |acc, e| {
let guard = guard_to_expr(&e.guard);
let asgn = port_to_ref(Rc::clone(&e.src));
v::Expr::new_mux(guard, asgn, acc)
});
v::Parallel::ParAssign(port_to_ref(Rc::clone(dst_ref)), rhs)
}
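// Illustrative note, not part of the original source: for the example in the
// doc comment above, the `rfold` builds the mux chain inside-out, so the
// default zero literal becomes the innermost arm and only takes effect when no
// guard is active: `foo ? 2'd0 : (bar ? 2'd1 : 2'd0)`.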
fn port_to_ref(port_ref: RRC<ir::Port>) -> v::Expr {
let port = port_ref.borrow();
match &port.parent {
ir::PortParent::Cell(cell) => {
let parent_ref = cell.upgrade();
let parent = parent_ref.borrow();
match parent.prototype {
ir::CellType::Constant { val, width } => {
v::Expr::new_ulit_dec(width as u32, &val.to_string())
}
ir::CellType::ThisComponent => v::Expr::new_ref(&port.name),
_ => v::Expr::Ref(format!(
"{}_{}",
parent.name().as_ref(),
port.name.as_ref()
)),
}
}
ir::PortParent::Group(_) => unreachable!(),
}
}
fn guard_to_expr(guard: &ir::Guard) -> v::Expr {
let op = |g: &ir::Guard| match g {
Guard::Or(..) => v::Expr::new_bit_or,
Guard::And(..) => v::Expr::new_bit_and,
Guard::CompOp(op, ..) => match op {
ir::PortComp::Eq => v::Expr::new_eq,
ir::PortComp::Neq => v::Expr::new_neq,
ir::PortComp::Gt => v::Expr::new_gt,
ir::PortComp::Lt => v::Expr::new_lt,
ir::PortComp::Geq => v::Expr::new_geq,
ir::PortComp::Leq => v::Expr::new_leq,
},
Guard::Not(..) | Guard::Port(..) | Guard::True => unreachable!(),
};
match guard {
Guard::And(l, r) | Guard::Or(l, r) => {
op(guard)(guard_to_expr(l), guard_to_expr(r))
}
Guard::CompOp(_, l, r) => {
op(guard)(port_to_ref(Rc::clone(l)), port_to_ref(Rc::clone(r)))
}
Guard::Not(o) => v::Expr::new_not(guard_to_expr(o)),
Guard::Port(p) => port_to_ref(Rc::clone(p)),
Guard::True => v::Expr::new_ulit_bin(1, &1.to_string()),
}
}
//==========================================
// Memory input and output
//==========================================
/// Generates code of the form:
/// ```
/// initial begin
/// $value$plusargs("DATA=%s", DATA);
/// $display("DATA: %s", DATA);
/// $readmemh({DATA, "/<mem_name>.dat"}, <mem_name>.mem);
/// ...
/// end
/// final begin
/// $writememh({DATA, "/<mem_name>.out"}, <mem_name>.mem);
/// end
/// ```
fn memory_read_write(comp: &ir::Component) -> Vec<v::Stmt> {
// Import futil helper library.
let data_decl = v::Stmt::new_rawstr("string DATA;".to_string());
let mut initial_block = v::ParallelProcess::new_initial();
initial_block
// get the data
.add_seq(v::Sequential::new_seqexpr(v::Expr::new_call(
"$value$plusargs",
vec![v::Expr::new_str("DATA=%s"), v::Expr::new_ref("DATA")],
)))
// log the path to the data
.add_seq(v::Sequential::new_seqexpr(v::Expr::new_call(
"$display",
vec![
v::Expr::new_str("DATA (path to meminit files): %s"),
v::Expr::new_ref("DATA"),
],
)));
let memories = comp.cells.iter().filter_map(|cell| {
|
if is_external
&& cell
.borrow()
.type_name()
.map(|proto| proto.id.contains("mem"))
.unwrap_or_default()
{
Some(cell.borrow().name().id.clone())
} else {
None
}
});
memories.clone().for_each(|name| {
initial_block.add_seq(v::Sequential::new_seqexpr(v::Expr::new_call(
"$readmemh",
vec![
v::Expr::Concat(v::ExprConcat {
exprs: vec![
v::Expr::new_str(&format!("/{}.dat", name)),
v::Expr::new_ref("DATA"),
],
}),
v::Expr::new_ipath(&format!("{}.mem", name)),
],
)));
});
let mut final_block = v::ParallelProcess::new_final();
memories.for_each(|name| {
final_block.add_seq(v::Sequential::new_seqexpr(v::Expr::new_call(
"$writememh",
vec![
v::Expr::Concat(v::ExprConcat {
exprs: vec![
v::Expr::new_str(&format!("/{}.out", name)),
v::Expr::new_ref("DATA"),
],
}),
v::Expr::new_ipath(&format!("{}.mem", name)),
],
)));
});
vec![
data_decl,
v::Stmt::new_parallel(v::Parallel::new_process(initial_block)),
v::Stmt::new_parallel(v::Parallel::new_process(final_block)),
]
}
|
let is_external = cell.borrow().get_attribute("external").is_some();
|
0009_personpage_alt_short_intro.py
|
# Generated by Django 2.1.5 on 2019-02-16 07:42
from django.db import migrations, models
class Migration(migrations.Migration):
|
dependencies = [
('people', '0008_personpage_short_intro'),
]
operations = [
migrations.AddField(
model_name='personpage',
name='alt_short_intro',
field=models.TextField(blank=True, null=True),
),
]
|
|
chrome_test_server_spawner.py
|
# Copyright 2013 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""A "Test Server Spawner" that handles killing/stopping per-test test servers.
It's used to accept requests from the device to spawn and kill instances of the
chrome test server on the host.
"""
import BaseHTTPServer
import json
import logging
import os
import select
import struct
import subprocess
import threading
import time
import urlparse
import constants
from forwarder import Forwarder
import ports
# Paths that are needed to import necessary modules when launching a testserver.
os.environ['PYTHONPATH'] = os.environ.get('PYTHONPATH', '') + (':%s:%s:%s:%s:%s'
% (os.path.join(constants.CHROME_DIR, 'third_party'),
os.path.join(constants.CHROME_DIR, 'third_party', 'tlslite'),
os.path.join(constants.CHROME_DIR, 'third_party', 'pyftpdlib', 'src'),
os.path.join(constants.CHROME_DIR, 'net', 'tools', 'testserver'),
os.path.join(constants.CHROME_DIR, 'sync', 'tools', 'testserver')))
SERVER_TYPES = {
'http': '',
'ftp': '-f',
'sync': '', # Sync uses its own script, and doesn't take a server type arg.
'tcpecho': '--tcp-echo',
'udpecho': '--udp-echo',
}
# The timeout (in seconds) of starting up the Python test server.
TEST_SERVER_STARTUP_TIMEOUT = 10
def _CheckPortStatus(port, expected_status):
"""Returns True if port has expected_status.
Args:
port: the port number.
expected_status: boolean of expected status.
Returns:
Returns True if the status is expected. Otherwise returns False.
"""
for timeout in range(1, 5):
if ports.IsHostPortUsed(port) == expected_status:
return True
time.sleep(timeout)
return False
def _GetServerTypeCommandLine(server_type):
"""Returns the command-line by the given server type.
Args:
server_type: the server type to be used (e.g. 'http').
Returns:
A string containing the command-line argument.
"""
if server_type not in SERVER_TYPES:
raise NotImplementedError('Unknown server type: %s' % server_type)
if server_type == 'udpecho':
raise Exception('Please do not run UDP echo tests because we do not have '
'a UDP forwarder tool.')
return SERVER_TYPES[server_type]
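# Illustrative sketch, not part of the original source: the startup pipe read
# by TestServerThread below is assumed to carry a native-byte-order 4-byte
# length prefix followed by a JSON payload such as '{"port": 8000}'. A
# hypothetical writer side would look roughly like this:
def _ExampleWritePortToStartupPipe(pipe_out, port):
  """Hypothetical helper mirroring the startup-pipe read further below."""
  payload = json.dumps({'port': port})
  os.write(pipe_out, struct.pack('=L', len(payload)))  # length, native order
  os.write(pipe_out, payload)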
class TestServerThread(threading.Thread):
"""A thread to run the test server in a separate process."""
def __init__(self, ready_event, arguments, adb, tool, build_type):
"""Initialize TestServerThread with the following argument.
Args:
ready_event: event which will be set when the test server is ready.
arguments: dictionary of arguments to run the test server.
adb: instance of AndroidCommands.
tool: instance of runtime error detection tool.
build_type: 'Release' or 'Debug'.
"""
threading.Thread.__init__(self)
self.wait_event = threading.Event()
self.stop_flag = False
self.ready_event = ready_event
self.ready_event.clear()
self.arguments = arguments
self.adb = adb
self.tool = tool
self.test_server_process = None
self.is_ready = False
self.host_port = self.arguments['port']
assert isinstance(self.host_port, int)
self._test_server_forwarder = None
# The forwarder device port now is dynamically allocated.
self.forwarder_device_port = 0
# Anonymous pipe in order to get port info from test server.
self.pipe_in = None
self.pipe_out = None
self.command_line = []
self.build_type = build_type
def _WaitToStartAndGetPortFromTestServer(self):
"""Waits for the Python test server to start and gets the port it is using.
The port information is passed by the Python test server with a pipe given
by self.pipe_out. It is written as a result to |self.host_port|.
Returns:
Whether the port used by the test server was successfully fetched.
"""
assert self.host_port == 0 and self.pipe_out and self.pipe_in
(in_fds, _, _) = select.select([self.pipe_in, ], [], [],
TEST_SERVER_STARTUP_TIMEOUT)
if len(in_fds) == 0:
      logging.error('Timed out waiting for the Python test server to start.')
return False
# First read the data length as an unsigned 4-byte value. This
# is _not_ using network byte ordering since the Python test server packs
# size as native byte order and all Chromium platforms so far are
# configured to use little-endian.
# TODO(jnd): Change the Python test server and local_test_server_*.cc to
# use a unified byte order (either big-endian or little-endian).
data_length = os.read(self.pipe_in, struct.calcsize('=L'))
if data_length:
(data_length,) = struct.unpack('=L', data_length)
assert data_length
if not data_length:
logging.error('Failed to get length of server data.')
return False
port_json = os.read(self.pipe_in, data_length)
if not port_json:
logging.error('Failed to get server data.')
return False
logging.info('Got port json data: %s', port_json)
port_json = json.loads(port_json)
if port_json.has_key('port') and isinstance(port_json['port'], int):
self.host_port = port_json['port']
return _CheckPortStatus(self.host_port, True)
logging.error('Failed to get port information from the server data.')
return False
def _GenerateCommandLineArguments(self):
"""Generates the command line to run the test server.
Note that all options are processed by following the definitions in
testserver.py.
"""
if self.command_line:
return
# The following arguments must exist.
type_cmd = _GetServerTypeCommandLine(self.arguments['server-type'])
if type_cmd:
self.command_line.append(type_cmd)
self.command_line.append('--port=%d' % self.host_port)
# Use a pipe to get the port given by the instance of Python test server
# if the test does not specify the port.
if self.host_port == 0:
(self.pipe_in, self.pipe_out) = os.pipe()
self.command_line.append('--startup-pipe=%d' % self.pipe_out)
self.command_line.append('--host=%s' % self.arguments['host'])
data_dir = self.arguments['data-dir'] or 'chrome/test/data'
if not os.path.isabs(data_dir):
data_dir = os.path.join(constants.CHROME_DIR, data_dir)
self.command_line.append('--data-dir=%s' % data_dir)
# The following arguments are optional depending on the individual test.
if self.arguments.has_key('log-to-console'):
self.command_line.append('--log-to-console')
if self.arguments.has_key('auth-token'):
self.command_line.append('--auth-token=%s' % self.arguments['auth-token'])
if self.arguments.has_key('https'):
self.command_line.append('--https')
if self.arguments.has_key('cert-and-key-file'):
self.command_line.append('--cert-and-key-file=%s' % os.path.join(
constants.CHROME_DIR, self.arguments['cert-and-key-file']))
if self.arguments.has_key('ocsp'):
self.command_line.append('--ocsp=%s' % self.arguments['ocsp'])
if self.arguments.has_key('https-record-resume'):
self.command_line.append('--https-record-resume')
if self.arguments.has_key('ssl-client-auth'):
self.command_line.append('--ssl-client-auth')
if self.arguments.has_key('tls-intolerant'):
self.command_line.append('--tls-intolerant=%s' %
self.arguments['tls-intolerant'])
if self.arguments.has_key('ssl-client-ca'):
for ca in self.arguments['ssl-client-ca']:
self.command_line.append('--ssl-client-ca=%s' %
os.path.join(constants.CHROME_DIR, ca))
if self.arguments.has_key('ssl-bulk-cipher'):
for bulk_cipher in self.arguments['ssl-bulk-cipher']:
self.command_line.append('--ssl-bulk-cipher=%s' % bulk_cipher)
def run(self):
logging.info('Start running the thread!')
self.wait_event.clear()
self._GenerateCommandLineArguments()
command = constants.CHROME_DIR
if self.arguments['server-type'] == 'sync':
command = [os.path.join(command, 'sync', 'tools', 'testserver',
'sync_testserver.py')] + self.command_line
else:
command = [os.path.join(command, 'net', 'tools', 'testserver',
'testserver.py')] + self.command_line
logging.info('Running: %s', command)
self.process = subprocess.Popen(command)
if self.process:
if self.pipe_out:
self.is_ready = self._WaitToStartAndGetPortFromTestServer()
else:
self.is_ready = _CheckPortStatus(self.host_port, True)
if self.is_ready:
self._test_server_forwarder = Forwarder(self.adb, self.build_type)
self._test_server_forwarder.Run(
[(0, self.host_port)], self.tool, '127.0.0.1')
# Check whether the forwarder is ready on the device.
self.is_ready = False
device_port = self._test_server_forwarder.DevicePortForHostPort(
self.host_port)
if device_port:
for timeout in range(1, 5):
if ports.IsDevicePortUsed(self.adb, device_port, 'LISTEN'):
self.is_ready = True
self.forwarder_device_port = device_port
break
time.sleep(timeout)
# Wake up the request handler thread.
self.ready_event.set()
# Keep thread running until Stop() gets called.
while not self.stop_flag:
time.sleep(1)
if self.process.poll() is None:
self.process.kill()
if self._test_server_forwarder:
self._test_server_forwarder.Close()
self.process = None
self.is_ready = False
if self.pipe_out:
os.close(self.pipe_in)
os.close(self.pipe_out)
self.pipe_in = None
self.pipe_out = None
logging.info('Test-server has died.')
self.wait_event.set()
def Stop(self):
"""Blocks until the loop has finished.
Note that this must be called in another thread.
"""
if not self.process:
return
self.stop_flag = True
self.wait_event.wait()
class SpawningServerRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler):
"""A handler used to process http GET/POST request."""
def _SendResponse(self, response_code, response_reason, additional_headers,
contents):
"""Generates a response sent to the client from the provided parameters.
Args:
response_code: number of the response status.
response_reason: string of reason description of the response.
additional_headers: dict of additional headers. Each key is the name of
the header, each value is the content of the header.
contents: string of the contents we want to send to client.
"""
self.send_response(response_code, response_reason)
self.send_header('Content-Type', 'text/html')
    # Specify the content-length; without it the HTTP(S) response will not
    # be completed properly (and the browser keeps expecting data).
self.send_header('Content-Length', len(contents))
for header_name in additional_headers:
self.send_header(header_name, additional_headers[header_name])
self.end_headers()
self.wfile.write(contents)
self.wfile.flush()
def _StartTestServer(self):
"""Starts the test server thread."""
logging.info('Handling request to spawn a test server.')
content_type = self.headers.getheader('content-type')
if content_type != 'application/json':
raise Exception('Bad content-type for start request.')
content_length = self.headers.getheader('content-length')
if not content_length:
content_length = 0
try:
content_length = int(content_length)
except:
raise Exception('Bad content-length for start request.')
logging.info(content_length)
test_server_argument_json = self.rfile.read(content_length)
logging.info(test_server_argument_json)
assert not self.server.test_server_instance
ready_event = threading.Event()
self.server.test_server_instance = TestServerThread(
ready_event,
json.loads(test_server_argument_json),
self.server.adb,
self.server.tool,
self.server.build_type)
self.server.test_server_instance.setDaemon(True)
self.server.test_server_instance.start()
ready_event.wait()
if self.server.test_server_instance.is_ready:
self._SendResponse(200, 'OK', {}, json.dumps(
{'port': self.server.test_server_instance.forwarder_device_port,
'message': 'started'}))
logging.info('Test server is running on port: %d.',
self.server.test_server_instance.host_port)
else:
self.server.test_server_instance.Stop()
self.server.test_server_instance = None
self._SendResponse(500, 'Test Server Error.', {}, '')
      logging.info('Encountered a problem while starting the test server.')
def _KillTestServer(self):
"""Stops the test server instance."""
# There should only ever be one test server at a time. This may do the
# wrong thing if we try and start multiple test servers.
if not self.server.test_server_instance:
return
port = self.server.test_server_instance.host_port
logging.info('Handling request to kill a test server on port: %d.', port)
self.server.test_server_instance.Stop()
# Make sure the status of test server is correct before sending response.
if _CheckPortStatus(port, False):
self._SendResponse(200, 'OK', {}, 'killed')
logging.info('Test server on port %d is killed', port)
else:
self._SendResponse(500, 'Test Server Error.', {}, '')
      logging.info('Encountered a problem while killing the test server.')
self.server.test_server_instance = None
def do_POST(self):
parsed_path = urlparse.urlparse(self.path)
action = parsed_path.path
logging.info('Action for POST method is: %s.', action)
if action == '/start':
self._StartTestServer()
else:
self._SendResponse(400, 'Unknown request.', {}, '')
      logging.info('Encountered unknown request: %s.', action)
def
|
(self):
parsed_path = urlparse.urlparse(self.path)
action = parsed_path.path
params = urlparse.parse_qs(parsed_path.query, keep_blank_values=1)
logging.info('Action for GET method is: %s.', action)
for param in params:
logging.info('%s=%s', param, params[param][0])
if action == '/kill':
self._KillTestServer()
elif action == '/ping':
# The ping handler is used to check whether the spawner server is ready
# to serve the requests. We don't need to test the status of the test
# server when handling ping request.
self._SendResponse(200, 'OK', {}, 'ready')
logging.info('Handled ping request and sent response.')
else:
self._SendResponse(400, 'Unknown request', {}, '')
      logging.info('Encountered unknown request: %s.', action)
class SpawningServer(object):
"""The class used to start/stop a http server."""
def __init__(self, test_server_spawner_port, adb, tool, build_type):
logging.info('Creating new spawner on port: %d.', test_server_spawner_port)
self.server = BaseHTTPServer.HTTPServer(('', test_server_spawner_port),
SpawningServerRequestHandler)
self.port = test_server_spawner_port
self.server.adb = adb
self.server.tool = tool
self.server.test_server_instance = None
self.server.build_type = build_type
def _Listen(self):
logging.info('Starting test server spawner')
self.server.serve_forever()
def Start(self):
listener_thread = threading.Thread(target=self._Listen)
listener_thread.setDaemon(True)
listener_thread.start()
time.sleep(1)
def Stop(self):
if self.server.test_server_instance:
self.server.test_server_instance.Stop()
self.server.shutdown()
|
do_GET
|
generics3.rs
|
// An imaginary magical school has a new report card generation system written in Rust!
// Currently the system only supports creating report cards where the student's grade
// is represented numerically (e.g. 1.0 -> 5.5).
// However, the school also issues alphabetical grades (A+ -> F-) and needs
// to be able to print both types of report card!
// Make the necessary code changes in the struct ReportCard and the impl block
// to support alphabetical report cards. Change the Grade in the second test to "A+"
// to show that your changes allow alphabetical grades.
// Execute 'rustlings hint generics3' for hints!
use std::fmt;
pub struct ReportCard<T: ToString> {
pub grade: T,
pub student_name: String,
pub student_age: u8,
}
impl<T:ToString> ReportCard<T> {
pub fn print(&self) -> String {
format!("{}", self).to_string()
}
}
impl<T: ToString> fmt::Display for ReportCard<T> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{} ({}) - achieved a grade of {}",
&self.student_name, &self.student_age, &self.grade.to_string())
}
}
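// Note (added): the `T: ToString` bound is what lets both a numeric grade
// (e.g. 2.1) and a string grade (e.g. "A+") flow through the same Display
// implementation above via `self.grade.to_string()`.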
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn generate_numeric_report_card() {
let report_card = ReportCard {
grade: 2.1,
student_name: "Tom Wriggle".to_string(),
student_age: 12,
};
assert_eq!(
report_card.print(),
"Tom Wriggle (12) - achieved a grade of 2.1"
);
}
#[test]
fn generate_alphabetic_report_card()
|
}
|
{
// TODO: Make sure to change the grade here after you finish the exercise.
let report_card = ReportCard {
grade: "A+".to_string(),
student_name: "Gary Plotter".to_string(),
student_age: 11,
};
assert_eq!(
report_card.print(),
"Gary Plotter (11) - achieved a grade of A+"
);
}
|
verify.ts
|
import * as fs from 'fs'
import {
genPerVOSpentVoiceCreditsCommitment,
genTallyResultCommitment,
genSpentVoiceCreditsCommitment,
} from 'maci-core'
import {
maciContractAbi,
} from 'maci-contracts'
import {
validateEthAddress,
contractExists,
calcQuinTreeDepthFromMaxLeaves,
} from './utils'
import * as ethers from 'ethers'
const configureSubparser = (subparsers: any) => {
const parser = subparsers.addParser(
'verify',
{ addHelp: true },
)
parser.addArgument(
['-t', '--tally-file'],
{
required: true,
type: 'string',
            help: 'The filepath of the tally file to verify.',
}
)
}
const verify = async (args: any) => {
// Read the tally file
let contents
try {
contents = fs.readFileSync(args.tally_file, { encoding: 'utf8' })
} catch {
console.error('Error: unable to open ', args.tally_file)
return
}
// Parse the tally file
let data
try {
data = JSON.parse(contents)
} catch {
console.error('Error: unable to parse ', args.tally_file)
return
}
// Check the results salt
const validResultsSalt = data.results.salt && data.results.salt.match(/0x[a-fA-F0-9]+/)
if (!validResultsSalt) {
console.error('Error: invalid results salt')
return
}
// Check the results commitment
const validResultsCommitment = data.results.commitment && data.results.commitment.match(/0x[a-fA-F0-9]+/)
if (!validResultsCommitment) {
console.error('Error: invalid results commitment format')
return
}
    // Ensure that the length of data.results.tally corresponds to a whole-number Quin tree depth
const depth = calcQuinTreeDepthFromMaxLeaves(data.results.tally.length)
if (Math.floor(depth).toString() !== depth.toString()) {
console.error('Error: invalid results tally field length')
return
}
// Verify that the results commitment matches the output of
// genTallyResultCommitment()
const tally = data.results.tally.map(BigInt)
const salt = BigInt(data.results.salt)
const resultsCommitment = BigInt(data.results.commitment)
const expectedResultsCommitment = genTallyResultCommitment(tally, salt, depth)
if (expectedResultsCommitment.toString() === resultsCommitment.toString()) {
console.log('The results commitment in the specified file is correct given the tally and salt')
} else {
console.error('Error: the results commitment in the specified file is incorrect')
return
}
// Check the total spent voice credits salt
const validTvcSalt = data.totalVoiceCredits.salt && data.totalVoiceCredits.salt.match(/0x[a-fA-F0-9]+/)
if (!validTvcSalt) {
console.error('Error: invalid total spent voice credits results salt')
return
}
// Check the total spent voice credits commitment
const validTvcCommitment = data.totalVoiceCredits.commitment && data.totalVoiceCredits.commitment.match(/0x[a-fA-F0-9]+/)
if (!validTvcCommitment) {
console.error('Error: invalid total spent voice credits commitment format')
return
}
// Verify that the total spent voice credits commitment matches the output of
// genSpentVoiceCreditsCommitment()
const tvcSpent = BigInt(data.totalVoiceCredits.spent)
const tvcSalt = BigInt(data.totalVoiceCredits.salt)
const tvcCommitment = BigInt(data.totalVoiceCredits.commitment)
const expectedTvcCommitment = genSpentVoiceCreditsCommitment(tvcSpent, tvcSalt)
if (expectedTvcCommitment.toString() === tvcCommitment.toString()) {
console.log('The total spent voice credit commitment in the specified file is correct given the tally and salt')
} else {
console.error('Error: the total spent voice credit commitment in the specified file is incorrect')
return
}
const pvcTally = data.totalVoiceCreditsPerVoteOption.tally.map((x) => BigInt(x))
const pvcSalt = BigInt(data.totalVoiceCreditsPerVoteOption.salt)
const pvcCommitment = BigInt(data.totalVoiceCreditsPerVoteOption.commitment)
const expectedPvcCommitment = genPerVOSpentVoiceCreditsCommitment(pvcTally, pvcSalt, depth)
if (expectedPvcCommitment.toString() === pvcCommitment.toString()) {
console.log('The per vote option spent voice credit commitment in the specified file is correct given the tally and salt')
} else {
console.error('Error: the per vote option spent voice credit commitment in the specified file is incorrect')
return
}
const maciAddress = data.maci
// MACI contract
if (!validateEthAddress(maciAddress)) {
console.error('Error: invalid MACI contract address')
return
}
// Ethereum provider
const ethProvider = data.provider
const provider = new ethers.providers.JsonRpcProvider(ethProvider)
try {
await provider.getBlockNumber()
} catch {
console.error('Error: unable to connect to the Ethereum provider at', ethProvider)
return
}
if (! (await contractExists(provider, maciAddress))) {
console.error('Error: there is no contract deployed at the specified address')
return
}
const maciContract = new ethers.Contract(
maciAddress,
maciContractAbi,
provider,
)
const onChainResultsCommitment = BigInt((await maciContract.currentResultsCommitment()).toString())
if (onChainResultsCommitment.toString() === expectedResultsCommitment.toString()) {
console.log('The results commitment in the MACI contract on-chain is valid')
} else {
console.error('Error: the results commitment in the MACI contract does not match the expected commitment')
}
const onChainTvcCommitment = BigInt(
(await maciContract.currentSpentVoiceCreditsCommitment()).toString()
)
if (onChainTvcCommitment.toString() === expectedTvcCommitment.toString()) {
console.log('The total spent voice credit commitment in the MACI contract on-chain is valid')
} else {
console.error('Error: the total spent voice credit commitment in the MACI contract does not match the expected commitment')
}
const onChainPvcCommitment = BigInt(
|
if (onChainPvcCommitment.toString() === expectedPvcCommitment.toString()) {
console.log('The per vote option spent voice credit commitment in the MACI contract on-chain is valid')
} else {
console.error('Error: the per vote option spent voice credit commitment in the MACI contract does not match the expected commitment')
}
// Check the total votes
let expectedTotalVotes = BigInt(0)
for (const t of tally) {
expectedTotalVotes += t
}
const onChainTotalVotes = await maciContract.totalVotes()
if (onChainTotalVotes.toString() === expectedTotalVotes.toString()) {
console.log('The total sum of votes in the MACI contract on-chain is valid.')
} else {
console.error('Error: the total votes value in the MACI contract does not match the expected sum of the vote tally')
}
}
export {
verify,
configureSubparser,
}
|
(await maciContract.currentPerVOSpentVoiceCreditsCommitment()).toString()
)
|
index.ts
|
export * from './agent.entity';
export * from './base.entity';
export * from './mail-queue-log.entity';
| ||
resource.py
|
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Definitions for resource-type trackable object classes."""
import contextlib
import copy
import weakref
import six
from tensorflow.python.eager import context
from tensorflow.python.eager import def_function
from tensorflow.python.framework import ops
from tensorflow.python.training.tracking import base
from tensorflow.python.util import tf_contextlib
from tensorflow.python.util.tf_export import tf_export
# global _RESOURCE_TRACKER_STACK
_RESOURCE_TRACKER_STACK = []
class ResourceTracker(object):
"""An object that tracks a list of resources."""
__slots__ = ["_resources"]
def __init__(self):
self._resources = []
@property
def resources(self):
return self._resources
def add_resource(self, resource):
self._resources.append(resource)
@tf_contextlib.contextmanager
def resource_tracker_scope(resource_tracker):
"""A context to manage resource trackers.
Use this in order to collect up all resources created within a block of code.
Example usage:
```python
resource_tracker = ResourceTracker()
with resource_tracker_scope(resource_tracker):
resource = TrackableResource()
  assert resource_tracker.resources == [resource]
  ```
Args:
resource_tracker: The passed in ResourceTracker object
Yields:
A scope in which the resource_tracker is active.
"""
global _RESOURCE_TRACKER_STACK
old = list(_RESOURCE_TRACKER_STACK)
_RESOURCE_TRACKER_STACK.append(resource_tracker)
try:
yield
finally:
_RESOURCE_TRACKER_STACK = old
def _make_getter(captured_getter, captured_previous):
"""To avoid capturing loop variables."""
def getter(*args, **kwargs):
return captured_getter(captured_previous, *args, **kwargs)
return getter
class _ResourceMetaclass(type):
"""Metaclass for CapturableResource."""
def __call__(cls, *args, **kwargs):
def default_resource_creator(next_creator, *a, **kw):
assert next_creator is None
obj = cls.__new__(cls, *a, **kw)
obj.__init__(*a, **kw)
return obj
previous_getter = lambda *a, **kw: default_resource_creator(None, *a, **kw)
resource_creator_stack = ops.get_default_graph()._resource_creator_stack
for getter in resource_creator_stack[cls._resource_type()]:
previous_getter = _make_getter(getter, previous_getter)
return previous_getter(*args, **kwargs)
class CapturableResource(six.with_metaclass(_ResourceMetaclass,
base.Trackable)):
"""Holds a Tensor which a tf.function can capture.
`CapturableResource`s are discovered by traversing the graph of object
attributes, e.g. during `tf.saved_model.save`. They are excluded from the
scope-based tracking of `TrackableResource`; generally things that require
initialization should inherit from `TrackableResource` instead of
`CapturableResource` directly.
"""
def __init__(self, device=""):
"""Initialize the `CapturableResource`.
Args:
device: A string indicating a required placement for this resource,
e.g. "CPU" if this resource must be created on a CPU device. A blank
device allows the user to place resource creation, so generally this
should be blank unless the resource only makes sense on one device.
"""
self._resource_handle_value = None
self._resource_device = device
self._self_destruction_context = (
context.eager_mode if context.executing_eagerly()
else ops.get_default_graph().as_default)
@classmethod
def _resource_type(cls):
return cls.__name__
@property
def _destruction_context(self):
return getattr(self, "_self_destruction_context",
# no-op context
contextlib.suppress)
@_destruction_context.setter
def _destruction_context(self, destruction_context):
self._self_destruction_context = destruction_context
def _create_resource(self):
"""A function that creates a resource handle."""
raise NotImplementedError("TrackableResource._create_resource not "
"implemented.")
@property
def _resource_handle(self):
return self._resource_handle_value
@_resource_handle.setter
def _resource_handle(self, value):
if isinstance(value, (ops.Tensor, ops.EagerTensor)):
value._parent_trackable = weakref.ref(self) # pylint: disable=protected-access
self._resource_handle_value = value
def _initialize(self):
"""A function that initializes the resource. Optional."""
pass
def _destroy_resource(self):
"""A function that destroys the resource. Optional."""
pass
@property
def resource_handle(self):
|
def _map_resources(self, _):
"""For implementing `Trackable`."""
new_obj = copy.copy(self)
# pylint: disable=protected-access
with ops.device(self._resource_device):
new_resource = new_obj._create_resource()
new_obj._resource_handle = new_resource
# pylint: enable=protected-access
obj_map = {self: new_obj}
resource_map = {self.resource_handle: new_resource}
return obj_map, resource_map
def _trackable_children(self, save_type, **kwargs):
children = super()._trackable_children(save_type, **kwargs)
if save_type == "savedmodel":
@def_function.function(input_signature=[], autograph=False)
def _creator():
resource = self._create_resource()
return resource
@def_function.function(input_signature=[], autograph=False)
def _initializer():
self._initialize()
return 1 # Dummy return
@def_function.function(input_signature=[], autograph=False)
def _destroyer():
self._destroy_resource()
return 1 # Dummy return
children.update({
"_create_resource": _creator,
"_initialize": _initializer,
"_destroy_resource": _destroyer,
})
return children
def __del__(self):
try:
# Outer race condition: on program exit, the destruction context may be
# deleted before this __del__ is called. At this point we can safely
# exit without calling _destroy_resource() and let Python handle things.
with self._destruction_context():
# Inner race condition: possible between this and `ScopedTFFunction`
# whereby if an entire garbage collection chain containing both
# objects is moved to unreachable during the same garbage collection
# cycle, the __del__ for `ScopedTFFunction` can be collected before
# this method is called. In that case, we can't do much but
# continue.
self._destroy_resource()
except Exception: # pylint: disable=broad-except
# Silence all error logs that occur when attempting to destroy this
# resource.
pass
@tf_export("saved_model.experimental.TrackableResource")
class TrackableResource(CapturableResource):
"""Holds a Tensor which a tf.function can capture.
A TrackableResource is most useful for stateful Tensors that require
initialization, such as `tf.lookup.StaticHashTable`. `TrackableResource`s
are discovered by traversing the graph of object attributes, e.g. during
`tf.saved_model.save`.
A TrackableResource has three methods to override:
* `_create_resource` should create the resource tensor handle.
* `_initialize` should initialize the resource held at `self.resource_handle`.
* `_destroy_resource` is called upon a `TrackableResource`'s destruction
and should decrement the resource's ref count. For most resources, this
should be done with a call to `tf.raw_ops.DestroyResourceOp`.
Example usage:
>>> class DemoResource(tf.saved_model.experimental.TrackableResource):
... def __init__(self):
... super().__init__()
... self._initialize()
... def _create_resource(self):
... return tf.raw_ops.VarHandleOp(dtype=tf.float32, shape=[2])
... def _initialize(self):
... tf.raw_ops.AssignVariableOp(
... resource=self.resource_handle, value=tf.ones([2]))
... def _destroy_resource(self):
... tf.raw_ops.DestroyResourceOp(resource=self.resource_handle)
>>> class DemoModule(tf.Module):
... def __init__(self):
... self.resource = DemoResource()
... def increment(self, tensor):
... return tensor + tf.raw_ops.ReadVariableOp(
... resource=self.resource.resource_handle, dtype=tf.float32)
>>> demo = DemoModule()
>>> demo.increment([5, 1])
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([6., 2.], dtype=float32)>
"""
def __init__(self, device=""):
"""Initialize the `TrackableResource`.
Args:
device: A string indicating a required placement for this resource,
e.g. "CPU" if this resource must be created on a CPU device. A blank
device allows the user to place resource creation, so generally this
should be blank unless the resource only makes sense on one device.
"""
global _RESOURCE_TRACKER_STACK
for resource_tracker in _RESOURCE_TRACKER_STACK:
resource_tracker.add_resource(self)
super(TrackableResource, self).__init__(device=device)
# TODO(b/124205571,b/124092991): Solve destruction of resources.
class RestoredResource(TrackableResource):
"""Restored SavedResource."""
def __init__(self, device=""):
super(RestoredResource, self).__init__(device=device)
@classmethod
def _deserialize_from_proto(cls, object_proto, dependencies, **unused_kwargs):
obj = cls(device=object_proto.resource.device)
resource_creator = dependencies.get("_create_resource")
if resource_creator is not None:
obj._create_resource = resource_creator # pylint: disable=protected-access
return obj
def _add_trackable_child(self, name, value):
setattr(self, name, value)
if (isinstance(value, base.Trackable) and
not isinstance(value, def_function.Function)):
self._track_trackable(value, name)
|
"""Returns the resource handle associated with this Resource."""
if self._resource_handle is None:
with ops.device(self._resource_device):
self._resource_handle = self._create_resource()
return self._resource_handle
|
test_deep_speech.py
|
###############################################################################
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
###############################################################################
import os
import sys
import unittest
import keras2onnx
import numpy as np
from keras2onnx.proto import keras
from onnxconverter_common.onnx_ex import get_maximum_opset_supported
from os.path import dirname, abspath
sys.path.insert(0, os.path.join(dirname(abspath(__file__)), '../../tests/'))
from test_utils import run_keras_and_ort, test_level_0
K = keras.backend
Activation = keras.layers.Activation
AveragePooling2D = keras.layers.AveragePooling2D
Add = keras.layers.Add
BatchNormalization = keras.layers.BatchNormalization
concatenate = keras.layers.concatenate
Conv2D = keras.layers.Conv2D
Dense = keras.layers.Dense
Dropout = keras.layers.Dropout
Embedding = keras.layers.Embedding
Flatten = keras.layers.Flatten
GlobalAveragePooling2D = keras.layers.GlobalAveragePooling2D
Input = keras.layers.Input
Lambda = keras.layers.Lambda
LeakyReLU = keras.layers.LeakyReLU
MaxPooling2D = keras.layers.MaxPooling2D
multiply = keras.layers.multiply
Permute = keras.layers.Permute
Reshape = keras.layers.Reshape
UpSampling2D = keras.layers.UpSampling2D
ZeroPadding2D = keras.layers.ZeroPadding2D
Sequential = keras.models.Sequential
Model = keras.models.Model
layers = keras.layers
# Model from https://github.com/rolczynski/Automatic-Speech-Recognition
class TestDeepSpeech(unittest.TestCase):
def setUp(self):
self.model_files = []
def tearDown(self):
for fl in self.model_files:
os.remove(fl)
@unittest.skipIf(get_maximum_opset_supported() < 11,
"Deep speech conversion need opset >= 11.")
def test_deep_speech(self):
K.clear_session()
input_dim = 20
output_dim = 10
context = 7
units = 1024
dropouts = (0.1, 0.1, 0)
# Define input tensor [batch, time, features]
input_tensor = layers.Input([None, input_dim], name='X')
# Add 4th dimension [batch, time, frequency, channel]
x = layers.Lambda(keras.backend.expand_dims,
arguments=dict(axis=-1))(input_tensor)
# Fill zeros around time dimension
x = layers.ZeroPadding2D(padding=(context, 0))(x)
# Convolve signal in time dim
receptive_field = (2 * context + 1, input_dim)
x = layers.Conv2D(filters=units, kernel_size=receptive_field)(x)
# Squeeze into 3rd dim array
x = layers.Lambda(keras.backend.squeeze, arguments=dict(axis=2))(x)
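        # Note (added): the kernel height equals input_dim, so the frequency
        # axis has length 1 after the convolution and can be squeezed away,
        # leaving a [batch, time, units] tensor.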
# Add non-linearity
x = layers.ReLU(max_value=20)(x)
# Use dropout as regularization
x = layers.Dropout(rate=dropouts[0])(x)
|
        # The 2nd and 3rd FC layers do feature extraction based on a narrow
        # context of the convolutional layer.
x = layers.TimeDistributed(layers.Dense(units))(x)
x = layers.ReLU(max_value=20)(x)
x = layers.Dropout(rate=dropouts[1])(x)
x = layers.TimeDistributed(layers.Dense(units))(x)
x = layers.ReLU(max_value=20)(x)
x = layers.Dropout(rate=dropouts[2])(x)
# Use recurrent layer to have a broader context
x = layers.Bidirectional(layers.LSTM(units, return_sequences=True),
merge_mode='sum')(x)
        # Return logits over the characters at each time step; the CTC
        # computation is more stable on logits than on softmax outputs.
output_tensor = layers.TimeDistributed(layers.Dense(output_dim))(x)
model = keras.Model(input_tensor, output_tensor, name='DeepSpeech')
data = np.random.rand(2, 3, input_dim).astype(np.float32)
expected = model.predict(data)
onnx_model = keras2onnx.convert_keras(model, model.name)
self.assertTrue(
run_keras_and_ort(onnx_model.graph.name, onnx_model, model, data, expected, self.model_files))
@unittest.skipIf(get_maximum_opset_supported() < 11,
"Deep speech conversion need opset >= 11.")
def test_deep_speech_2(self):
K.clear_session()
input_dim = 20
output_dim = 10
rnn_units = 800
# Define input tensor [batch, time, features]
input_tensor = layers.Input([None, input_dim], name='X')
# Add 4th dimension [batch, time, frequency, channel]
x = layers.Lambda(keras.backend.expand_dims,
arguments=dict(axis=-1))(input_tensor)
x = layers.Conv2D(filters=32,
kernel_size=[11, 41],
strides=[2, 2],
padding='same',
use_bias=False,
name='conv_1')(x)
x = layers.BatchNormalization(name='conv_1_bn')(x)
x = layers.ReLU(name='conv_1_relu')(x)
x = layers.Conv2D(filters=32,
kernel_size=[11, 21],
strides=[1, 2],
padding='same',
use_bias=False,
name='conv_2')(x)
x = layers.BatchNormalization(name='conv_2_bn')(x)
x = layers.ReLU(name='conv_2_relu')(x)
# We need to squeeze to a 3D tensor. Thanks to the stride in the frequency
# domain, the number of features is reduced by a factor of four for each channel.
x = layers.Reshape([-1, input_dim//4*32])(x)
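# With input_dim=20, the two stride-2 convolutions leave 20 // 4 = 5 frequency
# bins, so each time step now carries 5 * 32 = 160 features.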
for i in [1, 2, 3, 4, 5]:
recurrent = layers.GRU(units=rnn_units,
activation='tanh',
recurrent_activation='sigmoid',
use_bias=True,
return_sequences=True,
reset_after=True,
name='gru_'+str(i))
x = layers.Bidirectional(recurrent,
name='bidirectional'+str(i),
merge_mode='concat')(x)
x = layers.Dropout(rate=0.5)(x) if i < 5 else x # Only between recurrent layers
# Return logits along characters at each time step. The CTC
# computation is then more stable than with a softmax.
x = layers.TimeDistributed(layers.Dense(units=rnn_units*2), name='dense_1')(x)
x = layers.ReLU(name='dense_1_relu')(x)
x = layers.Dropout(rate=0.5)(x)
output_tensor = layers.TimeDistributed(layers.Dense(units=output_dim),
name='dense_2')(x)
model = keras.Model(input_tensor, output_tensor, name='DeepSpeech2')
data = np.random.rand(2, 3, input_dim).astype(np.float32)
expected = model.predict(data)
onnx_model = keras2onnx.convert_keras(model, model.name)
self.assertTrue(
run_keras_and_ort(onnx_model.graph.name, onnx_model, model, data, expected, self.model_files))
if __name__ == "__main__":
unittest.main()
| |
counters.rs
|
// Copyright (c) The Libra Core Contributors
// SPDX-License-Identifier: Apache-2.0
use lazy_static;
use metrics::{Histogram, IntCounter, IntGauge, OpMetrics};
lazy_static::lazy_static! {
pub static ref OP_COUNTERS: OpMetrics = OpMetrics::new_and_registered("network");
}
lazy_static::lazy_static! {
/// Counter of currently connected peers
|
pub static ref CONNECTED_PEERS: IntGauge = OP_COUNTERS.gauge("connected_peers");
/// Counter of rpc requests sent
pub static ref RPC_REQUESTS_SENT: IntCounter = OP_COUNTERS.counter("rpc_requests_sent");
/// Counter of rpc request bytes sent
pub static ref RPC_REQUEST_BYTES_SENT: IntCounter = OP_COUNTERS.counter("rpc_request_bytes_sent");
/// Counter of rpc requests failed
pub static ref RPC_REQUESTS_FAILED: IntCounter = OP_COUNTERS.counter("rpc_requests_failed");
/// Counter of rpc requests cancelled
pub static ref RPC_REQUESTS_CANCELLED: IntCounter = OP_COUNTERS.counter("rpc_requests_cancelled");
/// Counter of rpc requests received
pub static ref RPC_REQUESTS_RECEIVED: IntCounter = OP_COUNTERS.counter("rpc_requests_received");
/// Counter of rpc responses sent
pub static ref RPC_RESPONSES_SENT: IntCounter = OP_COUNTERS.counter("rpc_responses_sent");
/// Counter of rpc response bytes sent
pub static ref RPC_RESPONSE_BYTES_SENT: IntCounter = OP_COUNTERS.counter("rpc_response_bytes_sent");
/// Counter of rpc responses failed
pub static ref RPC_RESPONSES_FAILED: IntCounter = OP_COUNTERS.counter("rpc_responses_failed");
/// Histogram of rpc latency
pub static ref RPC_LATENCY: Histogram = OP_COUNTERS.histogram("rpc_latency");
/// Counter of messages sent via the direct send protocol
pub static ref DIRECT_SEND_MESSAGES_SENT: IntCounter = OP_COUNTERS.counter("direct_send_messages_sent");
/// Counter of bytes sent via the direct send protocol
pub static ref DIRECT_SEND_BYTES_SENT: IntCounter = OP_COUNTERS.counter("direct_send_bytes_sent");
/// Counter of messages dropped via the direct send protocol
pub static ref DIRECT_SEND_MESSAGES_DROPPED: IntCounter = OP_COUNTERS.counter("direct_send_messages_dropped");
/// Counter of messages received via the direct send protocol
pub static ref DIRECT_SEND_MESSAGES_RECEIVED: IntCounter = OP_COUNTERS.counter("direct_send_messages_received");
/// Counter of bytes received via the direct send protocol
pub static ref DIRECT_SEND_BYTES_RECEIVED: IntCounter = OP_COUNTERS.counter("direct_send_bytes_received");
///
/// Channel Counters
///
/// Counter of pending requests in Network Provider
pub static ref PENDING_NETWORK_REQUESTS: IntGauge = OP_COUNTERS.gauge("pending_network_requests");
/// Counter of pending network events to Mempool
pub static ref PENDING_MEMPOOL_NETWORK_EVENTS: IntGauge = OP_COUNTERS.gauge("pending_mempool_network_events");
/// Counter of pending network events to Consensus
pub static ref PENDING_CONSENSUS_NETWORK_EVENTS: IntGauge = OP_COUNTERS.gauge("pending_consensus_network_events");
/// Counter of pending network events to State Synchronizer
pub static ref PENDING_STATE_SYNCHRONIZER_NETWORK_EVENTS: IntGauge = OP_COUNTERS.gauge("pending_state_sync_network_events");
/// Counter of pending network events to Admission Control
pub static ref PENDING_ADMISSION_CONTROL_NETWORK_EVENTS: IntGauge = OP_COUNTERS.gauge("pending_admission_control_network_events");
/// Counter of pending requests in Peer Manager
pub static ref PENDING_PEER_MANAGER_REQUESTS: IntGauge = OP_COUNTERS.gauge("pending_peer_manager_requests");
/// Counter of pending Peer Manager notifications in Network Provider
pub static ref PENDING_PEER_MANAGER_NET_NOTIFICATIONS: IntGauge = OP_COUNTERS.gauge("pending_peer_manager_net_notifications");
/// Counter of pending requests in Direct Send
pub static ref PENDING_DIRECT_SEND_REQUESTS: IntGauge = OP_COUNTERS.gauge("pending_direct_send_requests");
/// Counter of pending Direct Send notifications to Network Provider
pub static ref PENDING_DIRECT_SEND_NOTIFICATIONS: IntGauge = OP_COUNTERS.gauge("pending_direct_send_notifications");
/// Counter of pending requests in Connectivity Manager
pub static ref PENDING_CONNECTIVITY_MANAGER_REQUESTS: IntGauge = OP_COUNTERS.gauge("pending_connectivity_manager_requests");
/// Counter of pending requests in RPC
pub static ref PENDING_RPC_REQUESTS: IntGauge = OP_COUNTERS.gauge("pending_rpc_requests");
/// Counter of pending RPC notifications to Network Provider
pub static ref PENDING_RPC_NOTIFICATIONS: IntGauge = OP_COUNTERS.gauge("pending_rpc_notifications");
/// Counter of pending Peer Manager notifications to Direct Send
pub static ref PENDING_PEER_MANAGER_DIRECT_SEND_NOTIFICATIONS: IntGauge = OP_COUNTERS.gauge("pending_peer_manager_direct_send_notifications");
/// Counter of pending Peer Manager notifications to RPC
pub static ref PENDING_PEER_MANAGER_RPC_NOTIFICATIONS: IntGauge = OP_COUNTERS.gauge("pending_peer_manager_rpc_notifications");
/// Counter of pending Peer Manager notifications to Discovery
pub static ref PENDING_PEER_MANAGER_DISCOVERY_NOTIFICATIONS: IntGauge = OP_COUNTERS.gauge("pending_peer_manager_discovery_notifications");
/// Counter of pending Peer Manager notifications to Ping
pub static ref PENDING_PEER_MANAGER_PING_NOTIFICATIONS: IntGauge = OP_COUNTERS.gauge("pending_peer_manager_ping_notifications");
/// Counter of pending Peer Manager notifications to Connectivity Manager
pub static ref PENDING_PEER_MANAGER_CONNECTIVITY_MANAGER_NOTIFICATIONS: IntGauge = OP_COUNTERS.gauge("pending_peer_manager_connectivity_manager_notifications");
/// Counter of pending internal events in Peer Manager
pub static ref PENDING_PEER_MANAGER_INTERNAL_EVENTS: IntGauge = OP_COUNTERS.gauge("pending_peer_manager_internal_events");
/// Counter of pending dial requests in Peer Manager
pub static ref PENDING_PEER_MANAGER_DIAL_REQUESTS: IntGauge = OP_COUNTERS.gauge("pending_peer_manager_dial_requests");
/// Counter of pending requests for each remote peer
pub static ref PENDING_PEER_REQUESTS: &'static str = "pending_peer_requests";
/// Counter of pending outbound messages in Direct Send for each remote peer
pub static ref PENDING_DIRECT_SEND_OUTBOUND_MESSAGES: &'static str = "pending_direct_send_outbound_messages";
}
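// Usage sketch (call sites are assumed, not part of this file):
// counters::CONNECTED_PEERS.inc();
// counters::RPC_REQUESTS_SENT.inc();
// counters::RPC_LATENCY.observe(latency_secs);
// IntCounter/IntGauge expose inc()/dec()/set() and Histogram exposes observe(),
// following the usual Prometheus-style metrics API.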
| |
args.go
|
package main
|
*/
import "flag"
// fopen
// Modified from: "encoding/json"
// word parser
var InputFN = flag.String("input", "", "Input Meta File") // 0
var OutputFN = flag.String("output", "", "Output Code") // 1
var SubFN = flag.String("sub", "", "Substitution Values") // 1
var Mode = flag.String("mode", "", "Mode Values") // 1
var Debug = flag.Bool("debug", false, "Debug Flag") // 2
func init() {
flag.StringVar(InputFN, "i", "", "Input Meta File")
flag.StringVar(OutputFN, "o", "", "Output Code")
flag.StringVar(SubFN, "s", "", "Substitution Values")
flag.StringVar(Mode, "m", "", "Mode Values")
flag.BoolVar(Debug, "D", false, "Debug Flag")
}
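// Usage sketch (hypothetical invocation, not part of this file): once main() has
// called flag.Parse(), running
// prog -i meta.json -o out.go -s subs.json -m gen -D
// is equivalent to using the long names -input/-output/-sub/-mode/-debug.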
|
/*
Copyright (C) Philip Schlump, 2016.
MIT Licensed.
|
lib.rs
|
//! ockam_node - Ockam Node API
#![deny(
// missing_docs,
dead_code,
trivial_casts,
trivial_numeric_casts,
unsafe_code,
unused_import_braces,
unused_qualifications,
)]
#[macro_use]
extern crate tracing;
mod context;
mod error;
mod executor;
mod mailbox;
mod messages;
mod node;
mod parser;
mod relay;
mod router;
pub use context::*;
pub use executor::*;
pub use mailbox::*;
pub use messages::*;
pub use node::start_node;
use std::future::Future;
use tokio::{runtime::Runtime, task};
/// Execute a future without blocking the executor
///
/// This is a wrapper around two simple tokio functions to allow
/// ockam_node to wait for a task to be completed in a non-async
/// environment.
///
/// This function is not meant to be part of the ockam public API; it is
/// provided as an implementation utility for other ockam utilities that
/// use tokio.
#[doc(hidden)]
pub fn block_future<'r, F>(rt: &'r Runtime, f: F) -> <F as Future>::Output
where
|
F: Future + Send,
F::Output: Send,
{
task::block_in_place(move || {
let local = task::LocalSet::new();
local.block_on(&rt, f)
})
}
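// Usage sketch (hypothetical; not part of this crate):
// let rt = Runtime::new().unwrap();
// let answer = block_future(&rt, async { 40 + 2 });
// assert_eq!(answer, 42);
// The future runs on a LocalSet inside block_in_place, so calling this from a
// tokio worker thread does not stall the rest of the executor.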
| |
writer.py
|
#!/usr/bin/env python
"""Class and context manager for writing KbartRecord class to csv file."""
# coding: utf-8
from __future__ import (absolute_import, division,
print_function, unicode_literals)
import contextlib
import six
import unicodecsv as csv
# TODO: make a better way to write the header when working from a reader object
class Writer(object):
|
@contextlib.contextmanager
def KbartWriter(file_path, delimiter='\t'):
"""
Context manager for writing a KbartRecord. Written in camel-case to maintain
similarity to PyMARC.
Args:
file_path: The path to the KBART file to be written.
delimiter: The KBART spec specifies tab-delimited output; this is left
configurable for the time being.
"""
f = open(file_path, 'wb')
try:
yield Writer(f, delimiter=delimiter)
finally:
f.close()
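# Usage sketch (hypothetical record; KbartRecord construction is assumed):
# with KbartWriter('holdings.txt') as writer:
#     writer.writeheader(record)
#     writer.writerow(record)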
|
"""Write a KbartRecord class to a csv file."""
def __init__(self, file_handle, delimiter='\t'):
"""
Set variables and open the csv writer using utf-8 encoding per
KBART spec.
"""
self.file_handle = file_handle
self.delimiter = delimiter
self.writer = csv.writer(file_handle,
delimiter=self.delimiter,
encoding='utf-8')
def writerow(self, kbart_record):
"""Write csv row from a KbartRecord record."""
self.writer.writerow(list(kbart_record.values()))
def writeheader(self, kbart_record):
self.writer.writerow(kbart_record.fields)
|
listenerRule.go
|
// *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
// *** Do not edit by hand unless you're certain you know what you are doing! ***
package elasticloadbalancingv2
import (
"reflect"
"github.com/pkg/errors"
"github.com/pulumi/pulumi/sdk/v2/go/pulumi"
)
// Provides a Load Balancer Listener Rule resource.
//
// > **Note:** `alb.ListenerRule` is known as `lb.ListenerRule`. The functionality is identical.
//
// ## Example Usage
//
//
//
// ```go
// package main
//
// import (
// "github.com/pulumi/pulumi-aws/sdk/v2/go/aws/cognito"
// "github.com/pulumi/pulumi-aws/sdk/v2/go/aws/lb"
// "github.com/pulumi/pulumi/sdk/v2/go/pulumi"
// )
//
// func main() {
// pulumi.Run(func(ctx *pulumi.Context) error {
// frontEndLoadBalancer, err := lb.NewLoadBalancer(ctx, "frontEndLoadBalancer", nil)
// if err != nil {
// return err
// }
// frontEndListener, err := lb.NewListener(ctx, "frontEndListener", nil)
// if err != nil {
// return err
// }
// static, err := lb.NewListenerRule(ctx, "static", &lb.ListenerRuleArgs{
// Actions: lb.ListenerRuleActionArray{
// &lb.ListenerRuleActionArgs{
// TargetGroupArn: pulumi.String(aws_lb_target_group.Static.Arn),
// Type: pulumi.String("forward"),
// },
// },
// Conditions: lb.ListenerRuleConditionArray{
// &lb.ListenerRuleConditionArgs{
// PathPattern: &lb.ListenerRuleConditionPathPatternArgs{
// Values: pulumi.StringArray{
// pulumi.String("/static/*"),
// },
// },
// },
// &lb.ListenerRuleConditionArgs{
// HostHeader: &lb.ListenerRuleConditionHostHeaderArgs{
// Values: pulumi.StringArray{
// pulumi.String("example.com"),
// },
// },
// },
// },
// ListenerArn: frontEndListener.Arn,
// Priority: pulumi.Int(100),
// })
// if err != nil {
// return err
// }
// hostBasedRouting, err := lb.NewListenerRule(ctx, "hostBasedRouting", &lb.ListenerRuleArgs{
// Actions: lb.ListenerRuleActionArray{
// &lb.ListenerRuleActionArgs{
// Forward: &lb.ListenerRuleActionForwardArgs{
// Stickiness: &lb.ListenerRuleActionForwardStickinessArgs{
// Duration: pulumi.Int(600),
// Enabled: pulumi.Bool(true),
// },
// TargetGroup: []map[string]interface{}{
// map[string]interface{}{
// "arn": aws_lb_target_group.Main.Arn,
// "weight": 80,
// },
// map[string]interface{}{
// "arn": aws_lb_target_group.Canary.Arn,
// "weight": 20,
// },
// },
// },
// Type: pulumi.String("forward"),
// },
// },
// Conditions: lb.ListenerRuleConditionArray{
// &lb.ListenerRuleConditionArgs{
// HostHeader: &lb.ListenerRuleConditionHostHeaderArgs{
// Values: pulumi.StringArray{
// pulumi.String("my-service.*.mycompany.io"),
// },
// },
// },
// },
// ListenerArn: frontEndListener.Arn,
// Priority: pulumi.Int(99),
// })
// if err != nil {
// return err
// }
// hostBasedWeightedRouting, err := lb.NewListenerRule(ctx, "hostBasedWeightedRouting", &lb.ListenerRuleArgs{
// Actions: lb.ListenerRuleActionArray{
// &lb.ListenerRuleActionArgs{
// TargetGroupArn: pulumi.String(aws_lb_target_group.Static.Arn),
// Type: pulumi.String("forward"),
// },
// },
// Conditions: lb.ListenerRuleConditionArray{
// &lb.ListenerRuleConditionArgs{
// HostHeader: &lb.ListenerRuleConditionHostHeaderArgs{
// Values: pulumi.StringArray{
// pulumi.String("my-service.*.mydomain.io"),
// },
// },
// },
// },
// ListenerArn: frontEndListener.Arn,
// Priority: pulumi.Int(99),
// })
// if err != nil {
// return err
// }
// redirectHttpToHttps, err := lb.NewListenerRule(ctx, "redirectHttpToHttps", &lb.ListenerRuleArgs{
// Actions: lb.ListenerRuleActionArray{
// &lb.ListenerRuleActionArgs{
// Redirect: &lb.ListenerRuleActionRedirectArgs{
// Port: pulumi.String("443"),
// Protocol: pulumi.String("HTTPS"),
// StatusCode: pulumi.String("HTTP_301"),
// },
// Type: pulumi.String("redirect"),
// },
// },
// Conditions: lb.ListenerRuleConditionArray{
// &lb.ListenerRuleConditionArgs{
// HttpHeader: &lb.ListenerRuleConditionHttpHeaderArgs{
// HttpHeaderName: pulumi.String("X-Forwarded-For"),
// Values: pulumi.StringArray{
// pulumi.String("192.168.1.*"),
// },
// },
// },
// },
// ListenerArn: frontEndListener.Arn,
// })
// if err != nil {
// return err
// }
// healthCheck, err := lb.NewListenerRule(ctx, "healthCheck", &lb.ListenerRuleArgs{
// Actions: lb.ListenerRuleActionArray{
// &lb.ListenerRuleActionArgs{
// FixedResponse: &lb.ListenerRuleActionFixedResponseArgs{
// ContentType: pulumi.String("text/plain"),
// MessageBody: pulumi.String("HEALTHY"),
// StatusCode: pulumi.String("200"),
// },
// Type: pulumi.String("fixed-response"),
// },
// },
// Conditions: lb.ListenerRuleConditionArray{
// &lb.ListenerRuleConditionArgs{
// QueryString: []interface{}{
// map[string]interface{}{
// "key": "health",
// "value": "check",
// },
// map[string]interface{}{
// "value": "bar",
// },
// },
// },
// },
// ListenerArn: frontEndListener.Arn,
// })
// if err != nil {
// return err
// }
// pool, err := cognito.NewUserPool(ctx, "pool", nil)
// if err != nil {
// return err
// }
// client, err := cognito.NewUserPoolClient(ctx, "client", nil)
// if err != nil {
// return err
// }
// domain, err := cognito.NewUserPoolDomain(ctx, "domain", nil)
// if err != nil {
// return err
// }
// admin, err := lb.NewListenerRule(ctx, "admin", &lb.ListenerRuleArgs{
// Actions: lb.ListenerRuleActionArray{
// &lb.ListenerRuleActionArgs{
// AuthenticateOidc: &lb.ListenerRuleActionAuthenticateOidcArgs{
// AuthorizationEndpoint: pulumi.String("https://example.com/authorization_endpoint"),
// ClientId: pulumi.String("client_id"),
// ClientSecret: pulumi.String("client_secret"),
// Issuer: pulumi.String("https://example.com"),
// TokenEndpoint: pulumi.String("https://example.com/token_endpoint"),
// UserInfoEndpoint: pulumi.String("https://example.com/user_info_endpoint"),
// },
// Type: pulumi.String("authenticate-oidc"),
// },
// &lb.ListenerRuleActionArgs{
// TargetGroupArn: pulumi.String(aws_lb_target_group.Static.Arn),
// Type: pulumi.String("forward"),
// },
// },
// ListenerArn: frontEndListener.Arn,
// })
// if err != nil {
// return err
// }
// return nil
// })
// }
// ```
//
// Deprecated: aws.elasticloadbalancingv2.ListenerRule has been deprecated in favor of aws.lb.ListenerRule
type ListenerRule struct {
pulumi.CustomResourceState
// An Action block. Action blocks are documented below.
Actions ListenerRuleActionArrayOutput `pulumi:"actions"`
// The Amazon Resource Name (ARN) of the target group.
Arn pulumi.StringOutput `pulumi:"arn"`
// A Condition block. Multiple condition blocks of different types can be set and all must be satisfied for the rule to match. Condition blocks are documented below.
Conditions ListenerRuleConditionArrayOutput `pulumi:"conditions"`
// The ARN of the listener to which to attach the rule.
ListenerArn pulumi.StringOutput `pulumi:"listenerArn"`
// The priority for the rule, between `1` and `50000`. Leaving it unset will automatically assign the next available priority after the currently existing highest rule. A listener can't have multiple rules with the same priority.
Priority pulumi.IntOutput `pulumi:"priority"`
}
// NewListenerRule registers a new resource with the given unique name, arguments, and options.
func NewListenerRule(ctx *pulumi.Context,
name string, args *ListenerRuleArgs, opts ...pulumi.ResourceOption) (*ListenerRule, error)
|
// GetListenerRule gets an existing ListenerRule resource's state with the given name, ID, and optional
// state properties that are used to uniquely qualify the lookup (nil if not required).
func GetListenerRule(ctx *pulumi.Context,
name string, id pulumi.IDInput, state *ListenerRuleState, opts ...pulumi.ResourceOption) (*ListenerRule, error) {
var resource ListenerRule
err := ctx.ReadResource("aws:elasticloadbalancingv2/listenerRule:ListenerRule", name, id, state, &resource, opts...)
if err != nil {
return nil, err
}
return &resource, nil
}
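// Usage sketch (hypothetical name and ID, not from the provider docs):
// rule, err := elasticloadbalancingv2.GetListenerRule(ctx, "existing-rule", pulumi.ID(ruleArn), nil)
// where ruleArn holds the ARN of an already-provisioned listener rule.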
// Input properties used for looking up and filtering ListenerRule resources.
type listenerRuleState struct {
// An Action block. Action blocks are documented below.
Actions []ListenerRuleAction `pulumi:"actions"`
// The Amazon Resource Name (ARN) of the target group.
Arn *string `pulumi:"arn"`
// A Condition block. Multiple condition blocks of different types can be set and all must be satisfied for the rule to match. Condition blocks are documented below.
Conditions []ListenerRuleCondition `pulumi:"conditions"`
// The ARN of the listener to which to attach the rule.
ListenerArn *string `pulumi:"listenerArn"`
// The priority for the rule, between `1` and `50000`. Leaving it unset will automatically assign the next available priority after the currently existing highest rule. A listener can't have multiple rules with the same priority.
Priority *int `pulumi:"priority"`
}
type ListenerRuleState struct {
// An Action block. Action blocks are documented below.
Actions ListenerRuleActionArrayInput
// The Amazon Resource Name (ARN) of the target group.
Arn pulumi.StringPtrInput
// A Condition block. Multiple condition blocks of different types can be set and all must be satisfied for the rule to match. Condition blocks are documented below.
Conditions ListenerRuleConditionArrayInput
// The ARN of the listener to which to attach the rule.
ListenerArn pulumi.StringPtrInput
// The priority for the rule, between `1` and `50000`. Leaving it unset will automatically assign the next available priority after the currently existing highest rule. A listener can't have multiple rules with the same priority.
Priority pulumi.IntPtrInput
}
func (ListenerRuleState) ElementType() reflect.Type {
return reflect.TypeOf((*listenerRuleState)(nil)).Elem()
}
type listenerRuleArgs struct {
// An Action block. Action blocks are documented below.
Actions []ListenerRuleAction `pulumi:"actions"`
// A Condition block. Multiple condition blocks of different types can be set and all must be satisfied for the rule to match. Condition blocks are documented below.
Conditions []ListenerRuleCondition `pulumi:"conditions"`
// The ARN of the listener to which to attach the rule.
ListenerArn string `pulumi:"listenerArn"`
// The priority for the rule, between `1` and `50000`. Leaving it unset will automatically assign the next available priority after the currently existing highest rule. A listener can't have multiple rules with the same priority.
Priority *int `pulumi:"priority"`
}
// The set of arguments for constructing a ListenerRule resource.
type ListenerRuleArgs struct {
// An Action block. Action blocks are documented below.
Actions ListenerRuleActionArrayInput
// A Condition block. Multiple condition blocks of different types can be set and all must be satisfied for the rule to match. Condition blocks are documented below.
Conditions ListenerRuleConditionArrayInput
// The ARN of the listener to which to attach the rule.
ListenerArn pulumi.StringInput
// The priority for the rule, between `1` and `50000`. Leaving it unset will automatically assign the next available priority after the currently existing highest rule. A listener can't have multiple rules with the same priority.
Priority pulumi.IntPtrInput
}
func (ListenerRuleArgs) ElementType() reflect.Type {
return reflect.TypeOf((*listenerRuleArgs)(nil)).Elem()
}
|
{
if args == nil || args.Actions == nil {
return nil, errors.New("missing required argument 'Actions'")
}
if args == nil || args.Conditions == nil {
return nil, errors.New("missing required argument 'Conditions'")
}
if args == nil || args.ListenerArn == nil {
return nil, errors.New("missing required argument 'ListenerArn'")
}
if args == nil {
args = &ListenerRuleArgs{}
}
var resource ListenerRule
err := ctx.RegisterResource("aws:elasticloadbalancingv2/listenerRule:ListenerRule", name, args, &resource, opts...)
if err != nil {
return nil, err
}
return &resource, nil
}
|
validate_from_s3.py
|
import argparse
import os
import maskgen.scenario_model
from maskgen.tool_set import *
from maskgen import video_tools
import tempfile
from maskgen.scenario_model import ImageProjectModel
from maskgen.image_graph import extract_archive
from maskgen.graph_rules import processProjectProperties
from maskgen.batch import BatchProcessor, pick_projects
import hashlib
import shutil
import sys
import csv
import time
from functools import partial
from maskgen import plugins
def reproduceMask(scModel):
"""
Rebuild all edge masks
:param scModel: scenario model
:return:
"""
for edge in scModel.getGraph().get_edges():
scModel.select(edge)
scModel.reproduceMask()
print 'Updated masks in project: ' + str(scModel.getName())
def select_region(imfile, prev):
im = openImage(imfile)
if im.mode == 'RGBA' or im.mode == 'LA':
return imfile
else:
if not os.path.exists(prev):
pos = prev.rfind('.')
mod_filename = prev[0:pos] + prev[pos:].lower()
if os.path.exists(mod_filename):
prev = mod_filename
prevIm = Image.open(prev)
if im.mode == 'L' and set(im.getdata()).issubset({0, 1, 255}) and not isRGBA(prevIm):
rgba = prevIm.convert('RGBA')
bw = im.point(lambda x: 1 if x > 0 else 0, 'F')
rgbaarr = np.asarray(rgba)
bwa = np.asarray(bw)
# Zero out the alpha channel wherever the mask is zero (channels are the last axis).
prod = np.multiply(bwa, rgbaarr[:, :, 3])
newIm = np.dstack((rgbaarr[:, :, 0], rgbaarr[:, :, 1], rgbaarr[:, :, 2], prod)).astype(np.uint8)
newImPIL = Image.fromarray(newIm, 'RGBA')
newImPIL.save(imfile)
return imfile
return imfile
def isRGBA(im):
return im.mode == 'RGBA'
mod_functions=globals()
def getFunction(name, function_mappings={}):
if name is None:
return None
import importlib
if name in function_mappings:
return function_mappings[name]
elif name in mod_functions:
function_mappings[name] = mod_functions[name]
return function_mappings[name]
else:
mod_name, func_name = name.rsplit('.', 1)
try:
mod = importlib.import_module(mod_name)
func = getattr(mod, func_name)
function_mappings[name] = func
return func
except Exception as e:
logging.getLogger('maskgen').error('Unable to load rule {}: {}'.format(name,str(e)))
raise e
def update_rotation(scModel):
"""
Add rotation parameter to OutputPNG and OutputTIFF operations
:param scModel: Opened project model
:param project: Project JSON file
:return: None. Updates JSON.
"""
rotateOps = ['OutputPng', 'OutputTif']
projectDir = scModel.getGraph().dir
for edge in scModel.getGraph().get_edges():
currentLink = scModel.getGraph().get_edge(edge[0], edge[1])
if currentLink['op'] in rotateOps:
if 'arguments' not in currentLink:
currentLink['arguments'] = {}
if 'Image Rotated' in currentLink['arguments']:
continue
change = currentLink['shape change'] if 'shape change' in currentLink else None
if change and change != '(0,0)':
currentLink['arguments']['Image Rotated'] = 'yes'
elif change and change == '(0,0)':
currentLink['arguments']['Image Rotated'] = 'no'
else:
startFile = scModel.getGraph().get_node(edge[0])['file']
endFile = scModel.getGraph().get_node(edge[1])['file']
im1 = Image.open(os.path.join(projectDir, startFile))
im2 = Image.open(os.path.join(projectDir, endFile))
if im1.size != im2.size:
currentLink['arguments']['Image Rotated'] = 'yes'
else:
currentLink['arguments']['Image Rotated'] = 'no'
def validate_by(scModel, person):
scModel.setProjectData('validation', 'yes')
scModel.setProjectData('validatedby', person)
scModel.setProjectData('validationdate', time.strftime("%m/%d/%Y"))
scModel.save()
def isSuccessor(scModel, successors, node, ops):
"""
:param scModel:
:return:
@type successors: list of str
@type scModel: ImageProjectModel
"""
for successor in successors:
edge = scModel.getGraph().get_edge(node,successor)
if edge['op'] not in ops:
return False
return True
def missingVideo(scModel):
"""
:param scModel:
:return:
@type scModel: ImageProjectModel
"""
import copy
for edge in scModel.getGraph().get_edges():
currentLink = scModel.getGraph().get_edge(edge[0], edge[1])
successors = scModel.getGraph().successors(edge[1])
predecessors = scModel.getGraph().predecessors(edge[1])
if currentLink['op'] == 'AddAudioSample':
sourceim, source = scModel.getGraph().get_image(edge[0])
im, dest = scModel.getGraph().get_image(edge[1])
sourcemetadata = video_tools.getMeta(source,show_streams=True)[0]
destmetadata = video_tools.getMeta(dest,show_streams=True)[0]
if len(sourcemetadata) > 0:
sourcevidcount = len([idx for idx, val in enumerate(sourcemetadata) if val['codec_type'] != 'audio'])
if len(destmetadata) > 0:
destvidcount = len([x for x in (idx for idx, val in enumerate(destmetadata) if val['codec_type'] != 'audio')])
if sourcevidcount != destvidcount:
if not isSuccessor(scModel, successors, edge[1], ['AntiForensicCopyExif', 'OutputMP4', 'Donor']):
raise ValueError('Cannot correct AddAudioSample for edge {} to {} due to successor node'.format(
edge[0], edge[1]
))
predecessors = [pred for pred in predecessors if pred != edge[0]]
if len(predecessors) == 0:
donor = scModel.getBaseNode(edge[1])
else:
donor = predecessors[0]
args= dict() if 'arguments' not in currentLink else copy.copy(currentLink['arguments'])
args['donor'] = donor
plugins.callPlugin('OverwriteAudioStream',sourceim,source,dest,donor=donor)
def recompressAsVideo(scModel):
|
def perform_update(project,args, functions, tempdir):
scModel = maskgen.scenario_model.ImageProjectModel(project)
print 'User: ' + scModel.getGraph().getDataItem('username')
validator = scModel.getProjectData('validatedby')
if not args.validate:
if validator is not None:
setPwdX(CustomPwdX(validator))
else:
setPwdX(CustomPwdX(scModel.getGraph().getDataItem('username')))
for function in functions:
function(scModel)
if args.validate:
scModel.set_validation_properties('yes', get_username(), 'QA redone via Batch Updater')
scModel.save()
if args.updategraph:
if os.path.exists(os.path.join(scModel.get_dir(),'_overview_.png')):
return
error_list = scModel.exporttos3(args.uploadfolder, tempdir)
if len(error_list) > 0:
for err in error_list:
print err
raise ValueError('Export Failed')
return scModel.validate()
def fetchfromS3(dir, location, file):
import boto3
BUCKET = location.split('/')[0].strip()
DIR = location[location.find('/') + 1:].strip() +'/'
s3 = boto3.resource('s3')
my_bucket = s3.Bucket(BUCKET)
my_bucket.download_file(DIR + file, os.path.join(dir, file))
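# Example (hypothetical values): fetchfromS3('/tmp/work', 'my-bucket/projects', 'proj_0001.tgz')
# downloads s3://my-bucket/projects/proj_0001.tgz into /tmp/work.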
def processProject(args, functions, file_to_process):
"""
:param args:
:param functions:
:param file_to_process:
:return:
@type file_to_process : str
"""
if not file_to_process.endswith('tgz') and os.path.exists(os.path.join(args.tempfolder,file_to_process)):
dir = os.path.join(args.tempfolder,file_to_process)
fetch = False
else:
dir = tempfile.mkdtemp(dir=args.tempfolder) if args.tempfolder else tempfile.mkdtemp()
fetch = True
try:
if fetch:
fetchfromS3(dir, args.downloadfolder,file_to_process)
extract_archive(os.path.join(dir, file_to_process), dir)
for project in pick_projects(dir):
perform_update(project, args,functions, dir)
finally:
if fetch:
shutil.rmtree(dir)
def main():
parser = argparse.ArgumentParser()
parser.add_argument('-f', '--file', required=True, help='File of projects')
parser.add_argument('-df', '--downloadfolder', required=True, help='Download folder')
parser.add_argument('-ug', '--updategraph', required=False, help='Upload Graph',action='store_true')
parser.add_argument('-uf', '--uploadfolder', required=True, help='Upload folder')
parser.add_argument('-v', '--validate', required=False, help='QA',action='store_true')
parser.add_argument('-tf', '--tempfolder', required=False, help='Temp Holder')
parser.add_argument('-e', '--functions', required=False, help='List of functions')
parser.add_argument('-cf', '--completefile', required=True, help='Projects to Completed')
args = parser.parse_args()
functions_map = {}
functions = []
if args.functions is not None:
functions = [getFunction(name, function_mappings=functions_map) for name in args.functions.split(',')]
with open(args.file, 'r') as input_file:
files_to_process = input_file.readlines()
files_to_process = [x.strip() for x in files_to_process]
processor = BatchProcessor(args.completefile,files_to_process)
func = partial(processProject,args,functions)
processor.process(func)
if __name__ == '__main__':
main()
|
"""
:param scModel:
:return:
@type scModel: maskgen.scenario_model.ImageProjectModel
"""
for edge in scModel.getGraph().get_edges():
currentLink = scModel.getGraph().get_edge(edge[0], edge[1])
successors = scModel.getGraph().successors(edge[1])
predecessors = scModel.getGraph().predecessors(edge[1])
# should we consider video nodes just to be sure?
#finalNode = scModel.getGraph().get_node(edge[1])
if currentLink['op'] == 'AntiForensicCopyExif' and \
len(successors) == 0 and \
currentLink['softwareName'].lower() == 'ffmpeg':
predecessors = [pred for pred in predecessors if pred != edge[0]]
if len (predecessors) == 0:
donor = scModel.getBaseNode(edge[1])
else:
donor = predecessors[0]
scModel.selectImage(edge[1])
scModel.remove()
scModel.selectImage(edge[0])
scModel.imageFromPlugin('CompressAsVideo',donor=donor)
|
conf.go
|
// Package conf is a package used to read configuration file (~/.bssh.toml).
package conf
import (
"crypto/md5" // nolint
"encoding/hex"
"fmt"
"io/ioutil"
"os"
"path/filepath"
"regexp"
"sort"
"strconv"
"strings"
"time"
"github.com/bingoohuang/gossh/pkg/hostparse"
_ "embed"
"github.com/jedib0t/go-pretty/table"
"github.com/bingoohuang/gou/str"
"github.com/bingoohuang/gou/pbe"
"github.com/spf13/viper"
"github.com/BurntSushi/toml"
"github.com/bingoohuang/bssh/common"
)
// Config is the struct that stores the entire configuration file.
type Config struct {
Extra ExtraConfig
Log LogConfig
Shell ShellConfig
Include map[string]IncludeConfig
Includes IncludesConfig
Common ServerConfig
Server map[string]ServerConfig
Proxy map[string]ProxyConfig
SSHConfig map[string]OpenSSHConfig
grouping map[string]map[string]ServerConfig
// DisableAutoEncryptPwd disable auto PBE passwords in config file.
DisableAutoEncryptPwd bool
Passphrase string
Hosts []string
tempHostsFile string
tempHosts map[string]bool
}
// ExtraConfig stores extra configs.
type ExtraConfig struct {
// Passphrase used to decrypt {PBE}xxx
Passphrase string
// DisableGrouping disable server names grouping
DisableGrouping bool
// DisableAutoEncryptPwd disable auto PBE passwords in config file.
DisableAutoEncryptPwd bool
}
// LogConfig stores settings for the terminal log.
// The log file is created as "YYYYmmdd_HHMMSS_servername.log" in the specified directory.
type LogConfig struct {
// Enable terminal logging.
Enable bool
// Add a timestamp at the beginning of the terminal log line.
Timestamp bool
// Specifies the directory for creating terminal logs.
Dir string `toml:"dirpath"`
}
// ShellConfig structure for storing bssh-shell settings.
type ShellConfig struct {
// prompt
Prompt string `toml:"PROMPT"` // bssh shell prompt
OPrompt string `toml:"OPROMPT"` // bssh shell output prompt
// message,title etc...
Title string
// history file
HistoryFile string `toml:"histfile"`
// pre | post command setting
PreCmd string `toml:"pre_cmd"`
PostCmd string `toml:"post_cmd"`
}
// IncludeConfig specifies the configuration file to include (ServerConfig only).
type IncludeConfig struct {
Path string
}
// IncludesConfig specifies the configuration files to include (ServerConfig only).
// Struct that can specify multiple files in array.
type IncludesConfig struct {
// example:
// path = [
// "~/.bssh.d/home.toml"
// ,"~/.bssh.d/cloud.toml"
// ]
Path []string
}
// ServerConfig structure for holding SSH connection information.
type ServerConfig struct {
// templates, host:port user/pass
Tmpl string
Group []string
// Connect basic Setting
Addr string
Port string
User string
// Connect auth Setting
Pass string
Passes []string
Key string
KeyCommand string `toml:"keycmd"`
KeyCommandPass string `toml:"keycmdpass"`
KeyPass string `toml:"keypass"`
Keys []string `toml:"keys"` // "keypath::passphrase"
Cert string
CertKey string `toml:"certkey"`
CertKeyPass string `toml:"certkeypass"`
CertPKCS11 bool `toml:"certpkcs11"`
AgentAuth bool `toml:"agentauth"`
SSHAgentUse bool `toml:"ssh_agent"`
PKCS11Use bool `toml:"pkcs11"`
// x11 forwarding setting
X11 bool
SSHAgentKeyPath []string `toml:"ssh_agent_key"` // "keypath::passphrase"
PKCS11Provider string `toml:"pkcs11provider"` // PKCS11 Provider PATH
PKCS11PIN string `toml:"pkcs11pin"` // PKCS11 PIN code
// pre | post command setting
PreCmd string `toml:"pre_cmd"`
PostCmd string `toml:"post_cmd"`
// proxy setting
ProxyType string `toml:"proxy_type"`
Proxy string
ProxyCommand string `toml:"proxy_cmd"` // OpenSSH type proxy setting
// local rcfile setting
LocalRcUse string `toml:"local_rc"` // yes|no (default: yes)
LocalRcPath []string `toml:"local_rc_file"`
LocalRcDecodeCmd string `toml:"local_rc_decode_cmd"`
// local/remote port forwarding setting
PortForwardMode string `toml:"port_forward"` // [`L`,`l`,`LOCAL`,`local`]|[`R`,`r`,`REMOTE`,`remote`]
PortForwardLocal string `toml:"port_forward_local"` // port forward (local). "host:port"
PortForwardRemote string `toml:"port_forward_remote"` // port forward (remote). "host:port"
// Dynamic Port Forwarding setting
DynamicPortForward string `toml:"dynamic_port_forward"` // ex.) "11080"
Note string
// Connection Timeout second
ConnectTimeout int `toml:"connect_timeout"`
// Server Alive
ServerAliveCountMax int `toml:"alive_max"`
ServerAliveCountInterval int `toml:"alive_interval"`
InitialCmd string `toml:"initial_cmd"`
WebPort int `toml:"web_port"` // -1 disable the web port
ID string `toml:"id"`
Raw string // to register the raw template config, like `user:pass@host:port`
}
// ProxyConfig stores proxy server settings for connections made via http and socks5.
type ProxyConfig struct {
Addr string
Port string
User string
Pass string
Proxy string
ProxyType string `toml:"proxy_type"`
Note string
}
// OpenSSHConfig to read OpenSSH configuration file.
//
// WARN: This struct is not used...
type OpenSSHConfig struct {
Path string // This is preferred
Command string
ServerConfig
}
// ReadConf load configuration file and return Config structure.
func ReadConf(confPath string) (config Config) {
confPath = common.ExpandHomeDir(confPath)
checkConfPath(confPath)
config.Server = map[string]ServerConfig{}
config.SSHConfig = map[string]OpenSSHConfig{}
// Read config file
if _, err := toml.DecodeFile(confPath, &config); err != nil {
fmt.Println(err)
os.Exit(1)
}
viper.Set(pbe.PbePwd, str.EmptyThen(config.Extra.Passphrase, config.Passphrase))
config.loadTempHosts(confPath)
// reduce common setting (in .bssh.toml servers)
config.parseConfigServers(config.Server, config.Common)
for i, server := range config.Hosts {
tmpls := hostparse.Parse(server)
for j, tmpl := range tmpls {
sc := ServerConfig{}
createServerConfigFromHost(tmpl, &sc)
if sc.ID == "" {
sc.ID = generateKey(len(tmpls), len(config.Hosts), i, j)
}
config.Server[sc.ID] = sc
}
}
// Read Openssh configs
if len(config.SSHConfig) == 0 {
if v, err := getOpenSSHConfig("~/.ssh/config", ""); err == nil {
config.parseConfigServers(v, config.Common)
}
} else {
for _, sshConfig := range config.SSHConfig {
setCommon := ServerConfigDeduct(config.Common, sshConfig.ServerConfig)
if v, err := getOpenSSHConfig(sshConfig.Path, sshConfig.Command); err == nil {
config.parseConfigServers(v, setCommon)
}
}
}
config.appendIncludePaths()
config.readIncludeFiles()
// Check Config Parameter
CheckFormatServerConf(config)
config.parseGroups()
return config
}
//go:embed conf.toml
var initBsshToml []byte
func checkConfPath(confPath string) {
if common.IsExist(confPath) {
return
}
fmt.Printf("Config file(%s) not found, auto create one, please edit later.\n", confPath)
fmt.Println("or directly run `bssh -H user:pass@192.168.1.30:8022`")
_ = os.MkdirAll(filepath.Dir(confPath), 0o755)
_ = ioutil.WriteFile(confPath, initBsshToml, 0o600)
os.Exit(0)
}
func generateKey(tmplsNum, hostsNum, i int, j int) string {
if tmplsNum > 1 {
return fmt.Sprintf("host-%s-%s", pad(i+1, hostsNum), pad(j+1, tmplsNum))
}
return fmt.Sprintf("host-%s", pad(i+1, hostsNum))
}
func pad(i int, size int) string {
return fmt.Sprintf(fmt.Sprintf("%%0%dd", len(strconv.Itoa(size))), i)
}
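// For example, pad(3, 120) yields "003" (the width follows the digit count of size),
// and generateKey(1, 12, 4, 0) yields "host-05".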
func (cf *Config) appendIncludePaths() {
// for append includes to include.path
if len(cf.Includes.Path) == 0 {
return
}
if cf.Include == nil {
cf.Include = map[string]IncludeConfig{}
}
for _, includePath := range cf.Includes.Path {
unixTime := time.Now().Unix()
keyString := strings.Join([]string{strconv.FormatInt(unixTime, 10), includePath}, "_")
hasher := md5.New() // nolint
_, _ = hasher.Write([]byte(keyString))
key := hex.EncodeToString(hasher.Sum(nil))
// append config.Include[key]
cf.Include[key] = IncludeConfig{common.ExpandHomeDir(includePath)}
}
}
func (cf *Config) readIncludeFiles() {
if len(cf.Include) == 0 {
return
}
for _, v := range cf.Include {
var includeConf Config
// user path
path := common.ExpandHomeDir(v.Path)
// Read include config file
_, err := toml.DecodeFile(path, &includeConf)
if err != nil {
panic(err)
}
// reduce common setting
setCommon := ServerConfigDeduct(cf.Common, includeConf.Common)
// map init
if len(cf.Server) == 0 {
cf.Server = map[string]ServerConfig{}
}
// add include file serverconf
cf.parseConfigServers(includeConf.Server, setCommon)
}
}
func (cf *Config) parseConfigServers(configServers map[string]ServerConfig, setCommon ServerConfig) {
tmplConfigs := make([]tmplConfig, 0)
for key, value := range configServers {
setValue := ServerConfigDeduct(setCommon, value)
cf.Server[key] = setValue
if value.Tmpl != "" {
delete(cf.Server, key)
tmplHosts := hostparse.Parse(setValue.Tmpl)
tmplConfigs = append(tmplConfigs, tmplConfig{k: key, c: setValue, t: tmplHosts})
}
}
cf.tmplServers(tmplConfigs)
}
// CheckFormatServerConf checks the format of the server config.
//
// Note: It checks that Addr, User and the authentication settings
// have a value. It does not check the validity of each field.
//
// See also: checkFormatServerConfAuth function.
func CheckFormatServerConf(c Config) (isFormat bool) {
isFormat = true
for k, v := range c.Server {
// Address Set Check
if v.Addr == "" {
fmt.Printf("%s: 'addr' is not set.\n", k)
isFormat = false
}
// User Set Check
if v.User == "" {
fmt.Printf("%s: 'user' is not set.\n", k)
isFormat = false
}
if !CheckFormatServerConfAuth(v) {
fmt.Printf("%s: Authentication information is not set.\n", k)
isFormat = false
}
}
return
}
// CheckFormatServerConfAuth checks the format of the server config authentication.
//
// Note: It checks that Pass, Key, Cert, AgentAuth, PKCS11Use, PKCS11Provider, Keys or
// Passes have a value. It does not check the validity of each field.
func CheckFormatServerConfAuth(c ServerConfig) (isFormat bool) {
isFormat = false
if c.Pass != "" || c.Key != "" || c.Cert != "" {
isFormat = true
}
if c.AgentAuth {
isFormat = true
}
if c.PKCS11Use {
_, err := os.Stat(c.PKCS11Provider)
if err == nil {
isFormat = true
}
}
if len(c.Keys) > 0 || len(c.Passes) > 0 {
isFormat = true
}
return
}
// ServerConfigDeduct returns a new server config in which every empty field of
// childConfig is filled from the corresponding perConfig field.
func ServerConfigDeduct(perConfig, childConfig ServerConfig) ServerConfig {
result := ServerConfig{}
// struct to map
perConfigMap, _ := common.StructToMap(&perConfig)
childConfigMap, _ := common.StructToMap(&childConfig)
resultMap := common.MapReduce(perConfigMap, childConfigMap)
_ = common.MapToStruct(resultMap, &result)
return result
}
// GetNameList returns a list of server names from the Config structure.
func (cf *Config) GetNameList() (nameList []string) {
for k := range cf.Server {
nameList = append(nameList, k)
}
sort.Strings(nameList)
return nameList
}
// IsDirectServer reports whether the server is a direct server address like user:pass@host:port.
func IsDirectServer(server string) bool {
return strings.Index(server, "@") > 0
}
// ParseDirectServer parses a direct server address.
func
|
(server string) (ServerConfig, bool) {
// LastIndex of "@" will allow that password contains "@"
atPos := strings.LastIndex(server, "@")
sc := ServerConfig{}
if atPos < 0 {
return sc, false
}
left := server[:atPos]
right := server[atPos+1:]
sc.User, sc.Pass = splitBySep(left, []string{":", "/"})
commaPos := strings.Index(right, ":")
if commaPos == -1 {
sc.Addr = right
sc.Port = "22"
} else {
sc.Addr = right[:commaPos]
sc.Port = right[commaPos+1:]
}
return sc, true
}
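// For example, "root:p@ss@10.0.0.1:2222" parses to User "root", Pass "p@ss",
// Addr "10.0.0.1" and Port "2222" (assuming splitBySep splits at the first ':' or '/');
// when the ":port" part is omitted, Port defaults to "22".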
// EnsureSearchHost searches the host name by glob pattern.
func (cf *Config) EnsureSearchHost(host string) (string, []string) {
if IsDirectServer(host) {
return host, nil
}
matches1 := cf.globMatch(host)
if len(matches1) == 1 {
return matches1[0], nil
}
matches2 := cf.containsMatch(host)
if len(matches2) == 1 {
return matches2[0], nil
}
matches := make([]string, 0, len(matches1)+len(matches2))
matches = append(matches, matches1...)
matches = append(matches, matches2...)
if len(matches) == 0 {
_, _ = fmt.Fprintf(os.Stderr, "host %s not found from list.\n", host)
return "", cf.GetNameSortedList()
}
_, _ = fmt.Fprintf(os.Stderr, "host %s found multiple hosts.\n", host)
return "", matches
}
func (cf *Config) containsMatch(host string) []string {
result1 := cf.matchesFn(host, func(host, serverName string, _ ServerConfig) bool {
return strings.Contains(serverName, host)
})
if len(result1) == 1 {
return result1
}
result2 := cf.matchesFn(host, func(host, _ string, v ServerConfig) bool {
return strings.Contains(v.User+"@"+v.Addr+":"+v.Port, host)
})
if len(result2) == 1 {
return result2
}
result3 := cf.matchesFn(host, func(host, _ string, v ServerConfig) bool {
return strings.Contains(v.Note, host)
})
if len(result3) == 1 {
return result3
}
matches := make([]string, 0, len(result1)+len(result2)+len(result3))
matches = append(matches, result1...)
matches = append(matches, result2...)
matches = append(matches, result3...)
return Unique(matches)
}
// Unique returns unique items in a slice
func Unique(slice []string) []string {
us := make([]string, 0, len(slice))
um := make(map[string]struct{})
for _, v := range slice {
if _, ok := um[v]; !ok {
um[v] = struct{}{}
us = append(us, v)
}
}
return us
}
func (cf *Config) globMatch(host string) []string {
result1 := cf.matchesFn(host, func(host, serverName string, _ ServerConfig) bool {
ok, _ := filepath.Match(host, serverName)
return ok
})
if len(result1) == 1 {
return result1
}
result2 := cf.matchesFn(host, func(host, _ string, v ServerConfig) bool {
ok, _ := filepath.Match(host, v.User+"@"+v.Addr+":"+v.Port)
return ok
})
if len(result2) == 1 {
return result2
}
result3 := cf.matchesFn(host, func(host, _ string, v ServerConfig) bool {
ok, _ := filepath.Match(host, v.Note)
return ok
})
if len(result3) == 1 {
return result3
}
matches := make([]string, 0)
matches = append(matches, result1...)
matches = append(matches, result2...)
matches = append(matches, result3...)
return matches
}
func (cf *Config) matchesFn(host string, f func(host, serverName string, _ ServerConfig) bool) []string {
matches := make([]string, 0)
for k, v := range cf.Server {
if f(host, k, v) {
matches = append(matches, k)
}
}
return matches
}
// GetNameSortedList returns a sorted list of server names from the Config structure.
func (cf *Config) GetNameSortedList() (nameList []string) {
nameList = cf.GetNameList()
sort.Strings(nameList)
return nameList
}
func (cf *Config) IsDisableAutoEncryptPwd() bool {
return cf.Extra.DisableAutoEncryptPwd || cf.DisableAutoEncryptPwd
}
var tempLineBpe = regexp.MustCompile(`\{PBE\}[\w-_]+`)
func (cf *Config) loadTempHosts(confPath string) {
tempHostsFile := strings.TrimSuffix(confPath, ".toml") + ".hosts"
cf.tempHostsFile = tempHostsFile
cf.tempHosts = make(map[string]bool)
if !common.IsExist(tempHostsFile) {
return
}
file, _ := ioutil.ReadFile(tempHostsFile)
for _, line := range strings.Split(string(file), "\n") {
hostLine := strings.TrimSpace(line)
if hostLine != "" && !strings.HasPrefix(hostLine, "#") {
if sub := tempLineBpe.FindString(hostLine); sub != "" {
s, _ := pbe.Ebp(sub)
hostLine = strings.ReplaceAll(hostLine, sub, s)
}
cf.tempHosts[hostLine] = true
}
}
for k := range cf.tempHosts {
cf.Hosts = append(cf.Hosts, k)
}
}
// WriteTempHosts writes a new host to temporary file.
func (cf *Config) WriteTempHosts(tempHost, pass string) {
if _, ok := cf.tempHosts[tempHost]; ok {
return
}
cf.tempHosts[tempHost] = true
pbePass := ""
if pass != "" {
pbePass, _ = pbe.Pbe(pass)
}
tempHost = strings.ReplaceAll(tempHost, pass, pbePass)
if err := AppendFile(cf.tempHostsFile, tempHost); err != nil {
fmt.Println(err)
}
}
// PrintServerList prints the list of servers for the given names.
func (cf *Config) PrintServerList(names []string, printTitle bool) {
if printTitle {
_, _ = fmt.Fprintf(os.Stdout, "bssh Server List:\n")
}
t := table.NewWriter()
t.SetOutputMirror(os.Stdout)
t.AppendHeader(table.Row{"#", "Server Name", "Connect Info", "Note"})
for i, name := range names {
v := cf.Server[name]
t.AppendRow(table.Row{i + 1, name, v.User + "@" + v.Addr + ":" + v.Port, v.Note})
}
t.Render()
}
func AppendFile(file, line string) error {
f, err := os.OpenFile(file, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
if err != nil {
return err
}
defer f.Close()
if _, err := f.WriteString(line + "\n"); err != nil {
return err
}
return nil
}
|
ParseDirectServer
|
register.go
|
/*
Copyright 2020 The KubeSphere Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package v1alpha2
import (
"github.com/emicklei/go-restful"
"gopkg.in/yaml.v3"
"k8s.io/apimachinery/pkg/runtime/schema"
"k8s.io/klog"
"kubesphere.io/kubesphere/pkg/api"
"kubesphere.io/kubesphere/pkg/apiserver/authentication/oauth"
kubesphereconfig "kubesphere.io/kubesphere/pkg/apiserver/config"
"kubesphere.io/kubesphere/pkg/apiserver/runtime"
)
const (
GroupName = "config.kubesphere.io"
)
var GroupVersion = schema.GroupVersion{Group: GroupName, Version: "v1alpha2"}
func AddToContainer(c *restful.Container, config *kubesphereconfig.Config) error
|
{
webservice := runtime.NewWebService(GroupVersion)
webservice.Route(webservice.GET("/configs/oauth").
Doc("Information about the authorization server are published.").
To(func(request *restful.Request, response *restful.Response) {
// workaround for this issue https://github.com/go-yaml/yaml/issues/139
// fixed in gopkg.in/yaml.v3
yamlData, err := yaml.Marshal(config.AuthenticationOptions.OAuthOptions)
if err != nil {
klog.Error(err)
api.HandleInternalError(response, request, err)
}
var options oauth.Options
err = yaml.Unmarshal(yamlData, &options)
if err != nil {
klog.Error(err)
api.HandleInternalError(response, request, err)
}
response.WriteEntity(options)
}))
webservice.Route(webservice.GET("/configs/configz").
Doc("Information about the server configuration").
To(func(request *restful.Request, response *restful.Response) {
response.WriteAsJson(config.ToMap())
}))
c.Add(webservice)
return nil
}
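// Wiring sketch (hypothetical caller): the API server can register these routes with
// err := v1alpha2.AddToContainer(restfulContainer, cfg)
// after which the OAuth metadata is typically served under
// /kapis/config.kubesphere.io/v1alpha2/configs/oauth (the prefix comes from runtime.NewWebService).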
|
|
vue-test-utils.iife.js
|
var VueTestUtils = (function (Vue,vueTemplateCompiler) {
'use strict';
Vue = Vue && Vue.hasOwnProperty('default') ? Vue['default'] : Vue;
//
function throwError (msg) {
throw new Error(("[vue-test-utils]: " + msg))
}
function warn (msg) {
console.error(("[vue-test-utils]: " + msg));
}
var camelizeRE = /-(\w)/g;
var camelize = function (str) {
var camelizedStr = str.replace(
camelizeRE,
function (_, c) { return (c ? c.toUpperCase() : ''); }
);
return camelizedStr.charAt(0).toLowerCase() + camelizedStr.slice(1)
};
/**
* Capitalize a string.
*/
var capitalize = function (str) { return str.charAt(0).toUpperCase() + str.slice(1); };
/**
* Hyphenate a camelCase string.
*/
var hyphenateRE = /\B([A-Z])/g;
var hyphenate = function (str) { return str.replace(hyphenateRE, '-$1').toLowerCase(); };
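// Examples: camelize('max-value') === 'maxValue'; hyphenate('MaxValue') === 'max-value'.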
var vueVersion = Number(
((Vue.version.split('.')[0]) + "." + (Vue.version.split('.')[1]))
);
//
function warnIfNoWindow () {
if (typeof window === 'undefined') {
throwError(
"window is undefined, vue-test-utils needs to be " +
"run in a browser environment.\n" +
("You can run the tests in node using jsdom + " +
"jsdom-global.\n") +
("See " +
"https://vue-test-utils.vuejs.org/guides/common-tips.html " +
"for more details.")
);
}
}
if (typeof Element !== 'undefined' && !Element.prototype.matches) {
Element.prototype.matches =
Element.prototype.matchesSelector ||
Element.prototype.mozMatchesSelector ||
Element.prototype.msMatchesSelector ||
Element.prototype.oMatchesSelector ||
Element.prototype.webkitMatchesSelector ||
function (s) {
var matches = (this.document || this.ownerDocument).querySelectorAll(s);
var i = matches.length;
while (--i >= 0 && matches.item(i) !== this) {}
return i > -1
};
}
if (typeof Object.assign !== 'function') {
(function () {
Object.assign = function (target) {
var arguments$1 = arguments;
if (target === undefined || target === null) {
throw new TypeError('Cannot convert undefined or null to object')
}
var output = Object(target);
for (var index = 1; index < arguments.length; index++) {
var source = arguments$1[index];
if (source !== undefined && source !== null) {
for (var nextKey in source) {
if (source.hasOwnProperty(nextKey)) {
output[nextKey] = source[nextKey];
}
}
}
}
return output
};
})();
}
//
function isDomSelector (selector) {
if (typeof selector !== 'string') {
return false
}
try {
if (typeof document === 'undefined') {
throwError(
"mount must be run in a browser environment like " +
"PhantomJS, jsdom or chrome"
);
}
} catch (error) {
throwError(
"mount must be run in a browser environment like " +
"PhantomJS, jsdom or chrome"
);
}
try {
document.querySelector(selector);
return true
} catch (error) {
return false
}
}
function isVueComponent (component) {
if (typeof component === 'function' && component.options) {
return true
}
if (component === null || typeof component !== 'object') {
return false
}
if (component.extends || component._Ctor) {
return true
}
if (typeof component.template === 'string') {
return true
}
return typeof component.render === 'function'
}
function componentNeedsCompiling (component) {
return (
component &&
!component.render &&
(component.template || component.extends || component.extendOptions) &&
!component.functional
)
}
function isRefSelector (refOptionsObject) {
if (
typeof refOptionsObject !== 'object' ||
Object.keys(refOptionsObject || {}).length !== 1
) {
return false
}
return typeof refOptionsObject.ref === 'string'
}
function isNameSelector (nameOptionsObject) {
if (typeof nameOptionsObject !== 'object' || nameOptionsObject === null) {
return false
}
return !!nameOptionsObject.name
}
function templateContainsComponent (
template,
name
) {
return [capitalize, camelize, hyphenate].some(function (format) {
var re = new RegExp(("<" + (format(name)) + "\\s*(\\s|>|(/>))"), 'g');
return re.test(template)
})
}
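// e.g. templateContainsComponent('<div><my-comp /></div>', 'MyComp') is true,
// because one of the checked formats, hyphenate('MyComp'), is 'my-comp'.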
function isPlainObject (obj) {
return Object.prototype.toString.call(obj) === '[object Object]'
}
var NAME_SELECTOR = 'NAME_SELECTOR';
var COMPONENT_SELECTOR = 'COMPONENT_SELECTOR';
var REF_SELECTOR = 'REF_SELECTOR';
var DOM_SELECTOR = 'DOM_SELECTOR';
var VUE_VERSION = Number(
((Vue.version.split('.')[0]) + "." + (Vue.version.split('.')[1]))
);
var FUNCTIONAL_OPTIONS =
VUE_VERSION >= 2.5 ? 'fnOptions' : 'functionalOptions';
//
function getSelectorTypeOrThrow (
selector,
methodName
) {
if (isDomSelector(selector)) { return DOM_SELECTOR }
if (isNameSelector(selector)) { return NAME_SELECTOR }
if (isVueComponent(selector)) { return COMPONENT_SELECTOR }
if (isRefSelector(selector)) { return REF_SELECTOR }
throwError(
"wrapper." + methodName + "() must be passed a valid CSS selector, " +
"Vue constructor, or valid find option object"
);
}
//
function getRealChild (vnode) {
var compOptions = vnode && vnode.componentOptions;
if (compOptions && compOptions.Ctor.options.abstract) {
return getRealChild(getFirstComponentChild(compOptions.children))
} else {
return vnode
}
}
function isSameChild (child, oldChild) {
return oldChild.key === child.key && oldChild.tag === child.tag
}
function getFirstComponentChild (children) {
if (Array.isArray(children)) {
for (var i = 0; i < children.length; i++) {
var c = children[i];
if (c && (c.componentOptions || isAsyncPlaceholder(c))) {
return c
}
}
}
}
function isPrimitive (value) {
return (
typeof value === 'string' ||
typeof value === 'number' ||
// $FlowIgnore
typeof value === 'symbol' ||
typeof value === 'boolean'
)
}
function isAsyncPlaceholder (node) {
return node.isComment && node.asyncFactory
}
function hasParentTransition (vnode) {
while ((vnode = vnode.parent)) {
if (vnode.data.transition) {
return true
}
}
}
var TransitionStub = {
render: function render (h) {
var children = this.$options._renderChildren;
if (!children) {
return
}
// filter out text nodes (possible whitespaces)
children = children.filter(function (c) { return c.tag || isAsyncPlaceholder(c); });
/* istanbul ignore if */
if (!children.length) {
return
}
// warn multiple elements
if (children.length > 1) {
warn(
"<transition> can only be used on a single element. " + "Use " +
'<transition-group> for lists.'
);
}
var mode = this.mode;
// warn invalid mode
if (mode && mode !== 'in-out' && mode !== 'out-in'
) {
warn(
'invalid <transition> mode: ' + mode
);
}
var rawChild = children[0];
// if this is a component root node and the component's
// parent container node also has transition, skip.
if (hasParentTransition(this.$vnode)) {
return rawChild
}
// apply transition data to child
// use getRealChild() to ignore abstract components e.g. keep-alive
var child = getRealChild(rawChild);
if (!child) {
return rawChild
}
var id = "__transition-" + (this._uid) + "-";
child.key = child.key == null
? child.isComment
? id + 'comment'
: id + child.tag
: isPrimitive(child.key)
? (String(child.key).indexOf(id) === 0 ? child.key : id + child.key)
: child.key;
var data = (child.data || (child.data = {}));
var oldRawChild = this._vnode;
var oldChild = getRealChild(oldRawChild);
// mark v-show
// so that the transition module can hand over the control
// to the directive
if (child.data.directives &&
child.data.directives.some(function (d) { return d.name === 'show'; })) {
child.data.show = true;
}
if (
oldChild &&
oldChild.data &&
!isSameChild(child, oldChild) &&
!isAsyncPlaceholder(oldChild) &&
// #6687 component root is a comment node
!(oldChild.componentInstance &&
oldChild.componentInstance._vnode.isComment)
) {
oldChild.data = Object.assign({}, data);
}
return rawChild
}
}
//
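/**
 * Stub for <transition-group>: renders the default slot inside the
 * configured tag (or a <span>) with no move/enter/leave handling.
 */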
var TransitionGroupStub = {
render: function render (h) {
var tag = this.tag || this.$vnode.data.tag || 'span';
var children = this.$slots.default || [];
return h(tag, null, children)
}
}
var config = {
stubs: {
transition: TransitionStub,
'transition-group': TransitionGroupStub
},
mocks: {},
methods: {},
provide: {},
logModifiedComponents: true,
silent: true
}
//
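/**
 * Component lookup helpers: collect every Vue instance (or functional
 * component vnode) in a tree so they can be filtered against a name
 * or constructor based selector.
 */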
function findAllVueComponentsFromVm (
vm,
components
) {
if ( components === void 0 ) components = [];
components.push(vm);
vm.$children.forEach(function (child) {
findAllVueComponentsFromVm(child, components);
});
return components
}
function findAllVueComponentsFromVnode (
vnode,
components
) {
if ( components === void 0 ) components = [];
if (vnode.child) {
components.push(vnode.child);
}
if (vnode.children) {
vnode.children.forEach(function (child) {
findAllVueComponentsFromVnode(child, components);
});
}
return components
}
function findAllFunctionalComponentsFromVnode (
vnode,
components
) {
if ( components === void 0 ) components = [];
if (vnode[FUNCTIONAL_OPTIONS] || vnode.functionalContext) {
components.push(vnode);
}
if (vnode.children) {
vnode.children.forEach(function (child) {
findAllFunctionalComponentsFromVnode(child, components);
});
}
return components
}
function vmCtorMatchesName (vm, name) {
return !!(
name && (
(vm._vnode &&
vm._vnode.functionalOptions &&
vm._vnode.functionalOptions.name === name) ||
(vm.$options && vm.$options.name === name) ||
(vm.options && vm.options.name === name)
))
}
function vmCtorMatchesSelector (
component,
selector
) {
var Ctor = selector._Ctor || (selector.options && selector.options._Ctor);
if (!Ctor) {
return false
}
var constructor = component.__proto__.constructor;
return Object.keys(Ctor || {}).some(function (c) {
return Ctor[c] === constructor || Ctor[c] === constructor.super
})
}
function vmFunctionalCtorMatchesSelector (
component,
Ctor
) {
if (VUE_VERSION < 2.3) {
throwError(
"find for functional components is not support in " + "Vue < 2.3"
);
}
if (!Ctor) {
return false
}
if (!component[FUNCTIONAL_OPTIONS]) {
return false
}
var Ctors = Object.keys(component[FUNCTIONAL_OPTIONS]._Ctor);
return Ctors.some(function (c) { return Ctor[c] === component[FUNCTIONAL_OPTIONS]._Ctor[c]; })
}
function findVueComponents (
root,
selectorType,
selector
) {
if (selector.functional) {
var nodes = root._vnode
? findAllFunctionalComponentsFromVnode(root._vnode)
: findAllFunctionalComponentsFromVnode(root);
return nodes.filter(
function (node) { return vmFunctionalCtorMatchesSelector(node, selector._Ctor) ||
node[FUNCTIONAL_OPTIONS].name === selector.name; }
)
}
var nameSelector =
typeof selector === 'function' ? selector.extendOptions.name : selector.name;
var components = root._isVue
? findAllVueComponentsFromVm(root)
: findAllVueComponentsFromVnode(root);
return components.filter(function (component) {
if (!component.$vnode && !component.$options.extends) {
return false
}
return (
vmCtorMatchesSelector(component, selector) ||
vmCtorMatchesName(component, nameSelector)
)
})
}
//
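/**
 * WrapperArray holds the wrappers returned by findAll(). `wrappers`
 * and `length` are read-only; predicate methods (contains, is,
 * isEmpty, ...) apply to every wrapper, setters are forwarded to each
 * wrapper, and single-wrapper methods (html, text, props, ...) throw
 * with a hint to use at(i). Illustrative usage (assuming `wrapper` is
 * a mounted wrapper): wrapper.findAll('li').at(0).text()
 */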
var WrapperArray = function WrapperArray (wrappers) {
var length = wrappers.length;
// $FlowIgnore
Object.defineProperty(this, 'wrappers', {
get: function () { return wrappers; },
set: function () { return throwError('wrapperArray.wrappers is read-only'); }
});
// $FlowIgnore
Object.defineProperty(this, 'length', {
get: function () { return length; },
set: function () { return throwError('wrapperArray.length is read-only'); }
});
};
WrapperArray.prototype.at = function at (index) {
if (index > this.length - 1) {
throwError(("no item exists at " + index));
}
return this.wrappers[index]
};
WrapperArray.prototype.attributes = function attributes () {
this.throwErrorIfWrappersIsEmpty('attributes');
throwError(
"attributes must be called on a single wrapper, use " +
"at(i) to access a wrapper"
);
};
WrapperArray.prototype.classes = function classes () {
this.throwErrorIfWrappersIsEmpty('classes');
throwError(
"classes must be called on a single wrapper, use " +
"at(i) to access a wrapper"
);
};
WrapperArray.prototype.contains = function contains (selector) {
this.throwErrorIfWrappersIsEmpty('contains');
return this.wrappers.every(function (wrapper) { return wrapper.contains(selector); })
};
WrapperArray.prototype.exists = function exists () {
return this.length > 0 && this.wrappers.every(function (wrapper) { return wrapper.exists(); })
};
WrapperArray.prototype.filter = function filter (predicate) {
return new WrapperArray(this.wrappers.filter(predicate))
};
WrapperArray.prototype.visible = function visible () {
this.throwErrorIfWrappersIsEmpty('visible');
return this.length > 0 && this.wrappers.every(function (wrapper) { return wrapper.visible(); })
};
WrapperArray.prototype.emitted = function emitted () {
this.throwErrorIfWrappersIsEmpty('emitted');
throwError(
"emitted must be called on a single wrapper, use " +
"at(i) to access a wrapper"
);
};
WrapperArray.prototype.emittedByOrder = function emittedByOrder () {
this.throwErrorIfWrappersIsEmpty('emittedByOrder');
throwError(
"emittedByOrder must be called on a single wrapper, " +
"use at(i) to access a wrapper"
);
};
WrapperArray.prototype.hasAttribute = function hasAttribute (attribute, value) {
this.throwErrorIfWrappersIsEmpty('hasAttribute');
return this.wrappers.every(function (wrapper) { return wrapper.hasAttribute(attribute, value); }
)
};
WrapperArray.prototype.hasClass = function hasClass (className) {
this.throwErrorIfWrappersIsEmpty('hasClass');
return this.wrappers.every(function (wrapper) { return wrapper.hasClass(className); })
};
WrapperArray.prototype.hasProp = function hasProp (prop, value) {
this.throwErrorIfWrappersIsEmpty('hasProp');
return this.wrappers.every(function (wrapper) { return wrapper.hasProp(prop, value); })
};
WrapperArray.prototype.hasStyle = function hasStyle (style, value) {
this.throwErrorIfWrappersIsEmpty('hasStyle');
return this.wrappers.every(function (wrapper) { return wrapper.hasStyle(style, value); })
};
WrapperArray.prototype.findAll = function findAll () {
this.throwErrorIfWrappersIsEmpty('findAll');
throwError(
"findAll must be called on a single wrapper, use " +
"at(i) to access a wrapper"
);
};
WrapperArray.prototype.find = function find () {
this.throwErrorIfWrappersIsEmpty('find');
throwError(
"find must be called on a single wrapper, use at(i) " +
"to access a wrapper"
);
};
WrapperArray.prototype.html = function html () {
this.throwErrorIfWrappersIsEmpty('html');
throwError(
"html must be called on a single wrapper, use at(i) " +
"to access a wrapper"
);
};
WrapperArray.prototype.is = function is (selector) {
this.throwErrorIfWrappersIsEmpty('is');
return this.wrappers.every(function (wrapper) { return wrapper.is(selector); })
};
WrapperArray.prototype.isEmpty = function isEmpty () {
this.throwErrorIfWrappersIsEmpty('isEmpty');
return this.wrappers.every(function (wrapper) { return wrapper.isEmpty(); })
};
WrapperArray.prototype.isVisible = function isVisible () {
this.throwErrorIfWrappersIsEmpty('isVisible');
return this.wrappers.every(function (wrapper) { return wrapper.isVisible(); })
};
WrapperArray.prototype.isVueInstance = function isVueInstance () {
this.throwErrorIfWrappersIsEmpty('isVueInstance');
return this.wrappers.every(function (wrapper) { return wrapper.isVueInstance(); })
};
WrapperArray.prototype.name = function name () {
this.throwErrorIfWrappersIsEmpty('name');
throwError(
"name must be called on a single wrapper, use at(i) " +
"to access a wrapper"
);
};
WrapperArray.prototype.props = function props () {
this.throwErrorIfWrappersIsEmpty('props');
throwError(
"props must be called on a single wrapper, use " +
"at(i) to access a wrapper"
);
};
WrapperArray.prototype.text = function text () {
this.throwErrorIfWrappersIsEmpty('text');
throwError(
"text must be called on a single wrapper, use at(i) " +
"to access a wrapper"
);
};
WrapperArray.prototype.throwErrorIfWrappersIsEmpty = function throwErrorIfWrappersIsEmpty (method) {
if (this.wrappers.length === 0) {
throwError((method + " cannot be called on 0 items"));
}
};
WrapperArray.prototype.setComputed = function setComputed (computed) {
this.throwErrorIfWrappersIsEmpty('setComputed');
this.wrappers.forEach(function (wrapper) { return wrapper.setComputed(computed); });
};
WrapperArray.prototype.setData = function setData (data) {
this.throwErrorIfWrappersIsEmpty('setData');
this.wrappers.forEach(function (wrapper) { return wrapper.setData(data); });
};
WrapperArray.prototype.setMethods = function setMethods (props) {
this.throwErrorIfWrappersIsEmpty('setMethods');
this.wrappers.forEach(function (wrapper) { return wrapper.setMethods(props); });
};
WrapperArray.prototype.setProps = function setProps (props) {
this.throwErrorIfWrappersIsEmpty('setProps');
this.wrappers.forEach(function (wrapper) { return wrapper.setProps(props); });
};
WrapperArray.prototype.setValue = function setValue (value) {
this.throwErrorIfWrappersIsEmpty('setValue');
this.wrappers.forEach(function (wrapper) { return wrapper.setValue(value); });
};
WrapperArray.prototype.setChecked = function setChecked (checked) {
if ( checked === void 0 ) checked = true;
this.throwErrorIfWrappersIsEmpty('setChecked');
this.wrappers.forEach(function (wrapper) { return wrapper.setChecked(checked); });
};
WrapperArray.prototype.setSelected = function setSelected () {
this.throwErrorIfWrappersIsEmpty('setSelected');
throwError(
"setSelected must be called on a single wrapper, " +
"use at(i) to access a wrapper"
);
};
WrapperArray.prototype.trigger = function trigger (event, options) {
this.throwErrorIfWrappersIsEmpty('trigger');
this.wrappers.forEach(function (wrapper) { return wrapper.trigger(event, options); });
};
WrapperArray.prototype.update = function update () {
this.throwErrorIfWrappersIsEmpty('update');
warn(
"update has been removed. All changes are now " +
"synchrnous without calling update"
);
};
WrapperArray.prototype.destroy = function destroy () {
this.throwErrorIfWrappersIsEmpty('destroy');
this.wrappers.forEach(function (wrapper) { return wrapper.destroy(); });
};
//
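/**
 * ErrorWrapper is returned when find() matches nothing: exists()
 * returns false and every other method throws, repeating the selector
 * that failed to match.
 */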
var ErrorWrapper = function ErrorWrapper (selector) {
this.selector = selector;
};
ErrorWrapper.prototype.at = function at () {
throwError(
("find did not return " + (this.selector) + ", cannot call at() on empty Wrapper")
);
};
ErrorWrapper.prototype.attributes = function attributes () {
throwError(
("find did not return " + (this.selector) + ", cannot call attributes() on empty Wrapper")
);
};
ErrorWrapper.prototype.classes = function classes () {
throwError(
("find did not return " + (this.selector) + ", cannot call classes() on empty Wrapper")
);
};
ErrorWrapper.prototype.contains = function contains () {
throwError(
("find did not return " + (this.selector) + ", cannot call contains() on empty Wrapper")
);
};
ErrorWrapper.prototype.emitted = function emitted () {
throwError(
("find did not return " + (this.selector) + ", cannot call emitted() on empty Wrapper")
);
};
ErrorWrapper.prototype.emittedByOrder = function emittedByOrder () {
throwError(
("find did not return " + (this.selector) + ", cannot call emittedByOrder() on empty Wrapper")
);
};
ErrorWrapper.prototype.exists = function exists () {
return false
};
ErrorWrapper.prototype.filter = function filter () {
throwError(
("find did not return " + (this.selector) + ", cannot call filter() on empty Wrapper")
);
};
ErrorWrapper.prototype.visible = function visible () {
throwError(
("find did not return " + (this.selector) + ", cannot call visible() on empty Wrapper")
);
};
ErrorWrapper.prototype.hasAttribute = function hasAttribute () {
throwError(
("find did not return " + (this.selector) + ", cannot call hasAttribute() on empty Wrapper")
);
};
ErrorWrapper.prototype.hasClass = function hasClass () {
throwError(
("find did not return " + (this.selector) + ", cannot call hasClass() on empty Wrapper")
);
};
ErrorWrapper.prototype.hasProp = function hasProp () {
throwError(
("find did not return " + (this.selector) + ", cannot call hasProp() on empty Wrapper")
);
};
ErrorWrapper.prototype.hasStyle = function hasStyle () {
throwError(
("find did not return " + (this.selector) + ", cannot call hasStyle() on empty Wrapper")
);
};
ErrorWrapper.prototype.findAll = function findAll () {
throwError(
("find did not return " + (this.selector) + ", cannot call findAll() on empty Wrapper")
);
};
ErrorWrapper.prototype.find = function find () {
throwError(
("find did not return " + (this.selector) + ", cannot call find() on empty Wrapper")
);
};
ErrorWrapper.prototype.html = function html () {
throwError(
("find did not return " + (this.selector) + ", cannot call html() on empty Wrapper")
);
};
ErrorWrapper.prototype.is = function is () {
throwError(
("find did not return " + (this.selector) + ", cannot call is() on empty Wrapper")
);
};
ErrorWrapper.prototype.isEmpty = function isEmpty () {
throwError(
("find did not return " + (this.selector) + ", cannot call isEmpty() on empty Wrapper")
);
};
ErrorWrapper.prototype.isVisible = function isVisible () {
throwError(
("find did not return " + (this.selector) + ", cannot call isVisible() on empty Wrapper")
);
};
ErrorWrapper.prototype.isVueInstance = function isVueInstance () {
throwError(
("find did not return " + (this.selector) + ", cannot call isVueInstance() on empty Wrapper")
);
};
ErrorWrapper.prototype.name = function name () {
throwError(
("find did not return " + (this.selector) + ", cannot call name() on empty Wrapper")
);
};
ErrorWrapper.prototype.props = function props () {
throwError(
("find did not return " + (this.selector) + ", cannot call props() on empty Wrapper")
);
};
ErrorWrapper.prototype.text = function text () {
throwError(
("find did not return " + (this.selector) + ", cannot call text() on empty Wrapper")
);
};
ErrorWrapper.prototype.setComputed = function setComputed () {
throwError(
("find did not return " + (this.selector) + ", cannot call setComputed() on empty Wrapper")
);
};
ErrorWrapper.prototype.setData = function setData () {
throwError(
("find did not return " + (this.selector) + ", cannot call setData() on empty Wrapper")
);
};
ErrorWrapper.prototype.setMethods = function setMethods () {
throwError(
("find did not return " + (this.selector) + ", cannot call setMethods() on empty Wrapper")
);
};
ErrorWrapper.prototype.setProps = function setProps () {
throwError(
("find did not return " + (this.selector) + ", cannot call setProps() on empty Wrapper")
);
};
ErrorWrapper.prototype.setValue = function setValue () {
throwError(
("find did not return " + (this.selector) + ", cannot call setValue() on empty Wrapper")
);
};
ErrorWrapper.prototype.setChecked = function setChecked () {
throwError(
("find did not return " + (this.selector) + ", cannot call setChecked() on empty Wrapper")
);
};
ErrorWrapper.prototype.setSelected = function setSelected () {
throwError(
("find did not return " + (this.selector) + ", cannot call setSelected() on empty Wrapper")
);
};
ErrorWrapper.prototype.trigger = function trigger () {
throwError(
("find did not return " + (this.selector) + ", cannot call trigger() on empty Wrapper")
);
};
ErrorWrapper.prototype.update = function update () {
throwError(
"update has been removed from vue-test-utils." +
"All updates are now synchronous by default"
);
};
ErrorWrapper.prototype.destroy = function destroy () {
throwError(
("find did not return " + (this.selector) + ", cannot call destroy() on empty Wrapper")
);
};
//
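/**
 * VNode helpers: flatten a vnode tree (following component children),
 * drop vnodes that share a DOM element, and filter the result by ref
 * name or CSS selector.
 */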
function findAllVNodes (vnode, nodes) {
if ( nodes === void 0 ) nodes = [];
nodes.push(vnode);
if (Array.isArray(vnode.children)) {
vnode.children.forEach(function (childVNode) {
findAllVNodes(childVNode, nodes);
});
}
if (vnode.child) {
findAllVNodes(vnode.child._vnode, nodes);
}
return nodes
}
function removeDuplicateNodes (vNodes) {
var vNodeElms = vNodes.map(function (vNode) { return vNode.elm; });
return vNodes.filter(
function (vNode, index) { return index === vNodeElms.indexOf(vNode.elm); }
)
}
function nodeMatchesRef (node, refName) {
return node.data && node.data.ref === refName
}
function findVNodesByRef (vNode, refName) {
var nodes = findAllVNodes(vNode);
var refFilteredNodes = nodes.filter(function (node) { return nodeMatchesRef(node, refName); });
// Only return refs defined on top-level VNode to provide the same
  // behavior as selecting via vm.$refs.{someRefName}
var mainVNodeFilteredNodes = refFilteredNodes.filter(
function (node) { return !!vNode.context.$refs[node.data.ref]; }
);
return removeDuplicateNodes(mainVNodeFilteredNodes)
}
function nodeMatchesSelector (node, selector) {
return node.elm && node.elm.getAttribute && node.elm.matches(selector)
}
function findVNodesBySelector (vNode, selector) {
var nodes = findAllVNodes(vNode);
var filteredNodes = nodes.filter(function (node) { return nodeMatchesSelector(node, selector); }
);
return removeDuplicateNodes(filteredNodes)
}
function findVnodes (
vnode,
vm,
selectorType,
selector
) {
if (selectorType === REF_SELECTOR) {
if (!vm) {
throwError(
"$ref selectors can only be used on Vue component " + "wrappers"
);
}
// $FlowIgnore
return findVNodesByRef(vnode, selector.ref)
}
// $FlowIgnore
return findVNodesBySelector(vnode, selector)
}
//
function findDOMNodes (
element,
selector
) {
var nodes = [];
if (!element || !element.querySelectorAll || !element.matches) {
return nodes
}
if (element.matches(selector)) {
nodes.push(element);
}
// $FlowIgnore
return nodes.concat([].slice.call(element.querySelectorAll(selector)))
}
//
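/**
 * Core lookup used by wrapper.find/findAll/contains: resolves the
 * selector type, then searches Vue components, $refs, vnodes or plain
 * DOM nodes and returns the matching components, vnodes or elements.
 */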
function find (
vm,
vnode,
element,
selector
) {
var selectorType = getSelectorTypeOrThrow(selector, 'find');
if (!vnode && !vm && selectorType !== DOM_SELECTOR) {
throwError(
"cannot find a Vue instance on a DOM node. The node " +
"you are calling find on does not exist in the " +
"VDom. Are you adding the node as innerHTML?"
);
}
if (selectorType === COMPONENT_SELECTOR || selectorType === NAME_SELECTOR) {
var root = vm || vnode;
if (!root) {
return []
}
return findVueComponents(root, selectorType, selector)
}
if (
vm &&
vm.$refs &&
selector.ref in vm.$refs &&
vm.$refs[selector.ref] instanceof Vue
) {
return [vm.$refs[selector.ref]]
}
if (vnode) {
var nodes = findVnodes(vnode, vm, selectorType, selector);
if (selectorType !== DOM_SELECTOR) {
return nodes
}
return nodes.length > 0 ? nodes : findDOMNodes(element, selector)
}
return findDOMNodes(element, selector)
}
//
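/**
 * Wraps a node in the right wrapper type: vnodes backed by a
 * component instance and raw Vue instances get a VueWrapper,
 * everything else a plain Wrapper.
 */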
function createWrapper (
node,
options
) {
var componentInstance = node.componentInstance || node.child;
if (componentInstance) {
return new VueWrapper(componentInstance, options)
}
return node instanceof Vue
? new VueWrapper(node, options)
: new Wrapper(node, options)
}
//
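/**
 * Watcher ordering used in sync mode: walks each watcher's deps and
 * sorts their subscriber lists by watcher id so synchronous updates
 * fire in the same order Vue's async scheduler would use.
 */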
var i = 0;
function orderDeps (watcher) {
watcher.deps.forEach(function (dep) {
if (dep._sortedId === i) {
return
}
dep._sortedId = i;
dep.subs.forEach(orderDeps);
dep.subs = dep.subs.sort(function (a, b) { return a.id - b.id; });
});
}
function orderVmWatchers (vm) {
if (vm._watchers) {
vm._watchers.forEach(orderDeps);
}
if (vm._computedWatchers) {
Object.keys(vm._computedWatchers).forEach(function (computedWatcher) {
orderDeps(vm._computedWatchers[computedWatcher]);
});
}
vm._watcher && orderDeps(vm._watcher);
vm.$children.forEach(orderVmWatchers);
}
function orderWatchers (vm) {
orderVmWatchers(vm);
i++;
}
function recursivelySetData (vm, target, obj) {
Object.keys(obj).forEach(function (key) {
var val = obj[key];
if (isPlainObject(val)) {
recursivelySetData(vm, target[key], val);
} else {
vm.$set(target, key, val);
}
});
}
//
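/**
 * Base wrapper around a vnode or DOM element. Exposes read-only
 * vnode/element/vm/options properties and the query and interaction
 * API (find, trigger, setProps, ...); VueWrapper extends it for
 * mounted Vue instances. Illustrative usage (assuming `wrapper` is a
 * mounted wrapper): wrapper.find('button').trigger('click')
 */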
var Wrapper = function Wrapper (
node,
options,
isVueWrapper
) {
var vnode = node instanceof Element ? null : node;
var element = node instanceof Element ? node : node.elm;
// Prevent redefine by VueWrapper
if (!isVueWrapper) {
// $FlowIgnore
Object.defineProperty(this, 'vnode', {
get: function () { return vnode; },
set: function () { return throwError('wrapper.vnode is read-only'); }
});
// $FlowIgnore
Object.defineProperty(this, 'element', {
get: function () { return element; },
set: function () { return throwError('wrapper.element is read-only'); }
});
// $FlowIgnore
Object.defineProperty(this, 'vm', {
get: function () { return undefined; },
set: function () { return throwError('wrapper.vm is read-only'); }
});
}
var frozenOptions = Object.freeze(options);
// $FlowIgnore
Object.defineProperty(this, 'options', {
get: function () { return frozenOptions; },
set: function () { return throwError('wrapper.options is read-only'); }
});
if (
this.vnode &&
(this.vnode[FUNCTIONAL_OPTIONS] || this.vnode.functionalContext)
) {
this.isFunctionalComponent = true;
}
this.version = Number(
((Vue.version.split('.')[0]) + "." + (Vue.version.split('.')[1]))
);
};
Wrapper.prototype.at = function at () {
throwError('at() must be called on a WrapperArray');
};
/**
* Returns an Object containing all the attribute/value pairs on the element.
*/
Wrapper.prototype.attributes = function attributes () {
var attributes = this.element.attributes;
var attributeMap = {};
for (var i = 0; i < attributes.length; i++) {
var att = attributes.item(i);
attributeMap[att.localName] = att.value;
}
return attributeMap
};
/**
* Returns an Array containing all the classes on the element
*/
Wrapper.prototype.classes = function classes () {
var this$1 = this;
var className = this.element.getAttribute('class');
var classes = className ? className.split(' ') : [];
// Handle converting cssmodules identifiers back to the original class name
if (this.vm && this.vm.$style) {
var cssModuleIdentifiers = {};
var moduleIdent;
Object.keys(this.vm.$style).forEach(function (key) {
moduleIdent = this$1.vm && this$1.vm.$style[key];
// CSS Modules may be multi-class if they extend others.
// Extended classes should be already present in $style.
if (moduleIdent) {
moduleIdent = moduleIdent.split(' ')[0];
cssModuleIdentifiers[moduleIdent] = key;
}
});
classes = classes.map(
function (className) { return cssModuleIdentifiers[className] || className; }
);
}
return classes
};
/**
* Checks if wrapper contains provided selector.
*/
Wrapper.prototype.contains = function contains (selector) {
var selectorType = getSelectorTypeOrThrow(selector, 'contains');
var nodes = find(this.vm, this.vnode, this.element, selector);
var is = selectorType === REF_SELECTOR ? false : this.is(selector);
return nodes.length > 0 || is
};
/**
* Returns an object containing custom events emitted by the Wrapper vm
*/
Wrapper.prototype.emitted = function emitted (
event
) {
if (!this._emitted && !this.vm) {
throwError("wrapper.emitted() can only be called on a Vue instance");
}
if (event) {
return this._emitted[event]
}
return this._emitted
};
/**
* Returns an Array containing custom events emitted by the Wrapper vm
*/
Wrapper.prototype.emittedByOrder = function emittedByOrder () {
if (!this._emittedByOrder && !this.vm) {
throwError(
"wrapper.emittedByOrder() can only be called on a Vue instance"
);
}
return this._emittedByOrder
};
/**
 * Checks that the wrapper still exists. Returns false once the wrapped Vue instance has been destroyed.
*/
Wrapper.prototype.exists = function exists () {
if (this.vm) {
return !!this.vm && !this.vm._isDestroyed
}
return true
};
Wrapper.prototype.filter = function filter () {
throwError('filter() must be called on a WrapperArray');
};
/**
 * Checks if the wrapper is visible. Returns false if the element or
 * any ancestor element has display: none or visibility: hidden.
*/
Wrapper.prototype.visible = function visible () {
warn(
"visible has been deprecated and will be removed in " +
"version 1, use isVisible instead"
);
var element = this.element;
while (element) {
if (
element.style &&
(element.style.visibility === 'hidden' ||
element.style.display === 'none')
) {
return false
}
element = element.parentElement;
}
return true
};
/**
* Checks if wrapper has an attribute with matching value
*/
Wrapper.prototype.hasAttribute = function hasAttribute (attribute, value) {
warn(
"hasAttribute() has been deprecated and will be " +
"removed in version 1.0.0. Use attributes() " +
"instead—https://vue-test-utils.vuejs.org/api/wrapper/#attributes"
);
if (typeof attribute !== 'string') {
throwError(
"wrapper.hasAttribute() must be passed attribute as a string"
);
}
if (typeof value !== 'string') {
throwError(
"wrapper.hasAttribute() must be passed value as a string"
);
}
  // assert the attribute is present with the given value
  return !!(this.element && this.element.getAttribute(attribute) === value)
};
/**
* Asserts wrapper has a class name
*/
Wrapper.prototype.hasClass = function hasClass (className) {
var this$1 = this;
warn(
"hasClass() has been deprecated and will be removed " +
"in version 1.0.0. Use classes() " +
"instead—https://vue-test-utils.vuejs.org/api/wrapper/#classes"
);
var targetClass = className;
if (typeof targetClass !== 'string') {
throwError('wrapper.hasClass() must be passed a string');
}
// if $style is available and has a matching target, use that instead.
if (this.vm && this.vm.$style && this.vm.$style[targetClass]) {
targetClass = this.vm.$style[targetClass];
}
var containsAllClasses = targetClass
.split(' ')
.every(function (target) { return this$1.element.classList.contains(target); });
return !!(this.element && containsAllClasses)
};
/**
* Asserts wrapper has a prop name
*/
Wrapper.prototype.hasProp = function hasProp (prop, value) {
warn(
"hasProp() has been deprecated and will be removed " +
"in version 1.0.0. Use props() " +
"instead—https://vue-test-utils.vuejs.org/api/wrapper/#props"
);
if (!this.isVueInstance()) {
throwError('wrapper.hasProp() must be called on a Vue instance');
}
if (typeof prop !== 'string') {
throwError('wrapper.hasProp() must be passed prop as a string');
}
// $props object does not exist in Vue 2.1.x, so use
// $options.propsData instead
if (
this.vm &&
this.vm.$options &&
this.vm.$options.propsData &&
this.vm.$options.propsData[prop] === value
) {
return true
}
return !!this.vm && !!this.vm.$props && this.vm.$props[prop] === value
};
/**
* Checks if wrapper has a style with value
*/
Wrapper.prototype.hasStyle = function hasStyle (style, value) {
warn(
"hasStyle() has been deprecated and will be removed " +
"in version 1.0.0. Use wrapper.element.style " +
"instead"
);
if (typeof style !== 'string') {
throwError("wrapper.hasStyle() must be passed style as a string");
}
if (typeof value !== 'string') {
    throwError('wrapper.hasStyle() must be passed value as a string');
}
/* istanbul ignore next */
if (
navigator.userAgent.includes &&
(navigator.userAgent.includes('node.js') ||
navigator.userAgent.includes('jsdom'))
) {
warn(
"wrapper.hasStyle is not fully supported when " +
"running jsdom - only inline styles are supported"
);
}
var body = document.querySelector('body');
var mockElement = document.createElement('div');
if (!(body instanceof Element)) {
return false
}
var mockNode = body.insertBefore(mockElement, null);
// $FlowIgnore : Flow thinks style[style] returns a number
mockElement.style[style] = value;
if (!this.options.attachedToDocument && (this.vm || this.vnode)) {
// $FlowIgnore : Possible null value, will be removed in 1.0.0
var vm = this.vm || this.vnode.context.$root;
body.insertBefore(vm.$root._vnode.elm, null);
}
var elStyle = window.getComputedStyle(this.element)[style];
var mockNodeStyle = window.getComputedStyle(mockNode)[style];
return !!(elStyle && mockNodeStyle && elStyle === mockNodeStyle)
};
/**
* Finds first node in tree of the current wrapper that
* matches the provided selector.
*/
Wrapper.prototype.find = function find$$1 (selector) {
var nodes = find(this.vm, this.vnode, this.element, selector);
if (nodes.length === 0) {
if (selector.ref) {
return new ErrorWrapper(("ref=\"" + (selector.ref) + "\""))
}
return new ErrorWrapper(
typeof selector === 'string' ? selector : 'Component'
)
}
// Using CSS Selector, returns a VueWrapper instance if the root element
// binds a Vue instance.
if (nodes[0].elm === this.element) {
return this
}
return createWrapper(nodes[0], this.options)
};
/**
 * Finds all nodes in the tree of the current wrapper that match
 * the provided selector.
*/
Wrapper.prototype.findAll = function findAll$1 (selector) {
var this$1 = this;
getSelectorTypeOrThrow(selector, 'findAll');
var nodes = find(this.vm, this.vnode, this.element, selector);
var wrappers = nodes.map(function (node) {
// Using CSS Selector, returns a VueWrapper instance if the root element
// binds a Vue instance.
return node.elm === this$1.element
? this$1
: createWrapper(node, this$1.options)
});
return new WrapperArray(wrappers)
};
/**
* Returns HTML of element as a string
*/
Wrapper.prototype.html = function html () {
return this.element.outerHTML
};
/**
* Checks if node matches selector
*/
Wrapper.prototype.is = function is (selector) {
var selectorType = getSelectorTypeOrThrow(selector, 'is');
if (selectorType === NAME_SELECTOR) {
if (!this.vm) {
return false
}
return vmCtorMatchesName(this.vm, selector.name)
}
if (selectorType === COMPONENT_SELECTOR) {
if (!this.vm) {
return false
}
if (selector.functional) {
return vmFunctionalCtorMatchesSelector(this.vm._vnode, selector._Ctor)
}
return vmCtorMatchesSelector(this.vm, selector)
}
if (selectorType === REF_SELECTOR) {
throwError('$ref selectors can not be used with wrapper.is()');
}
if (typeof selector === 'object') {
return false
}
return !!(
this.element.getAttribute &&
this.element.matches(selector)
)
};
/**
* Checks if node is empty
*/
Wrapper.prototype.isEmpty = function isEmpty () {
if (!this.vnode) {
return this.element.innerHTML === ''
}
if (this.vnode.children) {
return this.vnode.children.every(function (vnode) { return vnode.isComment; })
}
return (
this.vnode.children === undefined || this.vnode.children.length === 0
)
};
/**
* Checks if node is visible
*/
Wrapper.prototype.isVisible = function isVisible () {
var element = this.element;
while (element) {
if (
element.style &&
(element.style.visibility === 'hidden' ||
element.style.display === 'none')
) {
return false
}
element = element.parentElement;
}
return true
};
/**
* Checks if wrapper is a vue instance
*/
Wrapper.prototype.isVueInstance = function isVueInstance () {
return !!this.vm
};
/**
* Returns name of component, or tag name if node is not a Vue component
*/
Wrapper.prototype.name = function name () {
if (this.vm) {
return this.vm.$options.name
}
if (!this.vnode) {
return this.element.tagName
}
return this.vnode.tag
};
/**
 * Returns an Object containing the prop name/value pairs of the component
*/
Wrapper.prototype.props = function props () {
var this$1 = this;
if (this.isFunctionalComponent) {
throwError(
"wrapper.props() cannot be called on a mounted " +
"functional component."
);
}
if (!this.vm) {
throwError('wrapper.props() must be called on a Vue instance');
}
var props = {};
var keys = this.vm && this.vm.$options._propKeys;
if (keys) {
keys.forEach(function (key) {
if (this$1.vm) {
props[key] = this$1.vm[key];
}
});
}
return props
};
/**
* Sets vm data
*/
Wrapper.prototype.setData = function setData (data) {
if (this.isFunctionalComponent) {
throwError(
"wrapper.setData() cannot be called on a functional " +
"component"
);
}
if (!this.vm) {
throwError(
"wrapper.setData() can only be called on a Vue " +
"instance"
);
}
recursivelySetData(this.vm, this.vm, data);
};
/**
* Sets vm computed
*/
Wrapper.prototype.setComputed = function setComputed (computed) {
var this$1 = this;
if (!this.isVueInstance()) {
throwError(
"wrapper.setComputed() can only be called on a Vue " +
"instance"
);
}
warn(
"setComputed() has been deprecated and will be " +
"removed in version 1.0.0. You can overwrite " +
"computed properties by passing a computed object " +
"in the mounting options"
);
Object.keys(computed).forEach(function (key) {
if (this$1.version > 2.1) {
// $FlowIgnore : Problem with possibly null this.vm
if (!this$1.vm._computedWatchers[key]) {
throwError(
"wrapper.setComputed() was passed a value that " +
"does not exist as a computed property on the " +
"Vue instance. Property " + key + " does not exist " +
"on the Vue instance"
);
}
// $FlowIgnore : Problem with possibly null this.vm
this$1.vm._computedWatchers[key].value = computed[key];
// $FlowIgnore : Problem with possibly null this.vm
this$1.vm._computedWatchers[key].getter = function () { return computed[key]; };
} else {
var isStore = false;
// $FlowIgnore : Problem with possibly null this.vm
this$1.vm._watchers.forEach(function (watcher) {
if (watcher.getter.vuex && key in watcher.vm.$options.store.getters) {
watcher.vm.$options.store.getters = Object.assign({}, watcher.vm.$options.store.getters);
Object.defineProperty(watcher.vm.$options.store.getters, key, {
get: function () {
return computed[key]
}
});
isStore = true;
}
});
// $FlowIgnore : Problem with possibly null this.vm
if (!isStore && !this$1.vm._watchers.some(function (w) { return w.getter.name === key; })) {
throwError(
"wrapper.setComputed() was passed a value that does " +
"not exist as a computed property on the Vue instance. " +
"Property " + key + " does not exist on the Vue instance"
);
}
// $FlowIgnore : Problem with possibly null this.vm
this$1.vm._watchers.forEach(function (watcher) {
if (watcher.getter.name === key) {
watcher.value = computed[key];
watcher.getter = function () { return computed[key]; };
}
});
}
});
// $FlowIgnore : Problem with possibly null this.vm
this.vm._watchers.forEach(function (watcher) {
watcher.run();
});
};
/**
* Sets vm methods
*/
Wrapper.prototype.setMethods = function setMethods (methods) {
var this$1 = this;
if (!this.isVueInstance()) {
throwError(
"wrapper.setMethods() can only be called on a Vue " +
"instance"
);
}
Object.keys(methods).forEach(function (key) {
// $FlowIgnore : Problem with possibly null this.vm
this$1.vm[key] = methods[key];
// $FlowIgnore : Problem with possibly null this.vm
this$1.vm.$options.methods[key] = methods[key];
});
if (this.vnode) {
var context = this.vnode.context;
if (context.$options.render) { context._update(context._render()); }
}
};
/**
* Sets vm props
*/
Wrapper.prototype.setProps = function setProps (data) {
var this$1 = this;
var originalConfig = Vue.config.silent;
Vue.config.silent = config.silent;
if (this.isFunctionalComponent) {
throwError(
"wrapper.setProps() cannot be called on a " +
"functional component"
);
}
if (!this.vm) {
throwError(
"wrapper.setProps() can only be called on a Vue " +
"instance"
);
}
Object.keys(data).forEach(function (key) {
if (
!this$1.vm ||
!this$1.vm.$options._propKeys ||
!this$1.vm.$options._propKeys.some(function (prop) { return prop === key; })
) {
throwError(
"wrapper.setProps() called with " + key + " property which " +
"is not defined on the component"
);
}
if (
typeof data[key] === 'object' &&
data[key] !== null &&
// $FlowIgnore : Problem with possibly null this.vm
data[key] === this$1.vm[key]
) {
throwError(
"wrapper.setProps() called with the same object " +
"of the existing " + key + " property. " +
"You must call wrapper.setProps() with a new object " +
"to trigger reactivity"
);
}
if (this$1.vm && this$1.vm._props) {
// Set actual props value
this$1.vm._props[key] = data[key];
// $FlowIgnore : Problem with possibly null this.vm
this$1.vm[key] = data[key];
} else {
// $FlowIgnore : Problem with possibly null this.vm.$options
this$1.vm.$options.propsData[key] = data[key];
// $FlowIgnore : Problem with possibly null this.vm
this$1.vm[key] = data[key];
// $FlowIgnore : Need to call this twice to fix watcher bug in 2.0.x
this$1.vm[key] = data[key];
}
});
// $FlowIgnore : Problem with possibly null this.vm
this.vm.$forceUpdate();
// $FlowIgnore : Problem with possibly null this.vm
orderWatchers(this.vm || this.vnode.context.$root);
Vue.config.silent = originalConfig;
};
/**
* Sets element value and triggers input event
*/
Wrapper.prototype.setValue = function setValue (value) {
var tagName = this.element.tagName;
var type = this.attributes().type;
if (tagName === 'SELECT') {
// $FlowIgnore
this.element.value = value;
this.trigger('change');
} else if (tagName === 'OPTION') {
throwError(
"wrapper.setValue() cannot be called on an <option> " +
"element. Use wrapper.setSelected() instead"
);
} else if (tagName === 'INPUT' && type === 'checkbox') {
throwError(
"wrapper.setValue() cannot be called on a <input " +
"type=\"checkbox\" /> element. Use " +
"wrapper.setChecked() instead"
);
} else if (tagName === 'INPUT' && type === 'radio') {
throwError(
"wrapper.setValue() cannot be called on a <input " +
"type=\"radio\" /> element. Use wrapper.setChecked() " +
"instead"
);
} else if (tagName === 'INPUT' || tagName === 'TEXTAREA') {
// $FlowIgnore
this.element.value = value;
this.trigger('input');
} else {
throwError("wrapper.setValue() cannot be called on this element");
}
};
/**
* Checks radio button or checkbox element
*/
Wrapper.prototype.setChecked = function setChecked (checked) {
if ( checked === void 0 ) checked = true;
if (typeof checked !== 'boolean') {
throwError('wrapper.setChecked() must be passed a boolean');
}
var tagName = this.element.tagName;
var type = this.attributes().type;
if (tagName === 'SELECT') {
throwError(
"wrapper.setChecked() cannot be called on a " +
"<select> element. Use wrapper.setSelected() " +
"instead"
);
} else if (tagName === 'INPUT' && type === 'checkbox') {
// $FlowIgnore
if (this.element.checked !== checked) {
if (!navigator.userAgent.includes('jsdom')) {
// $FlowIgnore
this.element.checked = checked;
}
this.trigger('click');
this.trigger('change');
}
} else if (tagName === 'INPUT' && type === 'radio') {
if (!checked) {
throwError(
"wrapper.setChecked() cannot be called with " +
"parameter false on a <input type=\"radio\" /> " +
"element."
);
} else {
// $FlowIgnore
if (!this.element.checked) {
this.trigger('click');
this.trigger('change');
}
}
} else if (tagName === 'INPUT' || tagName === 'TEXTAREA') {
throwError(
"wrapper.setChecked() cannot be called on \"text\" " +
"inputs. Use wrapper.setValue() instead"
);
} else {
throwError("wrapper.setChecked() cannot be called on this element");
}
};
/**
* Selects <option></option> element
*/
Wrapper.prototype.setSelected = function setSelected () {
var tagName = this.element.tagName;
var type = this.attributes().type;
if (tagName === 'OPTION') {
// $FlowIgnore
this.element.selected = true;
// $FlowIgnore
if (this.element.parentElement.tagName === 'OPTGROUP') {
// $FlowIgnore
createWrapper(this.element.parentElement.parentElement, this.options)
.trigger('change');
} else {
// $FlowIgnore
createWrapper(this.element.parentElement, this.options)
.trigger('change');
}
} else if (tagName === 'SELECT') {
throwError(
"wrapper.setSelected() cannot be called on select. " +
"Call it on one of its options"
);
} else if (tagName === 'INPUT' && type === 'checkbox') {
throwError(
"wrapper.setSelected() cannot be called on a <input " +
"type=\"checkbox\" /> element. Use " +
"wrapper.setChecked() instead"
);
} else if (tagName === 'INPUT' && type === 'radio') {
throwError(
"wrapper.setSelected() cannot be called on a <input " +
"type=\"radio\" /> element. Use wrapper.setChecked() " +
"instead"
);
} else if (tagName === 'INPUT' || tagName === 'TEXTAREA') {
throwError(
"wrapper.setSelected() cannot be called on \"text\" " +
"inputs. Use wrapper.setValue() instead"
);
} else {
throwError("wrapper.setSelected() cannot be called on this element");
}
};
/**
 * Returns the trimmed text content of the wrapper element
*/
Wrapper.prototype.text = function text () {
return this.element.textContent.trim()
};
/**
* Calls destroy on vm
*/
Wrapper.prototype.destroy = function destroy () {
if (!this.isVueInstance()) {
throwError("wrapper.destroy() can only be called on a Vue instance");
}
if (this.element.parentNode) {
this.element.parentNode.removeChild(this.element);
}
// $FlowIgnore
this.vm.$destroy();
};
/**
* Dispatches a DOM event on wrapper
*/
Wrapper.prototype.trigger = function trigger (type, options) {
if ( options === void 0 ) options = {};
if (typeof type !== 'string') {
throwError('wrapper.trigger() must be passed a string');
}
if (options.target) {
throwError(
"you cannot set the target value of an event. See " +
"the notes section of the docs for more " +
"details—https://vue-test-utils.vuejs.org/api/wrapper/trigger.html"
);
}
// Don't fire event on a disabled element
if (this.attributes().disabled) {
return
}
var modifiers = {
enter: 13,
tab: 9,
delete: 46,
esc: 27,
space: 32,
up: 38,
down: 40,
left: 37,
right: 39,
end: 35,
home: 36,
backspace: 8,
insert: 45,
pageup: 33,
pagedown: 34
};
var event = type.split('.');
var eventObject;
// Fallback for IE10,11 - https://stackoverflow.com/questions/26596123
if (typeof window.Event === 'function') {
eventObject = new window.Event(event[0], {
bubbles: true,
cancelable: true
});
} else {
eventObject = document.createEvent('Event');
eventObject.initEvent(event[0], true, true);
}
if (options) {
Object.keys(options).forEach(function (key) {
// $FlowIgnore
eventObject[key] = options[key];
});
}
if (event.length === 2) {
// $FlowIgnore
eventObject.keyCode = modifiers[event[1]];
}
this.element.dispatchEvent(eventObject);
if (this.vnode) {
orderWatchers(this.vm || this.vnode.context.$root);
}
};
Wrapper.prototype.update = function update () {
warn(
"update has been removed from vue-test-utils. All " +
"updates are now synchronous by default"
);
};
//
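/**
 * sync-mode support: recursively flags every watcher on the vm and
 * its children as synchronous, and wraps _update so updated() hooks
 * still run after each synchronous re-render (Vue >= 2.1).
 */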
function setDepsSync (dep) {
dep.subs.forEach(setWatcherSync);
}
function setWatcherSync (watcher) {
if (watcher.sync === true) {
return
}
watcher.sync = true;
watcher.deps.forEach(setDepsSync);
}
function setWatchersToSync (vm) {
if (vm._watchers) {
vm._watchers.forEach(setWatcherSync);
}
if (vm._computedWatchers) {
Object.keys(vm._computedWatchers).forEach(function (computedWatcher) {
setWatcherSync(vm._computedWatchers[computedWatcher]);
});
}
setWatcherSync(vm._watcher);
vm.$children.forEach(setWatchersToSync);
// preventing double registration
if (!vm.$_vueTestUtils_updateInSetWatcherSync) {
vm.$_vueTestUtils_updateInSetWatcherSync = vm._update;
vm._update = function (vnode, hydrating) {
var this$1 = this;
this.$_vueTestUtils_updateInSetWatcherSync(vnode, hydrating);
if (VUE_VERSION >= 2.1 && this._isMounted && this.$options.updated) {
this.$options.updated.forEach(function (handler) {
handler.call(this$1);
});
}
};
}
}
//
var VueWrapper = (function (Wrapper$$1) {
function VueWrapper (vm, options) {
Wrapper$$1.call(this, vm._vnode, options, true);
// $FlowIgnore : issue with defineProperty
Object.defineProperty(this, 'vnode', {
get: function () { return vm._vnode; },
set: function () { return throwError('wrapper.vnode is read-only'); }
});
// $FlowIgnore
Object.defineProperty(this, 'element', {
get: function () { return vm.$el; },
set: function () { return throwError('wrapper.element is read-only'); }
});
// $FlowIgnore
Object.defineProperty(this, 'vm', {
get: function () { return vm; },
set: function () { return throwError('wrapper.vm is read-only'); }
});
if (options.sync) {
setWatchersToSync(vm);
orderWatchers(vm);
}
this.isFunctionalComponent = vm.$options._isFunctionalContainer;
this._emitted = vm.__emitted;
this._emittedByOrder = vm.__emittedByOrder;
}
if ( Wrapper$$1 ) VueWrapper.__proto__ = Wrapper$$1;
VueWrapper.prototype = Object.create( Wrapper$$1 && Wrapper$$1.prototype );
VueWrapper.prototype.constructor = VueWrapper;
return VueWrapper;
}(Wrapper));
//
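/**
 * Slot helpers: compile a slot string inside a <div> wrapper using
 * the parent's render proxy, tag the produced vnodes with their slot
 * name, and flatten the slots option into a single array of vnodes.
 */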
function createVNodes (
vm,
slotValue
) {
var el = vueTemplateCompiler.compileToFunctions(("<div>" + slotValue + "</div>"));
var _staticRenderFns = vm._renderProxy.$options.staticRenderFns;
// version < 2.5
if (!vm._renderProxy._staticTrees) {
vm._renderProxy._staticTrees = [];
}
vm._renderProxy.$options.staticRenderFns = el.staticRenderFns;
var vnode = el.render.call(vm._renderProxy, vm.$createElement);
vm._renderProxy.$options.staticRenderFns = _staticRenderFns;
return vnode.children
}
function createVNodesForSlot (
vm,
slotValue,
name
) {
var vnode;
if (typeof slotValue === 'string') {
var vnodes = createVNodes(vm, slotValue);
vnode = vnodes[0];
} else {
vnode = vm.$createElement(slotValue);
}
if (vnode.data) {
vnode.data.slot = name;
} else {
vnode.data = { slot: name };
}
return vnode
}
function createSlotVNodes (
vm,
slots
) {
return Object.keys(slots).reduce(function (acc, key) {
var content = slots[key];
if (Array.isArray(content)) {
var nodes = content.map(
function (slotDef) { return createVNodesForSlot(vm, slotDef, key); }
);
return acc.concat(nodes)
}
return acc.concat(createVNodesForSlot(vm, content, key))
}, [])
}
//
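/**
 * Copies each mocked property onto the local Vue prototype (and
 * defines it reactively) so every component created during the test
 * sees it; warns when a plugin has made the property read-only.
 */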
function addMocks (
mockedProperties,
Vue$$1
) {
Object.keys(mockedProperties).forEach(function (key) {
try {
Vue$$1.prototype[key] = mockedProperties[key];
} catch (e) {
warn(
"could not overwrite property " + key + ", this is " +
"usually caused by a plugin that has added " +
"the property as a read-only value"
);
}
Vue.util.defineReactive(Vue$$1, key, mockedProperties[key]);
});
}
//
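/**
 * Event logging: wraps $emit on every instance so emitted events are
 * recorded both per event name (__emitted) and in order
 * (__emittedByOrder) for wrapper.emitted() / wrapper.emittedByOrder().
 */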
function logEvents (
vm,
emitted,
emittedByOrder
) {
var emit = vm.$emit;
vm.$emit = function (name) {
var args = [], len = arguments.length - 1;
while ( len-- > 0 ) args[ len ] = arguments[ len + 1 ];
(emitted[name] || (emitted[name] = [])).push(args);
emittedByOrder.push({ name: name, args: args });
return emit.call.apply(emit, [ vm, name ].concat( args ))
};
}
function addEventLogger (vue) {
vue.mixin({
beforeCreate: function () {
this.__emitted = Object.create(null);
this.__emittedByOrder = [];
logEvents(this, this.__emitted, this.__emittedByOrder);
}
});
}
//
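/**
 * Recursively compiles string templates into render functions with
 * vue-template-compiler, covering the component itself, its
 * registered child components without render functions, and its
 * extends chain.
 */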
function compileTemplate (component) {
if (component.template) {
Object.assign(component, vueTemplateCompiler.compileToFunctions(component.template));
}
if (component.components) {
Object.keys(component.components).forEach(function (c) {
var cmp = component.components[c];
if (!cmp.render) {
compileTemplate(cmp);
}
});
}
if (component.extends) {
compileTemplate(component.extends);
}
if (component.extendOptions && !component.options.render) {
compileTemplate(component.options);
}
}
//
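/**
 * Stub helpers: validate values passed via the stubs option and build
 * stub components that keep the original component's core options
 * (name, props, listeners, classes, ...) while rendering either a
 * user supplied template or an empty `<name>-stub` placeholder tag.
 */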
function isVueComponentStub (comp) {
return comp && comp.template || isVueComponent(comp)
}
function isValidStub (stub) {
return (
(!!stub && typeof stub === 'string') ||
stub === true ||
isVueComponentStub(stub)
)
}
function resolveComponent (obj, component) {
return obj[component] ||
obj[hyphenate(component)] ||
obj[camelize(component)] ||
obj[capitalize(camelize(component))] ||
obj[capitalize(component)] ||
{}
}
function isRequiredComponent (name) {
return (
name === 'KeepAlive' || name === 'Transition' || name === 'TransitionGroup'
)
}
function getCoreProperties (componentOptions) {
return {
attrs: componentOptions.attrs,
name: componentOptions.name,
on: componentOptions.on,
key: componentOptions.key,
ref: componentOptions.ref,
props: componentOptions.props,
domProps: componentOptions.domProps,
class: componentOptions.class,
staticClass: componentOptions.staticClass,
staticStyle: componentOptions.staticStyle,
style: componentOptions.style,
normalizedStyle: componentOptions.normalizedStyle,
nativeOn: componentOptions.nativeOn,
functional: componentOptions.functional
}
}
function createStubFromString (
templateString,
originalComponent,
name
) {
if (!vueTemplateCompiler.compileToFunctions) {
throwError(
"vueTemplateCompiler is undefined, you must pass " +
"precompiled components if vue-template-compiler is " +
"undefined"
);
}
if (templateContainsComponent(templateString, name)) {
throwError('options.stub cannot contain a circular reference');
}
var componentOptions = typeof originalComponent === 'function'
? originalComponent.extendOptions
: originalComponent;
return Object.assign({}, getCoreProperties(componentOptions),
vueTemplateCompiler.compileToFunctions(templateString))
}
function createBlankStub (
originalComponent,
name
) {
var componentOptions = typeof originalComponent === 'function'
? originalComponent.extendOptions
: originalComponent;
var tagName = name + "-stub";
// ignoreElements does not exist in Vue 2.0.x
if (Vue.config.ignoredElements) {
Vue.config.ignoredElements.push(tagName);
}
return Object.assign({}, getCoreProperties(componentOptions),
{render: function render (h) {
return h(
tagName,
!componentOptions.functional && this.$slots.default
)
}})
}
function createComponentStubs (
originalComponents,
stubs
) {
if ( originalComponents === void 0 ) originalComponents = {};
var components = {};
if (!stubs) {
return components
}
if (Array.isArray(stubs)) {
stubs.forEach(function (stub) {
if (stub === false) {
return
}
if (typeof stub !== 'string') {
throwError("each item in an options.stubs array must be a " + "string");
}
var component = resolveComponent(originalComponents, stub);
components[stub] = createBlankStub(component, stub);
});
} else {
var stubsObject = (stubs);
Object.keys(stubsObject).forEach(function (stubName) {
var stub = stubsObject[stubName];
if (stub === false) {
return
}
if (!isValidStub(stub)) {
throwError(
"options.stub values must be passed a string or " + "component"
);
}
if (stub === true) {
var component = resolveComponent(originalComponents, stubName);
components[stubName] = createBlankStub(component, stubName);
return
}
if (typeof stub !== 'string' && componentNeedsCompiling(stub)) {
compileTemplate(stub);
}
if (originalComponents[stubName]) {
// Remove cached constructor
delete originalComponents[stubName]._Ctor;
if (typeof stub === 'string') {
components[stubName] = createStubFromString(
stub,
originalComponents[stubName],
stubName
);
} else {
var stubObject = (stub);
components[stubName] = Object.assign({}, stubObject,
{name: originalComponents[stubName].name});
}
} else {
if (typeof stub === 'string') {
if (!vueTemplateCompiler.compileToFunctions) {
throwError(
"vueTemplateCompiler is undefined, you must pass " +
"precompiled components if vue-template-compiler is " +
"undefined"
);
}
components[stubName] = Object.assign({}, vueTemplateCompiler.compileToFunctions(stub));
} else {
var stubObject$1 = (stub);
components[stubName] = Object.assign({}, stubObject$1);
}
}
});
}
return components
}
function stubComponents (
components,
stubbedComponents
) {
Object.keys(components).forEach(function (component) {
var cmp = components[component];
var componentOptions = typeof cmp === 'function'
? cmp.extendOptions
: cmp;
// Remove cached constructor
delete componentOptions._Ctor;
if (!componentOptions.name) {
componentOptions.name = component;
}
stubbedComponents[component] = createBlankStub(componentOptions, component);
});
}
function createComponentStubsForAll (component) {
var stubbedComponents = {};
if (component.components) {
stubComponents(component.components, stubbedComponents);
}
stubbedComponents[component.name] = createBlankStub(component, component.name);
var extended = component.extends;
// Loop through extended component chains to stub all child components
while (extended) {
if (extended.components) {
stubComponents(extended.components, stubbedComponents);
}
extended = extended.extends;
}
if (component.extendOptions && component.extendOptions.components) {
stubComponents(component.extendOptions.components, stubbedComponents);
}
return stubbedComponents
}
function createComponentStubsForGlobals (
instance
) {
var components = {};
for (var c in instance.options.components) {
if (isRequiredComponent(c)) {
continue
}
components[c] = createBlankStub(instance.options.components[c], c);
delete instance.options.components[c]._Ctor;
delete components[c]._Ctor;
}
return components
}
//
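/**
 * Options in this list are handled by vue-test-utils itself and are
 * removed before the remaining options are merged into the component
 * constructor.
 */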
var MOUNTING_OPTIONS = [
'attachToDocument',
'mocks',
'slots',
'localVue',
'stubs',
'context',
'clone',
'attrs',
'listeners',
'propsData'
];
function extractInstanceOptions (
options
) {
var instanceOptions = Object.assign({}, options);
MOUNTING_OPTIONS.forEach(function (mountingOption) {
delete instanceOptions[mountingOption];
});
return instanceOptions
}
//
function isValidSlot (slot) {
return (
isVueComponent(slot) ||
typeof slot === 'string'
)
}
function requiresTemplateCompiler (slot) {
if (typeof slot === 'string' && !vueTemplateCompiler.compileToFunctions) {
throwError(
"vueTemplateCompiler is undefined, you must pass " +
"precompiled components if vue-template-compiler is " +
"undefined"
);
}
}
function validateSlots (slots) {
Object.keys(slots).forEach(function (key) {
var slot = Array.isArray(slots[key]) ? slots[key] : [slots[key]];
slot.forEach(function (slotValue) {
if (!isValidSlot(slotValue)) {
throwError(
"slots[key] must be a Component, string or an array " +
"of Components"
);
}
requiresTemplateCompiler(slotValue);
});
});
}
//
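/**
 * Wraps a functional component in a stateful container component so
 * it can be mounted like a regular component, forwarding the context
 * option (or slots) to the functional render call.
 */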
function createFunctionalComponent (
component,
mountingOptions
) {
if (mountingOptions.context && typeof mountingOptions.context !== 'object') {
throwError('mount.context must be an object');
}
if (mountingOptions.slots) {
validateSlots(mountingOptions.slots);
}
return {
render: function render (h) {
return h(
component,
mountingOptions.context || component.FunctionalRenderContext,
(mountingOptions.context &&
mountingOptions.context.children &&
mountingOptions.context.children.map(
function (x) { return (typeof x === 'function' ? x(h) : x); }
)) ||
createSlotVNodes(this, mountingOptions.slots || {})
)
},
name: component.name,
_isFunctionalContainer: true
}
}
//
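/**
 * scopedSlots support: each scoped slot template string is compiled
 * to a render function and called with the slot props, either merged
 * into the render context (destructuring slot-scope) or bound under
 * the declared slot-scope name. Requires Vue 2.5+ and a
 * non-PhantomJS environment.
 */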
function isDestructuringSlotScope (slotScope) {
return slotScope[0] === '{' && slotScope[slotScope.length - 1] === '}'
}
function getVueTemplateCompilerHelpers () {
var vue = new Vue();
var helpers = {};
var names = [
'_c',
'_o',
'_n',
'_s',
'_l',
'_t',
'_q',
'_i',
'_m',
'_f',
'_k',
'_b',
'_v',
'_e',
'_u',
'_g'
];
names.forEach(function (name) {
helpers[name] = vue._renderProxy[name];
});
return helpers
}
function validateEnvironment () {
if (window.navigator.userAgent.match(/PhantomJS/i)) {
throwError(
"the scopedSlots option does not support PhantomJS. " +
"Please use Puppeteer, or pass a component."
);
}
if (vueVersion < 2.5) {
throwError("the scopedSlots option is only supported in " + "vue@2.5+.");
}
}
function validateTemplate (template) {
if (template.trim().substr(0, 9) === '<template') {
throwError(
"the scopedSlots option does not support a template " +
"tag as the root element."
);
}
}
function createScopedSlots (
scopedSlotsOption
) {
var scopedSlots = {};
if (!scopedSlotsOption) {
return scopedSlots
}
validateEnvironment();
var helpers = getVueTemplateCompilerHelpers();
var loop = function ( name ) {
var template = scopedSlotsOption[name];
    validateTemplate(template);
var render = vueTemplateCompiler.compileToFunctions(template).render;
var domParser = new window.DOMParser();
var _document = domParser.parseFromString(template, 'text/html');
var slotScope = _document.body.firstChild.getAttribute(
'slot-scope'
);
var isDestructuring = isDestructuringSlotScope(slotScope);
scopedSlots[name] = function (props) {
var obj;
if (isDestructuring) {
return render.call(Object.assign({}, helpers, props))
} else {
return render.call(Object.assign({}, helpers, ( obj = {}, obj[slotScope] = props, obj)))
}
};
};
for (var name in scopedSlotsOption) loop( name );
return scopedSlots
}
//
function compileTemplateForSlots (slots) {
Object.keys(slots).forEach(function (key) {
var slot = Array.isArray(slots[key]) ? slots[key] : [slots[key]];
slot.forEach(function (slotValue) {
if (componentNeedsCompiling(slotValue)) {
compileTemplate(slotValue);
}
});
});
}
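/**
 * Builds the parent instance that will render the component under
 * test: applies mocks and event logging to the local Vue copy, stubs
 * child components, compiles string templates and slots, and returns
 * a parent component instance whose render function mounts the target
 * component with the supplied propsData, listeners, attrs and
 * (scoped) slots.
 */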
function createInstance (
component,
options,
_Vue,
elm
) {
// Remove cached constructor
delete component._Ctor;
// mounting options are vue-test-utils specific
//
// instance options are options that are passed to the
// root instance when it's instantiated
//
// component options are the root components options
var componentOptions = typeof component === 'function'
? component.extendOptions
: component;
var instanceOptions = extractInstanceOptions(options);
if (options.mocks) {
addMocks(options.mocks, _Vue);
}
if (
(component.options && component.options.functional) ||
component.functional
) {
component = createFunctionalComponent(component, options);
} else if (options.context) {
throwError(
"mount.context can only be used when mounting a " + "functional component"
);
}
if (componentNeedsCompiling(component)) {
compileTemplate(component);
}
addEventLogger(_Vue);
var stubComponents = createComponentStubs(
// $FlowIgnore
component.components,
// $FlowIgnore
options.stubs
);
if (options.stubs) {
instanceOptions.components = Object.assign({}, instanceOptions.components,
// $FlowIgnore
stubComponents);
}
_Vue.mixin({
created: function created () {
Object.assign(
this.$options.components,
stubComponents
);
}
});
Object.keys(componentOptions.components || {}).forEach(function (c) {
if (
componentOptions.components[c].extendOptions &&
!instanceOptions.components[c]
) {
if (options.logModifiedComponents) {
warn(
"an extended child component <" + c + "> has been modified " +
"to ensure it has the correct instance properties. " +
"This means it is not possible to find the component " +
"with a component selector. To find the component, " +
"you must stub it manually using the stubs mounting " +
"option."
);
}
instanceOptions.components[c] = _Vue.extend(
componentOptions.components[c]
);
}
});
if (component.options) {
component.options._base = _Vue;
}
var Constructor = vueVersion < 2.3 && typeof component === 'function'
? component.extend(instanceOptions)
: _Vue.extend(component).extend(instanceOptions);
Object.keys(instanceOptions.components || {}).forEach(function (key) {
Constructor.component(key, instanceOptions.components[key]);
_Vue.component(key, instanceOptions.components[key]);
});
if (options.slots) {
compileTemplateForSlots(options.slots);
// $FlowIgnore
validateSlots(options.slots);
}
// Objects are not resolved in extended components in Vue < 2.5
// https://github.com/vuejs/vue/issues/6436
if (
options.provide &&
typeof options.provide === 'object' &&
vueVersion < 2.5
) {
var obj = Object.assign({}, options.provide);
options.provide = function () { return obj; };
}
var scopedSlots = createScopedSlots(options.scopedSlots);
if (options.parentComponent && !isPlainObject(options.parentComponent)) {
throwError(
"options.parentComponent should be a valid Vue component " +
"options object"
);
}
var parentComponentOptions = options.parentComponent || {};
parentComponentOptions.provide = options.provide;
parentComponentOptions.render = function (h) {
var slots = options.slots
? createSlotVNodes(this, options.slots)
: undefined;
return h(
Constructor,
{
ref: 'vm',
props: options.propsData,
on: options.listeners,
attrs: options.attrs,
scopedSlots: scopedSlots
},
slots
)
};
var Parent = _Vue.extend(parentComponentOptions);
return new Parent()
}
//
function createElement () {
if (document) {
var elem = document.createElement('div');
if (document.body) {
document.body.appendChild(elem);
}
return elem
}
}
/**
* Removes all key-value entries from the list cache.
*
* @private
* @name clear
* @memberOf ListCache
*/
function listCacheClear() {
this.__data__ = [];
this.size = 0;
}
var _listCacheClear = listCacheClear;
/**
* Performs a
* [`SameValueZero`](http://ecma-international.org/ecma-262/7.0/#sec-samevaluezero)
* comparison between two values to determine if they are equivalent.
*
* @static
* @memberOf _
* @since 4.0.0
* @category Lang
* @param {*} value The value to compare.
* @param {*} other The other value to compare.
* @returns {boolean} Returns `true` if the values are equivalent, else `false`.
* @example
*
* var object = { 'a': 1 };
* var other = { 'a': 1 };
*
* _.eq(object, object);
* // => true
*
* _.eq(object, other);
* // => false
*
* _.eq('a', 'a');
* // => true
*
* _.eq('a', Object('a'));
* // => false
*
* _.eq(NaN, NaN);
* // => true
*/
function eq(value, other) {
return value === other || (value !== value && other !== other);
}
var eq_1 = eq;
/**
* Gets the index at which the `key` is found in `array` of key-value pairs.
*
* @private
* @param {Array} array The array to inspect.
* @param {*} key The key to search for.
* @returns {number} Returns the index of the matched value, else `-1`.
*/
function assocIndexOf(array, key) {
var length = array.length;
while (length--) {
if (eq_1(array[length][0], key)) {
return length;
}
}
return -1;
}
var _assocIndexOf = assocIndexOf;
/** Used for built-in method references. */
var arrayProto = Array.prototype;
/** Built-in value references. */
var splice = arrayProto.splice;
/**
* Removes `key` and its value from the list cache.
*
* @private
* @name delete
* @memberOf ListCache
* @param {string} key The key of the value to remove.
* @returns {boolean} Returns `true` if the entry was removed, else `false`.
*/
function listCacheDelete(key) {
var data = this.__data__,
index = _assocIndexOf(data, key);
if (index < 0) {
return false;
}
var lastIndex = data.length - 1;
if (index == lastIndex) {
data.pop();
} else {
splice.call(data, index, 1);
}
--this.size;
return true;
}
var _listCacheDelete = listCacheDelete;
/**
* Gets the list cache value for `key`.
*
* @private
* @name get
* @memberOf ListCache
* @param {string} key The key of the value to get.
* @returns {*} Returns the entry value.
*/
function listCacheGet(key) {
var data = this.__data__,
index = _assocIndexOf(data, key);
return index < 0 ? undefined : data[index][1];
}
var _listCacheGet = listCacheGet;
/**
* Checks if a list cache value for `key` exists.
*
* @private
* @name has
* @memberOf ListCache
* @param {string} key The key of the entry to check.
* @returns {boolean} Returns `true` if an entry for `key` exists, else `false`.
*/
function listCacheHas(key) {
return _assocIndexOf(this.__data__, key) > -1;
}
var _listCacheHas = listCacheHas;
/**
* Sets the list cache `key` to `value`.
*
* @private
* @name set
* @memberOf ListCache
* @param {string} key The key of the value to set.
* @param {*} value The value to set.
* @returns {Object} Returns the list cache instance.
*/
function listCacheSet(key, value) {
var data = this.__data__,
index = _assocIndexOf(data, key);
if (index < 0) {
++this.size;
data.push([key, value]);
} else {
data[index][1] = value;
}
return this;
}
var _listCacheSet = listCacheSet;
/**
 * Creates a list cache object.
*
* @private
* @constructor
* @param {Array} [entries] The key-value pairs to cache.
*/
function ListCache(entries) {
var this$1 = this;
var index = -1,
length = entries == null ? 0 : entries.length;
this.clear();
while (++index < length) {
var entry = entries[index];
this$1.set(entry[0], entry[1]);
}
}
// Add methods to `ListCache`.
ListCache.prototype.clear = _listCacheClear;
ListCache.prototype['delete'] = _listCacheDelete;
ListCache.prototype.get = _listCacheGet;
ListCache.prototype.has = _listCacheHas;
ListCache.prototype.set = _listCacheSet;
var _ListCache = ListCache;
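// Illustrative sketch (not executed by the bundle): `ListCache` keeps entries as
// an array of [key, value] pairs and scans them with `assocIndexOf`, so lookups
// are linear but `NaN` keys still match thanks to `eq`.
//
//   var cache = new _ListCache([['a', 1]]);
//   cache.set(NaN, 2);
//   cache.get(NaN);       // => 2
//   cache['delete']('a'); // => true
//   cache.size;           // => 1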
/**
* Removes all key-value entries from the stack.
*
* @private
* @name clear
* @memberOf Stack
*/
function stackClear() {
this.__data__ = new _ListCache;
this.size = 0;
}
var _stackClear = stackClear;
/**
* Removes `key` and its value from the stack.
*
* @private
* @name delete
* @memberOf Stack
* @param {string} key The key of the value to remove.
* @returns {boolean} Returns `true` if the entry was removed, else `false`.
*/
function stackDelete(key) {
var data = this.__data__,
result = data['delete'](key);
this.size = data.size;
return result;
}
var _stackDelete = stackDelete;
/**
* Gets the stack value for `key`.
*
* @private
* @name get
* @memberOf Stack
* @param {string} key The key of the value to get.
* @returns {*} Returns the entry value.
*/
function stackGet(key) {
return this.__data__.get(key);
}
var _stackGet = stackGet;
/**
* Checks if a stack value for `key` exists.
*
* @private
* @name has
* @memberOf Stack
* @param {string} key The key of the entry to check.
* @returns {boolean} Returns `true` if an entry for `key` exists, else `false`.
*/
function stackHas(key) {
return this.__data__.has(key);
}
var _stackHas = stackHas;
var commonjsGlobal = typeof window !== 'undefined' ? window : typeof global !== 'undefined' ? global : typeof self !== 'undefined' ? self : {};
function createCommonjsModule(fn, module) {
return module = { exports: {} }, fn(module, module.exports), module.exports;
}
/** Detect free variable `global` from Node.js. */
var freeGlobal = typeof commonjsGlobal == 'object' && commonjsGlobal && commonjsGlobal.Object === Object && commonjsGlobal;
var _freeGlobal = freeGlobal;
/** Detect free variable `self`. */
var freeSelf = typeof self == 'object' && self && self.Object === Object && self;
/** Used as a reference to the global object. */
var root = _freeGlobal || freeSelf || Function('return this')();
var _root = root;
/** Built-in value references. */
var Symbol = _root.Symbol;
var _Symbol = Symbol;
/** Used for built-in method references. */
var objectProto = Object.prototype;
/** Used to check objects for own properties. */
var hasOwnProperty = objectProto.hasOwnProperty;
/**
* Used to resolve the
* [`toStringTag`](http://ecma-international.org/ecma-262/7.0/#sec-object.prototype.tostring)
* of values.
*/
var nativeObjectToString = objectProto.toString;
/** Built-in value references. */
var symToStringTag = _Symbol ? _Symbol.toStringTag : undefined;
/**
* A specialized version of `baseGetTag` which ignores `Symbol.toStringTag` values.
*
* @private
* @param {*} value The value to query.
* @returns {string} Returns the raw `toStringTag`.
*/
function getRawTag(value) {
var isOwn = hasOwnProperty.call(value, symToStringTag),
tag = value[symToStringTag];
try {
value[symToStringTag] = undefined;
var unmasked = true;
} catch (e) {}
var result = nativeObjectToString.call(value);
if (unmasked) {
if (isOwn) {
value[symToStringTag] = tag;
} else {
delete value[symToStringTag];
}
}
return result;
}
var _getRawTag = getRawTag;
/** Used for built-in method references. */
var objectProto$1 = Object.prototype;
/**
* Used to resolve the
* [`toStringTag`](http://ecma-international.org/ecma-262/7.0/#sec-object.prototype.tostring)
* of values.
*/
var nativeObjectToString$1 = objectProto$1.toString;
/**
* Converts `value` to a string using `Object.prototype.toString`.
*
* @private
* @param {*} value The value to convert.
* @returns {string} Returns the converted string.
*/
function objectToString(value) {
return nativeObjectToString$1.call(value);
}
var _objectToString = objectToString;
/** `Object#toString` result references. */
var nullTag = '[object Null]',
undefinedTag = '[object Undefined]';
/** Built-in value references. */
var symToStringTag$1 = _Symbol ? _Symbol.toStringTag : undefined;
/**
* The base implementation of `getTag` without fallbacks for buggy environments.
*
* @private
* @param {*} value The value to query.
* @returns {string} Returns the `toStringTag`.
*/
function baseGetTag(value) {
if (value == null) {
return value === undefined ? undefinedTag : nullTag;
}
return (symToStringTag$1 && symToStringTag$1 in Object(value))
? _getRawTag(value)
: _objectToString(value);
}
var _baseGetTag = baseGetTag;
/**
* Checks if `value` is the
* [language type](http://www.ecma-international.org/ecma-262/7.0/#sec-ecmascript-language-types)
* of `Object`. (e.g. arrays, functions, objects, regexes, `new Number(0)`, and `new String('')`)
*
* @static
* @memberOf _
* @since 0.1.0
* @category Lang
* @param {*} value The value to check.
* @returns {boolean} Returns `true` if `value` is an object, else `false`.
* @example
*
* _.isObject({});
* // => true
*
* _.isObject([1, 2, 3]);
* // => true
*
* _.isObject(_.noop);
* // => true
*
* _.isObject(null);
* // => false
*/
function isObject(value) {
var type = typeof value;
return value != null && (type == 'object' || type == 'function');
}
var isObject_1 = isObject;
/** `Object#toString` result references. */
var asyncTag = '[object AsyncFunction]',
funcTag = '[object Function]',
genTag = '[object GeneratorFunction]',
proxyTag = '[object Proxy]';
/**
* Checks if `value` is classified as a `Function` object.
*
* @static
* @memberOf _
* @since 0.1.0
* @category Lang
* @param {*} value The value to check.
* @returns {boolean} Returns `true` if `value` is a function, else `false`.
* @example
*
* _.isFunction(_);
* // => true
*
* _.isFunction(/abc/);
* // => false
*/
function isFunction(value) {
if (!isObject_1(value)) {
return false;
}
// The use of `Object#toString` avoids issues with the `typeof` operator
// in Safari 9 which returns 'object' for typed arrays and other constructors.
var tag = _baseGetTag(value);
return tag == funcTag || tag == genTag || tag == asyncTag || tag == proxyTag;
}
var isFunction_1 = isFunction;
/** Used to detect overreaching core-js shims. */
var coreJsData = _root['__core-js_shared__'];
var _coreJsData = coreJsData;
/** Used to detect methods masquerading as native. */
var maskSrcKey = (function() {
var uid = /[^.]+$/.exec(_coreJsData && _coreJsData.keys && _coreJsData.keys.IE_PROTO || '');
return uid ? ('Symbol(src)_1.' + uid) : '';
}());
/**
* Checks if `func` has its source masked.
*
* @private
* @param {Function} func The function to check.
* @returns {boolean} Returns `true` if `func` is masked, else `false`.
*/
function isMasked(func) {
return !!maskSrcKey && (maskSrcKey in func);
}
var _isMasked = isMasked;
/** Used for built-in method references. */
var funcProto = Function.prototype;
/** Used to resolve the decompiled source of functions. */
var funcToString = funcProto.toString;
/**
* Converts `func` to its source code.
*
* @private
* @param {Function} func The function to convert.
* @returns {string} Returns the source code.
*/
function toSource(func) {
if (func != null) {
try {
return funcToString.call(func);
} catch (e) {}
try {
return (func + '');
} catch (e) {}
}
return '';
}
var _toSource = toSource;
/**
* Used to match `RegExp`
* [syntax characters](http://ecma-international.org/ecma-262/7.0/#sec-patterns).
*/
var reRegExpChar = /[\\^$.*+?()[\]{}|]/g;
/** Used to detect host constructors (Safari). */
var reIsHostCtor = /^\[object .+?Constructor\]$/;
/** Used for built-in method references. */
var funcProto$1 = Function.prototype,
objectProto$2 = Object.prototype;
/** Used to resolve the decompiled source of functions. */
var funcToString$1 = funcProto$1.toString;
/** Used to check objects for own properties. */
var hasOwnProperty$1 = objectProto$2.hasOwnProperty;
/** Used to detect if a method is native. */
var reIsNative = RegExp('^' +
funcToString$1.call(hasOwnProperty$1).replace(reRegExpChar, '\\$&')
.replace(/hasOwnProperty|(function).*?(?=\\\()| for .+?(?=\\\])/g, '$1.*?') + '$'
);
/**
* The base implementation of `_.isNative` without bad shim checks.
*
* @private
* @param {*} value The value to check.
* @returns {boolean} Returns `true` if `value` is a native function,
* else `false`.
*/
function baseIsNative(value) {
if (!isObject_1(value) || _isMasked(value)) {
return false;
}
var pattern = isFunction_1(value) ? reIsNative : reIsHostCtor;
return pattern.test(_toSource(value));
}
var _baseIsNative = baseIsNative;
/**
* Gets the value at `key` of `object`.
*
* @private
* @param {Object} [object] The object to query.
* @param {string} key The key of the property to get.
* @returns {*} Returns the property value.
*/
function getValue(object, key) {
return object == null ? undefined : object[key];
}
var _getValue = getValue;
/**
* Gets the native function at `key` of `object`.
*
* @private
* @param {Object} object The object to query.
* @param {string} key The key of the method to get.
* @returns {*} Returns the function if it's native, else `undefined`.
*/
function getNative(object, key) {
var value = _getValue(object, key);
return _baseIsNative(value) ? value : undefined;
}
var _getNative = getNative;
/* Built-in method references that are verified to be native. */
var Map = _getNative(_root, 'Map');
var _Map = Map;
/* Built-in method references that are verified to be native. */
var nativeCreate = _getNative(Object, 'create');
var _nativeCreate = nativeCreate;
/**
* Removes all key-value entries from the hash.
*
* @private
* @name clear
* @memberOf Hash
*/
function hashClear() {
this.__data__ = _nativeCreate ? _nativeCreate(null) : {};
this.size = 0;
}
var _hashClear = hashClear;
/**
* Removes `key` and its value from the hash.
*
* @private
* @name delete
* @memberOf Hash
* @param {string} key The key of the value to remove.
* @returns {boolean} Returns `true` if the entry was removed, else `false`.
*/
function hashDelete(key) {
var result = this.has(key) && delete this.__data__[key];
this.size -= result ? 1 : 0;
return result;
}
var _hashDelete = hashDelete;
/** Used to stand-in for `undefined` hash values. */
var HASH_UNDEFINED = '__lodash_hash_undefined__';
/** Used for built-in method references. */
var objectProto$3 = Object.prototype;
/** Used to check objects for own properties. */
var hasOwnProperty$2 = objectProto$3.hasOwnProperty;
/**
* Gets the hash value for `key`.
*
* @private
* @name get
* @memberOf Hash
* @param {string} key The key of the value to get.
* @returns {*} Returns the entry value.
*/
function hashGet(key) {
var data = this.__data__;
if (_nativeCreate) {
var result = data[key];
return result === HASH_UNDEFINED ? undefined : result;
}
return hasOwnProperty$2.call(data, key) ? data[key] : undefined;
}
var _hashGet = hashGet;
/** Used for built-in method references. */
var objectProto$4 = Object.prototype;
/** Used to check objects for own properties. */
var hasOwnProperty$3 = objectProto$4.hasOwnProperty;
/**
* Checks if a hash value for `key` exists.
*
* @private
* @name has
* @memberOf Hash
* @param {string} key The key of the entry to check.
* @returns {boolean} Returns `true` if an entry for `key` exists, else `false`.
*/
function hashHas(key) {
var data = this.__data__;
return _nativeCreate ? (data[key] !== undefined) : hasOwnProperty$3.call(data, key);
}
var _hashHas = hashHas;
/** Used to stand-in for `undefined` hash values. */
var HASH_UNDEFINED$1 = '__lodash_hash_undefined__';
/**
* Sets the hash `key` to `value`.
*
* @private
* @name set
* @memberOf Hash
* @param {string} key The key of the value to set.
* @param {*} value The value to set.
* @returns {Object} Returns the hash instance.
*/
function hashSet(key, value) {
var data = this.__data__;
this.size += this.has(key) ? 0 : 1;
data[key] = (_nativeCreate && value === undefined) ? HASH_UNDEFINED$1 : value;
return this;
}
var _hashSet = hashSet;
/**
* Creates a hash object.
*
* @private
* @constructor
* @param {Array} [entries] The key-value pairs to cache.
*/
function Hash(entries) {
var this$1 = this;
var index = -1,
length = entries == null ? 0 : entries.length;
this.clear();
while (++index < length) {
var entry = entries[index];
this$1.set(entry[0], entry[1]);
}
}
// Add methods to `Hash`.
Hash.prototype.clear = _hashClear;
Hash.prototype['delete'] = _hashDelete;
Hash.prototype.get = _hashGet;
Hash.prototype.has = _hashHas;
Hash.prototype.set = _hashSet;
var _Hash = Hash;
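// Illustrative sketch (not executed by the bundle): `Hash` stores values on a
// null-prototype object when `Object.create` is available, and substitutes the
// `HASH_UNDEFINED` sentinel so that a stored `undefined` value is still
// distinguishable from a missing key.
//
//   var hash = new _Hash();
//   hash.set('a', undefined);
//   hash.has('a'); // => true
//   hash.get('a'); // => undefined
//   hash.size;     // => 1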
/**
* Removes all key-value entries from the map.
*
* @private
* @name clear
* @memberOf MapCache
*/
function mapCacheClear() {
this.size = 0;
this.__data__ = {
'hash': new _Hash,
'map': new (_Map || _ListCache),
'string': new _Hash
};
}
var _mapCacheClear = mapCacheClear;
/**
 * Checks if `value` is suitable for use as a unique object key.
*
* @private
* @param {*} value The value to check.
* @returns {boolean} Returns `true` if `value` is suitable, else `false`.
*/
function isKeyable(value) {
var type = typeof value;
return (type == 'string' || type == 'number' || type == 'symbol' || type == 'boolean')
? (value !== '__proto__')
: (value === null);
}
var _isKeyable = isKeyable;
/**
* Gets the data for `map`.
*
* @private
* @param {Object} map The map to query.
* @param {string} key The reference key.
* @returns {*} Returns the map data.
*/
function getMapData(map, key) {
var data = map.__data__;
return _isKeyable(key)
? data[typeof key == 'string' ? 'string' : 'hash']
: data.map;
}
var _getMapData = getMapData;
/**
* Removes `key` and its value from the map.
*
* @private
* @name delete
* @memberOf MapCache
* @param {string} key The key of the value to remove.
* @returns {boolean} Returns `true` if the entry was removed, else `false`.
*/
function mapCacheDelete(key) {
var result = _getMapData(this, key)['delete'](key);
this.size -= result ? 1 : 0;
return result;
}
var _mapCacheDelete = mapCacheDelete;
/**
* Gets the map value for `key`.
*
* @private
* @name get
* @memberOf MapCache
* @param {string} key The key of the value to get.
* @returns {*} Returns the entry value.
*/
function mapCacheGet(key) {
return _getMapData(this, key).get(key);
}
var _mapCacheGet = mapCacheGet;
/**
* Checks if a map value for `key` exists.
*
* @private
* @name has
* @memberOf MapCache
* @param {string} key The key of the entry to check.
* @returns {boolean} Returns `true` if an entry for `key` exists, else `false`.
*/
function mapCacheHas(key) {
return _getMapData(this, key).has(key);
}
var _mapCacheHas = mapCacheHas;
/**
* Sets the map `key` to `value`.
*
* @private
* @name set
* @memberOf MapCache
* @param {string} key The key of the value to set.
* @param {*} value The value to set.
* @returns {Object} Returns the map cache instance.
*/
function mapCacheSet(key, value) {
var data = _getMapData(this, key),
size = data.size;
data.set(key, value);
this.size += data.size == size ? 0 : 1;
return this;
}
var _mapCacheSet = mapCacheSet;
/**
* Creates a map cache object to store key-value pairs.
*
* @private
* @constructor
* @param {Array} [entries] The key-value pairs to cache.
*/
function MapCache(entries) {
var this$1 = this;
var index = -1,
length = entries == null ? 0 : entries.length;
this.clear();
while (++index < length) {
var entry = entries[index];
this$1.set(entry[0], entry[1]);
}
}
// Add methods to `MapCache`.
MapCache.prototype.clear = _mapCacheClear;
MapCache.prototype['delete'] = _mapCacheDelete;
MapCache.prototype.get = _mapCacheGet;
MapCache.prototype.has = _mapCacheHas;
MapCache.prototype.set = _mapCacheSet;
var _MapCache = MapCache;
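// Illustrative sketch (not executed by the bundle): `MapCache` routes each key
// to one of three internal stores via `getMapData` -- strings go to a dedicated
// `Hash`, other keyable primitives to a second `Hash`, and everything else
// (objects, functions) to a native `Map`, or a `ListCache` fallback when `Map`
// is unavailable.
//
//   var cache = new _MapCache();
//   var objKey = {};
//   cache.set('name', 1); // stored in the 'string' hash
//   cache.set(42, 2);     // stored in the 'hash' store
//   cache.set(objKey, 3); // stored in the 'map' store
//   cache.get(objKey);    // => 3
//   cache.size;           // => 3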
/** Used as the size to enable large array optimizations. */
var LARGE_ARRAY_SIZE = 200;
/**
* Sets the stack `key` to `value`.
*
* @private
* @name set
* @memberOf Stack
* @param {string} key The key of the value to set.
* @param {*} value The value to set.
* @returns {Object} Returns the stack cache instance.
*/
function stackSet(key, value) {
var data = this.__data__;
if (data instanceof _ListCache) {
var pairs = data.__data__;
if (!_Map || (pairs.length < LARGE_ARRAY_SIZE - 1)) {
pairs.push([key, value]);
this.size = ++data.size;
return this;
}
data = this.__data__ = new _MapCache(pairs);
}
data.set(key, value);
this.size = data.size;
return this;
}
var _stackSet = stackSet;
/**
* Creates a stack cache object to store key-value pairs.
*
* @private
* @constructor
* @param {Array} [entries] The key-value pairs to cache.
*/
function Stack(entries) {
var data = this.__data__ = new _ListCache(entries);
this.size = data.size;
}
// Add methods to `Stack`.
Stack.prototype.clear = _stackClear;
Stack.prototype['delete'] = _stackDelete;
Stack.prototype.get = _stackGet;
Stack.prototype.has = _stackHas;
Stack.prototype.set = _stackSet;
var _Stack = Stack;
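// Illustrative sketch (not executed by the bundle): `Stack` starts out backed by
// a `ListCache` and, when a native `Map` exists, upgrades itself to a `MapCache`
// once the entry count approaches `LARGE_ARRAY_SIZE`, trading the cheap array
// representation for constant-time lookups on large traversals.
//
//   var stack = new _Stack();
//   stack.set('seen', true);
//   stack.has('seen');                     // => true
//   stack.__data__ instanceof _ListCache;  // => true (until it grows large)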
/**
* A specialized version of `_.forEach` for arrays without support for
* iteratee shorthands.
*
* @private
* @param {Array} [array] The array to iterate over.
* @param {Function} iteratee The function invoked per iteration.
* @returns {Array} Returns `array`.
*/
function arrayEach(array, iteratee) {
var index = -1,
length = array == null ? 0 : array.length;
while (++index < length) {
if (iteratee(array[index], index, array) === false) {
break;
}
}
return array;
}
var _arrayEach = arrayEach;
var defineProperty = (function() {
try {
var func = _getNative(Object, 'defineProperty');
func({}, '', {});
return func;
} catch (e) {}
}());
var _defineProperty = defineProperty;
/**
* The base implementation of `assignValue` and `assignMergeValue` without
* value checks.
*
* @private
* @param {Object} object The object to modify.
* @param {string} key The key of the property to assign.
* @param {*} value The value to assign.
*/
function baseAssignValue(object, key, value) {
if (key == '__proto__' && _defineProperty) {
_defineProperty(object, key, {
'configurable': true,
'enumerable': true,
'value': value,
'writable': true
});
} else {
object[key] = value;
}
}
var _baseAssignValue = baseAssignValue;
/** Used for built-in method references. */
var objectProto$5 = Object.prototype;
/** Used to check objects for own properties. */
var hasOwnProperty$4 = objectProto$5.hasOwnProperty;
/**
* Assigns `value` to `key` of `object` if the existing value is not equivalent
* using [`SameValueZero`](http://ecma-international.org/ecma-262/7.0/#sec-samevaluezero)
* for equality comparisons.
*
* @private
* @param {Object} object The object to modify.
* @param {string} key The key of the property to assign.
* @param {*} value The value to assign.
*/
function assignValue(object, key, value) {
var objValue = object[key];
if (!(hasOwnProperty$4.call(object, key) && eq_1(objValue, value)) ||
(value === undefined && !(key in object))) {
_baseAssignValue(object, key, value);
}
}
var _assignValue = assignValue;
/**
* Copies properties of `source` to `object`.
*
* @private
* @param {Object} source The object to copy properties from.
* @param {Array} props The property identifiers to copy.
* @param {Object} [object={}] The object to copy properties to.
* @param {Function} [customizer] The function to customize copied values.
* @returns {Object} Returns `object`.
*/
function copyObject(source, props, object, customizer) {
var isNew = !object;
object || (object = {});
var index = -1,
length = props.length;
while (++index < length) {
var key = props[index];
var newValue = customizer
? customizer(object[key], source[key], key, object, source)
: undefined;
if (newValue === undefined) {
newValue = source[key];
}
if (isNew) {
_baseAssignValue(object, key, newValue);
} else {
_assignValue(object, key, newValue);
}
}
return object;
}
var _copyObject = copyObject;
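// Illustrative sketch (not executed by the bundle): `copyObject` copies only the
// listed property identifiers, and takes the raw `baseAssignValue` path when it
// has to create the destination object itself.
//
//   _copyObject({ a: 1, b: 2 }, ['a'], {}); // => { a: 1 }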
/**
* The base implementation of `_.times` without support for iteratee shorthands
* or max array length checks.
*
* @private
* @param {number} n The number of times to invoke `iteratee`.
* @param {Function} iteratee The function invoked per iteration.
* @returns {Array} Returns the array of results.
*/
function baseTimes(n, iteratee) {
var index = -1,
result = Array(n);
while (++index < n) {
result[index] = iteratee(index);
}
return result;
}
var _baseTimes = baseTimes;
/**
* Checks if `value` is object-like. A value is object-like if it's not `null`
* and has a `typeof` result of "object".
*
* @static
* @memberOf _
* @since 4.0.0
* @category Lang
* @param {*} value The value to check.
* @returns {boolean} Returns `true` if `value` is object-like, else `false`.
* @example
*
* _.isObjectLike({});
* // => true
*
* _.isObjectLike([1, 2, 3]);
* // => true
*
* _.isObjectLike(_.noop);
* // => false
*
* _.isObjectLike(null);
* // => false
*/
function isObjectLike(value) {
return value != null && typeof value == 'object';
}
var isObjectLike_1 = isObjectLike;
/** `Object#toString` result references. */
var argsTag = '[object Arguments]';
/**
* The base implementation of `_.isArguments`.
*
* @private
* @param {*} value The value to check.
 * @returns {boolean} Returns `true` if `value` is an `arguments` object,
 *  else `false`.
*/
function baseIsArguments(value) {
return isObjectLike_1(value) && _baseGetTag(value) == argsTag;
}
var _baseIsArguments = baseIsArguments;
/** Used for built-in method references. */
var objectProto$6 = Object.prototype;
/** Used to check objects for own properties. */
var hasOwnProperty$5 = objectProto$6.hasOwnProperty;
/** Built-in value references. */
var propertyIsEnumerable = objectProto$6.propertyIsEnumerable;
/**
* Checks if `value` is likely an `arguments` object.
*
* @static
* @memberOf _
* @since 0.1.0
* @category Lang
* @param {*} value The value to check.
* @returns {boolean} Returns `true` if `value` is an `arguments` object,
* else `false`.
* @example
*
* _.isArguments(function() { return arguments; }());
* // => true
*
* _.isArguments([1, 2, 3]);
* // => false
*/
var isArguments = _baseIsArguments(function() { return arguments; }()) ? _baseIsArguments : function(value) {
return isObjectLike_1(value) && hasOwnProperty$5.call(value, 'callee') &&
!propertyIsEnumerable.call(value, 'callee');
};
var isArguments_1 = isArguments;
/**
* Checks if `value` is classified as an `Array` object.
*
* @static
* @memberOf _
* @since 0.1.0
* @category Lang
* @param {*} value The value to check.
* @returns {boolean} Returns `true` if `value` is an array, else `false`.
* @example
*
* _.isArray([1, 2, 3]);
* // => true
*
* _.isArray(document.body.children);
* // => false
*
* _.isArray('abc');
* // => false
*
* _.isArray(_.noop);
* // => false
*/
var isArray = Array.isArray;
var isArray_1 = isArray;
/**
* This method returns `false`.
*
* @static
* @memberOf _
* @since 4.13.0
* @category Util
* @returns {boolean} Returns `false`.
* @example
*
* _.times(2, _.stubFalse);
* // => [false, false]
*/
function stubFalse() {
return false;
}
var stubFalse_1 = stubFalse;
var isBuffer_1 = createCommonjsModule(function (module, exports) {
/** Detect free variable `exports`. */
var freeExports = 'object' == 'object' && exports && !exports.nodeType && exports;
/** Detect free variable `module`. */
var freeModule = freeExports && 'object' == 'object' && module && !module.nodeType && module;
/** Detect the popular CommonJS extension `module.exports`. */
var moduleExports = freeModule && freeModule.exports === freeExports;
/** Built-in value references. */
var Buffer = moduleExports ? _root.Buffer : undefined;
/* Built-in method references for those with the same name as other `lodash` methods. */
var nativeIsBuffer = Buffer ? Buffer.isBuffer : undefined;
/**
* Checks if `value` is a buffer.
*
* @static
* @memberOf _
* @since 4.3.0
* @category Lang
* @param {*} value The value to check.
* @returns {boolean} Returns `true` if `value` is a buffer, else `false`.
* @example
*
* _.isBuffer(new Buffer(2));
* // => true
*
* _.isBuffer(new Uint8Array(2));
* // => false
*/
var isBuffer = nativeIsBuffer || stubFalse_1;
module.exports = isBuffer;
});
/** Used as references for various `Number` constants. */
var MAX_SAFE_INTEGER = 9007199254740991;
/** Used to detect unsigned integer values. */
var reIsUint = /^(?:0|[1-9]\d*)$/;
/**
* Checks if `value` is a valid array-like index.
*
* @private
* @param {*} value The value to check.
* @param {number} [length=MAX_SAFE_INTEGER] The upper bounds of a valid index.
* @returns {boolean} Returns `true` if `value` is a valid index, else `false`.
*/
function isIndex(value, length) {
var type = typeof value;
length = length == null ? MAX_SAFE_INTEGER : length;
return !!length &&
(type == 'number' ||
(type != 'symbol' && reIsUint.test(value))) &&
(value > -1 && value % 1 == 0 && value < length);
}
var _isIndex = isIndex;
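// Illustrative sketch (not executed by the bundle): `isIndex` accepts unsigned
// integers (numbers or numeric strings) that fall below `length`.
//
//   _isIndex(2, 3);  // => true
//   _isIndex('1');   // => true
//   _isIndex('02');  // => false (leading zero fails reIsUint)
//   _isIndex(-1, 3); // => false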
/** Used as references for various `Number` constants. */
var MAX_SAFE_INTEGER$1 = 9007199254740991;
/**
* Checks if `value` is a valid array-like length.
*
* **Note:** This method is loosely based on
* [`ToLength`](http://ecma-international.org/ecma-262/7.0/#sec-tolength).
*
* @static
* @memberOf _
* @since 4.0.0
* @category Lang
* @param {*} value The value to check.
* @returns {boolean} Returns `true` if `value` is a valid length, else `false`.
* @example
*
* _.isLength(3);
* // => true
*
* _.isLength(Number.MIN_VALUE);
* // => false
*
* _.isLength(Infinity);
* // => false
*
* _.isLength('3');
* // => false
*/
function isLength(value) {
return typeof value == 'number' &&
value > -1 && value % 1 == 0 && value <= MAX_SAFE_INTEGER$1;
}
var isLength_1 = isLength;
/** `Object#toString` result references. */
var argsTag$1 = '[object Arguments]',
arrayTag = '[object Array]',
boolTag = '[object Boolean]',
dateTag = '[object Date]',
errorTag = '[object Error]',
funcTag$1 = '[object Function]',
mapTag = '[object Map]',
numberTag = '[object Number]',
objectTag = '[object Object]',
regexpTag = '[object RegExp]',
setTag = '[object Set]',
stringTag = '[object String]',
weakMapTag = '[object WeakMap]';
var arrayBufferTag = '[object ArrayBuffer]',
dataViewTag = '[object DataView]',
float32Tag = '[object Float32Array]',
float64Tag = '[object Float64Array]',
int8Tag = '[object Int8Array]',
int16Tag = '[object Int16Array]',
int32Tag = '[object Int32Array]',
uint8Tag = '[object Uint8Array]',
uint8ClampedTag = '[object Uint8ClampedArray]',
uint16Tag = '[object Uint16Array]',
uint32Tag = '[object Uint32Array]';
/** Used to identify `toStringTag` values of typed arrays. */
var typedArrayTags = {};
typedArrayTags[float32Tag] = typedArrayTags[float64Tag] =
typedArrayTags[int8Tag] = typedArrayTags[int16Tag] =
typedArrayTags[int32Tag] = typedArrayTags[uint8Tag] =
typedArrayTags[uint8ClampedTag] = typedArrayTags[uint16Tag] =
typedArrayTags[uint32Tag] = true;
typedArrayTags[argsTag$1] = typedArrayTags[arrayTag] =
typedArrayTags[arrayBufferTag] = typedArrayTags[boolTag] =
typedArrayTags[dataViewTag] = typedArrayTags[dateTag] =
typedArrayTags[errorTag] = typedArrayTags[funcTag$1] =
typedArrayTags[mapTag] = typedArrayTags[numberTag] =
typedArrayTags[objectTag] = typedArrayTags[regexpTag] =
typedArrayTags[setTag] = typedArrayTags[stringTag] =
typedArrayTags[weakMapTag] = false;
/**
* The base implementation of `_.isTypedArray` without Node.js optimizations.
*
* @private
* @param {*} value The value to check.
* @returns {boolean} Returns `true` if `value` is a typed array, else `false`.
*/
function baseIsTypedArray(value) {
return isObjectLike_1(value) &&
isLength_1(value.length) && !!typedArrayTags[_baseGetTag(value)];
}
var _baseIsTypedArray = baseIsTypedArray;
/**
* The base implementation of `_.unary` without support for storing metadata.
*
* @private
* @param {Function} func The function to cap arguments for.
* @returns {Function} Returns the new capped function.
*/
function baseUnary(func) {
return function(value) {
return func(value);
};
}
var _baseUnary = baseUnary;
var _nodeUtil = createCommonjsModule(function (module, exports) {
/** Detect free variable `exports`. */
var freeExports = 'object' == 'object' && exports && !exports.nodeType && exports;
/** Detect free variable `module`. */
var freeModule = freeExports && 'object' == 'object' && module && !module.nodeType && module;
/** Detect the popular CommonJS extension `module.exports`. */
var moduleExports = freeModule && freeModule.exports === freeExports;
/** Detect free variable `process` from Node.js. */
var freeProcess = moduleExports && _freeGlobal.process;
/** Used to access faster Node.js helpers. */
var nodeUtil = (function() {
try {
// Use `util.types` for Node.js 10+.
var types = freeModule && freeModule.require && freeModule.require('util').types;
if (types) {
return types;
}
// Legacy `process.binding('util')` for Node.js < 10.
return freeProcess && freeProcess.binding && freeProcess.binding('util');
} catch (e) {}
}());
module.exports = nodeUtil;
});
/* Node.js helper references. */
var nodeIsTypedArray = _nodeUtil && _nodeUtil.isTypedArray;
/**
* Checks if `value` is classified as a typed array.
*
* @static
* @memberOf _
* @since 3.0.0
* @category Lang
* @param {*} value The value to check.
* @returns {boolean} Returns `true` if `value` is a typed array, else `false`.
* @example
*
* _.isTypedArray(new Uint8Array);
* // => true
*
* _.isTypedArray([]);
* // => false
*/
var isTypedArray = nodeIsTypedArray ? _baseUnary(nodeIsTypedArray) : _baseIsTypedArray;
var isTypedArray_1 = isTypedArray;
/** Used for built-in method references. */
var objectProto$7 = Object.prototype;
/** Used to check objects for own properties. */
var hasOwnProperty$6 = objectProto$7.hasOwnProperty;
/**
* Creates an array of the enumerable property names of the array-like `value`.
*
* @private
* @param {*} value The value to query.
* @param {boolean} inherited Specify returning inherited property names.
* @returns {Array} Returns the array of property names.
*/
function arrayLikeKeys(value, inherited) {
var isArr = isArray_1(value),
isArg = !isArr && isArguments_1(value),
isBuff = !isArr && !isArg && isBuffer_1(value),
isType = !isArr && !isArg && !isBuff && isTypedArray_1(value),
skipIndexes = isArr || isArg || isBuff || isType,
result = skipIndexes ? _baseTimes(value.length, String) : [],
length = result.length;
for (var key in value) {
if ((inherited || hasOwnProperty$6.call(value, key)) &&
!(skipIndexes && (
// Safari 9 has enumerable `arguments.length` in strict mode.
key == 'length' ||
// Node.js 0.10 has enumerable non-index properties on buffers.
(isBuff && (key == 'offset' || key == 'parent')) ||
// PhantomJS 2 has enumerable non-index properties on typed arrays.
(isType && (key == 'buffer' || key == 'byteLength' || key == 'byteOffset')) ||
// Skip index properties.
_isIndex(key, length)
))) {
result.push(key);
}
}
return result;
}
var _arrayLikeKeys = arrayLikeKeys;
/** Used for built-in method references. */
var objectProto$8 = Object.prototype;
/**
* Checks if `value` is likely a prototype object.
*
* @private
* @param {*} value The value to check.
* @returns {boolean} Returns `true` if `value` is a prototype, else `false`.
*/
function isPrototype(value) {
var Ctor = value && value.constructor,
proto = (typeof Ctor == 'function' && Ctor.prototype) || objectProto$8;
return value === proto;
}
var _isPrototype = isPrototype;
/**
* Creates a unary function that invokes `func` with its argument transformed.
*
* @private
* @param {Function} func The function to wrap.
* @param {Function} transform The argument transform.
* @returns {Function} Returns the new function.
*/
function overArg(func, transform) {
return function(arg) {
return func(transform(arg));
};
}
var _overArg = overArg;
/* Built-in method references for those with the same name as other `lodash` methods. */
var nativeKeys = _overArg(Object.keys, Object);
var _nativeKeys = nativeKeys;
/** Used for built-in method references. */
var objectProto$9 = Object.prototype;
/** Used to check objects for own properties. */
var hasOwnProperty$7 = objectProto$9.hasOwnProperty;
/**
* The base implementation of `_.keys` which doesn't treat sparse arrays as dense.
*
* @private
* @param {Object} object The object to query.
* @returns {Array} Returns the array of property names.
*/
function baseKeys(object) {
if (!_isPrototype(object)) {
return _nativeKeys(object);
}
var result = [];
for (var key in Object(object)) {
if (hasOwnProperty$7.call(object, key) && key != 'constructor') {
result.push(key);
}
}
return result;
}
var _baseKeys = baseKeys;
/**
* Checks if `value` is array-like. A value is considered array-like if it's
* not a function and has a `value.length` that's an integer greater than or
* equal to `0` and less than or equal to `Number.MAX_SAFE_INTEGER`.
*
* @static
* @memberOf _
* @since 4.0.0
* @category Lang
* @param {*} value The value to check.
* @returns {boolean} Returns `true` if `value` is array-like, else `false`.
* @example
*
* _.isArrayLike([1, 2, 3]);
* // => true
*
* _.isArrayLike(document.body.children);
* // => true
*
* _.isArrayLike('abc');
* // => true
*
* _.isArrayLike(_.noop);
* // => false
*/
function isArrayLike(value) {
return value != null && isLength_1(value.length) && !isFunction_1(value);
}
var isArrayLike_1 = isArrayLike;
/**
* Creates an array of the own enumerable property names of `object`.
*
* **Note:** Non-object values are coerced to objects. See the
* [ES spec](http://ecma-international.org/ecma-262/7.0/#sec-object.keys)
* for more details.
*
* @static
* @since 0.1.0
* @memberOf _
* @category Object
* @param {Object} object The object to query.
* @returns {Array} Returns the array of property names.
* @example
*
* function Foo() {
* this.a = 1;
* this.b = 2;
* }
*
* Foo.prototype.c = 3;
*
* _.keys(new Foo);
* // => ['a', 'b'] (iteration order is not guaranteed)
*
* _.keys('hi');
* // => ['0', '1']
*/
function keys(object) {
return isArrayLike_1(object) ? _arrayLikeKeys(object) : _baseKeys(object);
}
var keys_1 = keys;
/**
* The base implementation of `_.assign` without support for multiple sources
* or `customizer` functions.
*
* @private
* @param {Object} object The destination object.
* @param {Object} source The source object.
* @returns {Object} Returns `object`.
*/
function baseAssign(object, source) {
return object && _copyObject(source, keys_1(source), object);
}
var _baseAssign = baseAssign;
/**
* This function is like
* [`Object.keys`](http://ecma-international.org/ecma-262/7.0/#sec-object.keys)
* except that it includes inherited enumerable properties.
*
* @private
* @param {Object} object The object to query.
* @returns {Array} Returns the array of property names.
*/
function nativeKeysIn(object) {
var result = [];
if (object != null) {
for (var key in Object(object)) {
result.push(key);
}
}
return result;
}
var _nativeKeysIn = nativeKeysIn;
/** Used for built-in method references. */
var objectProto$10 = Object.prototype;
/** Used to check objects for own properties. */
var hasOwnProperty$8 = objectProto$10.hasOwnProperty;
/**
* The base implementation of `_.keysIn` which doesn't treat sparse arrays as dense.
*
* @private
* @param {Object} object The object to query.
* @returns {Array} Returns the array of property names.
*/
function baseKeysIn(object) {
if (!isObject_1(object)) {
return _nativeKeysIn(object);
}
var isProto = _isPrototype(object),
result = [];
for (var key in object) {
if (!(key == 'constructor' && (isProto || !hasOwnProperty$8.call(object, key)))) {
result.push(key);
}
}
return result;
}
var _baseKeysIn = baseKeysIn;
/**
* Creates an array of the own and inherited enumerable property names of `object`.
*
* **Note:** Non-object values are coerced to objects.
*
* @static
* @memberOf _
* @since 3.0.0
* @category Object
* @param {Object} object The object to query.
* @returns {Array} Returns the array of property names.
* @example
*
* function Foo() {
* this.a = 1;
* this.b = 2;
* }
*
* Foo.prototype.c = 3;
*
* _.keysIn(new Foo);
* // => ['a', 'b', 'c'] (iteration order is not guaranteed)
*/
function keysIn$1(object) {
return isArrayLike_1(object) ? _arrayLikeKeys(object, true) : _baseKeysIn(object);
}
var keysIn_1 = keysIn$1;
/**
* The base implementation of `_.assignIn` without support for multiple sources
* or `customizer` functions.
*
* @private
* @param {Object} object The destination object.
* @param {Object} source The source object.
* @returns {Object} Returns `object`.
*/
function baseAssignIn(object, source) {
return object && _copyObject(source, keysIn_1(source), object);
}
var _baseAssignIn = baseAssignIn;
var _cloneBuffer = createCommonjsModule(function (module, exports) {
/** Detect free variable `exports`. */
var freeExports = 'object' == 'object' && exports && !exports.nodeType && exports;
/** Detect free variable `module`. */
var freeModule = freeExports && 'object' == 'object' && module && !module.nodeType && module;
/** Detect the popular CommonJS extension `module.exports`. */
var moduleExports = freeModule && freeModule.exports === freeExports;
/** Built-in value references. */
var Buffer = moduleExports ? _root.Buffer : undefined,
allocUnsafe = Buffer ? Buffer.allocUnsafe : undefined;
/**
* Creates a clone of `buffer`.
*
* @private
* @param {Buffer} buffer The buffer to clone.
* @param {boolean} [isDeep] Specify a deep clone.
* @returns {Buffer} Returns the cloned buffer.
*/
function cloneBuffer(buffer, isDeep) {
if (isDeep) {
return buffer.slice();
}
var length = buffer.length,
result = allocUnsafe ? allocUnsafe(length) : new buffer.constructor(length);
buffer.copy(result);
return result;
}
module.exports = cloneBuffer;
});
/**
* Copies the values of `source` to `array`.
*
* @private
* @param {Array} source The array to copy values from.
* @param {Array} [array=[]] The array to copy values to.
* @returns {Array} Returns `array`.
*/
function copyArray(source, array) {
var index = -1,
length = source.length;
array || (array = Array(length));
while (++index < length) {
array[index] = source[index];
}
return array;
}
var _copyArray = copyArray;
/**
* A specialized version of `_.filter` for arrays without support for
* iteratee shorthands.
*
* @private
* @param {Array} [array] The array to iterate over.
* @param {Function} predicate The function invoked per iteration.
* @returns {Array} Returns the new filtered array.
*/
function arrayFilter(array, predicate) {
var index = -1,
length = array == null ? 0 : array.length,
resIndex = 0,
result = [];
while (++index < length) {
var value = array[index];
if (predicate(value, index, array)) {
result[resIndex++] = value;
}
}
return result;
}
var _arrayFilter = arrayFilter;
/**
* This method returns a new empty array.
*
* @static
* @memberOf _
* @since 4.13.0
* @category Util
* @returns {Array} Returns the new empty array.
* @example
*
* var arrays = _.times(2, _.stubArray);
*
* console.log(arrays);
* // => [[], []]
*
* console.log(arrays[0] === arrays[1]);
* // => false
*/
function stubArray() {
return [];
}
var stubArray_1 = stubArray;
/** Used for built-in method references. */
var objectProto$11 = Object.prototype;
/** Built-in value references. */
var propertyIsEnumerable$1 = objectProto$11.propertyIsEnumerable;
/* Built-in method references for those with the same name as other `lodash` methods. */
var nativeGetSymbols = Object.getOwnPropertySymbols;
/**
* Creates an array of the own enumerable symbols of `object`.
*
* @private
* @param {Object} object The object to query.
* @returns {Array} Returns the array of symbols.
*/
var getSymbols = !nativeGetSymbols ? stubArray_1 : function(object) {
if (object == null) {
return [];
}
object = Object(object);
return _arrayFilter(nativeGetSymbols(object), function(symbol) {
return propertyIsEnumerable$1.call(object, symbol);
});
};
var _getSymbols = getSymbols;
/**
* Copies own symbols of `source` to `object`.
*
* @private
* @param {Object} source The object to copy symbols from.
* @param {Object} [object={}] The object to copy symbols to.
* @returns {Object} Returns `object`.
*/
function copySymbols(source, object) {
return _copyObject(source, _getSymbols(source), object);
}
var _copySymbols = copySymbols;
/**
* Appends the elements of `values` to `array`.
*
* @private
* @param {Array} array The array to modify.
* @param {Array} values The values to append.
* @returns {Array} Returns `array`.
*/
function arrayPush(array, values) {
var index = -1,
length = values.length,
offset = array.length;
while (++index < length) {
array[offset + index] = values[index];
}
return array;
}
var _arrayPush = arrayPush;
/** Built-in value references. */
var getPrototype = _overArg(Object.getPrototypeOf, Object);
var _getPrototype = getPrototype;
/* Built-in method references for those with the same name as other `lodash` methods. */
var nativeGetSymbols$1 = Object.getOwnPropertySymbols;
/**
* Creates an array of the own and inherited enumerable symbols of `object`.
*
* @private
* @param {Object} object The object to query.
* @returns {Array} Returns the array of symbols.
*/
var getSymbolsIn = !nativeGetSymbols$1 ? stubArray_1 : function(object) {
var result = [];
while (object) {
_arrayPush(result, _getSymbols(object));
object = _getPrototype(object);
}
return result;
};
var _getSymbolsIn = getSymbolsIn;
/**
* Copies own and inherited symbols of `source` to `object`.
*
* @private
* @param {Object} source The object to copy symbols from.
* @param {Object} [object={}] The object to copy symbols to.
* @returns {Object} Returns `object`.
*/
function copySymbolsIn(source, object) {
return _copyObject(source, _getSymbolsIn(source), object);
}
var _copySymbolsIn = copySymbolsIn;
/**
* The base implementation of `getAllKeys` and `getAllKeysIn` which uses
* `keysFunc` and `symbolsFunc` to get the enumerable property names and
* symbols of `object`.
*
* @private
* @param {Object} object The object to query.
* @param {Function} keysFunc The function to get the keys of `object`.
* @param {Function} symbolsFunc The function to get the symbols of `object`.
* @returns {Array} Returns the array of property names and symbols.
*/
function baseGetAllKeys(object, keysFunc, symbolsFunc) {
var result = keysFunc(object);
return isArray_1(object) ? result : _arrayPush(result, symbolsFunc(object));
}
var _baseGetAllKeys = baseGetAllKeys;
/**
* Creates an array of own enumerable property names and symbols of `object`.
*
* @private
* @param {Object} object The object to query.
* @returns {Array} Returns the array of property names and symbols.
*/
function getAllKeys(object) {
return _baseGetAllKeys(object, keys_1, _getSymbols);
}
var _getAllKeys = getAllKeys;
/**
* Creates an array of own and inherited enumerable property names and
* symbols of `object`.
*
* @private
* @param {Object} object The object to query.
* @returns {Array} Returns the array of property names and symbols.
*/
function getAllKeysIn(object) {
return _baseGetAllKeys(object, keysIn_1, _getSymbolsIn);
}
var _getAllKeysIn = getAllKeysIn;
/* Built-in method references that are verified to be native. */
var DataView = _getNative(_root, 'DataView');
var _DataView = DataView;
/* Built-in method references that are verified to be native. */
var Promise = _getNative(_root, 'Promise');
var _Promise = Promise;
/* Built-in method references that are verified to be native. */
var Set = _getNative(_root, 'Set');
var _Set = Set;
/* Built-in method references that are verified to be native. */
var WeakMap = _getNative(_root, 'WeakMap');
var _WeakMap = WeakMap;
/** `Object#toString` result references. */
var mapTag$1 = '[object Map]',
objectTag$1 = '[object Object]',
promiseTag = '[object Promise]',
setTag$1 = '[object Set]',
weakMapTag$1 = '[object WeakMap]';
var dataViewTag$1 = '[object DataView]';
/** Used to detect data views, maps, promises, sets, and weak maps. */
var dataViewCtorString = _toSource(_DataView),
mapCtorString = _toSource(_Map),
promiseCtorString = _toSource(_Promise),
setCtorString = _toSource(_Set),
weakMapCtorString = _toSource(_WeakMap);
/**
* Gets the `toStringTag` of `value`.
*
* @private
* @param {*} value The value to query.
* @returns {string} Returns the `toStringTag`.
*/
var getTag = _baseGetTag;
// Fallback for data views, maps, sets, and weak maps in IE 11 and promises in Node.js < 6.
if ((_DataView && getTag(new _DataView(new ArrayBuffer(1))) != dataViewTag$1) ||
(_Map && getTag(new _Map) != mapTag$1) ||
(_Promise && getTag(_Promise.resolve()) != promiseTag) ||
(_Set && getTag(new _Set) != setTag$1) ||
(_WeakMap && getTag(new _WeakMap) != weakMapTag$1)) {
getTag = function(value) {
var result = _baseGetTag(value),
Ctor = result == objectTag$1 ? value.constructor : undefined,
ctorString = Ctor ? _toSource(Ctor) : '';
if (ctorString) {
switch (ctorString) {
case dataViewCtorString: return dataViewTag$1;
case mapCtorString: return mapTag$1;
case promiseCtorString: return promiseTag;
case setCtorString: return setTag$1;
case weakMapCtorString: return weakMapTag$1;
}
}
return result;
};
}
var _getTag = getTag;
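// Illustrative sketch (not executed by the bundle): with the fallback installed,
// `getTag` reports the right tag even in environments (e.g. IE 11) where
// `Object.prototype.toString` returns '[object Object]' for these types.
//
//   _getTag(new Map()); // => '[object Map]'
//   _getTag(new Set()); // => '[object Set]'
//   _getTag({});        // => '[object Object]'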
/** Used for built-in method references. */
var objectProto$12 = Object.prototype;
/** Used to check objects for own properties. */
var hasOwnProperty$9 = objectProto$12.hasOwnProperty;
/**
* Initializes an array clone.
*
* @private
* @param {Array} array The array to clone.
* @returns {Array} Returns the initialized clone.
*/
function initCloneArray(array) {
var length = array.length,
result = new array.constructor(length);
// Add properties assigned by `RegExp#exec`.
if (length && typeof array[0] == 'string' && hasOwnProperty$9.call(array, 'index')) {
result.index = array.index;
result.input = array.input;
}
return result;
}
var _initCloneArray = initCloneArray;
/** Built-in value references. */
var Uint8Array = _root.Uint8Array;
var _Uint8Array = Uint8Array;
/**
* Creates a clone of `arrayBuffer`.
*
* @private
* @param {ArrayBuffer} arrayBuffer The array buffer to clone.
* @returns {ArrayBuffer} Returns the cloned array buffer.
*/
function cloneArrayBuffer(arrayBuffer) {
var result = new arrayBuffer.constructor(arrayBuffer.byteLength);
new _Uint8Array(result).set(new _Uint8Array(arrayBuffer));
return result;
}
var _cloneArrayBuffer = cloneArrayBuffer;
/**
* Creates a clone of `dataView`.
*
* @private
* @param {Object} dataView The data view to clone.
* @param {boolean} [isDeep] Specify a deep clone.
* @returns {Object} Returns the cloned data view.
*/
function cloneDataView(dataView, isDeep) {
var buffer = isDeep ? _cloneArrayBuffer(dataView.buffer) : dataView.buffer;
return new dataView.constructor(buffer, dataView.byteOffset, dataView.byteLength);
}
var _cloneDataView = cloneDataView;
/** Used to match `RegExp` flags from their coerced string values. */
var reFlags = /\w*$/;
/**
* Creates a clone of `regexp`.
*
* @private
* @param {Object} regexp The regexp to clone.
* @returns {Object} Returns the cloned regexp.
*/
function cloneRegExp(regexp) {
var result = new regexp.constructor(regexp.source, reFlags.exec(regexp));
result.lastIndex = regexp.lastIndex;
return result;
}
var _cloneRegExp = cloneRegExp;
/** Used to convert symbols to primitives and strings. */
var symbolProto = _Symbol ? _Symbol.prototype : undefined,
symbolValueOf = symbolProto ? symbolProto.valueOf : undefined;
/**
* Creates a clone of the `symbol` object.
*
* @private
* @param {Object} symbol The symbol object to clone.
* @returns {Object} Returns the cloned symbol object.
*/
function cloneSymbol(symbol) {
return symbolValueOf ? Object(symbolValueOf.call(symbol)) : {};
}
var _cloneSymbol = cloneSymbol;
/**
* Creates a clone of `typedArray`.
*
* @private
* @param {Object} typedArray The typed array to clone.
* @param {boolean} [isDeep] Specify a deep clone.
* @returns {Object} Returns the cloned typed array.
*/
function cloneTypedArray(typedArray, isDeep) {
var buffer = isDeep ? _cloneArrayBuffer(typedArray.buffer) : typedArray.buffer;
return new typedArray.constructor(buffer, typedArray.byteOffset, typedArray.length);
}
var _cloneTypedArray = cloneTypedArray;
/** `Object#toString` result references. */
var boolTag$1 = '[object Boolean]',
dateTag$1 = '[object Date]',
mapTag$2 = '[object Map]',
numberTag$1 = '[object Number]',
regexpTag$1 = '[object RegExp]',
setTag$2 = '[object Set]',
stringTag$1 = '[object String]',
symbolTag = '[object Symbol]';
var arrayBufferTag$1 = '[object ArrayBuffer]',
dataViewTag$2 = '[object DataView]',
float32Tag$1 = '[object Float32Array]',
float64Tag$1 = '[object Float64Array]',
int8Tag$1 = '[object Int8Array]',
int16Tag$1 = '[object Int16Array]',
int32Tag$1 = '[object Int32Array]',
uint8Tag$1 = '[object Uint8Array]',
uint8ClampedTag$1 = '[object Uint8ClampedArray]',
uint16Tag$1 = '[object Uint16Array]',
uint32Tag$1 = '[object Uint32Array]';
/**
* Initializes an object clone based on its `toStringTag`.
*
 * **Note:** This function only supports cloning values with tags of
 * `ArrayBuffer`, `Boolean`, `DataView`, `Date`, `Map`, `Number`, `RegExp`,
 * `Set`, `String`, `Symbol`, and the typed array tags.
*
* @private
* @param {Object} object The object to clone.
* @param {string} tag The `toStringTag` of the object to clone.
* @param {boolean} [isDeep] Specify a deep clone.
* @returns {Object} Returns the initialized clone.
*/
function initCloneByTag(object, tag, isDeep) {
var Ctor = object.constructor;
switch (tag) {
case arrayBufferTag$1:
return _cloneArrayBuffer(object);
case boolTag$1:
case dateTag$1:
return new Ctor(+object);
case dataViewTag$2:
return _cloneDataView(object, isDeep);
case float32Tag$1: case float64Tag$1:
case int8Tag$1: case int16Tag$1: case int32Tag$1:
case uint8Tag$1: case uint8ClampedTag$1: case uint16Tag$1: case uint32Tag$1:
return _cloneTypedArray(object, isDeep);
case mapTag$2:
return new Ctor;
case numberTag$1:
case stringTag$1:
return new Ctor(object);
case regexpTag$1:
return _cloneRegExp(object);
case setTag$2:
return new Ctor;
case symbolTag:
return _cloneSymbol(object);
}
}
var _initCloneByTag = initCloneByTag;
/** Built-in value references. */
var objectCreate = Object.create;
/**
* The base implementation of `_.create` without support for assigning
* properties to the created object.
*
* @private
* @param {Object} proto The object to inherit from.
* @returns {Object} Returns the new object.
*/
var baseCreate = (function() {
function object() {}
return function(proto) {
if (!isObject_1(proto)) {
return {};
}
if (objectCreate) {
return objectCreate(proto);
}
object.prototype = proto;
var result = new object;
object.prototype = undefined;
return result;
};
}());
var _baseCreate = baseCreate;
/**
* Initializes an object clone.
*
* @private
* @param {Object} object The object to clone.
* @returns {Object} Returns the initialized clone.
*/
function initCloneObject(object) {
return (typeof object.constructor == 'function' && !_isPrototype(object))
? _baseCreate(_getPrototype(object))
: {};
}
var _initCloneObject = initCloneObject;
/** `Object#toString` result references. */
var mapTag$3 = '[object Map]';
/**
* The base implementation of `_.isMap` without Node.js optimizations.
*
* @private
* @param {*} value The value to check.
* @returns {boolean} Returns `true` if `value` is a map, else `false`.
*/
function baseIsMap(value) {
return isObjectLike_1(value) && _getTag(value) == mapTag$3;
}
var _baseIsMap = baseIsMap;
/* Node.js helper references. */
var nodeIsMap = _nodeUtil && _nodeUtil.isMap;
/**
* Checks if `value` is classified as a `Map` object.
*
* @static
* @memberOf _
* @since 4.3.0
* @category Lang
* @param {*} value The value to check.
* @returns {boolean} Returns `true` if `value` is a map, else `false`.
* @example
*
* _.isMap(new Map);
* // => true
*
* _.isMap(new WeakMap);
* // => false
*/
var isMap = nodeIsMap ? _baseUnary(nodeIsMap) : _baseIsMap;
var isMap_1 = isMap;
/** `Object#toString` result references. */
var setTag$3 = '[object Set]';
/**
* The base implementation of `_.isSet` without Node.js optimizations.
*
* @private
* @param {*} value The value to check.
* @returns {boolean} Returns `true` if `value` is a set, else `false`.
*/
function baseIsSet(value) {
return isObjectLike_1(value) && _getTag(value) == setTag$3;
}
var _baseIsSet = baseIsSet;
/* Node.js helper references. */
var nodeIsSet = _nodeUtil && _nodeUtil.isSet;
/**
* Checks if `value` is classified as a `Set` object.
*
* @static
* @memberOf _
* @since 4.3.0
* @category Lang
* @param {*} value The value to check.
* @returns {boolean} Returns `true` if `value` is a set, else `false`.
* @example
*
* _.isSet(new Set);
* // => true
*
* _.isSet(new WeakSet);
* // => false
*/
var isSet = nodeIsSet ? _baseUnary(nodeIsSet) : _baseIsSet;
var isSet_1 = isSet;
/** Used to compose bitmasks for cloning. */
var CLONE_DEEP_FLAG = 1,
CLONE_FLAT_FLAG = 2,
CLONE_SYMBOLS_FLAG = 4;
/** `Object#toString` result references. */
var argsTag$2 = '[object Arguments]',
arrayTag$1 = '[object Array]',
boolTag$2 = '[object Boolean]',
dateTag$2 = '[object Date]',
errorTag$1 = '[object Error]',
funcTag$2 = '[object Function]',
genTag$1 = '[object GeneratorFunction]',
mapTag$4 = '[object Map]',
numberTag$2 = '[object Number]',
objectTag$2 = '[object Object]',
regexpTag$2 = '[object RegExp]',
setTag$4 = '[object Set]',
stringTag$2 = '[object String]',
symbolTag$1 = '[object Symbol]',
weakMapTag$2 = '[object WeakMap]';
var arrayBufferTag$2 = '[object ArrayBuffer]',
dataViewTag$3 = '[object DataView]',
float32Tag$2 = '[object Float32Array]',
float64Tag$2 = '[object Float64Array]',
int8Tag$2 = '[object Int8Array]',
int16Tag$2 = '[object Int16Array]',
int32Tag$2 = '[object Int32Array]',
uint8Tag$2 = '[object Uint8Array]',
uint8ClampedTag$2 = '[object Uint8ClampedArray]',
uint16Tag$2 = '[object Uint16Array]',
uint32Tag$2 = '[object Uint32Array]';
/** Used to identify `toStringTag` values supported by `_.clone`. */
var cloneableTags = {};
cloneableTags[argsTag$2] = cloneableTags[arrayTag$1] =
cloneableTags[arrayBufferTag$2] = cloneableTags[dataViewTag$3] =
cloneableTags[boolTag$2] = cloneableTags[dateTag$2] =
cloneableTags[float32Tag$2] = cloneableTags[float64Tag$2] =
cloneableTags[int8Tag$2] = cloneableTags[int16Tag$2] =
cloneableTags[int32Tag$2] = cloneableTags[mapTag$4] =
cloneableTags[numberTag$2] = cloneableTags[objectTag$2] =
cloneableTags[regexpTag$2] = cloneableTags[setTag$4] =
cloneableTags[stringTag$2] = cloneableTags[symbolTag$1] =
cloneableTags[uint8Tag$2] = cloneableTags[uint8ClampedTag$2] =
cloneableTags[uint16Tag$2] = cloneableTags[uint32Tag$2] = true;
cloneableTags[errorTag$1] = cloneableTags[funcTag$2] =
cloneableTags[weakMapTag$2] = false;
/**
* The base implementation of `_.clone` and `_.cloneDeep` which tracks
* traversed objects.
*
* @private
* @param {*} value The value to clone.
 * @param {number} bitmask The bitmask flags.
* 1 - Deep clone
* 2 - Flatten inherited properties
* 4 - Clone symbols
* @param {Function} [customizer] The function to customize cloning.
* @param {string} [key] The key of `value`.
* @param {Object} [object] The parent object of `value`.
* @param {Object} [stack] Tracks traversed objects and their clone counterparts.
* @returns {*} Returns the cloned value.
*/
function baseClone(value, bitmask, customizer, key, object, stack) {
var result,
isDeep = bitmask & CLONE_DEEP_FLAG,
isFlat = bitmask & CLONE_FLAT_FLAG,
isFull = bitmask & CLONE_SYMBOLS_FLAG;
if (customizer) {
result = object ? customizer(value, key, object, stack) : customizer(value);
}
if (result !== undefined) {
return result;
}
if (!isObject_1(value)) {
return value;
}
var isArr = isArray_1(value);
if (isArr) {
result = _initCloneArray(value);
if (!isDeep) {
return _copyArray(value, result);
}
} else {
var tag = _getTag(value),
isFunc = tag == funcTag$2 || tag == genTag$1;
if (isBuffer_1(value)) {
return _cloneBuffer(value, isDeep);
}
if (tag == objectTag$2 || tag == argsTag$2 || (isFunc && !object)) {
result = (isFlat || isFunc) ? {} : _initCloneObject(value);
if (!isDeep) {
return isFlat
? _copySymbolsIn(value, _baseAssignIn(result, value))
: _copySymbols(value, _baseAssign(result, value));
}
} else {
if (!cloneableTags[tag]) {
return object ? value : {};
}
result = _initCloneByTag(value, tag, isDeep);
}
}
// Check for circular references and return its corresponding clone.
stack || (stack = new _Stack);
var stacked = stack.get(value);
if (stacked) {
return stacked;
}
stack.set(value, result);
if (isSet_1(value)) {
value.forEach(function(subValue) {
result.add(baseClone(subValue, bitmask, customizer, subValue, value, stack));
});
return result;
}
if (isMap_1(value)) {
value.forEach(function(subValue, key) {
result.set(key, baseClone(subValue, bitmask, customizer, key, value, stack));
});
return result;
}
var keysFunc = isFull
? (isFlat ? _getAllKeysIn : _getAllKeys)
: (isFlat ? keysIn : keys_1);
var props = isArr ? undefined : keysFunc(value);
_arrayEach(props || value, function(subValue, key) {
if (props) {
key = subValue;
subValue = value[key];
}
// Recursively populate clone (susceptible to call stack limits).
_assignValue(result, key, baseClone(subValue, bitmask, customizer, key, value, stack));
});
return result;
}
var _baseClone = baseClone;
/** Used to compose bitmasks for cloning. */
var CLONE_DEEP_FLAG$1 = 1,
CLONE_SYMBOLS_FLAG$1 = 4;
/**
* This method is like `_.clone` except that it recursively clones `value`.
*
* @static
* @memberOf _
* @since 1.0.0
* @category Lang
* @param {*} value The value to recursively clone.
* @returns {*} Returns the deep cloned value.
* @see _.clone
* @example
*
* var objects = [{ 'a': 1 }, { 'b': 2 }];
*
* var deep = _.cloneDeep(objects);
* console.log(deep[0] === objects[0]);
* // => false
*/
function cloneDeep(value) {
return _baseClone(value, CLONE_DEEP_FLAG$1 | CLONE_SYMBOLS_FLAG$1);
}
var cloneDeep_1 = cloneDeep;
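/*
 * Illustrative sketch (added; not part of the bundled lodash source): `cloneDeep`
 * simply calls `baseClone` with CLONE_DEEP_FLAG$1 | CLONE_SYMBOLS_FLAG$1 (1 | 4),
 * so nested objects and symbol-keyed properties are copied instead of shared.
 *
 *   var original = { nested: { a: 1 } };
 *   var copy = cloneDeep_1(original);
 *   copy.nested.a = 2;   // original.nested.a is still 1
 */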
//
function errorHandler (
errorOrString,
vm
) {
var error =
typeof errorOrString === 'object'
? errorOrString
: new Error(errorOrString);
vm._error = error;
throw error
}
//
function createLocalVue (_Vue) {
if ( _Vue === void 0 ) _Vue = Vue;
var instance = _Vue.extend();
// clone global APIs
Object.keys(_Vue).forEach(function (key) {
if (!instance.hasOwnProperty(key)) {
var original = _Vue[key];
// cloneDeep can fail when cloning Vue instances
// cloneDeep checks that the instance has a Symbol
// which errors in Vue < 2.17 (https://github.com/vuejs/vue/pull/7878)
try {
instance[key] = typeof original === 'object'
? cloneDeep_1(original)
: original;
} catch (e) {
instance[key] = original;
}
}
});
// config is not enumerable
instance.config = cloneDeep_1(Vue.config);
instance.config.errorHandler = errorHandler;
// option merge strategies need to be exposed by reference
// so that merge strats registered by plugins can work properly
instance.config.optionMergeStrategies = Vue.config.optionMergeStrategies;
// make sure all extends are based on this instance.
// this is important so that global components registered by plugins,
// e.g. router-link are created using the correct base constructor
instance.options._base = instance;
// compat for vue-router < 2.7.1 where it does not allow multiple installs
if (instance._installedPlugins && instance._installedPlugins.length) {
instance._installedPlugins.length = 0;
}
var use = instance.use;
instance.use = function (plugin) {
var rest = [], len = arguments.length - 1;
while ( len-- > 0 ) rest[ len ] = arguments[ len + 1 ];
if (plugin.installed === true) {
plugin.installed = false;
}
if (plugin.install && plugin.install.installed === true) {
plugin.install.installed = false;
}
use.call.apply(use, [ instance, plugin ].concat( rest ));
};
return instance
}
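// Illustrative usage sketch (added; `MyPlugin` and `SomeComponent` are hypothetical):
// the local constructor returned above carries its own cloned config and plugin list,
// so installing a plugin on it does not leak into the global Vue constructor.
//   var localVue = createLocalVue();
//   localVue.use(MyPlugin);
//   mount(SomeComponent, { localVue: localVue });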
//
function getOption (option, config) {
if (option || (config && Object.keys(config).length > 0)) {
if (option instanceof Function) {
return option
} else if (Array.isArray(option)) {
return option.concat( Object.keys(config || {}))
} else if (config instanceof Function) {
throw new Error("Config can't be a Function.")
} else {
return Object.assign({}, config,
option)
}
}
}
function mergeOptions (options, config) {
var mocks = (getOption(options.mocks, config.mocks));
var methods = (
(getOption(options.methods, config.methods)));
var provide = ((getOption(options.provide, config.provide)));
return Object.assign({}, options,
{logModifiedComponents: config.logModifiedComponents,
stubs: getOption(options.stubs, config.stubs),
mocks: mocks,
methods: methods,
provide: provide,
sync: !!(options.sync || options.sync === undefined)})
}
//
Vue.config.productionTip = false;
Vue.config.devtools = false;
function mount (
component,
options
) {
if ( options === void 0 ) options = {};
var existingErrorHandler = Vue.config.errorHandler;
Vue.config.errorHandler = errorHandler;
warnIfNoWindow();
// Remove cached constructor
delete component._Ctor;
var vueConstructor = createLocalVue(options.localVue);
var elm = options.attachToDocument ? createElement() : undefined;
var mergedOptions = mergeOptions(options, config);
var parentVm = createInstance(
component,
mergedOptions,
vueConstructor,
elm
);
var vm = parentVm.$mount(elm).$refs.vm;
// Workaround for Vue < 2.5
vm._staticTrees = [];
var componentsWithError = findAllVueComponentsFromVm(vm).filter(
function (c) { return c._error; }
);
if (componentsWithError.length > 0) {
throw componentsWithError[0]._error
}
Vue.config.errorHandler = existingErrorHandler;
var wrapperOptions = {
attachedToDocument: !!mergedOptions.attachToDocument,
sync: mergedOptions.sync
};
return new VueWrapper(vm, wrapperOptions)
}
//
function shallowMount (
component,
options
) {
if ( options === void 0 ) options = {};
var vue = options.localVue || Vue;
// remove any recursive components added to the constructor
// in vm._init from previous tests
if (component.name && component.components) {
delete component.components[capitalize(camelize(component.name))];
delete component.components[hyphenate(component.name)];
}
return mount(component, Object.assign({}, options,
{components: Object.assign({}, createComponentStubsForGlobals(vue),
createComponentStubsForAll(component))}))
}
//
var toTypes = [String, Object];
var eventTypes = [String, Array];
var RouterLinkStub = {
name: 'RouterLinkStub',
props: {
to: {
type: toTypes,
required: true
},
tag: {
type: String,
default: 'a'
},
exact: Boolean,
append: Boolean,
replace: Boolean,
activeClass: String,
exactActiveClass: String,
event: {
type: eventTypes,
default: 'click'
}
},
render: function render (h) {
return h(this.tag, undefined, this.$slots.default)
}
}
function shallow (component, options) {
warn(
"shallow has been renamed to shallowMount. shallow " +
"will be removed in 1.0.0, use shallowMount instead"
);
return shallowMount(component, options)
}
var index = {
createLocalVue: createLocalVue,
config: config,
mount: mount,
shallow: shallow,
shallowMount: shallowMount,
TransitionStub: TransitionStub,
TransitionGroupStub: TransitionGroupStub,
RouterLinkStub: RouterLinkStub
}
return index;
}(Vue,VueTemplateCompiler));
|
return !!(this.element.getAttribute(attribute) === value)
};
|
listener.rs
|
// Copyright 2017 Parity Technologies (UK) Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
//! Contains the `Listener` wrapper, which allows raw communications with a dialer.
use bytes::{Bytes, BytesMut};
use futures::{Async, AsyncSink, Future, Poll, Sink, StartSend, Stream};
use length_delimited::LengthDelimitedFramedRead;
use protocol::DialerToListenerMessage;
use protocol::ListenerToDialerMessage;
use protocol::MULTISTREAM_PROTOCOL_WITH_LF;
use protocol::MultistreamSelectError;
use tokio_io::{AsyncRead, AsyncWrite};
use tokio_io::codec::length_delimited::Builder as LengthDelimitedBuilder;
use tokio_io::codec::length_delimited::FramedWrite as LengthDelimitedFramedWrite;
use varint;
/// Wraps around an `AsyncRead + AsyncWrite`. Assumes that we're on the listener's side. Produces and
/// accepts messages.
pub struct Listener<R> {
inner: LengthDelimitedFramedRead<Bytes, LengthDelimitedFramedWrite<R, BytesMut>>,
}
impl<R> Listener<R>
where
R: AsyncRead + AsyncWrite,
{
/// Takes ownership of a socket and starts the handshake. If the handshake succeeds, the
/// future returns a `Listener`.
pub fn
|
<'a>(inner: R) -> Box<Future<Item = Listener<R>, Error = MultistreamSelectError> + 'a>
where
R: 'a,
{
let write = LengthDelimitedBuilder::new()
.length_field_length(1)
.new_write(inner);
let inner = LengthDelimitedFramedRead::<Bytes, _>::new(write);
let future = inner
.into_future()
.map_err(|(e, _)| e.into())
.and_then(|(msg, rest)| {
if msg.as_ref().map(|b| &b[..]) != Some(MULTISTREAM_PROTOCOL_WITH_LF) {
return Err(MultistreamSelectError::FailedHandshake);
}
Ok(rest)
})
.and_then(|socket| {
socket
.send(BytesMut::from(MULTISTREAM_PROTOCOL_WITH_LF))
.from_err()
})
.map(|inner| Listener { inner: inner });
Box::new(future)
}
/// Grants back the socket. Typically used after a `ProtocolRequest` has been received and a
/// `ProtocolAck` has been sent back.
#[inline]
pub fn into_inner(self) -> R {
self.inner.into_inner().into_inner()
}
}
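// Illustrative note (added; not part of the original module): once the handshake
// future returned by `Listener::new` resolves, the wrapper is polled as a `Stream`
// of `DialerToListenerMessage`s and written to as a `Sink` of
// `ListenerToDialerMessage`s, as exercised by the `wrong_proto_name` test below.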
impl<R> Sink for Listener<R>
where
R: AsyncRead + AsyncWrite,
{
type SinkItem = ListenerToDialerMessage;
type SinkError = MultistreamSelectError;
#[inline]
fn start_send(&mut self, item: Self::SinkItem) -> StartSend<Self::SinkItem, Self::SinkError> {
match item {
ListenerToDialerMessage::ProtocolAck { name } => {
if !name.starts_with(b"/") {
return Err(MultistreamSelectError::WrongProtocolName);
}
let mut protocol = BytesMut::from(name);
protocol.extend_from_slice(&[b'\n']);
match self.inner.start_send(protocol) {
Ok(AsyncSink::Ready) => Ok(AsyncSink::Ready),
Ok(AsyncSink::NotReady(mut protocol)) => {
let protocol_len = protocol.len();
protocol.truncate(protocol_len - 1);
let protocol = protocol.freeze();
Ok(AsyncSink::NotReady(ListenerToDialerMessage::ProtocolAck {
name: protocol,
}))
}
Err(err) => Err(err.into()),
}
}
ListenerToDialerMessage::NotAvailable => {
match self.inner.start_send(BytesMut::from(&b"na\n"[..])) {
Ok(AsyncSink::Ready) => Ok(AsyncSink::Ready),
Ok(AsyncSink::NotReady(_)) => {
Ok(AsyncSink::NotReady(ListenerToDialerMessage::NotAvailable))
}
Err(err) => Err(err.into()),
}
}
ListenerToDialerMessage::ProtocolsListResponse { list } => {
use std::iter;
let mut out_msg = varint::encode(list.len());
for elem in list.iter() {
out_msg.extend(iter::once(b'\r'));
out_msg.extend_from_slice(elem);
out_msg.extend(iter::once(b'\n'));
}
match self.inner.start_send(BytesMut::from(out_msg)) {
Ok(AsyncSink::Ready) => Ok(AsyncSink::Ready),
Ok(AsyncSink::NotReady(_)) => {
let m = ListenerToDialerMessage::ProtocolsListResponse { list };
Ok(AsyncSink::NotReady(m))
}
Err(err) => Err(err.into()),
}
}
}
}
#[inline]
fn poll_complete(&mut self) -> Poll<(), Self::SinkError> {
Ok(self.inner.poll_complete()?)
}
}
impl<R> Stream for Listener<R>
where
R: AsyncRead + AsyncWrite,
{
type Item = DialerToListenerMessage;
type Error = MultistreamSelectError;
fn poll(&mut self) -> Poll<Option<Self::Item>, Self::Error> {
loop {
let mut frame = match self.inner.poll() {
Ok(Async::Ready(Some(frame))) => frame,
Ok(Async::Ready(None)) => return Ok(Async::Ready(None)),
Ok(Async::NotReady) => return Ok(Async::NotReady),
Err(err) => return Err(err.into()),
};
if frame.get(0) == Some(&b'/') && frame.last() == Some(&b'\n') {
let frame_len = frame.len();
let protocol = frame.split_to(frame_len - 1);
return Ok(Async::Ready(Some(
DialerToListenerMessage::ProtocolRequest { name: protocol },
)));
} else if frame == &b"ls\n"[..] {
return Ok(Async::Ready(Some(
DialerToListenerMessage::ProtocolsListRequest,
)));
} else {
return Err(MultistreamSelectError::UnknownMessage);
}
}
}
}
#[cfg(test)]
mod tests {
extern crate tokio_core;
use bytes::Bytes;
use futures::{Sink, Stream};
use futures::Future;
use protocol::{Dialer, Listener, ListenerToDialerMessage, MultistreamSelectError};
use self::tokio_core::net::{TcpListener, TcpStream};
use self::tokio_core::reactor::Core;
#[test]
fn wrong_proto_name() {
let mut core = Core::new().unwrap();
let listener = TcpListener::bind(&"127.0.0.1:0".parse().unwrap(), &core.handle()).unwrap();
let listener_addr = listener.local_addr().unwrap();
let server = listener
.incoming()
.into_future()
.map_err(|(e, _)| e.into())
.and_then(move |(connec, _)| Listener::new(connec.unwrap().0))
.and_then(|listener| {
let proto_name = Bytes::from("invalid-proto");
listener.send(ListenerToDialerMessage::ProtocolAck { name: proto_name })
});
let client = TcpStream::connect(&listener_addr, &core.handle())
.from_err()
.and_then(move |stream| Dialer::new(stream));
match core.run(server.join(client)) {
Err(MultistreamSelectError::WrongProtocolName) => (),
_ => panic!(),
}
}
}
|
new
|
mlp_uk_learning.py
|
import numpy as np
np.random.seed(1337)
from keras.models import Sequential
from keras.layers import Dense
import matplotlib.pyplot as plt
model = Sequential()
model.add(Dense(units=50, input_dim=1, activation='relu'))
model.add(Dense(units=50, activation='relu'))
model.add(Dense(units=1, activation='sigmoid'))
|
model.summary()
# uk corona
import json
url = 'https://api.covid19uk.live/historyfigures'
def read_url_to_json(url):
import urllib.request as request
webpage = request.urlopen(url)
get_data = webpage.read()
data = json.loads(get_data)
return data
read_data = read_url_to_json(url)
each_data = read_data['data']
uk_comfirmed_data = []
for each in each_data:
uk_comfirmed_data.append(each['confirmed'])
uk_date_length = len(uk_comfirmed_data)
uk_dates = list(range(1, uk_date_length + 1))
uk_comfirmed_data = np.array(uk_comfirmed_data)
uk_dates = np.array(uk_dates)
uk_absorb_amount = uk_comfirmed_data[uk_date_length-1]
uk_comfirmed_data_norm = uk_comfirmed_data / uk_absorb_amount
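# Note (added for clarity): dividing by the latest confirmed count scales the targets
# into [0, 1]; the predictions below are rescaled by the same factor before plotting.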
# fit model
model.fit(uk_dates, uk_comfirmed_data_norm, epochs=10000, shuffle=False)
uk_comfirmed_data_predict = model.predict(uk_dates)
uk_comfirmed_data_predict = uk_comfirmed_data_predict * uk_absorb_amount
fig2 = plt.figure(figsize=(7, 5))
plt.scatter(uk_dates, uk_comfirmed_data, label='Real Confirmed')
plt.plot(uk_dates, uk_comfirmed_data_predict, label='Predict Result')
plt.title('UK Confirmed VS Dates')
plt.xlabel('Dates')
plt.ylabel('Amount')
plt.legend()
plt.show()
|
model.add(Dense(units=1, activation='linear'))
model.compile(optimizer='adam', loss='mean_squared_error')
|
service.rs
|
// Copyright 2021 Parallel Finance Developer.
// This file is part of Parallel Finance.
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::client::Block;
use cumulus_client_consensus_aura::{
build_aura_consensus, BuildAuraConsensusParams, SlotProportion,
};
use cumulus_client_network::build_block_announce_validator;
use cumulus_client_service::{
prepare_node_config, start_collator, start_full_node, StartCollatorParams, StartFullNodeParams,
};
use cumulus_primitives_core::ParaId;
use polkadot_service::ConstructRuntimeApi;
use sc_client_api::call_executor::ExecutorProvider;
use sc_executor::native_executor_instance;
use sc_service::{Configuration, PartialComponents, Role, TaskManager};
use sc_telemetry::{Telemetry, TelemetryWorker, TelemetryWorkerHandle};
use primitives::*;
use sp_consensus::SlotData;
use std::sync::Arc;
pub use sc_executor::{NativeExecutionDispatch, NativeExecutor};
// Native executor instance.
native_executor_instance!(
pub ParallelExecutor,
parallel_runtime::api::dispatch,
parallel_runtime::native_version,
frame_benchmarking::benchmarking::HostFunctions,
);
native_executor_instance!(
pub HeikoExecutor,
heiko_runtime::api::dispatch,
heiko_runtime::native_version,
frame_benchmarking::benchmarking::HostFunctions,
);
pub type FullBackend = sc_service::TFullBackend<Block>;
pub type FullClient<RuntimeApi, Executor> = sc_service::TFullClient<Block, RuntimeApi, Executor>;
pub trait IdentifyVariant {
fn is_parallel(&self) -> bool;
fn is_heiko(&self) -> bool;
}
impl IdentifyVariant for Box<dyn sc_service::ChainSpec> {
fn is_parallel(&self) -> bool {
self.id().starts_with("parallel")
}
fn is_heiko(&self) -> bool
|
}
/// Builds the `PartialComponents` of a full node service.
///
/// Use this function if you don't actually need the full service, but just its parts in order to
/// be able to perform chain operations.
pub fn new_partial<RuntimeApi, Executor>(
config: &Configuration,
) -> Result<
PartialComponents<
FullClient<RuntimeApi, Executor>,
FullBackend,
(),
sp_consensus::DefaultImportQueue<Block, FullClient<RuntimeApi, Executor>>,
sc_transaction_pool::FullPool<Block, FullClient<RuntimeApi, Executor>>,
(Option<Telemetry>, Option<TelemetryWorkerHandle>),
>,
sc_service::Error,
>
where
RuntimeApi:
ConstructRuntimeApi<Block, FullClient<RuntimeApi, Executor>> + Send + Sync + 'static,
RuntimeApi::RuntimeApi: crate::client::RuntimeApiCollection<
StateBackend = sc_client_api::StateBackendFor<FullBackend, Block>,
>,
Executor: NativeExecutionDispatch + 'static,
{
let telemetry = config
.telemetry_endpoints
.clone()
.filter(|x| !x.is_empty())
.map(|endpoints| -> Result<_, sc_telemetry::Error> {
let worker = TelemetryWorker::new(16)?;
let telemetry = worker.handle().new_telemetry(endpoints);
Ok((worker, telemetry))
})
.transpose()?;
let (client, backend, keystore_container, task_manager) =
sc_service::new_full_parts::<Block, RuntimeApi, Executor>(
&config,
telemetry.as_ref().map(|(_, telemetry)| telemetry.handle()),
)?;
let client = Arc::new(client);
let telemetry_worker_handle = telemetry.as_ref().map(|(worker, _)| worker.handle());
let telemetry = telemetry.map(|(worker, telemetry)| {
task_manager.spawn_handle().spawn("telemetry", worker.run());
telemetry
});
let transaction_pool = sc_transaction_pool::BasicPool::new_full(
config.transaction_pool.clone(),
config.role.is_authority().into(),
config.prometheus_registry(),
task_manager.spawn_essential_handle(),
client.clone(),
);
let slot_duration = cumulus_client_consensus_aura::slot_duration(&*client)?;
let import_queue = cumulus_client_consensus_aura::import_queue::<
sp_consensus_aura::sr25519::AuthorityPair,
_,
_,
_,
_,
_,
_,
>(cumulus_client_consensus_aura::ImportQueueParams {
block_import: client.clone(),
client: client.clone(),
create_inherent_data_providers: move |_, _| async move {
let time = sp_timestamp::InherentDataProvider::from_system_time();
let slot =
sp_consensus_aura::inherents::InherentDataProvider::from_timestamp_and_duration(
*time,
slot_duration.slot_duration(),
);
Ok((time, slot))
},
registry: config.prometheus_registry().clone(),
can_author_with: sp_consensus::CanAuthorWithNativeVersion::new(client.executor().clone()),
spawner: &task_manager.spawn_essential_handle(),
telemetry: telemetry.as_ref().map(|telemetry| telemetry.handle()),
})?;
let params = PartialComponents {
backend,
client,
import_queue,
keystore_container,
task_manager,
transaction_pool,
select_chain: (),
other: (telemetry, telemetry_worker_handle),
};
Ok(params)
}
/// Start a node with the given parachain `Configuration` and relay chain `Configuration`.
///
/// This is the actual implementation that is abstract over the executor and the runtime api.
#[sc_tracing::logging::prefix_logs_with("Parachain")]
async fn start_node_impl<RuntimeApi, Executor>(
parachain_config: Configuration,
polkadot_config: Configuration,
id: ParaId,
) -> sc_service::error::Result<(TaskManager, Arc<FullClient<RuntimeApi, Executor>>)>
where
RuntimeApi:
ConstructRuntimeApi<Block, FullClient<RuntimeApi, Executor>> + Send + Sync + 'static,
RuntimeApi::RuntimeApi: crate::client::RuntimeApiCollection<
StateBackend = sc_client_api::StateBackendFor<FullBackend, Block>,
>,
Executor: NativeExecutionDispatch + 'static,
{
if matches!(parachain_config.role, Role::Light) {
return Err("Light client not supported!".into());
}
let parachain_config = prepare_node_config(parachain_config);
let params = new_partial(¶chain_config)?;
let (mut telemetry, telemetry_worker_handle) = params.other;
let relay_chain_full_node =
cumulus_client_service::build_polkadot_full_node(polkadot_config, telemetry_worker_handle)
.map_err(|e| match e {
polkadot_service::Error::Sub(x) => x,
s => format!("{}", s).into(),
})?;
let client = params.client.clone();
let backend = params.backend.clone();
let block_announce_validator = build_block_announce_validator(
relay_chain_full_node.client.clone(),
id,
Box::new(relay_chain_full_node.network.clone()),
relay_chain_full_node.backend.clone(),
);
let force_authoring = parachain_config.force_authoring;
let validator = parachain_config.role.is_authority();
let prometheus_registry = parachain_config.prometheus_registry().cloned();
let transaction_pool = params.transaction_pool.clone();
let mut task_manager = params.task_manager;
let import_queue = cumulus_client_service::SharedImportQueue::new(params.import_queue);
let (network, system_rpc_tx, start_network) =
sc_service::build_network(sc_service::BuildNetworkParams {
config: ¶chain_config,
client: client.clone(),
transaction_pool: transaction_pool.clone(),
spawn_handle: task_manager.spawn_handle(),
import_queue: import_queue.clone(),
on_demand: None,
block_announce_validator_builder: Some(Box::new(|_| block_announce_validator)),
})?;
let rpc_extensions_builder = {
let client = client.clone();
let pool = transaction_pool.clone();
Box::new(move |deny_unsafe, _| {
let deps = crate::rpc::FullDeps {
client: client.clone(),
pool: pool.clone(),
deny_unsafe,
};
crate::rpc::create_full(deps)
})
};
if parachain_config.offchain_worker.enabled {
sc_service::build_offchain_workers(
¶chain_config,
task_manager.spawn_handle(),
client.clone(),
network.clone(),
);
}
sc_service::spawn_tasks(sc_service::SpawnTasksParams {
on_demand: None,
remote_blockchain: None,
rpc_extensions_builder,
client: client.clone(),
transaction_pool: transaction_pool.clone(),
task_manager: &mut task_manager,
config: parachain_config,
keystore: params.keystore_container.sync_keystore(),
backend: backend.clone(),
network: network.clone(),
system_rpc_tx,
telemetry: telemetry.as_mut(),
})?;
let announce_block = {
let network = network.clone();
Arc::new(move |hash, data| network.announce_block(hash, data))
};
if validator {
let proposer_factory = sc_basic_authorship::ProposerFactory::with_proof_recording(
task_manager.spawn_handle(),
client.clone(),
transaction_pool,
prometheus_registry.as_ref(),
telemetry.as_ref().map(|x| x.handle()),
);
let spawner = task_manager.spawn_handle();
let slot_duration = cumulus_client_consensus_aura::slot_duration(&*client)?;
let relay_chain_backend = relay_chain_full_node.backend.clone();
let relay_chain_client = relay_chain_full_node.client.clone();
let parachain_consensus = build_aura_consensus::<
sp_consensus_aura::sr25519::AuthorityPair,
_,
_,
_,
_,
_,
_,
_,
_,
_,
>(BuildAuraConsensusParams {
proposer_factory,
create_inherent_data_providers: move |_, (relay_parent, validation_data)| {
let parachain_inherent =
cumulus_primitives_parachain_inherent::ParachainInherentData::create_at_with_client(
relay_parent,
&relay_chain_client,
&*relay_chain_backend,
&validation_data,
id,
);
async move {
let time = sp_timestamp::InherentDataProvider::from_system_time();
let slot =
sp_consensus_aura::inherents::InherentDataProvider::from_timestamp_and_duration(
*time,
slot_duration.slot_duration(),
);
let parachain_inherent = parachain_inherent.ok_or_else(|| {
Box::<dyn std::error::Error + Send + Sync>::from(
"Failed to create parachain inherent",
)
})?;
Ok((time, slot, parachain_inherent))
}
},
block_import: client.clone(),
relay_chain_client: relay_chain_full_node.client.clone(),
relay_chain_backend: relay_chain_full_node.backend.clone(),
para_client: client.clone(),
backoff_authoring_blocks: Option::<()>::None,
sync_oracle: network,
keystore: params.keystore_container.sync_keystore(),
force_authoring,
slot_duration,
// We got around 500ms for proposing
block_proposal_slot_portion: SlotProportion::new(1f32 / 24f32),
telemetry: telemetry.as_ref().map(|telemetry| telemetry.handle()),
});
let params = StartCollatorParams {
para_id: id,
block_status: client.clone(),
announce_block,
client: client.clone(),
task_manager: &mut task_manager,
relay_chain_full_node,
spawner,
parachain_consensus,
import_queue,
};
start_collator(params).await?;
} else {
let params = StartFullNodeParams {
client: client.clone(),
announce_block,
task_manager: &mut task_manager,
para_id: id,
relay_chain_full_node,
};
start_full_node(params)?;
}
start_network.start_network();
Ok((task_manager, client))
}
/// Start a normal parachain node.
pub async fn start_node<RuntimeApi, Executor>(
parachain_config: Configuration,
polkadot_config: Configuration,
id: ParaId,
) -> sc_service::error::Result<(TaskManager, Arc<FullClient<RuntimeApi, Executor>>)>
where
RuntimeApi:
ConstructRuntimeApi<Block, FullClient<RuntimeApi, Executor>> + Send + Sync + 'static,
RuntimeApi::RuntimeApi: crate::client::RuntimeApiCollection<
StateBackend = sc_client_api::StateBackendFor<FullBackend, Block>,
>,
RuntimeApi::RuntimeApi: sp_consensus_aura::AuraApi<Block, AuraId>,
Executor: NativeExecutionDispatch + 'static,
{
start_node_impl(parachain_config, polkadot_config, id).await
}
|
{
self.id().starts_with("heiko")
}
|
merkle.rs
|
#![allow(clippy::len_without_is_empty)]
use std::marker::PhantomData;
use merkletree::hash::Algorithm;
use merkletree::merkle;
use merkletree::proof;
use merkletree::store::StoreConfig;
use paired::bls12_381::Fr;
use rayon::prelude::*;
use crate::error::*;
use crate::hasher::{Domain, Hasher};
use crate::util::{data_at_node, NODE_SIZE};
// Reexport here, so we don't depend on merkletree directly in other places.
use merkletree::merkle::FromIndexedParallelIterator;
pub use merkletree::store::Store;
type DiskStore<E> = merkletree::store::DiskStore<E>;
pub type MerkleTree<T, A> = merkle::MerkleTree<T, A, DiskStore<T>>;
pub type MerkleStore<T> = DiskStore<T>;
/// Representation of a merkle proof.
/// Each element in the `path` vector consists of a tuple `(hash, is_right)`, with `hash` being the hash of the node at the current level and `is_right` a boolean indicating whether the path is taking the right path.
/// The first element is the hash of leaf itself, and the last is the root hash.
#[derive(Default, Debug, Clone, Serialize, Deserialize)]
pub struct MerkleProof<H: Hasher> {
pub root: H::Domain,
path: Vec<(H::Domain, bool)>,
leaf: H::Domain,
#[serde(skip)]
_h: PhantomData<H>,
}
pub fn make_proof_for_test<H: Hasher>(
root: H::Domain,
leaf: H::Domain,
path: Vec<(H::Domain, bool)>,
) -> MerkleProof<H> {
MerkleProof {
path,
root,
leaf,
_h: PhantomData,
}
}
impl<H: Hasher> MerkleProof<H> {
pub fn new(n: usize) -> MerkleProof<H> {
let mut m = MerkleProof::default();
m.path = vec![(Default::default(), false); n];
m
}
pub fn new_from_proof(p: &proof::Proof<H::Domain>) -> MerkleProof<H> {
MerkleProof {
path: p
.lemma()
.iter()
.skip(1)
.zip(p.path().iter())
.map(|(hash, is_left)| (*hash, !is_left))
.collect::<Vec<_>>(),
root: p.root(),
leaf: p.item(),
_h: PhantomData,
}
}
/// Convert the merkle path into the format expected by the circuits, which is a vector of options of the tuples.
/// This does __not__ include the root and the leaf.
pub fn as_options(&self) -> Vec<Option<(Fr, bool)>> {
self.path
.iter()
.map(|v| Some((v.0.into(), v.1)))
.collect::<Vec<_>>()
}
pub fn into_options_with_leaf(self) -> (Option<Fr>, Vec<Option<(Fr, bool)>>) {
let MerkleProof { leaf, path, .. } = self;
(
Some(leaf.into()),
path.into_iter().map(|(a, b)| Some((a.into(), b))).collect(),
)
}
pub fn as_pairs(&self) -> Vec<(Fr, bool)> {
self.path
.iter()
.map(|v| (v.0.into(), v.1))
.collect::<Vec<_>>()
}
fn verify(&self) -> bool {
let mut a = H::Function::default();
self.root()
== &(0..self.path.len()).fold(self.leaf, |h, i| {
a.reset();
let is_right = self.path[i].1;
let (left, right) = if is_right {
(self.path[i].0, h)
} else {
(h, self.path[i].0)
};
a.node(left, right, i)
})
}
/// Validates the MerkleProof and that it corresponds to the supplied node.
pub fn validate(&self, node: usize) -> bool {
if path_index(&self.path) != node {
return false;
}
self.verify()
}
/// Validates that the data hashes to the leaf of the merkle path.
pub fn validate_data(&self, data: &[u8]) -> bool {
if !self.verify() {
return false;
}
self.leaf().into_bytes() == data
}
/// Returns the hash of leaf that this MerkleProof represents.
pub fn leaf(&self) -> &H::Domain {
&self.leaf
}
/// Returns the root hash
pub fn root(&self) -> &H::Domain {
&self.root
}
pub fn verified_leaf(&self) -> IncludedNode<H> {
IncludedNode::new(*self.leaf())
}
/// Returns the length of the proof. That is all path elements plus 1 for the
/// leaf and 1 for the root.
pub fn len(&self) -> usize {
self.path.len() + 2
}
/// Serialize into bytes.
/// TODO: probably improve
pub fn serialize(&self) -> Vec<u8> {
let mut out = Vec::new();
for (hash, is_right) in &self.path {
out.extend(hash.serialize());
out.push(*is_right as u8);
}
out.extend(self.leaf().serialize());
out.extend(self.root().serialize());
out
}
pub fn path(&self) -> &Vec<(H::Domain, bool)> {
&self.path
}
/// proves_challenge returns true if this self.proof corresponds to challenge.
/// This is useful for verifying that a supplied proof is actually relevant to a given challenge.
pub fn proves_challenge(&self, challenge: usize) -> bool {
let mut c = challenge;
for (_, is_right) in self.path().iter() {
if ((c & 1) == 1) ^ is_right {
return false;
};
c >>= 1;
}
true
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct IncludedNode<H: Hasher> {
value: H::Domain,
_h: PhantomData<H>,
}
impl<H: Hasher> IncludedNode<H> {
pub fn new(value: H::Domain) -> Self {
IncludedNode {
value,
_h: PhantomData,
}
}
pub fn into_fr(self) -> Fr {
self.value.into()
}
}
impl<H: Hasher> std::ops::Deref for IncludedNode<H> {
type Target = H::Domain;
fn deref(&self) -> &Self::Target
|
}
fn path_index<T: Domain>(path: &[(T, bool)]) -> usize {
path.iter().rev().fold(0, |acc, (_, is_right)| {
(acc << 1) + if *is_right { 1 } else { 0 }
})
}
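// Illustrative note (added): `path_index` treats the first path element as the
// least-significant bit, so `is_right` flags of [true, false, true] (leaf to root)
// map to leaf index 0b101 = 5.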
/// Construct a new merkle tree.
pub fn create_merkle_tree<H: Hasher>(
config: Option<StoreConfig>,
size: usize,
data: &[u8],
) -> Result<MerkleTree<H::Domain, H::Function>> {
ensure!(
data.len() == (NODE_SIZE * size) as usize,
Error::InvalidMerkleTreeArgs(data.len(), NODE_SIZE, size)
);
let f = |i| {
let d = data_at_node(&data, i).expect("data_at_node math failed");
// TODO/FIXME: This can panic. FOR NOW, let's leave this since we're experimenting with
// optimization paths. However, we need to ensure that bad input will not lead to a panic
// that isn't caught by the FPS API.
// Unfortunately, it's not clear how to perform this error-handling in the parallel
// iterator case.
H::Domain::try_from_bytes(d).expect("failed to convert node data to domain element")
};
match config {
Some(x) => MerkleTree::from_par_iter_with_config((0..size).into_par_iter().map(f), x),
None => MerkleTree::from_par_iter((0..size).into_par_iter().map(f)),
}
}
#[cfg(test)]
mod tests {
use super::*;
use rand;
use std::io::Write;
use crate::drgraph::{new_seed, BucketGraph, Graph, BASE_DEGREE};
use crate::hasher::{Blake2sHasher, PedersenHasher, Sha256Hasher};
fn merklepath<H: Hasher>() {
let g = BucketGraph::<H>::new(10, BASE_DEGREE, 0, new_seed());
let mut rng = rand::thread_rng();
let node_size = 32;
let mut data = Vec::new();
for _ in 0..10 {
let elt: H::Domain = H::Domain::random(&mut rng);
let bytes = H::Domain::into_bytes(&elt);
data.write(&bytes).unwrap();
}
let tree = g.merkle_tree(data.as_slice()).unwrap();
for i in 0..10 {
let proof = tree.gen_proof(i).unwrap();
assert!(proof.validate::<H::Function>());
let len = proof.lemma().len();
let mp = MerkleProof::<H>::new_from_proof(&proof);
assert_eq!(mp.len(), len);
assert!(mp.validate(i), "failed to validate valid merkle path");
let data_slice = &data[i * node_size..(i + 1) * node_size].to_vec();
assert!(
mp.validate_data(data_slice),
"failed to validate valid data"
);
}
}
#[test]
fn merklepath_pedersen() {
merklepath::<PedersenHasher>();
}
#[test]
fn merklepath_sha256() {
merklepath::<Sha256Hasher>();
}
#[test]
fn merklepath_blake2s() {
merklepath::<Blake2sHasher>();
}
}
|
{
&self.value
}
|
ButtonsResignGame.js
|
import React from 'react';
import Button from '@material-ui/core/Button';
import { useSelector } from 'react-redux';
const ButtonResignGame = () => {
const state = useSelector(state => state);
if (state.mode.playfriend.accepted) {
|
</div>
);
}
return null;
}
export default ButtonResignGame;
|
return (
<div>
<Button variant="outlined">Resign</Button>
|
models.py
|
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, LSTM, Flatten, Embedding, Merge
from keras.layers.convolutional import Convolution2D, MaxPooling2D, ZeroPadding2D
import h5py
def Word2VecModel(embedding_matrix, num_words, embedding_dim, seq_length, dropout_rate):
print "Creating text model..."
model = Sequential()
model.add(Embedding(num_words, embedding_dim,
weights=[embedding_matrix], input_length=seq_length, trainable=False))
model.add(LSTM(units=512, return_sequences=True, input_shape=(seq_length, embedding_dim)))
model.add(Dropout(dropout_rate))
model.add(LSTM(units=512, return_sequences=False))
model.add(Dropout(dropout_rate))
model.add(Dense(1024, activation='tanh'))
return model
def img_model(dropout_rate):
|
def vqa_model(embedding_matrix, num_words, embedding_dim, seq_length, dropout_rate, num_classes):
vgg_model = img_model(dropout_rate)
lstm_model = Word2VecModel(embedding_matrix, num_words, embedding_dim, seq_length, dropout_rate)
print "Merging final model..."
fc_model = Sequential()
fc_model.add(Merge([vgg_model, lstm_model], mode='mul'))
fc_model.add(Dropout(dropout_rate))
fc_model.add(Dense(1000, activation='tanh'))
fc_model.add(Dropout(dropout_rate))
fc_model.add(Dense(num_classes, activation='softmax'))
fc_model.compile(optimizer='rmsprop', loss='categorical_crossentropy',
metrics=['accuracy'])
return fc_model
|
print "Creating image model..."
model = Sequential()
model.add(Dense(1024, input_dim=4096, activation='tanh'))
return model
|
Maximum Subarray.py
|
'''https://leetcode.com/problems/maximum-subarray/
53. Maximum Subarray
Easy
Given an integer array nums, find the contiguous subarray (containing at least one number) which has the largest sum and return its sum.
A subarray is a contiguous part of an array.
Example 1:
Input: nums = [-2,1,-3,4,-1,2,1,-5,4]
Output: 6
Explanation: [4,-1,2,1] has the largest sum = 6.
Example 2:
Input: nums = [1]
Output: 1
Example 3:
Input: nums = [5,4,-1,7,8]
Output: 23
Constraints:
1 <= nums.length <= 10^5
-10^4 <= nums[i] <= 10^4
Follow up: If you have figured out the O(n) solution,
try coding another solution using the divide and conquer approach, which is more subtle.'''
class Solution:
def maxSubArray(self, nums: List[int]) -> int:
ans = nums[0]
cur_sum = 0
for i in range(len(nums)):
if cur_sum > 0:
cur_sum += nums[i]
else:
cur_sum = nums[i]
ans = max(ans, cur_sum)
return ans
# @lc code=end
def brute_force(nums):
|
def Devided_Conquer(nums, left, right):
if left == right:
return nums[left] # if nums[left] > 0 else 0
center = (left+right) // 2
max_left = Devided_Conquer(nums, left, center)
max_right = Devided_Conquer(nums, center+1, right)
left_Sum = 0
maxLeft_Sum = nums[center]
for i in range(center-1, left-1, -1):
left_Sum += nums[i]
if left_Sum > maxLeft_Sum:
maxLeft_Sum = left_Sum
right_sum = 0
max_right_sum = nums[center+1]
for i in range(center+2, right+1):
right_sum += nums[i]
if right_sum > max_right_sum:
max_right_sum = right_sum
print("max_left:{0}, max_right:{1} ".format(maxLeft_Sum, max_right_sum))
print("left:{0}, right:{1}, mid:{2}".format(
max_left, max_right, maxLeft_Sum+max_right_sum))
return max(max_left, max_right, maxLeft_Sum+max_right_sum)
def One_Pass(nums):
max_sum = nums[0]
this_sum = nums[0]
for num in nums[1:]:
this_sum = max(num, this_sum+num)
if this_sum > max_sum:
max_sum = this_sum
return max_sum
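# Illustrative check (added; not part of the original file): for the Example 1 input
# nums = [-2, 1, -3, 4, -1, 2, 1, -5, 4], One_Pass keeps this_sum at
# -2, 1, -2, 4, 3, 5, 6, 1, 5 and max_sum settles at 6, matching the expected output.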
if __name__ == '__main__':
nums = [-2, 1, -3, 4, -1, 2, 1, -5, 4]
print(One_Pass(nums))
|
    max_sum = nums[0]
    for L in range(len(nums)):
        for R in range(L, len(nums)):
            cur_sum = 0
            for i in range(L, R + 1):
                cur_sum += nums[i]
            if cur_sum > max_sum:
                max_sum = cur_sum
    return max_sum
|
babel.config.js
|
module.exports = {
presets: [
[
'@babel/preset-env',
{
modules: 'commonjs',
targets: {
node: 8
}
}
|
],
'@babel/preset-flow'
]
};
| |
index.ts
|
import { BinaryReader } from './BinaryReader'
import { BinaryWriter } from './BinaryWriter'
import * as Caching from './Caching/index'
import { ISerializable } from './ISerializable'
import { MemoryStream } from './MemoryStream'
import { Stream } from './Stream'
// export * from './BinaryReader'
// export * from './BinaryWriter'
// // export * from'./Caching/index'
// export * from './ISerializable'
// export * from './MemoryStream'
// export * from './Stream'
// export { Caching }
export {
BinaryReader,
BinaryWriter,
Caching,
|
}
|
ISerializable,
MemoryStream,
Stream
|
warnings_models.py
|
from __future__ import annotations
from datetime import datetime
|
NEVER = datetime(9999, 1, 1)
def __init__(self, user_id: int, timestamp: datetime, mod_name: str, reason: str = "",
expiration_time: datetime = NEVER):
self.user_id = user_id
self.timestamp = timestamp
self.mod_name = mod_name
self.reason = reason
self.expiration_time = expiration_time
def __eq__(self, other):
if not isinstance(other, type(self)):
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
return not self == other
def __hash__(self):
return hash((self.user_id, self.timestamp, self.reason, self.expiration_time))
def __repr__(self):
return str(self.__dict__)
@property
def timestamp_str(self):
return self.timestamp.strftime("%b-%d-%Y %H:%M")
@property
def date_str(self):
return self.timestamp.strftime("%b-%d-%Y")
@property
def expiration_str(self):
return self.expiration_time.strftime("%b-%d-%Y %H:%M")
@property
def expiration_date_str(self):
return self.expiration_time.strftime("%b-%d-%Y")
def is_expired(self):
return self.expiration_time < datetime.now()
|
class RefWarning:
|
plugin.py
|
#!/usr/bin/env python
#
# Electrum - lightweight Bitcoin client
# Copyright (C) 2015 Thomas Voegtlin
#
# Permission is hereby granted, free of charge, to any person
# obtaining a copy of this software and associated documentation files
# (the "Software"), to deal in the Software without restriction,
# including without limitation the rights to use, copy, modify, merge,
# publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import os
import pkgutil
import importlib.util
import time
import threading
import sys
from typing import (NamedTuple, Any, Union, TYPE_CHECKING, Optional, Tuple,
Dict, Iterable, List, Sequence, Callable, TypeVar)
import concurrent
from concurrent import futures
from functools import wraps, partial
from .i18n import _
from .util import (profiler, DaemonThread, UserCancelled, ThreadJob, UserFacingException)
from . import bip32
from . import plugins
from .simple_config import SimpleConfig
from .logging import get_logger, Logger
if TYPE_CHECKING:
from .plugins.hw_wallet import HW_PluginBase, HardwareClientBase, HardwareHandlerBase
from .keystore import Hardware_KeyStore
from .wallet import Abstract_Wallet
_logger = get_logger(__name__)
plugin_loaders = {}
hook_names = set()
hooks = {}
class Plugins(DaemonThread):
LOGGING_SHORTCUT = 'p'
@profiler
def __init__(self, config: SimpleConfig, gui_name):
DaemonThread.__init__(self)
self.setName('Plugins')
self.pkgpath = os.path.dirname(plugins.__file__)
self.config = config
self.hw_wallets = {}
self.plugins = {} # type: Dict[str, BasePlugin]
self.gui_name = gui_name
self.descriptions = {}
self.device_manager = DeviceMgr(config)
self.load_plugins()
self.add_jobs(self.device_manager.thread_jobs())
self.start()
def load_plugins(self):
for loader, name, ispkg in pkgutil.iter_modules([self.pkgpath]):
full_name = f'electrum_dash.plugins.{name}'
spec = importlib.util.find_spec(full_name)
if spec is None: # pkgutil found it but importlib can't ?!
raise Exception(f"Error pre-loading {full_name}: no spec")
try:
module = importlib.util.module_from_spec(spec)
# sys.modules needs to be modified for relative imports to work
# see https://stackoverflow.com/a/50395128
sys.modules[spec.name] = module
spec.loader.exec_module(module)
except Exception as e:
raise Exception(f"Error pre-loading {full_name}: {repr(e)}") from e
d = module.__dict__
gui_good = self.gui_name in d.get('available_for', [])
if not gui_good:
continue
details = d.get('registers_wallet_type')
if details:
self.register_wallet_type(name, gui_good, details)
details = d.get('registers_keystore')
if details:
self.register_keystore(name, gui_good, details)
self.descriptions[name] = d
if not d.get('requires_wallet_type') and self.config.get('use_' + name):
try:
self.load_plugin(name)
except BaseException as e:
self.logger.exception(f"cannot initialize plugin {name}: {e}")
def get(self, name):
return self.plugins.get(name)
def count(self):
return len(self.plugins)
def load_plugin(self, name) -> 'BasePlugin':
if name in self.plugins:
return self.plugins[name]
full_name = f'electrum_dash.plugins.{name}.{self.gui_name}'
spec = importlib.util.find_spec(full_name)
if spec is None:
raise RuntimeError("%s implementation for %s plugin not found"
% (self.gui_name, name))
try:
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
plugin = module.Plugin(self, self.config, name)
except Exception as e:
raise Exception(f"Error loading {name} plugin: {repr(e)}") from e
self.add_jobs(plugin.thread_jobs())
self.plugins[name] = plugin
self.logger.info(f"loaded {name}")
return plugin
def close_plugin(self, plugin):
self.remove_jobs(plugin.thread_jobs())
def enable(self, name: str) -> 'BasePlugin':
self.config.set_key('use_' + name, True, True)
p = self.get(name)
if p:
return p
return self.load_plugin(name)
def disable(self, name: str) -> None:
self.config.set_key('use_' + name, False, True)
p = self.get(name)
if not p:
return
self.plugins.pop(name)
p.close()
self.logger.info(f"closed {name}")
def toggle(self, name: str) -> Optional['BasePlugin']:
p = self.get(name)
return self.disable(name) if p else self.enable(name)
def is_available(self, name: str, wallet: 'Abstract_Wallet') -> bool:
d = self.descriptions.get(name)
if not d:
return False
deps = d.get('requires', [])
for dep, s in deps:
try:
__import__(dep)
except ImportError as e:
self.logger.warning(f'Plugin {name} unavailable: {repr(e)}')
return False
requires = d.get('requires_wallet_type', [])
return not requires or wallet.wallet_type in requires
def get_hardware_support(self):
out = []
for name, (gui_good, details) in self.hw_wallets.items():
if gui_good:
try:
p = self.get_plugin(name)
if p.is_enabled():
out.append(HardwarePluginToScan(name=name,
description=details[2],
plugin=p,
exception=None))
except Exception as e:
self.logger.exception(f"cannot load plugin for: {name}")
out.append(HardwarePluginToScan(name=name,
description=details[2],
plugin=None,
exception=e))
return out
def register_wallet_type(self, name, gui_good, wallet_type):
from .wallet import register_wallet_type, register_constructor
self.logger.info(f"registering wallet type {(wallet_type, name)}")
def loader():
plugin = self.get_plugin(name)
register_constructor(wallet_type, plugin.wallet_class)
register_wallet_type(wallet_type)
plugin_loaders[wallet_type] = loader
def register_keystore(self, name, gui_good, details):
from .keystore import register_keystore
def dynamic_constructor(d):
return self.get_plugin(name).keystore_class(d)
if details[0] == 'hardware':
self.hw_wallets[name] = (gui_good, details)
self.logger.info(f"registering hardware {name}: {details}")
register_keystore(details[1], dynamic_constructor)
def get_plugin(self, name: str) -> 'BasePlugin':
if name not in self.plugins:
self.load_plugin(name)
return self.plugins[name]
def run(self):
while self.is_running():
time.sleep(0.1)
self.run_jobs()
self.on_stop()
def hook(func):
hook_names.add(func.__name__)
return func
def run_hook(name, *args):
results = []
f_list = hooks.get(name, [])
for p, f in f_list:
if p.is_enabled():
try:
r = f(*args)
except Exception:
_logger.exception(f"Plugin error. plugin: {p}, hook: {name}")
r = False
if r:
results.append(r)
if results:
assert len(results) == 1, results
return results[0]
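# Illustrative sketch (added; the plugin class and hook name are hypothetical): a plugin
# exposes a hook by decorating a method with @hook, and callers dispatch to every
# enabled plugin through run_hook:
#
#     class Plugin(BasePlugin):
#         @hook
#         def load_wallet(self, wallet, window):
#             ...
#
#     run_hook('load_wallet', wallet, window)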
class BasePlugin(Logger):
|
class DeviceUnpairableError(UserFacingException): pass
class HardwarePluginLibraryUnavailable(Exception): pass
class CannotAutoSelectDevice(Exception): pass
class Device(NamedTuple):
path: Union[str, bytes]
interface_number: int
id_: str
product_key: Any # when using hid, often Tuple[int, int]
usage_page: int
transport_ui_string: str
class DeviceInfo(NamedTuple):
device: Device
label: Optional[str] = None
initialized: Optional[bool] = None
exception: Optional[Exception] = None
plugin_name: Optional[str] = None # manufacturer, e.g. "trezor"
soft_device_id: Optional[str] = None # if available, used to distinguish same-type hw devices
model_name: Optional[str] = None # e.g. "Ledger Nano S"
class HardwarePluginToScan(NamedTuple):
name: str
description: str
plugin: Optional['HW_PluginBase']
exception: Optional[Exception]
PLACEHOLDER_HW_CLIENT_LABELS = {None, "", " "}
# hidapi is not thread-safe
# see https://github.com/signal11/hidapi/issues/205#issuecomment-527654560
# https://github.com/libusb/hidapi/issues/45
# https://github.com/signal11/hidapi/issues/45#issuecomment-4434598
# https://github.com/signal11/hidapi/pull/414#issuecomment-445164238
# It is not entirely clear to me, exactly what is safe and what isn't, when
# using multiple threads...
# Hence, we use a single thread for all device communications, including
# enumeration. Everything that uses hidapi, libusb, etc, MUST run on
# the following thread:
_hwd_comms_executor = concurrent.futures.ThreadPoolExecutor(
max_workers=1,
thread_name_prefix='hwd_comms_thread'
)
T = TypeVar('T')
def run_in_hwd_thread(func: Callable[[], T]) -> T:
if threading.current_thread().name.startswith("hwd_comms_thread"):
return func()
else:
fut = _hwd_comms_executor.submit(func)
return fut.result()
#except (concurrent.futures.CancelledError, concurrent.futures.TimeoutError) as e:
def runs_in_hwd_thread(func):
@wraps(func)
def wrapper(*args, **kwargs):
return run_in_hwd_thread(partial(func, *args, **kwargs))
return wrapper
def assert_runs_in_hwd_thread():
if not threading.current_thread().name.startswith("hwd_comms_thread"):
raise Exception("must only be called from HWD communication thread")
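# Illustrative sketch (added; `read_hypothetical_device` is made up): decorating a helper
# with @runs_in_hwd_thread forwards every call to the single-worker executor above, so
# all hidapi/libusb access stays serialized on hwd_comms_thread:
#
#     @runs_in_hwd_thread
#     def read_hypothetical_device(path):
#         ...  # always executes on hwd_comms_thread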
class DeviceMgr(ThreadJob):
'''Manages hardware clients. A client communicates over a hardware
channel with the device.
In addition to tracking device HID IDs, the device manager tracks
hardware wallets and manages wallet pairing. A HID ID may be
paired with a wallet when it is confirmed that the hardware device
matches the wallet, i.e. they have the same master public key. A
HID ID can be unpaired if e.g. it is wiped.
Because of hotplugging, a wallet must request its client
dynamically each time it is required, rather than caching it
itself.
The device manager is shared across plugins, so just one place
does hardware scans when needed. By tracking HID IDs, if a device
is plugged into a different port the wallet is automatically
re-paired.
Wallets are informed on connect / disconnect events. It must
implement connected(), disconnected() callbacks. Being connected
implies a pairing. Callbacks can happen in any thread context,
and we do them without holding the lock.
Confusingly, the HID ID (serial number) reported by the HID system
doesn't match the device ID reported by the device itself. We use
the HID IDs.
This plugin is thread-safe. Currently only devices supported by
hidapi are implemented.'''
def __init__(self, config: SimpleConfig):
ThreadJob.__init__(self)
# Keyed by xpub. The value is the device id
# has been paired, and None otherwise. Needs self.lock.
self.xpub_ids = {} # type: Dict[str, str]
# A list of clients. The key is the client, the value is
# a (path, id_) pair. Needs self.lock.
self.clients = {} # type: Dict[HardwareClientBase, Tuple[Union[str, bytes], str]]
# What we recognise. (vendor_id, product_id) -> Plugin
self._recognised_hardware = {} # type: Dict[Tuple[int, int], HW_PluginBase]
self._recognised_vendor = {} # type: Dict[int, HW_PluginBase] # vendor_id -> Plugin
# Custom enumerate functions for devices we don't know about.
self._enumerate_func = set() # Needs self.lock.
self.lock = threading.RLock()
self.config = config
def thread_jobs(self):
# Thread job to handle device timeouts
return [self]
def run(self):
'''Handle device timeouts. Runs in the context of the Plugins
thread.'''
with self.lock:
clients = list(self.clients.keys())
cutoff = time.time() - self.config.get_session_timeout()
for client in clients:
client.timeout(cutoff)
def register_devices(self, device_pairs, *, plugin: 'HW_PluginBase'):
for pair in device_pairs:
self._recognised_hardware[pair] = plugin
def register_vendor_ids(self, vendor_ids: Iterable[int], *, plugin: 'HW_PluginBase'):
for vendor_id in vendor_ids:
self._recognised_vendor[vendor_id] = plugin
def register_enumerate_func(self, func):
with self.lock:
self._enumerate_func.add(func)
@runs_in_hwd_thread
def create_client(self, device: 'Device', handler: Optional['HardwareHandlerBase'],
plugin: 'HW_PluginBase') -> Optional['HardwareClientBase']:
# Get from cache first
client = self._client_by_id(device.id_)
if client:
return client
client = plugin.create_client(device, handler)
if client:
self.logger.info(f"Registering {client}")
with self.lock:
self.clients[client] = (device.path, device.id_)
return client
def xpub_id(self, xpub):
with self.lock:
return self.xpub_ids.get(xpub)
def xpub_by_id(self, id_):
with self.lock:
for xpub, xpub_id in self.xpub_ids.items():
if xpub_id == id_:
return xpub
return None
def unpair_xpub(self, xpub):
with self.lock:
if xpub not in self.xpub_ids:
return
_id = self.xpub_ids.pop(xpub)
self._close_client(_id)
def unpair_id(self, id_):
xpub = self.xpub_by_id(id_)
if xpub:
self.unpair_xpub(xpub)
else:
self._close_client(id_)
def _close_client(self, id_):
with self.lock:
client = self._client_by_id(id_)
self.clients.pop(client, None)
if client:
client.close()
def pair_xpub(self, xpub, id_):
with self.lock:
self.xpub_ids[xpub] = id_
def _client_by_id(self, id_) -> Optional['HardwareClientBase']:
with self.lock:
for client, (path, client_id) in self.clients.items():
if client_id == id_:
return client
return None
def client_by_id(self, id_, *, scan_now: bool = True) -> Optional['HardwareClientBase']:
'''Returns a client for the device ID if one is registered. If
a device is wiped or in bootloader mode pairing is impossible;
in such cases we communicate by device ID and not wallet.'''
if scan_now:
self.scan_devices()
return self._client_by_id(id_)
@runs_in_hwd_thread
def client_for_keystore(self, plugin: 'HW_PluginBase', handler: Optional['HardwareHandlerBase'],
keystore: 'Hardware_KeyStore',
force_pair: bool, *,
devices: Sequence['Device'] = None,
allow_user_interaction: bool = True) -> Optional['HardwareClientBase']:
self.logger.info("getting client for keystore")
if handler is None:
raise Exception(_("Handler not found for") + ' ' + plugin.name + '\n' + _("A library is probably missing."))
handler.update_status(False)
if devices is None:
devices = self.scan_devices()
xpub = keystore.xpub
derivation = keystore.get_derivation_prefix()
assert derivation is not None
client = self.client_by_xpub(plugin, xpub, handler, devices)
if client is None and force_pair:
try:
info = self.select_device(plugin, handler, keystore, devices,
allow_user_interaction=allow_user_interaction)
except CannotAutoSelectDevice:
pass
else:
client = self.force_pair_xpub(plugin, handler, info, xpub, derivation)
if client:
handler.update_status(True)
if client:
# note: if select_device was called, we might also update label etc here:
keystore.opportunistically_fill_in_missing_info_from_device(client)
self.logger.info("end client for keystore")
return client
def client_by_xpub(self, plugin: 'HW_PluginBase', xpub, handler: 'HardwareHandlerBase',
devices: Sequence['Device']) -> Optional['HardwareClientBase']:
_id = self.xpub_id(xpub)
client = self._client_by_id(_id)
if client:
# An unpaired client might have another wallet's handler
# from a prior scan. Replace to fix dialog parenting.
client.handler = handler
return client
for device in devices:
if device.id_ == _id:
return self.create_client(device, handler, plugin)
def force_pair_xpub(self, plugin: 'HW_PluginBase', handler: 'HardwareHandlerBase',
info: 'DeviceInfo', xpub, derivation) -> Optional['HardwareClientBase']:
# The wallet has not been previously paired, so let the user
# choose an unpaired device and compare its first address.
xtype = bip32.xpub_type(xpub)
client = self._client_by_id(info.device.id_)
if client and client.is_pairable():
# See comment above for same code
client.handler = handler
# This will trigger a PIN/passphrase entry request
try:
client_xpub = client.get_xpub(derivation, xtype)
except (UserCancelled, RuntimeError):
# Bad / cancelled PIN / passphrase
client_xpub = None
if client_xpub == xpub:
self.pair_xpub(xpub, info.device.id_)
return client
# The user entered a wrong PIN or passphrase, or cancelled input,
# or the device is not pairable
raise DeviceUnpairableError(
_('Dash Electrum cannot pair with your {}.\n\n'
'Before you request Dash coins to be sent to addresses in this '
'wallet, ensure you can pair with your device, or that you have '
'its seed (and passphrase, if any). Otherwise all coins you '
'receive will be unspendable.').format(plugin.device))
def unpaired_device_infos(self, handler: Optional['HardwareHandlerBase'], plugin: 'HW_PluginBase',
devices: Sequence['Device'] = None,
include_failing_clients=False) -> List['DeviceInfo']:
'''Returns a list of DeviceInfo objects: one for each connected,
unpaired device accepted by the plugin.'''
if not plugin.libraries_available:
message = plugin.get_library_not_available_message()
raise HardwarePluginLibraryUnavailable(message)
if devices is None:
devices = self.scan_devices()
devices = [dev for dev in devices if not self.xpub_by_id(dev.id_)]
infos = []
for device in devices:
if not plugin.can_recognize_device(device):
continue
try:
client = self.create_client(device, handler, plugin)
label = client.label()
is_initialized = client.is_initialized()
soft_device_id = client.get_soft_device_id()
model_name = client.device_model_name()
except Exception as e:
self.logger.error(f'failed to create client for {plugin.name} at {device.path}: {repr(e)}')
if include_failing_clients:
infos.append(DeviceInfo(device=device, exception=e, plugin_name=plugin.name))
continue
if not client:
continue
infos.append(DeviceInfo(device=device,
label=label,
initialized=is_initialized,
plugin_name=plugin.name,
soft_device_id=soft_device_id,
model_name=model_name))
return infos
def select_device(self, plugin: 'HW_PluginBase', handler: 'HardwareHandlerBase',
keystore: 'Hardware_KeyStore', devices: Sequence['Device'] = None,
*, allow_user_interaction: bool = True) -> 'DeviceInfo':
"""Select the device to use for keystore."""
# ideally this should not be called from the GUI thread...
# assert handler.get_gui_thread() != threading.current_thread(), 'must not be called from GUI thread'
while True:
infos = self.unpaired_device_infos(handler, plugin, devices)
if infos:
break
if not allow_user_interaction:
raise CannotAutoSelectDevice()
msg = _('Please insert your {}').format(plugin.device)
if keystore.label:
msg += ' ({})'.format(keystore.label)
msg += '. {}\n\n{}'.format(
_('Verify the cable is connected and that '
'no other application is using it.'),
_('Try to connect again?')
)
if not handler.yes_no_question(msg):
raise UserCancelled()
devices = None
# select device automatically. (but only if we have reasonable expectation it is the correct one)
# method 1: select device by id
if keystore.soft_device_id:
for info in infos:
if info.soft_device_id == keystore.soft_device_id:
return info
# method 2: select device by label
# but only if not a placeholder label and only if there is no collision
device_labels = [info.label for info in infos]
if (keystore.label not in PLACEHOLDER_HW_CLIENT_LABELS
and device_labels.count(keystore.label) == 1):
for info in infos:
if info.label == keystore.label:
return info
# method 3: if there is only one device connected, and we don't have useful label/soft_device_id
# saved for keystore anyway, select it
if (len(infos) == 1
and keystore.label in PLACEHOLDER_HW_CLIENT_LABELS
and keystore.soft_device_id is None):
return infos[0]
if not allow_user_interaction:
raise CannotAutoSelectDevice()
# ask user to select device manually
msg = _("Please select which {} device to use:").format(plugin.device)
descriptions = ["{label} ({maybe_model}{init}, {transport})"
.format(label=info.label or _("An unnamed {}").format(info.plugin_name),
init=(_("initialized") if info.initialized else _("wiped")),
transport=info.device.transport_ui_string,
maybe_model=f"{info.model_name}, " if info.model_name else "")
for info in infos]
c = handler.query_choice(msg, descriptions)
if c is None:
raise UserCancelled()
info = infos[c]
# note: updated label/soft_device_id will be saved after pairing succeeds
return info
@runs_in_hwd_thread
def _scan_devices_with_hid(self) -> List['Device']:
try:
import hid
except ImportError:
return []
devices = []
for d in hid.enumerate(0, 0):
vendor_id = d['vendor_id']
product_key = (vendor_id, d['product_id'])
plugin = None
if product_key in self._recognised_hardware:
plugin = self._recognised_hardware[product_key]
elif vendor_id in self._recognised_vendor:
plugin = self._recognised_vendor[vendor_id]
if plugin:
device = plugin.create_device_from_hid_enumeration(d, product_key=product_key)
if device:
devices.append(device)
return devices
@runs_in_hwd_thread
@profiler
def scan_devices(self) -> Sequence['Device']:
self.logger.info("scanning devices...")
# First see what's connected that we know about
devices = self._scan_devices_with_hid()
# Let plugin handlers enumerate devices we don't know about
with self.lock:
enumerate_funcs = list(self._enumerate_func)
for f in enumerate_funcs:
try:
new_devices = f()
except BaseException as e:
self.logger.error('custom device enum failed. func {}, error {}'
.format(str(f), repr(e)))
else:
devices.extend(new_devices)
# find out what was disconnected
pairs = [(dev.path, dev.id_) for dev in devices]
disconnected_clients = []
with self.lock:
connected = {}
for client, pair in self.clients.items():
if pair in pairs and client.has_usable_connection_with_device():
connected[client] = pair
else:
disconnected_clients.append((client, pair[1]))
self.clients = connected
# Unpair disconnected devices
for client, id_ in disconnected_clients:
self.unpair_id(id_)
if client.handler:
client.handler.update_status(False)
return devices
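# Hypothetical usage sketch (not part of the original module): the typical flow a
# hardware wallet plugin follows against the device manager class above when it
# needs a client. `devmgr` is assumed to be an instance of that class; `plugin`,
# `handler` and `keystore` are assumed to come from the calling code.
def _example_get_client(devmgr, plugin: 'HW_PluginBase',
                        handler: 'HardwareHandlerBase',
                        keystore: 'Hardware_KeyStore') -> Optional['HardwareClientBase']:
    devices = devmgr.scan_devices()
    return devmgr.client_for_keystore(plugin, handler, keystore,
                                      force_pair=True, devices=devices,
                                      allow_user_interaction=True)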
|
def __init__(self, parent, config, name):
self.parent = parent # type: Plugins # The plugins object
self.name = name
self.config = config
self.wallet = None
Logger.__init__(self)
# add self to hooks
for k in dir(self):
if k in hook_names:
l = hooks.get(k, [])
l.append((self, getattr(self, k)))
hooks[k] = l
def __str__(self):
return self.name
def close(self):
# remove self from hooks
for attr_name in dir(self):
if attr_name in hook_names:
# found attribute in self that is also the name of a hook
l = hooks.get(attr_name, [])
try:
l.remove((self, getattr(self, attr_name)))
except ValueError:
# maybe attr name just collided with hook name and was not hook
continue
hooks[attr_name] = l
self.parent.close_plugin(self)
self.on_close()
def on_close(self):
pass
def requires_settings(self) -> bool:
return False
def thread_jobs(self):
return []
def is_enabled(self):
return self.is_available() and self.config.get('use_'+self.name) is True
def is_available(self):
return True
def can_user_disable(self):
return True
def settings_widget(self, window):
raise NotImplementedError()
def settings_dialog(self, window):
raise NotImplementedError()
|
DrawSVGPlugin.min.js
|
/*!
* VERSION: 0.2.0
* DATE: 2018-08-17
* UPDATES AND DOCS AT: http://greensock.com
*
* @license Copyright (c) 2008-2018, GreenSock. All rights reserved.
* DrawSVGPlugin is a Club GreenSock membership benefit; You must have a valid membership to use
* this code without violating the terms of use. Visit https://greensock.com/club/ to sign up or get more details.
* This work is subject to the software agreement that was issued with your membership.
*
* @author: Jack Doyle, jack@greensock.com
*/
var _gsScope="undefined"!=typeof module&&module.exports&&"undefined"!=typeof global?global:this||window;(_gsScope._gsQueue||(_gsScope._gsQueue=[])).push(function(){"use strict";var e,t=_gsScope.document,p=t.defaultView?t.defaultView.getComputedStyle:function(){},l=/(?:(-|-=|\+=)?\d*\.?\d*(?:e[\-+]?\d+)?)[0-9]/gi,_=-1!==((_gsScope.navigator||{}).userAgent||"").indexOf("Edge"),g={rect:["width","height"],circle:["r","r"],ellipse:["rx","ry"],line:["x2","y2"]},C="DrawSVGPlugin",m=String.fromCharCode(103,114,101,101,110,115,111,99,107,46,99,111,109),S=String.fromCharCode(47,114,101,113,117,105,114,101,115,45,109,101,109,98,101,114,115,104,105,112,47),w=function(e){for(var t=-1!==(window?'codepen.io':"").indexOf(String.fromCharCode(103,114,101,101,110,115,111,99,107))&&-1!==e.indexOf(String.fromCharCode(108,111,99,97,108,104,111,115,116)),r=[m,String.fromCharCode(99,111,100,101,112,101,110,46,105,111),String.fromCharCode(99,111,100,101,112,101,110,46,112,108,117,109,98,105,110,103),String.fromCharCode(99,111,100,101,112,101,110,46,100,101,118),String.fromCharCode(99,115,115,45,116,114,105,99,107,115,46,99,111,109),String.fromCharCode(99,100,112,110,46,105,111),String.fromCharCode(103,97,110,110,111,110,46,116,118),String.fromCharCode(99,111,100,101,99,97,110,121,111,110,46,110,101,116),String.fromCharCode(116,104,101,109,101,102,111,114,101,115,116,46,110,101,116),String.fromCharCode(99,101,114,101,98,114,97,120,46,99,111,46,117,107),String.fromCharCode(116,121,109,112,97,110,117,115,46,110,101,116),String.fromCharCode(116,119,101,101,110,109,97,120,46,99,111,109),String.fromCharCode(116,119,101,101,110,108,105,116,101,46,99,111,109),String.fromCharCode(112,108,110,107,114,46,99,111),String.fromCharCode(104,111,116,106,97,114,46,99,111,109),String.fromCharCode(119,101,98,112,97,99,107,98,105,110,46,99,111,109),String.fromCharCode(97,114,99,104,105,118,101,46,111,114,103),String.fromCharCode(99,111,100,101,115,97,110,100,98,111,120,46,105,111),String.fromCharCode(115,116,97,99,107,98,108,105,116,122,46,99,111,109),String.fromCharCode(99,111,100,105,101,114,46,105,111),String.fromCharCode(106,115,102,105,100,100,108,101,46,110,101,116)],i=r.length;-1<--i;)if(-1!==e.indexOf(r[i]))return!0;return 
t&&window&&window.console&&console.log(String.fromCharCode(87,65,82,78,73,78,71,58,32,97,32,115,112,101,99,105,97,108,32,118,101,114,115,105,111,110,32,111,102,32)+C+String.fromCharCode(32,105,115,32,114,117,110,110,105,110,103,32,108,111,99,97,108,108,121,44,32,98,117,116,32,105,116,32,119,105,108,108,32,110,111,116,32,119,111,114,107,32,111,110,32,97,32,108,105,118,101,32,100,111,109,97,105,110,32,98,101,99,97,117,115,101,32,105,116,32,105,115,32,97,32,109,101,109,98,101,114,115,104,105,112,32,98,101,110,101,102,105,116,32,111,102,32,67,108,117,98,32,71,114,101,101,110,83,111,99,107,46,32,80,108,101,97,115,101,32,115,105,103,110,32,117,112,32,97,116,32,104,116,116,112,58,47,47,103,114,101,101,110,115,111,99,107,46,99,111,109,47,99,108,117,98,47,32,97,110,100,32,116,104,101,110,32,100,111,119,110,108,111,97,100,32,116,104,101,32,39,114,101,97,108,39,32,118,101,114,115,105,111,110,32,102,114,111,109,32,121,111,117,114,32,71,114,101,101,110,83,111,99,107,32,97,99,99,111,117,110,116,32,119,104,105,99,104,32,104,97,115,32,110,111,32,115,117,99,104,32,108,105,109,105,116,97,116,105,111,110,115,46,32,84,104,101,32,102,105,108,101,32,121,111,117,39,114,101,32,117,115,105,110,103,32,119,97,115,32,108,105,107,101,108,121,32,100,111,119,110,108,111,97,100,101,100,32,102,114,111,109,32,101,108,115,101,119,104,101,114,101,32,111,110,32,116,104,101,32,119,101,98,32,97,110,100,32,105,115,32,114,101,115,116,114,105,99,116,101,100,32,116,111,32,108,111,99,97,108,32,117,115,101,32,111,114,32,111,110,32,115,105,116,101,115,32,108,105,107,101,32,99,111,100,101,112,101,110,46,105,111,46)),t}(window?'codepen.io':"");function u(e,t,r,i,o,s){return r=(parseFloat(r||0)-parseFloat(e||0))*o,i=(parseFloat(i||0)-parseFloat(t||0))*s,Math.sqrt(r*r+i*i)}function c(e){return"string"!=typeof e&&e.nodeType||(e=_gsScope.TweenLite.selector(e)).length&&(e=e[0]),e}function y(e){if(!e)return 0;var t,r,i,o,s,n,a,h=(e=c(e)).tagName.toLowerCase(),f=1,d=1;"non-scaling-stroke"===e.getAttribute("vector-effect")&&(d=e.getScreenCTM(),f=Math.sqrt(d.a*d.a+d.b*d.b),d=Math.sqrt(d.d*d.d+d.c*d.c));try{r=e.getBBox()}catch(e){console.log("Error: Some browsers like Firefox won't report measurements of invisible elements (like display:none or masks inside defs).")}if(r&&(r.width||r.height)||!g[h]||(r={width:parseFloat(e.getAttribute(g[h][0])),height:parseFloat(e.getAttribute(g[h][1]))},"rect"!==h&&"line"!==h&&(r.width*=2,r.height*=2),"line"===h&&(r.x=parseFloat(e.getAttribute("x1")),r.y=parseFloat(e.getAttribute("y1")),r.width=Math.abs(r.width-r.x),r.height=Math.abs(r.height-r.y))),"path"===h)o=e.style.strokeDasharray,e.style.strokeDasharray="none",t=e.getTotalLength()||0,f!==d&&console.log("Warning: <path> length cannot be measured accurately when vector-effect is non-scaling-stroke and the element isn't proportionally scaled."),t*=(f+d)/2,e.style.strokeDasharray=o;else if("rect"===h)t=2*r.width*f+2*r.height*d;else if("line"===h)t=u(r.x,r.y,r.x+r.width,r.y+r.height,f,d);else if("polyline"===h||"polygon"===h)for(i=e.getAttribute("points").match(l)||[],"polygon"===h&&i.push(i[0],i[1]),t=0,s=2;s<i.length;s+=2)t+=u(i[s-2],i[s-1],i[s],i[s+1],f,d)||0;else"circle"!==h&&"ellipse"!==h||(n=r.width/2*f,a=r.height/2*d,t=Math.PI*(3*(n+a)-Math.sqrt((3*n+a)*(n+3*a))));return t||0}function
|
(e,t){if(!e)return[0,0];e=c(e),t=t||y(e)+1;var r=p(e),i=r.strokeDasharray||"",o=parseFloat(r.strokeDashoffset),s=i.indexOf(",");return s<0&&(s=i.indexOf(" ")),t<(i=s<0?t:parseFloat(i.substr(0,s))||1e-5)&&(i=t),[Math.max(0,-o),Math.max(0,i-o)]}(e=_gsScope._gsDefine.plugin({propName:"drawSVG",API:2,version:"0.2.0",global:!0,overwriteProps:["drawSVG"],init:function(e,t,r,i){if(!e.getBBox)return!1;if(!w)return window.location.href="http://"+m+S+"?plugin="+C+"&source=codepen",!1;var o,s,n,a,h,f,d,l,g,u,c=y(e)+1;return this._style=e.style,this._target=e,"function"==typeof t&&(t=t(i,e)),!0===t||"true"===t?t="0 100%":t?-1===(t+"").indexOf(" ")&&(t="0 "+t):t="0 0",o=x(e,c),h=t,f=c,d=o[0],-1===(u=h.indexOf(" "))?(l=void 0!==d?d+"":h,g=h):(l=h.substr(0,u),g=h.substr(u+1)),l=-1!==l.indexOf("%")?parseFloat(l)/100*f:parseFloat(l),s=(g=-1!==g.indexOf("%")?parseFloat(g)/100*f:parseFloat(g))<l?[g,l]:[l,g],this._length=c+10,0===o[0]&&0===s[0]?(n=Math.max(1e-5,s[1]-c),this._dash=c+n,this._offset=c-o[1]+n,this._offsetPT=this._addTween(this,"_offset",this._offset,c-s[1]+n,"drawSVG")):(this._dash=o[1]-o[0]||1e-6,this._offset=-o[0],this._dashPT=this._addTween(this,"_dash",this._dash,s[1]-s[0]||1e-5,"drawSVG"),this._offsetPT=this._addTween(this,"_offset",this._offset,-s[0],"drawSVG")),_&&(a=p(e)).strokeLinecap!==a.strokeLinejoin&&(s=parseFloat(a.strokeMiterlimit),this._addTween(e.style,"strokeMiterlimit",s,s+1e-4,"strokeMiterlimit")),this._live="non-scaling-stroke"===e.getAttribute("vector-effect")||-1!==(t+"").indexOf("live"),!0},set:function(e){if(this._firstPT){if(this._live){var t,r=y(this._target)+11;r!==this._length&&(t=r/this._length,this._length=r,this._offsetPT.s*=t,this._offsetPT.c*=t,this._dashPT?(this._dashPT.s*=t,this._dashPT.c*=t):this._dash*=t)}this._super.setRatio.call(this,e),this._style.strokeDashoffset=this._offset,this._style.strokeDasharray=1===e||0===e?this._offset<.001&&this._length-this._dash<=10?"none":this._offset===this._dash?"0px, 999999px":this._dash+"px,"+this._length+"px":this._dash+"px,"+this._length+"px"}}})).getLength=y,e.getPosition=x}),_gsScope._gsDefine&&_gsScope._gsQueue.pop()(),function(e){"use strict";var t=function(){return(_gsScope.GreenSockGlobals||_gsScope).DrawSVGPlugin};"undefined"!=typeof module&&module.exports?(require("../TweenLite.js"),module.exports=t()):"function"==typeof define&&define.amd&&define(["TweenLite"],t)}();
|
x
|
flash.go
|
package engine
// Copyright (c) 2018 Bhojpur Consulting Private Limited, India. All rights reserved.
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
import (
"fmt"
"net/url"
"strings"
)
// FlashData is a tool to maintain data across requests.
type FlashData struct {
Data map[string]string
}
// NewFlash returns a new empty FlashData struct.
func NewFlash() *FlashData {
return &FlashData{
Data: make(map[string]string),
}
}
// Set writes a message to flash under the given key.
func (fd *FlashData) Set(key string, msg string, args ...interface{}) {
if len(args) == 0 {
fd.Data[key] = msg
} else {
fd.Data[key] = fmt.Sprintf(msg, args...)
}
}
// Success writes success message to flash.
func (fd *FlashData) Success(msg string, args ...interface{}) {
if len(args) == 0 {
fd.Data["success"] = msg
} else {
fd.Data["success"] = fmt.Sprintf(msg, args...)
}
}
// Notice writes notice message to flash.
func (fd *FlashData) Notice(msg string, args ...interface{}) {
if len(args) == 0 {
fd.Data["notice"] = msg
} else {
fd.Data["notice"] = fmt.Sprintf(msg, args...)
}
}
// Warning writes warning message to flash.
func (fd *FlashData) Warning(msg string, args ...interface{}) {
if len(args) == 0 {
fd.Data["warning"] = msg
} else {
fd.Data["warning"] = fmt.Sprintf(msg, args...)
}
}
// Error writes error message to flash.
func (fd *FlashData) Error(msg string, args ...interface{}) {
if len(args) == 0 {
fd.Data["error"] = msg
} else {
fd.Data["error"] = fmt.Sprintf(msg, args...)
}
}
// Store saves the flash data.
// The data is encoded and saved in a cookie.
func (fd *FlashData) Store(c *Controller) {
c.Data["flash"] = fd.Data
var flashValue string
for key, value := range fd.Data {
flashValue += "\x00" + key + "\x23" + BConfig.WebConfig.FlashSeparator + "\x23" + value + "\x00"
}
c.Ctx.SetCookie(BConfig.WebConfig.FlashName, url.QueryEscape(flashValue), 0, "/")
}
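// exampleFlashRoundTrip is a hypothetical sketch (not part of the original file)
// showing how a value encoded by Store decodes back into key/value pairs, which
// is what ReadFromRequest below does. The separator string `sep` stands in for
// BConfig.WebConfig.FlashSeparator.
func exampleFlashRoundTrip(sep string) map[string]string {
	// Encode a single flash entry exactly the way Store does.
	raw := "\x00" + "notice" + "\x23" + sep + "\x23" + "profile saved" + "\x00"
	escaped := url.QueryEscape(raw) // this is what ends up in the cookie
	// Decode it the way ReadFromRequest does.
	decoded, _ := url.QueryUnescape(escaped)
	out := make(map[string]string)
	for _, pair := range strings.Split(decoded, "\x00") {
		if kv := strings.Split(pair, "\x23"+sep+"\x23"); len(kv) == 2 {
			out[kv[0]] = kv[1]
		}
	}
	return out
}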
// ReadFromRequest parses flash data from encoded values in the cookie.
func ReadFromRequest(c *Controller) *FlashData
|
{
flash := NewFlash()
if cookie, err := c.Ctx.Request.Cookie(BConfig.WebConfig.FlashName); err == nil {
v, _ := url.QueryUnescape(cookie.Value)
vals := strings.Split(v, "\x00")
for _, v := range vals {
if len(v) > 0 {
kv := strings.Split(v, "\x23"+BConfig.WebConfig.FlashSeparator+"\x23")
if len(kv) == 2 {
flash.Data[kv[0]] = kv[1]
}
}
}
// read one time then delete it
c.Ctx.SetCookie(BConfig.WebConfig.FlashName, "", -1, "/")
}
c.Data["flash"] = flash.Data
return flash
}
|
|
mixins.py
|
"""
Basic building blocks for generic class based views.
We don't bind behaviour to http method handlers yet,
which allows mixin classes to be composed in interesting ways.
"""
from __future__ import unicode_literals
import json
from rest_framework import status
from rest_framework.response import Response
from rest_framework.settings import api_settings
from rest_framework.utils.serdatajson import serdata2json
from rest_framework.signals import api_created, api_updated
import threading
class CreateModelMixin(object):
"""
Create a model instance.
"""
def create(self, request, *args, **kwargs):
print("request", request.META.get('CONTENT_TYPE'))
# request.data.pop("dataFile")
# request.data.pop("reportFile")
# rr = request.data.pop("reportFile")
# print("rr",rr[0:100])
print("requestdata", request.data)
serializer = self.get_serializer(data=request.data)
|
self.perform_create(serializer)
print("here3")
headers = self.get_success_headers(serializer.data)
return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers)
def perform_create(self, serializer):
serializer.save()
def get_success_headers(self, data):
try:
return {'Location': str(data[api_settings.URL_FIELD_NAME])}
except (TypeError, KeyError):
return {}
class ListModelMixin(object):
"""
List a queryset.
"""
def list(self, request, *args, **kwargs):
queryset = self.filter_queryset(self.get_queryset())
page = self.paginate_queryset(queryset)
if page is not None:
serializer = self.get_serializer(page, many=True)
return self.get_paginated_response(serializer.data)
serializer = self.get_serializer(queryset, many=True)
return Response(serializer.data)
class RetrieveModelMixin(object):
"""
Retrieve a model instance.
"""
def retrieve(self, request, *args, **kwargs):
instance = self.get_object()
serializer = self.get_serializer(instance)
return Response(serializer.data)
class UpdateModelMixin(object):
"""
Update a model instance.
"""
def update(self, request, *args, **kwargs):
partial = kwargs.pop('partial', False)
instance = self.get_object()
serializer = self.get_serializer(instance, data=request.data, partial=partial)
serializer.is_valid(raise_exception=True)
ser = serializer.__class__(instance)
try:
import threading
import time
from track_actions.requestMiddleware import RequestMiddleware
current_request = RequestMiddleware.get_request_data()[1]
class SaveHisThread(threading.Thread):
def run(self):
print "start.... %s" % (self.getName(),)
old_data = serdata2json(ser.data)
print("dangqianuser",current_request.user)
api_updated.send(sender=serializer, current_request=current_request, old_data=old_data,
new_data=request.data, instance=instance)
print "end.... %s" % (self.getName(),)
savehistory = SaveHisThread()
savehistory.start()
except Exception as e:
print(e)
pass
self.perform_update(serializer)
if getattr(instance, '_prefetched_objects_cache', None):
# If 'prefetch_related' has been applied to a queryset, we need to
# forcibly invalidate the prefetch cache on the instance.
instance._prefetched_objects_cache = {}
return Response(serializer.data)
def perform_update(self, serializer):
serializer.save()
def partial_update(self, request, *args, **kwargs):
kwargs['partial'] = True
return self.update(request, *args, **kwargs)
class DestroyModelMixin(object):
"""
Destroy a model instance.
"""
def destroy(self, request, *args, **kwargs):
instance = self.get_object()
self.perform_destroy(instance)
return Response(status=status.HTTP_204_NO_CONTENT)
def perform_destroy(self, instance):
instance.delete()
|
print("here")
serializer.is_valid(raise_exception=True)
print("here2")
|
task.go
|
package model
import (
"database/sql"
"time"
"github.com/jmoiron/sqlx"
)
// Todo is a task to be managed.
type Todo struct {
ID int64 `db:"todo_id" json:"id"`
Title string `json:"title"`
Completed bool `json:"completed"`
Created *time.Time `json:"created"`
Updated *time.Time `json:"updated"`
}
func TodosAll(dbx *sqlx.DB) (todos []Todo, err error) {
if err := dbx.Select(&todos, "select * from todos"); err != nil {
return nil, err
}
return todos, nil
}
func TodoOne(dbx *sqlx.DB, id int64) (*Todo, error) {
var todo Todo
if err := dbx.Get(&todo, `
select * from todos where todo_id = ?
`, id); err != nil {
return nil, err
}
return &todo, nil
}
// TodosToggleAll toggles the completed status of all todos.
func TodosToggleAll(tx *sqlx.Tx, checked bool) (sql.Result, error) {
stmt, err := tx.Prepare(`
update todos set completed = ?
`)
if err != nil {
return nil, err
}
defer stmt.Close()
return stmt.Exec(checked)
|
func (t *Todo) Update(tx *sqlx.Tx) (sql.Result, error) {
stmt, err := tx.Prepare(`
update todos set title = ? where todo_id = ?
`)
if err != nil {
return nil, err
}
defer stmt.Close()
return stmt.Exec(t.Title, t.ID)
}
func (t *Todo) Insert(tx *sqlx.Tx) (sql.Result, error) {
stmt, err := tx.Prepare(`
insert into todos (title, completed)
values(?, ?)
`)
if err != nil {
return nil, err
}
defer stmt.Close()
return stmt.Exec(t.Title, t.Completed)
}
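// exampleAddTodo is a hypothetical sketch (not part of the original file) showing
// how the model helpers above are meant to be combined inside a transaction.
func exampleAddTodo(dbx *sqlx.DB, title string) error {
	tx, err := dbx.Beginx()
	if err != nil {
		return err
	}
	t := &Todo{Title: title, Completed: false}
	if _, err := t.Insert(tx); err != nil {
		tx.Rollback()
		return err
	}
	return tx.Commit()
}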
// Toggle flips the current completed state of the given task.
func (t *Todo) Toggle(tx *sqlx.Tx) (sql.Result, error) {
stmt, err := tx.Prepare(`
update todos set completed=?
where todo_id=?
`)
if err != nil {
return nil, err
}
defer stmt.Close()
return stmt.Exec(!t.Completed, t.ID)
}
func (t *Todo) Delete(tx *sqlx.Tx) (sql.Result, error) {
stmt, err := tx.Prepare(`delete from todos where todo_id = ?`)
if err != nil {
return nil, err
}
defer stmt.Close()
return stmt.Exec(t.ID)
}
func DeleteCompletedTask(tx *sqlx.Tx) (sql.Result, error) {
stmt, err := tx.Prepare(`delete from todos where completed = 1`)
if err != nil {
return nil, err
}
defer stmt.Close()
return stmt.Exec()
}
func GetByTitle(dbx *sqlx.DB, title string) ([]Todo, error) {
var todos []Todo
rows, err := dbx.Queryx(`
SELECT * FROM todos WHERE title = ?
`, title)
if err != nil {
return nil, err
}
defer rows.Close()
for rows.Next() {
var t Todo
if err := rows.StructScan(&t); err != nil {
return nil, err
}
todos = append(todos, t)
}
return todos, nil
}
// TodosDeleteAll deletes all tasks.
// It is used for testing.
func TodosDeleteAll(tx *sqlx.Tx) (sql.Result, error) {
return tx.Exec(`truncate table todos`)
}
|
}
|
discretizer.rs
|
use crate::geometry::discmesh::{Cell, CellMesh, TetrahedralMesh, Tetrahedron};
use crate::geometry::polymesh::{PolyMesh, TriangleMesh};
// Define a set of helper functions (but split them into modules)
/// The `delaunay` module provides helper functions
pub(in crate::geometry) mod delaunay {}
pub trait DiscretizerConfig {}
pub trait Discretizer<T: PolyMesh, U: Cell, V: CellMesh<U>, W: DiscretizerConfig> {
fn discretize(polymesh: &T, config: &W) -> V;
}
pub struct TetrahedralDiscretizer {}
pub struct TetrahedralDiscretizerConfig {
pub threshold_angle: f32,
}
impl DiscretizerConfig for TetrahedralDiscretizerConfig {}
impl Discretizer<TriangleMesh, Tetrahedron, TetrahedralMesh, TetrahedralDiscretizerConfig>
for TetrahedralDiscretizer
{
fn discretize(
polymesh: &TriangleMesh,
config: &TetrahedralDiscretizerConfig,
) -> TetrahedralMesh {
todo!()
}
|
}
|
|
import_package_sbom.go
|
package anchore
import (
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"net/http"
"github.com/wagoodman/go-progress"
jsonPresenter "github.com/anchore/syft/syft/presenter/json"
"github.com/anchore/syft/syft/distro"
"github.com/anchore/syft/syft/source"
"github.com/anchore/client-go/pkg/external"
"github.com/anchore/syft/internal/log"
"github.com/anchore/syft/syft/pkg"
)
type packageSBOMImportAPI interface {
ImportImagePackages(context.Context, string, external.ImagePackageManifest) (external.ImageImportContentResponse, *http.Response, error)
}
func packageSbomModel(s source.Metadata, catalog *pkg.Catalog, d *distro.Distro) (*external.ImagePackageManifest, error) {
var buf bytes.Buffer
pres := jsonPresenter.NewPresenter(catalog, s, d)
err := pres.Present(&buf)
if err != nil {
return nil, fmt.Errorf("unable to serialize results: %w", err)
}
// the model is 1:1 with the JSON output today. As the schema changes, this will need to be converted into individual mappings.
var model external.ImagePackageManifest
if err = json.Unmarshal(buf.Bytes(), &model); err != nil {
return nil, fmt.Errorf("unable to convert JSON presenter output to import model: %w", err)
}
return &model, nil
}
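// mockImportAPI is a hypothetical sketch (not part of the original file): a stub
// implementation of packageSBOMImportAPI that can stand in for the real Anchore
// client when exercising importPackageSBOM in tests.
type mockImportAPI struct {
	digest string
}

func (m mockImportAPI) ImportImagePackages(_ context.Context, _ string, _ external.ImagePackageManifest) (external.ImageImportContentResponse, *http.Response, error) {
	return external.ImageImportContentResponse{Digest: m.digest}, &http.Response{StatusCode: 200, Body: http.NoBody}, nil
}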
func importPackageSBOM(ctx context.Context, api packageSBOMImportAPI, sessionID string, s source.Metadata, catalog *pkg.Catalog, d *distro.Distro, stage *progress.Stage) (string, error) {
log.Debug("importing package SBOM")
stage.Current = "package SBOM"
model, err := packageSbomModel(s, catalog, d)
if err != nil {
return "", fmt.Errorf("unable to create PackageSBOM model: %w", err)
}
response, httpResponse, err := api.ImportImagePackages(ctx, sessionID, *model)
|
if errors.As(err, &openAPIErr) {
log.Errorf("api response: %+v", string(openAPIErr.Body()))
}
return "", fmt.Errorf("unable to import PackageSBOM: %w", err)
}
defer httpResponse.Body.Close()
if httpResponse.StatusCode != 200 {
return "", fmt.Errorf("unable to import PackageSBOM: %s", httpResponse.Status)
}
return response.Digest, nil
}
|
if err != nil {
var openAPIErr external.GenericOpenAPIError
|
ttl.go
|
//
// ttl.go
// Copyright (C) 2018 YanMing <yming0221@gmail.com>
//
// Distributed under terms of the MIT license.
//
package tidis
import (
"math"
"time"
"github.com/pingcap/tidb/kv"
"github.com/yongman/go/log"
ti "github.com/yongman/tidis/store/tikv"
"github.com/yongman/tidis/terror"
)
// TTLExpired reports whether a ttl value has expired; a ttl of 0 never expires.
func TTLExpired(ttl int64) bool
|
// ttlChecker checks and operates on ttl for user keys.
type ttlChecker struct {
dataType byte
maxPerLoop int
interval int
tdb *Tidis
}
func NewTTLChecker(datatype byte, max, interval int, tdb *Tidis) *ttlChecker {
return &ttlChecker{
dataType: datatype,
maxPerLoop: max,
interval: interval,
tdb: tdb,
}
}
func (ch *ttlChecker) Run() {
c := time.Tick(time.Duration(ch.interval) * time.Millisecond)
flagFalse := false
for range c {
switch ch.dataType {
case TSTRING:
startKey := TMSEncoder([]byte{0}, 0)
endKey := TMSEncoder([]byte{0}, math.MaxInt64)
f := func(txn1 interface{}) (interface{}, error) {
txn, ok := txn1.(kv.Transaction)
if !ok {
return 0, terror.ErrBackendType
}
var loops int
ss, _ := ch.tdb.db.GetSnapshotFromTxn(txn).(kv.Snapshot)
// create iterator
it, err := ti.NewIterator(startKey, endKey, ss, false)
if err != nil {
return 0, err
}
defer it.Close()
loops = ch.maxPerLoop
for loops > 0 && it.Valid() {
// decode user key
key, ts, err := TMSDecoder(it.Key())
if err != nil {
return 0, err
}
if ts > uint64(time.Now().UnixNano()/1000/1000) {
// no key expired
break
}
// delete ttlmetakey ttldatakey key
tDataKey := TDSEncoder(key)
sKey := SEncoder(key)
if err = txn.Delete(it.Key()); err != nil {
return 0, err
}
if err = txn.Delete(tDataKey); err != nil {
return 0, err
}
if err = txn.Delete(sKey); err != nil {
return 0, err
}
it.Next()
loops--
}
return ch.maxPerLoop - loops, nil
}
// execute txn
v, err := ch.tdb.db.BatchInTxn(f)
if err != nil {
log.Warnf("ttl checker decode key failed, %s", err.Error())
}
if v == nil {
log.Warnf("BatchInTxn execute failed")
continue
}
log.Debugf("string ttl checker delete %d keys in this loop", v.(int))
case THASHMETA:
startKey := TMHEncoder([]byte{0}, 0)
endKey := TMHEncoder([]byte{0}, math.MaxInt64)
f := func(txn1 interface{}) (interface{}, error) {
txn, ok := txn1.(kv.Transaction)
if !ok {
return 0, terror.ErrBackendType
}
var loops int
ss, _ := ch.tdb.db.GetSnapshotFromTxn(txn).(kv.Snapshot)
it, err := ti.NewIterator(startKey, endKey, ss, false)
if err != nil {
return 0, err
}
defer it.Close()
loops = ch.maxPerLoop
for loops > 0 && it.Valid() {
// decode out user key
key, ts, err := TMHDecoder(it.Key())
if err != nil {
return 0, err
}
if ts > uint64(time.Now().UnixNano()/1000/1000) {
break
}
// delete ttl meta key
if err = txn.Delete(it.Key()); err != nil {
return 0, err
}
// delete entire user key
flag := false
if _, err = ch.tdb.HclearWithTxn(txn1, key, &flag); err != nil {
return 0, err
}
it.Next()
loops--
}
return ch.maxPerLoop - loops, nil
}
// execute txn
v, err := ch.tdb.db.BatchInTxn(f)
if err != nil {
log.Warnf("ttl checker hashkey failed, %s", err.Error())
}
if v == nil {
log.Warnf("BatchInTxn execute failed")
continue
}
log.Debugf("hash ttl checker delete %d keys in this loop", v.(int))
case TLISTMETA:
startKey := TMLEncoder([]byte{0}, 0)
endKey := TMLEncoder([]byte{0}, math.MaxInt64)
f := func(txn1 interface{}) (interface{}, error) {
txn, ok := txn1.(kv.Transaction)
if !ok {
return 0, terror.ErrBackendType
}
var loops int
ss, _ := ch.tdb.db.GetSnapshotFromTxn(txn).(kv.Snapshot)
it, err := ti.NewIterator(startKey, endKey, ss, false)
if err != nil {
return 0, err
}
defer it.Close()
loops = ch.maxPerLoop
for loops > 0 && it.Valid() {
// decode out user key
key, ts, err := TMLDecoder(it.Key())
if err != nil {
return 0, err
}
if ts > uint64(time.Now().UnixNano()/1000/1000) {
break
}
// delete ttl meta key
if err = txn.Delete(it.Key()); err != nil {
return 0, err
}
// delete entire user key
flag := false
if _, err = ch.tdb.LdelWithTxn(txn1, key, &flag); err != nil {
return 0, err
}
it.Next()
loops--
}
return ch.maxPerLoop - loops, nil
}
// execute txn
v, err := ch.tdb.db.BatchInTxn(f)
if err != nil {
log.Warnf("ttl checker hashkey failed, %s", err.Error())
}
if v == nil {
log.Warnf("BatchInTxn execute failed")
continue
}
log.Debugf("list ttl checker delete %d keys in this loop", v.(int))
case TSETMETA:
startKey := TMSetEncoder([]byte{0}, 0)
endKey := TMSetEncoder([]byte{0}, math.MaxInt64)
f := func(txn1 interface{}) (interface{}, error) {
txn, ok := txn1.(kv.Transaction)
if !ok {
return 0, terror.ErrBackendType
}
var loops int
ss, _ := ch.tdb.db.GetSnapshotFromTxn(txn).(kv.Snapshot)
it, err := ti.NewIterator(startKey, endKey, ss, false)
if err != nil {
return 0, err
}
defer it.Close()
loops = ch.maxPerLoop
for loops > 0 && it.Valid() {
// decode out user key
key, ts, err := TMSetDecoder(it.Key())
if err != nil {
return 0, err
}
if ts > uint64(time.Now().UnixNano()/1000/1000) {
break
}
// delete ttl meta key
if err = txn.Delete(it.Key()); err != nil {
return 0, err
}
// delete entire user key
if _, err = ch.tdb.SclearKeyWithTxn(txn1, key, &flagFalse, false); err != nil {
return 0, err
}
it.Next()
loops--
}
return ch.maxPerLoop - loops, nil
}
// execute txn
v, err := ch.tdb.db.BatchInTxn(f)
if err != nil {
log.Warnf("ttl checker hashkey failed, %s", err.Error())
}
if v == nil {
log.Warnf("BatchInTxn execute failed")
continue
}
log.Debugf("set ttl checker delete %d keys in this loop", v.(int))
case TZSETMETA:
startKey := TMZEncoder([]byte{0}, 0)
endKey := TMZEncoder([]byte{0}, math.MaxInt64)
f := func(txn1 interface{}) (interface{}, error) {
txn, ok := txn1.(kv.Transaction)
if !ok {
return 0, terror.ErrBackendType
}
var loops int
ss, _ := ch.tdb.db.GetSnapshotFromTxn(txn).(kv.Snapshot)
it, err := ti.NewIterator(startKey, endKey, ss, false)
if err != nil {
return 0, err
}
defer it.Close()
loops = ch.maxPerLoop
for loops > 0 && it.Valid() {
// decode out user key
key, ts, err := TMZDecoder(it.Key())
if err != nil {
return 0, err
}
if ts > uint64(time.Now().UnixNano()/1000/1000) {
break
}
// delete ttl meta key
if err = txn.Delete(it.Key()); err != nil {
return 0, err
}
// delete entire user key
if _, err = ch.tdb.ZremrangebyscoreWithTxn(txn1, key, SCORE_MIN, SCORE_MAX, &flagFalse); err != nil {
return 0, err
}
it.Next()
loops--
}
return ch.maxPerLoop - loops, nil
}
// execute txn
v, err := ch.tdb.db.BatchInTxn(f)
if err != nil {
log.Warnf("ttl checker zset key failed, %s", err.Error())
}
if v == nil {
log.Warnf("BatchInTxn execute failed")
continue
}
log.Debugf("zset ttl checker delete %d keys in this loop", v.(int))
}
}
}
|
{
if ttl == 0 {
return false
}
return ttl <= time.Now().UnixNano()/1000/1000
}
|
production.py
|
# flake8: noqa
import os
from .common import *
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/2.2/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = os.environ['DJANGO_SECRET_KEY']
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = False
|
ALLOWED_HOSTS = ['www.qingzhiyu.com']
| |
memcached.go
|
package memcached
import (
"bufio"
"bytes"
"crypto/tls"
"fmt"
"net"
"strconv"
"time"
"github.com/influxdata/telegraf"
tlsint "github.com/influxdata/telegraf/plugins/common/tls"
"github.com/influxdata/telegraf/plugins/inputs"
"golang.org/x/net/proxy"
)
// Memcached is a memcached plugin
type Memcached struct {
Servers []string `toml:"servers"`
UnixSockets []string `toml:"unix_sockets"`
EnableTLS bool `toml:"enable_tls"`
tlsint.ClientConfig
}
var sampleConfig = `
## An array of addresses to gather stats about. Specify an ip or hostname
## with optional port, e.g. localhost, 10.0.0.1:11211, etc.
servers = ["localhost:11211"]
# unix_sockets = ["/var/run/memcached.sock"]
## Optional TLS Config
# enable_tls = true
# tls_ca = "/etc/telegraf/ca.pem"
# tls_cert = "/etc/telegraf/cert.pem"
# tls_key = "/etc/telegraf/key.pem"
## If false, skip chain & host verification
# insecure_skip_verify = true
`
var defaultTimeout = 5 * time.Second
// The list of metrics that should be sent
var sendMetrics = []string{
"accepting_conns",
"auth_cmds",
"auth_errors",
"bytes",
"bytes_read",
"bytes_written",
"cas_badval",
"cas_hits",
"cas_misses",
"cmd_flush",
"cmd_get",
"cmd_set",
"cmd_touch",
"conn_yields",
"connection_structures",
"curr_connections",
"curr_items",
"decr_hits",
"decr_misses",
"delete_hits",
"delete_misses",
"evicted_active",
"evicted_unfetched",
"evictions",
"expired_unfetched",
"get_expired",
"get_flushed",
"get_hits",
"get_misses",
"hash_bytes",
"hash_is_expanding",
"hash_power_level",
"incr_hits",
"incr_misses",
"limit_maxbytes",
"listen_disabled_num",
"max_connections",
"reclaimed",
"rejected_connections",
"store_no_memory",
"store_too_large",
"threads",
"total_connections",
"total_items",
"touch_hits",
"touch_misses",
"uptime",
}
// SampleConfig returns sample configuration message
func (m *Memcached) SampleConfig() string {
return sampleConfig
}
// Description returns description of Memcached plugin
func (m *Memcached) Description() string {
return "Read metrics from one or many memcached servers"
}
// Gather reads stats from all configured servers and accumulates stats
func (m *Memcached) Gather(acc telegraf.Accumulator) error {
if len(m.Servers) == 0 && len(m.UnixSockets) == 0 {
return m.gatherServer(":11211", false, acc)
}
for _, serverAddress := range m.Servers {
acc.AddError(m.gatherServer(serverAddress, false, acc))
}
for _, unixAddress := range m.UnixSockets {
acc.AddError(m.gatherServer(unixAddress, true, acc))
}
return nil
}
func (m *Memcached) gatherServer(
address string,
unix bool,
acc telegraf.Accumulator,
) error {
var conn net.Conn
var err error
var dialer proxy.Dialer
dialer = &net.Dialer{Timeout: defaultTimeout}
if m.EnableTLS {
tlsCfg, err := m.ClientConfig.TLSConfig()
if err != nil {
return err
}
dialer = &tls.Dialer{
NetDialer: dialer.(*net.Dialer),
Config: tlsCfg,
}
}
if unix {
conn, err = dialer.Dial("unix", address)
if err != nil {
return err
}
defer conn.Close()
} else {
_, _, err = net.SplitHostPort(address)
if err != nil {
address = address + ":11211"
}
conn, err = dialer.Dial("tcp", address)
if err != nil {
return err
}
defer conn.Close()
}
if conn == nil {
return fmt.Errorf("Failed to create net connection")
}
// Extend connection
if err := conn.SetDeadline(time.Now().Add(defaultTimeout)); err != nil {
return err
}
// Read and write buffer
rw := bufio.NewReadWriter(bufio.NewReader(conn), bufio.NewWriter(conn))
// Send command
if _, err := fmt.Fprint(rw, "stats\r\n"); err != nil {
return err
}
if err := rw.Flush(); err != nil {
return err
}
values, err := parseResponse(rw.Reader)
if err != nil {
return err
}
// Add server address as a tag
tags := map[string]string{"server": address}
// Process values
fields := make(map[string]interface{})
for _, key := range sendMetrics {
if value, ok := values[key]; ok {
// Most values are integers; keep the raw string otherwise
if iValue, errParse := strconv.ParseInt(value, 10, 64); errParse == nil {
fields[key] = iValue
} else {
fields[key] = value
}
}
}
acc.AddFields("memcached", fields, tags)
return nil
}
func parseResponse(r *bufio.Reader) (map[string]string, error) {
values := make(map[string]string)
for {
// Read line
line, _, errRead := r.ReadLine()
if errRead != nil {
return values, errRead
}
// Done
if bytes.Equal(line, []byte("END")) {
break
}
// Read values
s := bytes.SplitN(line, []byte(" "), 3)
if len(s) != 3 || !bytes.Equal(s[0], []byte("STAT")) {
return values, fmt.Errorf("unexpected line in stats response: %q", line)
}
// Save values
values[string(s[1])] = string(s[2])
}
return values, nil
}
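// exampleParseStats is a hypothetical sketch (not part of the original plugin)
// showing the wire format parseResponse expects from a memcached "stats" command:
// a sequence of "STAT <name> <value>" lines terminated by "END".
func exampleParseStats() (map[string]string, error) {
	raw := []byte("STAT uptime 1234\r\nSTAT curr_connections 5\r\nEND\r\n")
	return parseResponse(bufio.NewReader(bytes.NewReader(raw)))
}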
func init()
|
{
inputs.Add("memcached", func() telegraf.Input {
return &Memcached{}
})
}
|
|
zb.rs
|
use crypto_market_type::MarketType;
#[allow(clippy::manual_map)]
pub(crate) fn normalize_pair(symbol: &str) -> Option<String> {
if symbol.contains('_') {
Some(symbol.replace('_', "/").to_uppercase())
} else if let Some(base) = symbol.strip_suffix("usdt") {
Some(format!("{}/USDT", base.to_uppercase()))
} else if let Some(base) = symbol.strip_suffix("usdc") {
Some(format!("{}/USDC", base.to_uppercase()))
} else if let Some(base) = symbol.strip_suffix("qc") {
Some(format!("{}/QC", base.to_uppercase()))
} else if let Some(base) = symbol.strip_suffix("btc") {
|
}
}
pub(crate) fn get_market_type(symbol: &str) -> MarketType {
let lowercase = symbol.to_lowercase();
if lowercase.as_str() == symbol {
MarketType::Spot
} else {
MarketType::LinearSwap
}
}
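// A hypothetical sketch (not part of the original file): expected behaviour of the
// helpers above for a few representative ZB symbols.
#[cfg(test)]
mod sketch_tests {
    use super::*;

    #[test]
    fn normalizes_pairs() {
        assert_eq!(normalize_pair("btc_usdt"), Some("BTC/USDT".to_string()));
        assert_eq!(normalize_pair("ethqc"), Some("ETH/QC".to_string()));
    }

    #[test]
    fn detects_market_type() {
        assert_eq!(get_market_type("btc_usdt"), MarketType::Spot);
        assert_eq!(get_market_type("BTC_USDT"), MarketType::LinearSwap);
    }
}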
|
Some(format!("{}/BTC", base.to_uppercase()))
} else {
None
|
attribute.go
|
/*
Copyright AppsCode Inc. and Contributors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by lister-gen. DO NOT EDIT.
package v1alpha1
import (
v1alpha1 "kubeform.dev/provider-vsphere-api/apis/custom/v1alpha1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/client-go/tools/cache"
)
// AttributeLister helps list Attributes.
// All objects returned here must be treated as read-only.
type AttributeLister interface {
// List lists all Attributes in the indexer.
// Objects returned here must be treated as read-only.
List(selector labels.Selector) (ret []*v1alpha1.Attribute, err error)
// Attributes returns an object that can list and get Attributes.
Attributes(namespace string) AttributeNamespaceLister
AttributeListerExpansion
}
// attributeLister implements the AttributeLister interface.
type attributeLister struct {
indexer cache.Indexer
}
// NewAttributeLister returns a new AttributeLister.
func NewAttributeLister(indexer cache.Indexer) AttributeLister
|
// List lists all Attributes in the indexer.
func (s *attributeLister) List(selector labels.Selector) (ret []*v1alpha1.Attribute, err error) {
err = cache.ListAll(s.indexer, selector, func(m interface{}) {
ret = append(ret, m.(*v1alpha1.Attribute))
})
return ret, err
}
// Attributes returns an object that can list and get Attributes.
func (s *attributeLister) Attributes(namespace string) AttributeNamespaceLister {
return attributeNamespaceLister{indexer: s.indexer, namespace: namespace}
}
// AttributeNamespaceLister helps list and get Attributes.
// All objects returned here must be treated as read-only.
type AttributeNamespaceLister interface {
// List lists all Attributes in the indexer for a given namespace.
// Objects returned here must be treated as read-only.
List(selector labels.Selector) (ret []*v1alpha1.Attribute, err error)
// Get retrieves the Attribute from the indexer for a given namespace and name.
// Objects returned here must be treated as read-only.
Get(name string) (*v1alpha1.Attribute, error)
AttributeNamespaceListerExpansion
}
// attributeNamespaceLister implements the AttributeNamespaceLister
// interface.
type attributeNamespaceLister struct {
indexer cache.Indexer
namespace string
}
// List lists all Attributes in the indexer for a given namespace.
func (s attributeNamespaceLister) List(selector labels.Selector) (ret []*v1alpha1.Attribute, err error) {
err = cache.ListAllByNamespace(s.indexer, s.namespace, selector, func(m interface{}) {
ret = append(ret, m.(*v1alpha1.Attribute))
})
return ret, err
}
// Get retrieves the Attribute from the indexer for a given namespace and name.
func (s attributeNamespaceLister) Get(name string) (*v1alpha1.Attribute, error) {
obj, exists, err := s.indexer.GetByKey(s.namespace + "/" + name)
if err != nil {
return nil, err
}
if !exists {
return nil, errors.NewNotFound(v1alpha1.Resource("attribute"), name)
}
return obj.(*v1alpha1.Attribute), nil
}
|
{
return &attributeLister{indexer: indexer}
}
|
test_audio.py
|
from app.routers.audio import router
AUDIO_SETTINGS_URL = router.url_path_for("audio_settings")
GET_CHOICES_URL = router.url_path_for("get_choices")
START_AUDIO_URL = router.url_path_for("start_audio")
def test_get_settings(audio_test_client):
response = audio_test_client.get(url=AUDIO_SETTINGS_URL)
assert response.ok
assert b"Audio Settings" in response.content
def test_start_audio_default(audio_test_client):
response = audio_test_client.get(START_AUDIO_URL)
assert response.ok
def test_choices_Off(audio_test_client):
|
def test_choices_On(audio_test_client):
data = {
"music_on": True,
"music_choices": ["GASTRONOMICA.mp3"],
"music_vol": 50,
"sfx_on": True,
"sfx_choice": "click_1.wav",
"sfx_vol": 50,
}
response = audio_test_client.post(url=GET_CHOICES_URL, data=data)
assert response.ok
def test_start_audio(audio_test_client):
data = {
"music_on": True,
"music_choices": ["GASTRONOMICA.mp3"],
"music_vol": 50,
"sfx_on": True,
"sfx_choice": "click_1.wav",
"sfx_vol": 50,
}
audio_test_client.post(url=GET_CHOICES_URL, data=data)
response = audio_test_client.get(url=START_AUDIO_URL)
assert response.ok
def test_start_audio_sfx_off(audio_test_client):
data = {"music_on_off": "Off", "sfx_on_off": "Off"}
audio_test_client.post(url=GET_CHOICES_URL, data=data)
response = audio_test_client.get(url=START_AUDIO_URL)
assert response.ok
|
data = {"music_on": False, "sfx_on": False}
response = audio_test_client.post(url=GET_CHOICES_URL, data=data)
assert response.ok
|
hdf5_loading_three_bumps.py
|
import h5py
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from signals.aux_functions import gaussian_bump
import nexa.loading as load
from visualization.sensors import visualize_SLM_hdf5
from visualization.sensors import visualize_STDM_hdf5
from visualization.sensor_clustering import visualize_cluster_matrix_hdf5
# Load the database
location = './results_database/three_bumps_distance.hdf5'
database = h5py.File(location, 'r')
# Time
Tmax = 1000
dt = 1.0
time = np.arange(0, Tmax, dt)
# Parameters that the bumps share
max_rate = 100
base = 10
value = 50
attenuation = 2
# Define three arrangements for the values of the gaussian bumps
center1 = 100
center2 = 500
center3 = 700
# Now create the gaussian bumps
gb1 = gaussian_bump(time, center1, max_rate, base, value, attenuation)
gb2 = gaussian_bump(time, center2, max_rate, base, value * 2, attenuation)
gb3 = gaussian_bump(time, center3, max_rate, base, value * 0.5, attenuation)
# Database extraction
run_name = str(center1) + '-'
run_name += str(center2) + '-'
run_name += str(center3)
nexa_arrangement = '3-4-3'
r = database[run_name]
# Load everything
SLM = load.get_SLM_hdf5(database, run_name)
STDM = load.get_STDM_hdf5(database, run_name, nexa_arrangement)
cluster_to_index = load.get_cluster_to_index_hdf5(database, run_name, nexa_arrangement)
index_to_cluster = load.get_index_to_cluster_hdf5(database, run_name, nexa_arrangement)
cluster_to_time_centers = load.get_cluster_to_time_centers_hdf5(database, run_name, nexa_arrangement)
# Now visualize the signals and the SLM
if False:
fig = plt.figure()
gs = gridspec.GridSpec(3, 2)
ax1 = fig.add_subplot(gs[0, 0])
ax1.plot(time, gb1)
ax2 = fig.add_subplot(gs[1, 0])
ax2.plot(time,gb2)
ax3 = fig.add_subplot(gs[2, 0])
ax3.plot(time, gb3)
ax4 = fig.add_subplot(gs[:, 1])
visualize_SLM_hdf5(database, run_name, ax=ax4)
plt.show()
# Now the signals and the STDM
if False:
fig = plt.figure()
gs = gridspec.GridSpec(3, 2)
ax1 = fig.add_subplot(gs[0, 0])
ax1.plot(time, gb1)
ax2 = fig.add_subplot(gs[1, 0])
ax2.plot(time,gb2)
ax3 = fig.add_subplot(gs[2, 0])
ax3.plot(time, gb3)
ax4 = fig.add_subplot(gs[:, 1])
visualize_STDM_hdf5(database, run_name, nexa_arrangement, ax= ax4)
plt.show()
# Now visualize the SLM and STDM
if False:
fig = plt.figure()
gs = gridspec.GridSpec(2, 2)
ax1 = fig.add_subplot(gs[:, 0])
visualize_SLM_hdf5(database, run_name, ax=ax1)
ax2 = fig.add_subplot(gs[:, 1])
visualize_STDM_hdf5(database, run_name, nexa_arrangement, ax= ax2)
fig.show()
plt.close(fig)
# Now visualize the signals and the cluster matrix
if True:
fig = plt.figure()
gs = gridspec.GridSpec(3, 2)
ax1 = fig.add_subplot(gs[0, 0])
ax1.plot(time, gb1)
ax2 = fig.add_subplot(gs[1, 0])
ax2.plot(time, gb2)
ax3 = fig.add_subplot(gs[2, 0])
ax3.plot(time, gb3)
ax4 = fig.add_subplot(gs[:, 1])
|
plt.show()
|
visualize_cluster_matrix_hdf5(database, run_name, nexa_arrangement, ax=ax4)
|
echo.rs
|
//! Test sockets with an echo server.
use async_io::socket::{Addr, Ipv4Addr, Ipv4SocketAddr, UnixAddr};
#[test]
fn ipv4() {
let server_addr = {
let ipv4_addr = Ipv4Addr::new(127, 0, 0, 1);
let port = 9999;
Ipv4SocketAddr::new(ipv4_addr, port)
};
let num_clients = 8;
let total_data = 8 * 1024 * 1024;
let buf_size = 4 * 1024;
run_echo_server_and_clients(server_addr, num_clients, total_data, buf_size);
}
#[test]
fn unix() {
let server_addr = {
let path = "test.sock";
let _ = std::fs::remove_file(&path);
UnixAddr::Pathname(path.to_string())
};
let num_clients = 2;
let total_data = 1 * 1024 * 1024;
let buf_size = 123;
run_echo_server_and_clients(server_addr, num_clients, total_data, buf_size);
}
fn run_echo_server_and_clients<A: Addr + 'static>(
// The server address
server_addr: A,
// The number of clients
num_clients: usize,
// The number of bytes to be sent by each client
total_data: usize,
// The buffer size of each individual read / write
buf_size: usize,
) {
runtime::SocketRuntime::init(2);
// Create the server and spawn a task to run it
let server = server::Builder::new()
.addr(server_addr.clone())
.max_accept(num_clients)
.build()
.expect("failed to init the server");
async_rt::task::spawn({
async move {
server.run().await.expect("failed to run the server");
}
});
// Spawn many tasks to run the clients
async_rt::task::block_on(async move {
use async_rt::task::JoinHandle;
let task_handles: Vec<JoinHandle<()>> = (0..num_clients)
.into_iter()
.map(|_| {
async_rt::task::spawn({
let server_addr = server_addr.clone();
async move {
client::Builder::new()
.addr(server_addr)
.buf_size(buf_size)
.total_data(total_data)
.build()
.expect("failed to build a client")
.run()
.await
.expect("failed to run a client");
}
})
})
.collect();
for handle in task_handles {
handle.await;
}
});
}
mod runtime {
use std::sync::Once;
use host_socket::Runtime;
use io_uring_callback::{Builder as IoUringBuilder, IoUring};
pub struct SocketRuntime;
impl SocketRuntime {
pub fn init(parallelism: u32) {
static INIT: Once = Once::new();
INIT.call_once(|| {
async_rt::config::set_parallelism(parallelism);
let ring = Self::io_uring();
unsafe {
ring.start_enter_syscall_thread();
}
async_rt::task::spawn(async move {
loop {
ring.poll_completions();
async_rt::sched::yield_().await;
}
});
});
}
}
lazy_static::lazy_static! {
static ref IO_URING: IoUring = IoUringBuilder::new().build(4096).unwrap();
}
impl Runtime for SocketRuntime {
fn io_uring() -> &'static IoUring {
&*IO_URING
}
}
}
mod server {
use async_io::socket::Addr;
use errno::prelude::*;
use host_socket::StreamSocket;
use super::runtime::SocketRuntime;
pub struct Builder<A: Addr + 'static> {
addr: Option<A>,
max_accept: Option<usize>,
}
impl<A: Addr + 'static> Builder<A> {
pub fn new() -> Self {
Self {
addr: None,
max_accept: None,
}
}
/// The address that the server will be bound to.
pub fn addr(mut self, addr: A) -> Self {
self.addr = Some(addr);
self
}
/// The max number of incoming sockets to accept.
pub fn max_accept(mut self, max_accept: usize) -> Self {
self.max_accept = Some(max_accept);
self
}
pub fn build(self) -> Result<EchoServer<A>> {
let remain_accept = self.max_accept.unwrap_or(1);
let socket = {
let addr = self
.addr
.ok_or_else(|| errno!(EINVAL, "an address must be given"))?;
let socket = StreamSocket::new(false)?;
socket.bind(&addr)?;
socket.listen(2)?;
socket
};
let server = EchoServer {
remain_accept,
socket,
};
Ok(server)
}
}
pub struct EchoServer<A: Addr + 'static> {
remain_accept: usize,
socket: StreamSocket<A, SocketRuntime>,
}
impl<A: Addr + 'static> EchoServer<A> {
pub async fn run(mut self) -> Result<()> {
while self.remain_accept > 0 {
let client_socket = self.socket.accept(false).await?;
async_rt::task::spawn(async move {
let mut buf = vec![0u8; 4 * 1024];
loop {
let read_buf = &mut buf[..];
let bytes_read = client_socket
.read(read_buf)
.await
.expect("client read failed");
if bytes_read == 0 {
//return;
break;
}
let mut write_buf = &read_buf[..bytes_read];
while write_buf.len() > 0 {
let bytes_write = client_socket
.write(write_buf)
.await
.expect("client write failed");
write_buf = &write_buf[bytes_write..];
}
}
});
self.remain_accept -= 1;
}
Ok(())
}
}
}
mod client {
use async_io::socket::Addr;
use errno::prelude::*;
use host_socket::StreamSocket;
use super::random_base64::RandomBase64;
use super::runtime::SocketRuntime;
use super::stream_socket_ext::StreamSocketExt;
pub struct Builder<A: Addr + 'static> {
addr: Option<A>,
total_data: Option<usize>,
buf_size: Option<usize>,
}
impl<A: Addr + 'static> Builder<A> {
pub const DEFAULT_TOTAL_DATA: usize = 1024 * 1024; // 1MB
pub const DEFAULT_BUF_SIZE: usize = 4096; // 4KB
pub fn new() -> Self
|
pub fn addr(mut self, addr: A) -> Self {
self.addr = Some(addr);
self
}
pub fn total_data(mut self, total_data: usize) -> Self {
self.total_data = Some(total_data);
self
}
pub fn buf_size(mut self, buf_size: usize) -> Self {
self.buf_size = Some(buf_size);
self
}
pub fn build(self) -> Result<Client<A>> {
let addr = self
.addr
.ok_or_else(|| errno!(EINVAL, "an address must be given"))?;
let remain_data = self.total_data.unwrap_or(Self::DEFAULT_TOTAL_DATA);
let buf_size = self.buf_size.unwrap_or(Self::DEFAULT_BUF_SIZE);
let socket = StreamSocket::new(false)?;
let random_base64 = RandomBase64::new();
let client = Client {
addr,
remain_data,
buf_size,
socket,
random_base64,
};
Ok(client)
}
}
pub struct Client<A: Addr + 'static> {
addr: A,
remain_data: usize,
buf_size: usize,
socket: StreamSocket<A, SocketRuntime>,
random_base64: RandomBase64,
}
impl<A: Addr + 'static> Client<A> {
pub async fn run(mut self) -> Result<()> {
self.socket
.connect(&self.addr)
.await
.expect("failed to connect");
let mut write_buf = vec![0u8; self.buf_size];
let mut read_buf = vec![0u8; self.buf_size];
while self.remain_data > 0 {
let msg_len = self.remain_data.min(self.buf_size);
let write_msg = &mut write_buf[..msg_len];
self.gen_random_msg(write_msg);
self.socket.write_exact(write_msg).await;
let read_msg = &mut read_buf[..msg_len];
self.socket.read_exact(read_msg).await;
assert!(write_msg == read_msg);
self.remain_data -= msg_len;
}
Ok(())
}
fn gen_random_msg(&mut self, msg: &mut [u8]) {
for byte in msg {
*byte = self.random_base64.next();
}
}
}
}
mod stream_socket_ext {
use futures::future::BoxFuture;
use futures::prelude::*;
use async_io::socket::Addr;
use host_socket::StreamSocket;
use super::runtime::SocketRuntime;
pub trait StreamSocketExt {
fn write_exact<'a>(&'a self, buf: &'a [u8]) -> BoxFuture<'a, ()>;
fn read_exact<'a>(&'a self, buf: &'a mut [u8]) -> BoxFuture<'a, ()>;
}
impl<A: Addr + 'static> StreamSocketExt for StreamSocket<A, SocketRuntime> {
fn write_exact<'a>(&'a self, mut buf: &'a [u8]) -> BoxFuture<'a, ()> {
(async move {
while buf.len() > 0 {
let nbytes = self.write(buf).await.expect("failed to write");
buf = &buf[nbytes..];
}
})
.boxed()
}
fn read_exact<'a>(&'a self, mut buf: &'a mut [u8]) -> BoxFuture<'a, ()> {
(async move {
while buf.len() > 0 {
let nbytes = self.read(buf).await.expect("failed to read");
buf = &mut buf[nbytes..];
}
})
.boxed()
}
}
}
mod random_base64 {
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;
use lazy_static::lazy_static;
/// A random generator for Base64 bytes.
///
/// The Base 64 characters are `A`-`Z`, `a`-`z`, `0`-`9`, `+`, and `/`.
pub struct RandomBase64 {
hasher: DefaultHasher,
}
impl RandomBase64 {
pub fn new() -> Self {
let mut new_self = Self {
hasher: DefaultHasher::new(),
};
// Give the hasher a random "seed"
let seed = &new_self as *const _ as usize;
new_self.hasher.write_usize(seed);
new_self
}
pub fn next(&mut self) -> u8 {
lazy_static! {
static ref ALPHABET: Box<[u8]> = {
let mut alphabet = Vec::with_capacity(64);
(0..26).for_each(|i| alphabet.push(b'A' + i));
(0..26).for_each(|i| alphabet.push(b'a' + i));
(0..10).for_each(|i| alphabet.push(b'0' + i));
alphabet.push(b'+');
alphabet.push(b'/');
alphabet.into_boxed_slice()
};
}
let new_hash = self.hasher.finish() as usize;
self.hasher.write_usize(new_hash);
let random_idx = new_hash % 64;
ALPHABET[random_idx]
}
}
}
|
{
Self {
addr: None,
total_data: None,
buf_size: None,
}
}
|
bench.rs
|
extern crate ring;
extern crate sha1_smol as sha1;
use std::env;
use std::fs;
use std::io::{Read, Write};
use std::process::{Command, Stdio};
use std::time::{Duration, Instant};
fn time<F, FMT>(desc: &str, f: F, fmt: FMT)
where
F: Fn(),
FMT: Fn(Duration) -> String,
{
let start = Instant::now();
f();
let duration = Instant::now() - start;
println!("{}: {}", desc, fmt(duration));
}
fn main()
|
{
let args: Vec<_> = env::args().collect();
let mut out = Vec::<u8>::new();
if args.len() == 1 {
std::io::stdin().read_to_end(&mut out).unwrap();
} else if args.len() == 2 {
let mut f = fs::File::open(&args[1]).unwrap();
f.read_to_end(&mut out).unwrap();
} else {
panic!("wrong argument count");
}
let throughput = |duration: Duration| {
let s = duration.as_secs() as f64;
let ns = duration.subsec_nanos() as f64 / 1000000000.0;
format!("{:.2} MB/s", out.len() as f64 / (s + ns) / 1000000.0)
};
if env::var("WITHOUT_SHA1SUM") != Ok("1".into()) {
time(
"sha1sum program",
|| {
let mut child = Command::new("sha1sum")
.stdin(Stdio::piped())
.spawn()
.unwrap();
if let Some(ref mut stdin) = child.stdin {
                    stdin.write_all(&out).unwrap();
}
child.wait().unwrap();
},
&throughput,
);
}
time(
"sha1 crate",
|| {
let mut sha1 = sha1::Sha1::new();
sha1.update(&out);
println!("{}", sha1.digest());
},
&throughput,
);
time(
"ring crate",
|| {
let digest = ring::digest::digest(&ring::digest::SHA1, &out);
println!("{:?}", digest);
},
&throughput,
);
}
|
|
eval_context.rs
|
// Copyright 2018 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use std::fmt::Write;
use std::mem;
use rustc::hir::def_id::DefId;
use rustc::hir::def::Def;
use rustc::hir::map::definitions::DefPathData;
use rustc::ich::StableHashingContext;
use rustc::mir;
use rustc::ty::layout::{
self, Size, Align, HasDataLayout, LayoutOf, TyLayout
};
use rustc::ty::subst::{Subst, Substs};
use rustc::ty::{self, Ty, TyCtxt, TypeFoldable};
use rustc::ty::query::TyCtxtAt;
use rustc_data_structures::indexed_vec::IndexVec;
use rustc_data_structures::stable_hasher::{HashStable, StableHasher, StableHasherResult};
use rustc::mir::interpret::{
GlobalId, Scalar, FrameInfo, AllocId,
EvalResult, EvalErrorKind,
ScalarMaybeUndef,
truncate, sign_extend,
};
use syntax::source_map::{self, Span};
use super::{
Value, Operand, MemPlace, MPlaceTy, Place,
Memory, Machine
};
use super::snapshot::InfiniteLoopDetector;
pub struct EvalContext<'a, 'mir, 'tcx: 'a + 'mir, M: Machine<'mir, 'tcx>> {
/// Stores the `Machine` instance.
pub machine: M,
/// The results of the type checker, from rustc.
pub tcx: TyCtxtAt<'a, 'tcx, 'tcx>,
/// Bounds in scope for polymorphic evaluations.
pub param_env: ty::ParamEnv<'tcx>,
/// The virtual memory system.
pub memory: Memory<'a, 'mir, 'tcx, M>,
/// The virtual call stack.
pub(crate) stack: Vec<Frame<'mir, 'tcx>>,
/// The maximum number of stack frames allowed
pub(super) stack_limit: usize,
/// When this value is negative, it indicates the number of interpreter
/// steps *until* the loop detector is enabled. When it is positive, it is
/// the number of steps after the detector has been enabled modulo the loop
/// detector period.
pub(super) steps_since_detector_enabled: isize,
pub(super) loop_detector: InfiniteLoopDetector<'a, 'mir, 'tcx, M>,
}
/// A stack frame.
#[derive(Clone)]
pub struct Frame<'mir, 'tcx: 'mir> {
////////////////////////////////////////////////////////////////////////////////
// Function and callsite information
////////////////////////////////////////////////////////////////////////////////
/// The MIR for the function called on this frame.
pub mir: &'mir mir::Mir<'tcx>,
/// The def_id and substs of the current function
pub instance: ty::Instance<'tcx>,
/// The span of the call site.
pub span: source_map::Span,
////////////////////////////////////////////////////////////////////////////////
// Return place and locals
////////////////////////////////////////////////////////////////////////////////
/// Work to perform when returning from this function
pub return_to_block: StackPopCleanup,
/// The location where the result of the current stack frame should be written to.
pub return_place: Place,
/// The list of locals for this stack frame, stored in order as
/// `[return_ptr, arguments..., variables..., temporaries...]`.
/// The locals are stored as `Option<Value>`s.
/// `None` represents a local that is currently dead, while a live local
/// can either directly contain `Scalar` or refer to some part of an `Allocation`.
pub locals: IndexVec<mir::Local, LocalValue<AllocId>>,
////////////////////////////////////////////////////////////////////////////////
// Current position within the function
////////////////////////////////////////////////////////////////////////////////
/// The block that is currently executed (or will be executed after the above call stacks
/// return).
pub block: mir::BasicBlock,
/// The index of the currently evaluated statement.
pub stmt: usize,
}
impl<'a, 'mir, 'tcx: 'mir> HashStable<StableHashingContext<'a>> for Frame<'mir, 'tcx> {
fn hash_stable<W: StableHasherResult>(
&self,
hcx: &mut StableHashingContext<'a>,
hasher: &mut StableHasher<W>) {
let Frame {
mir,
instance,
span,
return_to_block,
return_place,
locals,
block,
stmt,
} = self;
(mir, instance, span, return_to_block).hash_stable(hcx, hasher);
(return_place, locals, block, stmt).hash_stable(hcx, hasher);
}
}
#[derive(Clone, Debug, Eq, PartialEq, Hash)]
pub enum StackPopCleanup {
/// Jump to the next block in the caller, or cause UB if None (that's a function
/// that may never return).
Goto(Option<mir::BasicBlock>),
    /// Just do nothing: used by Main and for the box_alloc hook in miri.
/// `cleanup` says whether locals are deallocated. Static computation
/// wants them leaked to intern what they need (and just throw away
/// the entire `ecx` when it is done).
None { cleanup: bool },
}
impl<'a> HashStable<StableHashingContext<'a>> for StackPopCleanup {
fn hash_stable<W: StableHasherResult>(
&self,
hcx: &mut StableHashingContext<'a>,
hasher: &mut StableHasher<W>) {
match self {
StackPopCleanup::Goto(ref block) => block.hash_stable(hcx, hasher),
StackPopCleanup::None { cleanup } => cleanup.hash_stable(hcx, hasher),
}
}
}
// State of a local variable
#[derive(Copy, Clone, PartialEq, Eq, Hash)]
pub enum LocalValue<Id=AllocId> {
Dead,
// Mostly for convenience, we re-use the `Operand` type here.
// This is an optimization over just always having a pointer here;
// we can thus avoid doing an allocation when the local just stores
// immediate values *and* never has its address taken.
Live(Operand<Id>),
}
impl<'tcx> LocalValue {
pub fn access(&self) -> EvalResult<'tcx, &Operand> {
match self {
LocalValue::Dead => err!(DeadLocal),
LocalValue::Live(ref val) => Ok(val),
}
}
pub fn access_mut(&mut self) -> EvalResult<'tcx, &mut Operand> {
match self {
LocalValue::Dead => err!(DeadLocal),
LocalValue::Live(ref mut val) => Ok(val),
}
}
}
impl_stable_hash_for!(enum self::LocalValue {
Dead,
Live(x),
});
impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> HasDataLayout for &'a EvalContext<'a, 'mir, 'tcx, M> {
#[inline]
fn data_layout(&self) -> &layout::TargetDataLayout {
&self.tcx.data_layout
}
}
impl<'c, 'b, 'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> HasDataLayout
for &'c &'b mut EvalContext<'a, 'mir, 'tcx, M>
{
#[inline]
fn data_layout(&self) -> &layout::TargetDataLayout {
&self.tcx.data_layout
}
}
impl<'a, 'mir, 'tcx, M> layout::HasTyCtxt<'tcx> for &'a EvalContext<'a, 'mir, 'tcx, M>
where M: Machine<'mir, 'tcx>
{
#[inline]
fn tcx<'b>(&'b self) -> TyCtxt<'b, 'tcx, 'tcx> {
*self.tcx
}
}
impl<'c, 'b, 'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> layout::HasTyCtxt<'tcx>
for &'c &'b mut EvalContext<'a, 'mir, 'tcx, M> {
#[inline]
fn tcx<'d>(&'d self) -> TyCtxt<'d, 'tcx, 'tcx> {
*self.tcx
}
}
impl<'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> LayoutOf for &'a EvalContext<'a, 'mir, 'tcx, M> {
type Ty = Ty<'tcx>;
type TyLayout = EvalResult<'tcx, TyLayout<'tcx>>;
#[inline]
fn layout_of(self, ty: Ty<'tcx>) -> Self::TyLayout {
self.tcx.layout_of(self.param_env.and(ty))
.map_err(|layout| EvalErrorKind::Layout(layout).into())
}
}
impl<'c, 'b, 'a, 'mir, 'tcx, M: Machine<'mir, 'tcx>> LayoutOf
for &'c &'b mut EvalContext<'a, 'mir, 'tcx, M> {
type Ty = Ty<'tcx>;
type TyLayout = EvalResult<'tcx, TyLayout<'tcx>>;
#[inline]
fn layout_of(self, ty: Ty<'tcx>) -> Self::TyLayout {
(&**self).layout_of(ty)
}
}
const STEPS_UNTIL_DETECTOR_ENABLED: isize = 1_000_000;
impl<'a, 'mir, 'tcx: 'mir, M: Machine<'mir, 'tcx>> EvalContext<'a, 'mir, 'tcx, M> {
pub fn new(
tcx: TyCtxtAt<'a, 'tcx, 'tcx>,
param_env: ty::ParamEnv<'tcx>,
machine: M,
memory_data: M::MemoryData,
) -> Self {
EvalContext {
machine,
tcx,
param_env,
memory: Memory::new(tcx, memory_data),
stack: Vec::new(),
stack_limit: tcx.sess.const_eval_stack_frame_limit,
loop_detector: Default::default(),
steps_since_detector_enabled: -STEPS_UNTIL_DETECTOR_ENABLED,
}
}
pub(crate) fn with_fresh_body<F: FnOnce(&mut Self) -> R, R>(&mut self, f: F) -> R {
let stack = mem::replace(&mut self.stack, Vec::new());
let steps = mem::replace(&mut self.steps_since_detector_enabled,
-STEPS_UNTIL_DETECTOR_ENABLED);
let r = f(self);
self.stack = stack;
self.steps_since_detector_enabled = steps;
r
}
pub fn memory(&self) -> &Memory<'a, 'mir, 'tcx, M> {
&self.memory
}
pub fn memory_mut(&mut self) -> &mut Memory<'a, 'mir, 'tcx, M> {
&mut self.memory
}
pub fn stack(&self) -> &[Frame<'mir, 'tcx>] {
&self.stack
}
#[inline]
pub fn cur_frame(&self) -> usize {
assert!(self.stack.len() > 0);
self.stack.len() - 1
}
/// Mark a storage as live, killing the previous content and returning it.
/// Remember to deallocate that!
pub fn storage_live(&mut self, local: mir::Local) -> EvalResult<'tcx, LocalValue> {
trace!("{:?} is now live", local);
let layout = self.layout_of_local(self.cur_frame(), local)?;
let init = LocalValue::Live(self.uninit_operand(layout)?);
// StorageLive *always* kills the value that's currently stored
Ok(mem::replace(&mut self.frame_mut().locals[local], init))
}
/// Returns the old value of the local.
/// Remember to deallocate that!
pub fn storage_dead(&mut self, local: mir::Local) -> LocalValue {
trace!("{:?} is now dead", local);
mem::replace(&mut self.frame_mut().locals[local], LocalValue::Dead)
}
pub fn str_to_value(&mut self, s: &str) -> EvalResult<'tcx, Value> {
let ptr = self.memory.allocate_static_bytes(s.as_bytes());
Ok(Value::new_slice(Scalar::Ptr(ptr), s.len() as u64, self.tcx.tcx))
}
pub(super) fn resolve(
&self,
def_id: DefId,
substs: &'tcx Substs<'tcx>
) -> EvalResult<'tcx, ty::Instance<'tcx>> {
trace!("resolve: {:?}, {:#?}", def_id, substs);
trace!("substs: {:#?}", self.substs());
trace!("param_env: {:#?}", self.param_env);
let substs = self.tcx.subst_and_normalize_erasing_regions(
self.substs(),
self.param_env,
&substs,
);
ty::Instance::resolve(
*self.tcx,
self.param_env,
def_id,
substs,
).ok_or_else(|| EvalErrorKind::TooGeneric.into())
}
pub(super) fn type_is_sized(&self, ty: Ty<'tcx>) -> bool {
ty.is_sized(self.tcx, self.param_env)
}
pub fn load_mir(
&self,
instance: ty::InstanceDef<'tcx>,
) -> EvalResult<'tcx, &'tcx mir::Mir<'tcx>> {
// do not continue if typeck errors occurred (can only occur in local crate)
let did = instance.def_id();
if did.is_local()
&& self.tcx.has_typeck_tables(did)
&& self.tcx.typeck_tables_of(did).tainted_by_errors
{
return err!(TypeckError);
}
trace!("load mir {:?}", instance);
match instance {
ty::InstanceDef::Item(def_id) => {
self.tcx.maybe_optimized_mir(def_id).ok_or_else(||
EvalErrorKind::NoMirFor(self.tcx.item_path_str(def_id)).into()
)
}
_ => Ok(self.tcx.instance_mir(instance)),
}
}
pub fn monomorphize<T: TypeFoldable<'tcx> + Subst<'tcx>>(
&self,
t: T,
substs: &'tcx Substs<'tcx>
) -> T {
// miri doesn't care about lifetimes, and will choke on some crazy ones
// let's simply get rid of them
let substituted = t.subst(*self.tcx, substs);
self.tcx.normalize_erasing_regions(ty::ParamEnv::reveal_all(), substituted)
}
pub fn layout_of_local(
&self,
frame: usize,
local: mir::Local
) -> EvalResult<'tcx, TyLayout<'tcx>> {
let local_ty = self.stack[frame].mir.local_decls[local].ty;
let local_ty = self.monomorphize(
local_ty,
self.stack[frame].instance.substs
);
self.layout_of(local_ty)
}
/// Return the actual dynamic size and alignment of the place at the given type.
/// Only the "extra" (metadata) part of the place matters.
pub(super) fn size_and_align_of(
&self,
metadata: Option<Scalar>,
layout: TyLayout<'tcx>,
) -> EvalResult<'tcx, (Size, Align)> {
let metadata = match metadata {
None => {
assert!(!layout.is_unsized());
return Ok(layout.size_and_align())
}
Some(metadata) => {
assert!(layout.is_unsized());
metadata
}
};
match layout.ty.sty {
ty::Adt(..) | ty::Tuple(..) => {
// First get the size of all statically known fields.
// Don't use type_of::sizing_type_of because that expects t to be sized,
// and it also rounds up to alignment, which we want to avoid,
// as the unsized field's alignment could be smaller.
assert!(!layout.ty.is_simd());
debug!("DST layout: {:?}", layout);
let sized_size = layout.fields.offset(layout.fields.count() - 1);
let sized_align = layout.align;
debug!(
"DST {} statically sized prefix size: {:?} align: {:?}",
layout.ty,
sized_size,
sized_align
);
// Recurse to get the size of the dynamically sized field (must be
// the last field).
let field = layout.field(self, layout.fields.count() - 1)?;
let (unsized_size, unsized_align) = self.size_and_align_of(Some(metadata), field)?;
// FIXME (#26403, #27023): We should be adding padding
// to `sized_size` (to accommodate the `unsized_align`
// required of the unsized field that follows) before
// summing it with `sized_size`. (Note that since #26403
// is unfixed, we do not yet add the necessary padding
// here. But this is where the add would go.)
// Return the sum of sizes and max of aligns.
let size = sized_size + unsized_size;
// Choose max of two known alignments (combined value must
// be aligned according to more restrictive of the two).
let align = sized_align.max(unsized_align);
// Issue #27023: must add any necessary padding to `size`
// (to make it a multiple of `align`) before returning it.
//
// Namely, the returned size should be, in C notation:
//
// `size + ((size & (align-1)) ? align : 0)`
//
// emulated via the semi-standard fast bit trick:
//
// `(size + (align-1)) & -align`
Ok((size.abi_align(align), align))
}
ty::Dynamic(..) => {
let vtable = metadata.to_ptr()?;
// the second entry in the vtable is the dynamic size of the object.
self.read_size_and_align_from_vtable(vtable)
}
ty::Slice(_) | ty::Str => {
let len = metadata.to_usize(self)?;
let (elem_size, align) = layout.field(self, 0)?.size_and_align();
Ok((elem_size * len, align))
}
_ => bug!("size_and_align_of::<{:?}> not supported", layout.ty),
}
}
#[inline]
pub fn size_and_align_of_mplace(
&self,
mplace: MPlaceTy<'tcx>
) -> EvalResult<'tcx, (Size, Align)> {
self.size_and_align_of(mplace.extra, mplace.layout)
}
pub fn push_stack_frame(
&mut self,
instance: ty::Instance<'tcx>,
span: source_map::Span,
mir: &'mir mir::Mir<'tcx>,
return_place: Place,
return_to_block: StackPopCleanup,
) -> EvalResult<'tcx> {
::log_settings::settings().indentation += 1;
// first push a stack frame so we have access to the local substs
self.stack.push(Frame {
mir,
block: mir::START_BLOCK,
return_to_block,
return_place,
// empty local array, we fill it in below, after we are inside the stack frame and
// all methods actually know about the frame
locals: IndexVec::new(),
span,
instance,
stmt: 0,
});
// don't allocate at all for trivial constants
if mir.local_decls.len() > 1 {
// We put some marker value into the locals that we later want to initialize.
// This can be anything except for LocalValue::Dead -- because *that* is the
// value we use for things that we know are initially dead.
let dummy =
LocalValue::Live(Operand::Immediate(Value::Scalar(ScalarMaybeUndef::Undef)));
let mut locals = IndexVec::from_elem(dummy, &mir.local_decls);
// Now mark those locals as dead that we do not want to initialize
match self.tcx.describe_def(instance.def_id()) {
// statics and constants don't have `Storage*` statements, no need to look for them
Some(Def::Static(..)) | Some(Def::Const(..)) | Some(Def::AssociatedConst(..)) => {},
_ => {
trace!("push_stack_frame: {:?}: num_bbs: {}", span, mir.basic_blocks().len());
for block in mir.basic_blocks() {
for stmt in block.statements.iter() {
use rustc::mir::StatementKind::{StorageDead, StorageLive};
match stmt.kind {
StorageLive(local) |
StorageDead(local) => {
locals[local] = LocalValue::Dead;
}
_ => {}
}
}
}
},
}
// Finally, properly initialize all those that still have the dummy value
for (local, decl) in locals.iter_mut().zip(mir.local_decls.iter()) {
match *local {
LocalValue::Live(_) => {
                    // This needs to be properly initialized.
let layout = self.layout_of(self.monomorphize(decl.ty, instance.substs))?;
*local = LocalValue::Live(self.uninit_operand(layout)?);
}
LocalValue::Dead => {
// Nothing to do
}
}
}
// done
self.frame_mut().locals = locals;
}
if self.stack.len() > self.stack_limit {
err!(StackFrameLimitReached)
} else {
Ok(())
}
}
pub(super) fn pop_stack_frame(&mut self) -> EvalResult<'tcx> {
::log_settings::settings().indentation -= 1;
let frame = self.stack.pop().expect(
"tried to pop a stack frame, but there were none",
);
match frame.return_to_block {
StackPopCleanup::Goto(block) => {
self.goto_block(block)?;
}
StackPopCleanup::None { cleanup } => {
if !cleanup {
// Leak the locals
return Ok(());
}
}
}
// deallocate all locals that are backed by an allocation
for local in frame.locals {
self.deallocate_local(local)?;
}
Ok(())
}
pub(super) fn deallocate_local(&mut self, local: LocalValue) -> EvalResult<'tcx> {
// FIXME: should we tell the user that there was a local which was never written to?
if let LocalValue::Live(Operand::Indirect(MemPlace { ptr, .. })) = local {
trace!("deallocating local");
let ptr = ptr.to_ptr()?;
self.memory.dump_alloc(ptr.alloc_id);
self.memory.deallocate_local(ptr)?;
};
Ok(())
}
pub fn const_eval(&self, gid: GlobalId<'tcx>) -> EvalResult<'tcx, &'tcx ty::Const<'tcx>> {
let param_env = if self.tcx.is_static(gid.instance.def_id()).is_some() {
ty::ParamEnv::reveal_all()
} else {
self.param_env
};
self.tcx.const_eval(param_env.and(gid))
.map_err(|err| EvalErrorKind::ReferencedConstant(err).into())
}
#[inline(always)]
pub fn frame(&self) -> &Frame<'mir, 'tcx> {
self.stack.last().expect("no call frames exist")
}
#[inline(always)]
pub fn frame_mut(&mut self) -> &mut Frame<'mir, 'tcx> {
self.stack.last_mut().expect("no call frames exist")
}
pub(super) fn
|
(&self) -> &'mir mir::Mir<'tcx> {
self.frame().mir
}
pub fn substs(&self) -> &'tcx Substs<'tcx> {
if let Some(frame) = self.stack.last() {
frame.instance.substs
} else {
Substs::empty()
}
}
pub fn dump_place(&self, place: Place) {
// Debug output
if !log_enabled!(::log::Level::Trace) {
return;
}
match place {
Place::Local { frame, local } => {
let mut allocs = Vec::new();
let mut msg = format!("{:?}", local);
if frame != self.cur_frame() {
write!(msg, " ({} frames up)", self.cur_frame() - frame).unwrap();
}
write!(msg, ":").unwrap();
match self.stack[frame].locals[local].access() {
Err(err) => {
if let EvalErrorKind::DeadLocal = err.kind {
write!(msg, " is dead").unwrap();
} else {
panic!("Failed to access local: {:?}", err);
}
}
Ok(Operand::Indirect(mplace)) => {
let (ptr, align) = mplace.to_scalar_ptr_align();
match ptr {
Scalar::Ptr(ptr) => {
write!(msg, " by align({}) ref:", align.abi()).unwrap();
allocs.push(ptr.alloc_id);
}
ptr => write!(msg, " by integral ref: {:?}", ptr).unwrap(),
}
}
Ok(Operand::Immediate(Value::Scalar(val))) => {
write!(msg, " {:?}", val).unwrap();
if let ScalarMaybeUndef::Scalar(Scalar::Ptr(ptr)) = val {
allocs.push(ptr.alloc_id);
}
}
Ok(Operand::Immediate(Value::ScalarPair(val1, val2))) => {
write!(msg, " ({:?}, {:?})", val1, val2).unwrap();
if let ScalarMaybeUndef::Scalar(Scalar::Ptr(ptr)) = val1 {
allocs.push(ptr.alloc_id);
}
if let ScalarMaybeUndef::Scalar(Scalar::Ptr(ptr)) = val2 {
allocs.push(ptr.alloc_id);
}
}
}
trace!("{}", msg);
self.memory.dump_allocs(allocs);
}
Place::Ptr(mplace) => {
match mplace.ptr {
Scalar::Ptr(ptr) => {
trace!("by align({}) ref:", mplace.align.abi());
self.memory.dump_alloc(ptr.alloc_id);
}
ptr => trace!(" integral by ref: {:?}", ptr),
}
}
}
}
pub fn generate_stacktrace(&self, explicit_span: Option<Span>) -> (Vec<FrameInfo>, Span) {
let mut last_span = None;
let mut frames = Vec::new();
// skip 1 because the last frame is just the environment of the constant
for &Frame { instance, span, mir, block, stmt, .. } in self.stack().iter().skip(1).rev() {
// make sure we don't emit frames that are duplicates of the previous
if explicit_span == Some(span) {
last_span = Some(span);
continue;
}
if let Some(last) = last_span {
if last == span {
continue;
}
} else {
last_span = Some(span);
}
let location = if self.tcx.def_key(instance.def_id()).disambiguated_data.data
== DefPathData::ClosureExpr
{
"closure".to_owned()
} else {
instance.to_string()
};
let block = &mir.basic_blocks()[block];
let source_info = if stmt < block.statements.len() {
block.statements[stmt].source_info
} else {
block.terminator().source_info
};
let lint_root = match mir.source_scope_local_data {
mir::ClearCrossCrate::Set(ref ivs) => Some(ivs[source_info.scope].lint_root),
mir::ClearCrossCrate::Clear => None,
};
frames.push(FrameInfo { span, location, lint_root });
}
trace!("generate stacktrace: {:#?}, {:?}", frames, explicit_span);
(frames, self.tcx.span)
}
#[inline(always)]
pub fn sign_extend(&self, value: u128, ty: TyLayout<'_>) -> u128 {
assert!(ty.abi.is_signed());
sign_extend(value, ty.size)
}
#[inline(always)]
pub fn truncate(&self, value: u128, ty: TyLayout<'_>) -> u128 {
truncate(value, ty.size)
}
}
|
mir
|
init.go
|
// *** WARNING: this file was generated by the Pulumi SDK Generator. ***
// *** Do not edit by hand unless you're certain you know what you are doing! ***
package v20201001preview
import (
"fmt"
"github.com/blang/semver"
"github.com/pulumi/pulumi-azure-native/sdk/go/azure"
"github.com/pulumi/pulumi/sdk/v2/go/pulumi"
)
type module struct {
version semver.Version
}
func (m *module) Version() semver.Version {
return m.version
}
func (m *module) Construct(ctx *pulumi.Context, name, typ, urn string) (r pulumi.Resource, err error) {
switch typ {
case "azure-native:cache/v20201001preview:Database":
r, err = NewDatabase(ctx, name, nil, pulumi.URN_(urn))
case "azure-native:cache/v20201001preview:PrivateEndpointConnection":
r, err = NewPrivateEndpointConnection(ctx, name, nil, pulumi.URN_(urn))
case "azure-native:cache/v20201001preview:RedisEnterprise":
r, err = NewRedisEnterprise(ctx, name, nil, pulumi.URN_(urn))
default:
return nil, fmt.Errorf("unknown resource type: %s", typ)
}
return
}
func
|
() {
version, err := azure.PkgVersion()
if err != nil {
		fmt.Printf("failed to determine package version. defaulting to v1: %v\n", err)
}
pulumi.RegisterResourceModule(
"azure-native",
"cache/v20201001preview",
&module{version},
)
}
|
init
|
ls_to_json.go
|
package others
import (
"encoding/json"
"strings"
)
/*
 * Problem: convert the given array of path strings into a JSON string.
 *
 * Background: the file list fetched from OSS is an array of strings, but the
 * front end needs a map (i.e. JSON from the front end's perspective), hence
 * this requirement.
 *
 * Example
 * files:
 * public/test.md
 * public/readme.md
 * readme.md
 * test/
 *
 * result:
 * {
 * public:{
 * readme.md:readme.md,
 * test.md:test.md
 * },
 * readme.md:readme.md,
 * test:{}
 * }
 */
// Approach: two parts: first, recursively process a single path; second, handle
// multiple paths (an illustrative usage sketch follows ls2Json below).
func ls2Json(paths []string) string {
var result = make(map[string]interface{})
for _, path := range paths {
parsePath(path, "/", result)
}
buf, _ := json.Marshal(result)
return string(buf)
}
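// Illustrative usage (a sketch, not part of the original file): feeding the
// listing from the comment above into ls2Json should yield the nested JSON
// object shown there. The function name exampleLs2Json is hypothetical.
func exampleLs2Json() {
	files := []string{
		"public/test.md",
		"public/readme.md",
		"readme.md",
		"test/",
	}
	// Expected output, roughly:
	// {"public":{"readme.md":"readme.md","test.md":"test.md"},"readme.md":"readme.md","test":{}}
	println(ls2Json(files))
}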
func parsePath(path string, delimiter string, result map[string]interface{}) map[string]interface{} {
if path == "" {
return nil
}
dirs := strings.Split(path, delimiter)
parseFile(dirs, result)
return result
}
func parseFile(dirs []string, result map[string]interface{})
|
t[key] = key
case 0:
return nil
default:
		// Check whether this key was already added; if it was recorded as a file, overwrite it with a map
var temp map[string]interface{}
if v, ok := result[key]; ok {
var isMap bool
temp, isMap = v.(map[string]interface{})
if !isMap {
temp = make(map[string]interface{})
result[key] = temp
}
} else {
temp = make(map[string]interface{})
result[key] = temp
}
parseFile(dirs[1:], temp)
}
return result
}
|
map[string]interface{} {
var key = dirs[0]
switch len(dirs) {
case 1:
if key == "" {
return nil
}
resul
|
test.js
|
function
|
(){
// 可能函数中抛出了 同步错误 要通过try-catch 捕获异常
// throw new Error('err');
return new Promise((resolve,reject)=>{
setTimeout(() => {
reject('xxx');
}, 3000);
})
}
Promise.try = function(callback){
return new Promise((resolve,reject)=>{
// Promise.resolve 只能返回一个成功的promise
return Promise.resolve(callback()).then(resolve,reject);
})
}
fn().then((data)=>{
console.log(data,'---');
},err=>{
console.log('err:'+err);
});
|
fn
|
eos_purge.py
|
#!/usr/bin/python
#
# Copyright (c) 2015, Arista Networks, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# Redistributions of source code must retain the above copyright notice,
# this list of conditions and the following disclaimer.
#
# Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
#
# Neither the name of Arista Networks nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL ARISTA NETWORKS
# BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
# BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
# OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
# IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
DOCUMENTATION = """
---
module: eos_purge
short_description: Purges resources from an Arista EOS node
description:
- The eos_purge module will scan the current nodes running-configuration
and purge resources of a specified type if the resource is not explicitly
configured in the playbook. This module will allow a playbook task to
dynamically determine which resources should be removed from the nodes
running-configuration based on the playbook.
  - Note that purge is not supported for all EOS modules
version_added: 1.0.0
category: System
author: Arista EOS+
requirements:
- Arista EOS 4.13.7M or later with command API enabled
- Python Client for eAPI 0.3.0 or later
notes:
- All configuration is idempotent unless otherwise specified
- Supports eos metaparameters for using the eAPI transport
- Does not support stateful resource configuration.
options:
resource:
description:
- The name of the resource module to purge from the configuration. If
the provided resource name does not support purge, the module will
simply exit with an error message.
required: true
default: null
choices: []
aliases: []
version_added: 1.0.0
results:
description:
- The results argument is used to store the output from a previous
module run. Using the output from the module run allows the purge
function to filter which resources should be removed. See the
Examples for more
required: true
default: null
choices: []
aliases: []
version_added: 1.0.0
"""
EXAMPLES = """
# configure the set of vlans for the node
- name: configure vlans
eos_vlan: vlanid={{ item }}
with_items: ['1', '10', '11', '12', '13', '14', '15']
register: required_vlans
# note the value for results is the registered vlan variable. Also of
# importance is the to_nice_json filter which is required
- name: purge vlans not on the list
eos_purge: resource=eos_vlan results='{{ required_vlans|to_nice_json }}'
"""
#<<EOS_COMMON_MODULE_START>>
import syslog
import collections
from ansible.module_utils.basic import *
try:
import pyeapi
PYEAPI_AVAILABLE = True
except ImportError:
PYEAPI_AVAILABLE = False
DEFAULT_SYSLOG_PRIORITY = syslog.LOG_NOTICE
DEFAULT_CONNECTION = 'localhost'
TRANSPORTS = ['socket', 'http', 'https', 'http_local']
class EosAnsibleModule(AnsibleModule):
meta_args = {
'config': dict(),
'username': dict(),
'password': dict(),
'host': dict(),
'connection': dict(default=DEFAULT_CONNECTION),
'transport': dict(choices=TRANSPORTS),
'port': dict(),
'debug': dict(type='bool', default='false'),
'logging': dict(type='bool', default='true')
}
stateful_args = {
'state': dict(default='present', choices=['present', 'absent']),
}
def __init__(self, stateful=True, *args, **kwargs):
kwargs['argument_spec'].update(self.meta_args)
self._stateful = stateful
if stateful:
kwargs['argument_spec'].update(self.stateful_args)
super(EosAnsibleModule, self).__init__(*args, **kwargs)
self.result = dict(changed=False, changes=dict())
self._debug = kwargs.get('debug') or self.boolean(self.params['debug'])
self._logging = kwargs.get('logging') or self.params['logging']
self.log('DEBUG flag is %s' % self._debug)
self.debug('pyeapi_version', self.check_pyeapi())
self.debug('stateful', self._stateful)
self.debug('params', self.params)
self._attributes = self.map_argument_spec()
self.validate()
self._node = self.connect()
self._instance = None
self.desired_state = self.params['state'] if self._stateful else None
self.exit_after_flush = kwargs.get('exit_after_flush')
@property
def instance(self):
if self._instance:
return self._instance
func = self.func('instance')
if not func:
self.fail('Module does not support "instance"')
try:
self._instance = func(self)
except Exception as exc:
self.fail('instance[error]: %s' % exc.message)
self.log("called instance: %s" % self._instance)
return self._instance
@property
def attributes(self):
return self._attributes
@property
def node(self):
if self._node:
return self._node
self._node = self.connect()
return self._node
def check_pyeapi(self):
if not PYEAPI_AVAILABLE:
self.fail('Unable to import pyeapi, is it installed?')
return pyeapi.__version__
def map_argument_spec(self):
"""map_argument_spec maps only the module argument spec to attrs
This method will map the argumentspec minus the meta_args to attrs
and return the attrs. This returns a dict object that includes only
the original argspec plus the stateful_args (if self._stateful=True)
Returns:
dict: Returns a dict object that includes the original
argument_spec plus stateful_args with values minus meta_args
"""
keys = set(self.params).difference(self.meta_args)
attrs = dict()
attrs = dict([(k, self.params[k]) for k in self.params if k in keys])
if 'CHECKMODE' in attrs:
del attrs['CHECKMODE']
return attrs
def validate(self):
for key, value in self.attributes.iteritems():
func = self.func('validate_%s' % key)
if func:
self.attributes[key] = func(value)
def create(self):
if not self.check_mode:
func = self.func('create')
if not func:
self.fail('Module must define "create" function')
return self.invoke(func, self)
def remove(self):
if not self.check_mode:
func = self.func('remove')
if not func:
                self.fail('Module must define "remove" function')
return self.invoke(func, self)
def flush(self, exit_after_flush=False):
self.exit_after_flush = exit_after_flush
if self.desired_state == 'present' or not self._stateful:
if self.instance.get('state') == 'absent':
changed = self.create()
self.result['changed'] = changed or True
self.refresh()
changeset = self.attributes.viewitems() - self.instance.viewitems()
if self._debug:
self.debug('desired_state', self.attributes)
self.debug('current_state', self.instance)
changes = self.update(changeset)
if changes:
self.result['changes'] = changes
self.result['changed'] = True
self._attributes.update(changes)
flush = self.func('flush')
if flush:
self.invoke(flush, self)
elif self.desired_state == 'absent' and self._stateful:
if self.instance.get('state') == 'present':
changed = self.remove()
self.result['changed'] = changed or True
elif self._stateful:
if self.desired_state != self.instance.get('state'):
changed = self.invoke(self.instance.get('state'))
self.result['changed'] = changed or True
self.refresh()
self.result['instance'] = self.instance
if self.exit_after_flush:
self.exit()
def update(self, changeset):
changes = dict()
for key, value in changeset:
if value is not None:
changes[key] = value
func = self.func('set_%s' % key)
if func and not self.check_mode:
try:
self.invoke(func, self)
except Exception as exc:
self.fail(exc.message)
return changes
def connect(self):
if self.params['config']:
pyeapi.load_config(self.params['config'])
config = dict()
if self.params['connection']:
config = pyeapi.config_for(self.params['connection'])
if not config:
msg = 'Connection name "%s" not found' % self.params['connection']
self.fail(msg)
if self.params['username']:
config['username'] = self.params['username']
if self.params['password']:
config['password'] = self.params['password']
if self.params['transport']:
config['transport'] = self.params['transport']
if self.params['port']:
config['port'] = self.params['port']
if self.params['host']:
config['host'] = self.params['host']
if 'transport' not in config:
self.fail('Connection must define a transport')
connection = pyeapi.client.make_connection(**config)
node = pyeapi.client.Node(connection, **config)
try:
resp = node.enable('show version')
self.debug('eos_version', resp[0]['result']['version'])
self.debug('eos_model', resp[0]['result']['modelName'])
except (pyeapi.eapilib.ConnectionError, pyeapi.eapilib.CommandError):
self.fail('unable to connect to %s' % node)
else:
self.log('Connected to node %s' % node)
self.debug('node', str(node))
return node
def config(self, commands):
self.result['changed'] = True
if not self.check_mode:
self.node.config(commands)
def api(self, module):
return self.node.api(module)
def func(self, name):
return globals().get(name)
def
|
(self, func, *args, **kwargs):
try:
return func(*args, **kwargs)
except Exception as exc:
self.fail(exc.message)
def invoke_function(self, name, *args, **kwargs):
func = self.func(name)
if func:
return self.invoke(func, *args, **kwargs)
def fail(self, msg):
self.invoke_function('on_fail', self)
self.log('ERROR: %s' % msg, syslog.LOG_ERR)
self.fail_json(msg=msg)
def exit(self):
self.invoke_function('on_exit', self)
self.log('Module completed successfully')
self.exit_json(**self.result)
def refresh(self):
self._instance = None
def debug(self, key, value):
if self._debug:
if 'debug' not in self.result:
self.result['debug'] = dict()
self.result['debug'][key] = value
def log(self, message, priority=None):
if self._logging:
syslog.openlog('ansible-eos')
priority = priority or DEFAULT_SYSLOG_PRIORITY
syslog.syslog(priority, str(message))
@classmethod
def add_state(cls, name):
cls.stateful_args['state']['choices'].append(name)
#<<EOS_COMMON_MODULE_END>>
def eos_vxlan_vlan(module):
api = module.api('interfaces')
current = api.get('Vxlan1')['vlans'].keys()
results = module.from_json(module.attributes['results'])
expected = list()
for item in results['results']:
if 'instance' in item:
expected.append(item['instance']['vlan'])
purgeset = set(current).difference(expected)
for item in purgeset:
if not module.check_mode:
api.remove_vlan('Vxlan1', item)
purged = [{'name': 'Vxlan1', 'state': 'absent', 'vlan': vid } for vid in purgeset]
return dict(purged=purged)
def eos_vlan(module):
current = module.api('vlans').getall()
results = module.from_json(module.attributes['results'])
expected = list()
for item in results['results']:
expected.append(item['instance']['vlanid'])
purgeset = set(current.keys()).difference(expected)
for item in purgeset:
if not module.check_mode:
module.api('vlans').delete(item)
purged = [{'vlanid': vid, 'state': 'absent'} for vid in purgeset]
return dict(purged=purged)
def eos_bgp_network(module):
network = lambda x: collections.namedtuple('Network', x.keys())(**x)
current = module.api('bgp').get()['networks']
current = [network(item) for item in current]
results = module.from_json(module.attributes['results'])
expected = list()
for item in results['results']:
inst = item['instance']
del inst['state']
expected.append(network(inst))
purgeset = set(current).difference(expected)
for item in purgeset:
if not module.check_mode:
module.api('bgp').remove_network(**vars(item))
purged = list()
for item in purgeset:
data = dict(vars(item))
data['state'] = 'absent'
purged.append(data)
return dict(purged=purged)
def eos_bgp_neighbor(module):
current = module.api('bgp').neighbors.getall()
results = module.from_json(module.attributes['results'])
expected = list()
for item in results['results']:
expected.append(item['instance']['name'])
purgeset = set(current.keys()).difference(expected)
for item in purgeset:
if not module.check_mode:
module.api('bgp').neighbors.delete(item)
purged = [{'name': name, 'state': 'absent'} for name in purgeset]
return dict(purged=purged)
def main():
""" The main module routine called when the module is run by Ansible
"""
argument_spec = dict(
resource=dict(required=True),
results=dict(required=True)
)
module = EosAnsibleModule(argument_spec=argument_spec, stateful=False)
func = globals().get(module.params['resource'])
if not func:
module.fail('Resource "%s" does not currently support the '
'purge function' % module.params['resource'])
resp = func(module)
if resp:
if resp.get('purged'):
module.result.update(resp)
module.result['changed'] = True
else:
module.result['purged'] = list()
module.exit()
main()
|
invoke
|
404.tsx
|
import React from "react"
|
<Layout>
<SEO title="404: Not found" />
<h1>NOT FOUND</h1>
<p>You just hit a route that doesn't exist... the sadness.</p>
</Layout>
)
export default NotFoundPage
|
import Layout from "../layouts/SiteLayout"
import SEO from "../components/SEO"
const NotFoundPage = () => (
|
main.go
|
package main
import (
"fmt"
"net/http"
"net/url"
"os"
"time"
qrcodeTerminal "github.com/Baozisoftware/qrcode-terminal-go"
"github.com/do3t/go-whatsapp"
)
func
|
() {
// set proxy
// or you can use *url.URL directly like loginWithProxy example
purl, err := url.Parse("socks5://127.0.0.1/")
if err != nil {
panic(err)
}
proxy := http.ProxyURL(purl)
	// or just leave it empty
proxy = nil
wac, err := whatsapp.NewConnWithOptions(&whatsapp.Options{
// timeout
Timeout: 20 * time.Second,
Proxy: proxy,
// set custom client name
ShortClientName: "My-WhatsApp-Client",
LongClientName: "My-WhatsApp-Clientttttttttttttt",
})
if err != nil {
panic(err)
}
qr := make(chan string)
go func() {
terminal := qrcodeTerminal.New()
terminal.Get(<-qr).Print()
}()
session, err := wac.Login(qr)
if err != nil {
fmt.Fprintf(os.Stderr, "error during login: %v\n", err)
return
}
fmt.Printf("login successful, session: %v\n", session)
}
|
main
|
wsgi.py
|
# Copyright 2017 DiCTIS UGR
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""
Utility methods for working with WSGI servers
"""
from __future__ import print_function
import errno
import functools
import os
|
import signal
import sys
import eventlet
import eventlet.greenio
import eventlet.wsgi
from eventlet.green import socket
from omlcc_catalog.common import config
from omlcc_catalog.common import exception
wsgi_opts = [
cfg.StrOpt('secure_proxy_ssl_header',
deprecated_for_removal=True,
deprecated_reason=_('Use the http_proxy_to_wsgi middleware '
'instead.'),
help=_('The HTTP header used to determine the scheme for the '
'original request, even if it was removed by an SSL '
'terminating proxy. Typical value is '
'"HTTP_X_FORWARDED_PROTO".')),
]
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
CONF.register_opts(bind_opts)
CONF.register_opts(socket_opts)
CONF.register_opts(eventlet_opts)
CONF.register_opts(wsgi_opts)
def set_eventlet_hub():
try:
eventlet.hubs.use_hub('poll')
except Exception:
try:
eventlet.hubs.use_hub('selects')
except Exception:
msg = _("eventlet 'poll' nor 'selects' hubs are available "
"on this platform")
raise exception.WorkerCreationFailure(
reason=msg)
class Server(object):
"""Server class to manage multiple WSGI sockets and applications.
This class requires initialize_glance_store set to True if
glance store needs to be initialized.
"""
def __init__(self, threads=1000, initialize_glance_store=False):
os.umask(0o27) # ensure files are created with the correct privileges
self._logger = logging.getLogger("eventlet.wsgi.server")
self.threads = threads
self.children = set()
self.stale_children = set()
self.running = True
# NOTE(abhishek): Allows us to only re-initialize glance_store when
# the API's configuration reloads.
self.initialize_glance_store = initialize_glance_store
self.pgid = os.getpid()
try:
# NOTE(flaper87): Make sure this process
# runs in its own process group.
os.setpgid(self.pgid, self.pgid)
except OSError:
# NOTE(flaper87): When running glance-control,
# (glance's functional tests, for example)
# setpgid fails with EPERM as glance-control
# creates a fresh session, of which the newly
# launched service becomes the leader (session
# leaders may not change process groups)
#
# Running glance-(api|registry) is safe and
# shouldn't raise any error here.
self.pgid = 0
def hup(self, *args):
"""
Reloads configuration files with zero down time
"""
signal.signal(signal.SIGHUP, signal.SIG_IGN)
raise exception.SIGHUPInterrupt
def kill_children(self, *args):
"""Kills the entire process group."""
signal.signal(signal.SIGTERM, signal.SIG_IGN)
signal.signal(signal.SIGINT, signal.SIG_IGN)
self.running = False
os.killpg(self.pgid, signal.SIGTERM)
def start(self, application, default_port):
"""
Run a WSGI server with the given application.
:param application: The application to be run in the WSGI server
:param default_port: Port to bind to if none is specified in conf
"""
self.application = application
self.default_port = default_port
self.configure()
self.start_wsgi()
def start_wsgi(self):
workers = get_num_workers()
if workers == 0:
# Useful for profiling, test, debug etc.
self.pool = self.create_pool()
self.pool.spawn_n(self._single_run, self.application, self.sock)
return
else:
LOG.info(_LI("Starting %d workers"), workers)
signal.signal(signal.SIGTERM, self.kill_children)
signal.signal(signal.SIGINT, self.kill_children)
signal.signal(signal.SIGHUP, self.hup)
while len(self.children) < workers:
self.run_child()
def create_pool(self):
return get_asynchronous_eventlet_pool(size=self.threads)
def _remove_children(self, pid):
if pid in self.children:
self.children.remove(pid)
LOG.info(_LI('Removed dead child %s'), pid)
elif pid in self.stale_children:
self.stale_children.remove(pid)
LOG.info(_LI('Removed stale child %s'), pid)
else:
LOG.warn(_LW('Unrecognised child %s') % pid)
def _verify_and_respawn_children(self, pid, status):
if len(self.stale_children) == 0:
LOG.debug('No stale children')
if os.WIFEXITED(status) and os.WEXITSTATUS(status) != 0:
LOG.error(_LE('Not respawning child %d, cannot '
'recover from termination') % pid)
if not self.children and not self.stale_children:
LOG.info(
_LI('All workers have terminated. Exiting'))
self.running = False
else:
if len(self.children) < get_num_workers():
self.run_child()
def wait_on_children(self):
while self.running:
try:
pid, status = os.wait()
if os.WIFEXITED(status) or os.WIFSIGNALED(status):
self._remove_children(pid)
self._verify_and_respawn_children(pid, status)
except OSError as err:
if err.errno not in (errno.EINTR, errno.ECHILD):
raise
except KeyboardInterrupt:
LOG.info(_LI('Caught keyboard interrupt. Exiting.'))
break
except exception.SIGHUPInterrupt:
self.reload()
continue
eventlet.greenio.shutdown_safe(self.sock)
self.sock.close()
LOG.debug('Exited')
def configure(self, old_conf=None, has_changed=None):
"""
Apply configuration settings
:param old_conf: Cached old configuration settings (if any)
        :param has_changed: callable to determine if a parameter has changed
"""
eventlet.wsgi.MAX_HEADER_LINE = CONF.max_header_line
self.client_socket_timeout = CONF.client_socket_timeout or None
self.configure_socket(old_conf, has_changed)
if self.initialize_glance_store:
initialize_glance_store()
def reload(self):
"""
Reload and re-apply configuration settings
Existing child processes are sent a SIGHUP signal
and will exit after completing existing requests.
New child processes, which will have the updated
        configuration, are spawned, so the service is not interrupted.
"""
def _has_changed(old, new, param):
old = old.get(param)
new = getattr(new, param)
return (new != old)
old_conf = utils.stash_conf_values()
has_changed = functools.partial(_has_changed, old_conf, CONF)
CONF.reload_config_files()
os.killpg(self.pgid, signal.SIGHUP)
self.stale_children = self.children
self.children = set()
# Ensure any logging config changes are picked up
logging.setup(CONF, 'glance')
config.set_config_defaults()
self.configure(old_conf, has_changed)
self.start_wsgi()
def wait(self):
"""Wait until all servers have completed running."""
try:
if self.children:
self.wait_on_children()
else:
self.pool.waitall()
except KeyboardInterrupt:
pass
def run_child(self):
def child_hup(*args):
"""Shuts down child processes, existing requests are handled."""
signal.signal(signal.SIGHUP, signal.SIG_IGN)
eventlet.wsgi.is_accepting = False
self.sock.close()
pid = os.fork()
if pid == 0:
signal.signal(signal.SIGHUP, child_hup)
signal.signal(signal.SIGTERM, signal.SIG_DFL)
# ignore the interrupt signal to avoid a race whereby
# a child worker receives the signal before the parent
# and is respawned unnecessarily as a result
signal.signal(signal.SIGINT, signal.SIG_IGN)
# The child has no need to stash the unwrapped
# socket, and the reference prevents a clean
# exit on sighup
self._sock = None
self.run_server()
LOG.info(_LI('Child %d exiting normally'), os.getpid())
# self.pool.waitall() is now called in wsgi's server so
# it's safe to exit here
sys.exit(0)
else:
LOG.info(_LI('Started child %s'), pid)
self.children.add(pid)
def run_server(self):
"""Run a WSGI server."""
if cfg.CONF.pydev_worker_debug_host:
utils.setup_remote_pydev_debug(cfg.CONF.pydev_worker_debug_host,
cfg.CONF.pydev_worker_debug_port)
eventlet.wsgi.HttpProtocol.default_request_version = "HTTP/1.0"
self.pool = self.create_pool()
try:
eventlet.wsgi.server(self.sock,
self.application,
log=self._logger,
custom_pool=self.pool,
debug=False,
keepalive=CONF.http_keepalive,
socket_timeout=self.client_socket_timeout)
except socket.error as err:
if err[0] != errno.EINVAL:
raise
# waiting on async pools
if ASYNC_EVENTLET_THREAD_POOL_LIST:
for pool in ASYNC_EVENTLET_THREAD_POOL_LIST:
pool.waitall()
def _single_run(self, application, sock):
"""Start a WSGI server in a new green thread."""
LOG.info(_LI("Starting single process server"))
eventlet.wsgi.server(sock, application, custom_pool=self.pool,
log=self._logger,
debug=False,
keepalive=CONF.http_keepalive,
socket_timeout=self.client_socket_timeout)
def configure_socket(self, old_conf=None, has_changed=None):
"""
Ensure a socket exists and is appropriately configured.
This function is called on start up, and can also be
called in the event of a configuration reload.
When called for the first time a new socket is created.
If reloading and either bind_host or bind port have been
changed the existing socket must be closed and a new
socket opened (laws of physics).
In all other cases (bind_host/bind_port have not changed)
the existing socket is reused.
:param old_conf: Cached old configuration settings (if any)
        :param has_changed: callable to determine if a parameter has changed
"""
# Do we need a fresh socket?
new_sock = (old_conf is None or (
has_changed('bind_host') or
has_changed('bind_port')))
# Will we be using https?
use_ssl = not (not CONF.cert_file or not CONF.key_file)
# Were we using https before?
old_use_ssl = (old_conf is not None and not (
not old_conf.get('key_file') or
not old_conf.get('cert_file')))
# Do we now need to perform an SSL wrap on the socket?
wrap_sock = use_ssl is True and (old_use_ssl is False or new_sock)
# Do we now need to perform an SSL unwrap on the socket?
unwrap_sock = use_ssl is False and old_use_ssl is True
if new_sock:
self._sock = None
if old_conf is not None:
self.sock.close()
_sock = get_socket(self.default_port)
_sock.setsockopt(socket.SOL_SOCKET,
socket.SO_REUSEADDR, 1)
# sockets can hang around forever without keepalive
_sock.setsockopt(socket.SOL_SOCKET,
socket.SO_KEEPALIVE, 1)
self._sock = _sock
if wrap_sock:
self.sock = ssl_wrap_socket(self._sock)
if unwrap_sock:
self.sock = self._sock
if new_sock and not use_ssl:
self.sock = self._sock
# Pick up newly deployed certs
if old_conf is not None and use_ssl is True and old_use_ssl is True:
if has_changed('cert_file') or has_changed('key_file'):
utils.validate_key_cert(CONF.key_file, CONF.cert_file)
if has_changed('cert_file'):
self.sock.certfile = CONF.cert_file
if has_changed('key_file'):
self.sock.keyfile = CONF.key_file
if new_sock or (old_conf is not None and has_changed('tcp_keepidle')):
# This option isn't available in the OS X version of eventlet
if hasattr(socket, 'TCP_KEEPIDLE'):
self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE,
CONF.tcp_keepidle)
if old_conf is not None and has_changed('backlog'):
self.sock.listen(CONF.backlog)
| |
executor.go
|
// Copyright (c) 2013 The go-meeko AUTHORS
//
// Use of this source code is governed by The MIT License
// that can be found in the LICENSE file.
package rpc
import (
"errors"
log "github.com/cihub/seelog"
)
type RequestHandler func(request RemoteRequest)
type executor struct {
transport Transport
methodHandlers map[string]RequestHandler
taskManager *asyncTaskManager
registerCh chan *registerCmd
unregisterCh chan *unregisterCmd
deleteCh chan *string
termCh chan struct{}
termAckCh chan struct{}
}
func newExecutor(transport Transport) *executor {
exec := &executor{
transport: transport,
methodHandlers: make(map[string]RequestHandler),
taskManager: newAsyncTaskManager(),
registerCh: make(chan *registerCmd),
unregisterCh: make(chan *unregisterCmd),
deleteCh: make(chan *string),
termCh: make(chan struct{}),
termAckCh: make(chan struct{}),
}
go exec.loop()
return exec
}
// Public API ------------------------------------------------------------------
type registerCmd struct {
method string
handler RequestHandler
errCh chan error
}
func (cmd *registerCmd) Type() int {
return CmdRegister
}
func (cmd *registerCmd) Method() string {
return cmd.method
}
func (cmd *registerCmd) RequestHandler() RequestHandler {
return cmd.handler
}
func (cmd *registerCmd) ErrorChan() chan<- error {
return cmd.errCh
}
func (exec *executor) RegisterMethod(method string, handler RequestHandler) (err error) {
errCh := make(chan error, 1)
select {
	case exec.registerCh <- &registerCmd{method, handler, errCh}:
err = <-errCh
if err != nil
|
case <-exec.termCh:
return ErrTerminated
}
return
}
func (exec *executor) MustRegisterMethod(method string, handler RequestHandler) {
if err := exec.RegisterMethod(method, handler); err != nil {
panic(err)
}
}
type unregisterCmd struct {
method string
errCh chan error
}
func (cmd *unregisterCmd) Type() int {
return CmdUnregister
}
func (cmd *unregisterCmd) Method() string {
return cmd.method
}
func (cmd *unregisterCmd) ErrorChan() chan<- error {
return cmd.errCh
}
func (exec *executor) UnregisterMethod(method string) (err error) {
errCh := make(chan error, 1)
select {
case exec.unregisterCh <- &unregisterCmd{method, errCh}:
err = <-errCh
if err == nil {
err = exec.deleteMethod(method)
}
case <-exec.termCh:
err = ErrTerminated
}
return
}
func (exec *executor) deleteMethod(method string) (err error) {
select {
case exec.deleteCh <- &method:
case <-exec.termCh:
err = ErrTerminated
}
return
}
// Private API for Server ------------------------------------------------------
func (exec *executor) shutdown() {
select {
case <-exec.termCh:
default:
close(exec.termCh)
}
}
func (exec *executor) terminated() <-chan struct{} {
return exec.termAckCh
}
// Private methods -------------------------------------------------------------
func (exec *executor) loop() {
for {
select {
// registerCh is an internal command channel that accepts requests for
// method handlers to be registered and exported.
case cmd := <-exec.registerCh:
if _, ok := exec.methodHandlers[cmd.method]; ok {
cmd.errCh <- ErrAlreadyRegistered
continue
}
exec.methodHandlers[cmd.method] = cmd.handler
exec.transport.RegisterMethod(cmd)
// unregisterCh is an internal command channel that accepts requests for
// method handlers to be unregistered.
case cmd := <-exec.unregisterCh:
if _, ok := exec.methodHandlers[cmd.method]; !ok {
cmd.errCh <- ErrNotRegistered
continue
}
exec.transport.UnregisterMethod(cmd)
// deleteCh accepts requests for method deletion from the internal map.
case method := <-exec.deleteCh:
delete(exec.methodHandlers, *method)
// RequestChan contains incoming RPC requests.
case request := <-exec.transport.RequestChan():
handler, ok := exec.methodHandlers[request.Method()]
if !ok {
// This should never ever happen since the broker should not
				// even route messages to unregistered methods here.
continue
}
exec.taskManager.Go(func() {
handler(request)
})
// termCh is closed when the executor is to be terminated.
case <-exec.termCh:
log.Debug("Executor: terminating")
for {
select {
case <-exec.taskManager.Terminate():
close(exec.termAckCh)
log.Debug("Executor: terminated")
return
case request := <-exec.transport.RequestChan():
request.Resolve(254, "terminating")
}
}
}
}
}
// Errors ----------------------------------------------------------------------
var (
ErrAlreadyRegistered = errors.New("method already registered")
ErrNotRegistered = errors.New("method not registered")
)
|
{
exec.deleteMethod(method)
}
|
input-select.component.ts
|
import { ValidarCamposService } from './../validar-campos.service';
import { FormGroup, AbstractControl } from '@angular/forms';
|
selector: 'dio-input-select',
templateUrl: './input-select.component.html',
styleUrls: ['./input-select.component.css']
})
export class InputSelectComponent {
@Input() formGroup: FormGroup;
@Input() titulo: string;
@Input() controlName: string;
@Input() opcoes: Array<string>;
constructor(public validacao: ValidarCamposService) { }
get formControl(): AbstractControl {
return this.formGroup.controls[this.controlName];
}
}
|
import { Component, Input } from '@angular/core';
@Component({
|
administracion_bloqueos.js
|
var base_url = $("#txt_base_url").val();
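// Toggles the form controls: "b" enables/disables the "nuevo" button, "a" every
// other button, combo and input (true = disabled).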
function des_habilitar(a, b) {
$("#btn_nuevo").prop("disabled", b);
$("#btn_modificar").prop("disabled", a);
$("#btn_anular").prop("disabled", a);
$("#btn_aceptar").prop("disabled", a);
$("#btn_cancelar").prop("disabled", a);
$("#cmb_tipo_bloqueo").prop("disabled", a);
$("#dt_fecha_inicio").prop("disabled", a);
$("#dt_fecha_fin").prop("disabled", a);
$("#cmb_agenda").prop("disabled", a);
$("#cmb_profesional").prop("disabled", a);
$("#txt_observaciones").prop("disabled", a);
}
|
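// Request the agendas available between the selected dates and rebuild the
// #cmb_agenda options with the response.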
var fecha_ini = $("#dt_fecha_inicio").val();
var fecha_fin = $("#dt_fecha_fin").val();
if (fecha_ini != "" && fecha_fin != "") {
$.ajax({
type: "GET",
dataType: "json",
url: base_url + "/Agenda/ctrl_administracion_bloqueos/llenar_cmb_agenda/" + fecha_ini + "/" + fecha_fin,
}).done( function(data) {
$("#cmb_agenda").html('');
var opciones = "<option value=\"\">Seleccione una Agenda</option>";
for (var i = 0; i < data.length; i++) {
opciones += "<option value=\"" + data[i].id_agenda + "\">" + data[i].agenda + "</option>";
}
$("#cmb_agenda").append(opciones);
}).fail(function(error){
var respuesta = JSON.parse(error["responseText"]);
alerta.error("alerta_detage", respuesta.message);
});
} else {
alerta.aviso("alerta", "Fecha Inicio o Fecha Fin, vacíos");
}
}
$(document).ready(function() {
des_habilitar(true, false);
llenar_cmb_agenda();
// llenar_cmb_profesional();
$("#dt_fecha_inicio").datetimepicker({
format: "DD-MM-YYYY",
minDate: new Date()
}).on("dp.change", function(value) {
$('#dt_fecha_fin').data("DateTimePicker").minDate(this.value);
});
$("#dt_fecha_fin").datetimepicker({
format: "DD-MM-YYYY"
});
$("#form_bloqueos").validate({
debug: true,
errorClass: "my-error-class",
highlight: function (element, errorClass) {
$(element).fadeOut(function () {
$(element).fadeIn();
$(element).css('border', '2px solid #FDADAF');
});
},
unhighlight: function (element, errorClass, validClass) {
$(element).css('border', '1px solid #CCC');
},
rules: {
cmb_tipo_bloqueo: {
required: true
},
dt_fecha_inicio: {
required: true
},
dt_fecha_fin: {
required: true
},
cmb_agenda: {
required: true
},
cmb_profesional: {
required: true
}
},
messages: {
cmb_tipo_bloqueo: {
required: "Tipo de bloqueo es obligatorio"
},
dt_fecha_inicio: {
required: "Fecha inicio es obligatoria"
},
dt_fecha_fin: {
required: "Fecha fin es obligatoria"
},
cmb_agenda: {
required: "La agenda es obligatoria"
},
cmb_profesional: {
required: "El profesional es obligatorio"
}
}
});
$("#btn_nuevo").on("click", function() {
des_habilitar(false, true);
$("#form_bloqueos")[0].reset();
$("#btn_modificar").prop("disabled", true);
$("#btn_anular").prop("disabled", true);
});
$("#btn_modificar").on("click", function() {
des_habilitar(false, true);
$("#btn_modificar").prop("disabled", true);
$("#btn_anular").prop("disabled", true);
});
$("#btn_anular").on("click", function() {
Swal.fire({
title: "¿Eliminar Bloqueo?",
text: "¿Está seguro de eliminar este bloqueo?",
icon: "question",
showCancelButton: true,
confirmButtonColor: "#3085d6",
cancelButtonColor: "#d33",
confirmButtonText: "Si",
cancelButtonText: "No"
}).then((result) => {
if (result.isConfirmed) {
anular_bloqueo();
}
});
});
$("#btn_aceptar").on("click", function() {
if ($("#form_bloqueos").valid()) {
guardar_bloqueo();
}
});
$("#btn_cancelar").on("click", function() {
$("#form_bloqueos")[0].reset();
des_habilitar(true, false);
});
var grid_agendas = $("#grid_agendas").DataTable({
responsive: true,
paging: true,
scrollY: '50vh',
scrollCollapse: true,
destroy: true,
select: {
toggleable: false
},
// ajax: base_url + "/Agenda/ctrl_ingreso_agenda/datatable_agendas",
orderClasses: true,
columns: [
{ "data": "id" },
{ "data": "id_tipo_bloqueo" },
{ "data": "tipo_bloqueo" },
{ "data": "id_especialidad" },
{ "data": "Agenda" },
{ "data": "usu_cod_prof" },
{ "data": "profesional" },
{ "data": "id_agenda" },
{ "data": "dia" },
{ "data": "fecha_ini" },
{ "data": "fecha_fin" },
{ "data": "usuario" },
{ "data": "fecha" },
{ "data": "estado" },
{
"data": "id",
"render": function(data, type, row) {
return "<button type='button' class='traza_bloqueo btn btn-warning' title='Traza Agenda'><i class='fas fa-shoe-prints'></i></button>";
}
}
],
"columnDefs": [
{
"targets": [ 1 ],
"visible": false,
"searchable": false
},
{
"targets": [ 3 ],
"visible": false,
"searchable": false
},
{
"targets": [ 5 ],
"visible": false,
"searchable": false
},
{
"targets": [ 7 ],
"visible": false,
"searchable": false
}
],
language: {
"decimal": "",
"emptyTable": "No hay información",
"info": "Mostrando _START_ a _END_ de _TOTAL_ Entradas",
"infoEmpty": "Mostrando 0 a 0 de 0 Entradas",
"infoFiltered": "(Filtrado de _MAX_ total entradas)",
"infoPostFix": "",
"thousands": ",",
"lengthMenu": "Mostrar _MENU_ Entradas",
"loadingRecords": "Cargando...",
"processing": "Procesando...",
"search": "Buscar:",
"zeroRecords": "Sin resultados encontrados",
"select": {
"rows": "<br/>%d Perfiles Seleccionados"
},
"paginate": {
"first": "Primero",
"last": "Ultimo",
"next": "Sig.",
"previous": "Ant."
}
}
});
});
|
function llenar_cmb_agenda() {
|
mod.rs
|
// Copyright 2015 Matthew Collins
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use log::warn;
use std::cmp::Ordering;
use std::collections::HashMap;
use std::collections::VecDeque;
use std::hash::BuildHasherDefault;
use std::io::{Cursor, Read};
use std::sync::Arc;
use cgmath::prelude::*;
use flate2::read::ZlibDecoder;
pub use leafish_blocks as block;
use crate::chunk_builder;
use crate::ecs;
use crate::entity::block_entity;
use crate::format;
use crate::protocol;
use crate::render;
use crate::shared::{Direction, Position};
use crate::types::hash::FNVHash;
use crate::types::{bit, nibble};
use byteorder::ReadBytesExt;
use instant::Instant;
pub mod biome;
mod storage;
use crate::chunk_builder::CullInfo;
use crate::world::biome::Biome;
use collision::Frustum;
use crossbeam_channel::unbounded;
use crossbeam_channel::{Receiver, Sender};
use dashmap::DashMap;
use lazy_static::lazy_static;
use parking_lot::RwLock;
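/// Shared world state: loaded chunks, cached lighting data, the current render
/// list and queued block-entity actions. Fields are wrapped in `Arc` (plus
/// `DashMap`/`RwLock`) so the same world can be shared across threads.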
pub struct World {
pub chunks: Arc<DashMap<CPos, Chunk, BuildHasherDefault<FNVHash>>>,
pub lighting_cache: Arc<RwLock<HashMap<CPos, LightData, BuildHasherDefault<FNVHash>>>>,
pub render_list: Arc<RwLock<Vec<(i32, i32, i32)>>>,
pub(crate) light_updates: Sender<LightUpdate>,
block_entity_actions: (Sender<BlockEntityAction>, Receiver<BlockEntityAction>),
protocol_version: i32,
pub modded_block_ids: Arc<RwLock<HashMap<usize, String>>>,
pub id_map: Arc<block::VanillaIDMap>,
}
pub struct LightData {
pub arrays: Cursor<Vec<u8>>,
pub block_light_mask: i32,
pub sky_light_mask: i32,
}
#[derive(Clone, Debug)]
pub enum BlockEntityAction {
Create(Position),
Remove(Position),
UpdateSignText(
Box<(
Position,
format::Component,
format::Component,
format::Component,
format::Component,
)>,
),
}
#[derive(Clone, Copy, PartialEq, Eq)]
enum LightType {
Block,
Sky,
}
// TODO: make use of "get_light" and "set_light"
impl LightType {
#[allow(dead_code)]
fn get_light(self, world: &World, pos: Position) -> u8 {
match self {
LightType::Block => world.get_block_light(pos),
LightType::Sky => world.get_sky_light(pos),
}
}
#[allow(dead_code)]
fn set_light(self, world: &World, pos: Position, light: u8) {
match self {
LightType::Block => world.set_block_light(pos, light),
LightType::Sky => world.set_sky_light(pos, light),
}
}
}
// TODO: make use of "ty: LightType" and "pos: Position"
#[allow(dead_code)]
pub struct LightUpdate {
ty: LightType,
pos: Position,
}
impl World {
pub fn new(protocol_version: i32, sender: Sender<LightUpdate>) -> World {
let id_map = Arc::new(block::VanillaIDMap::new(protocol_version));
World {
chunks: Arc::new(Default::default()),
lighting_cache: Arc::new(Default::default()),
protocol_version,
modded_block_ids: Arc::new(Default::default()),
id_map,
light_updates: sender,
render_list: Arc::new(Default::default()),
block_entity_actions: unbounded(),
}
}
pub fn reset(&self, protocol_version: i32) {
if self.protocol_version != protocol_version {
warn!("Can't switch protocol version, when resetting the world :(");
}
// TODO: Check if we actually have to do anything here.
}
pub fn is_chunk_loaded(&self, x: i32, z: i32) -> bool {
self.chunks.clone().contains_key(&CPos(x, z))
}
pub fn set_block(&self, pos: Position, b: block::Block)
|
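// Block coordinates map to a chunk via `>> 4` (divide by 16) and to the offset
// inside that chunk via `& 0xF` (modulo 16).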
fn set_block_raw(&self, pos: Position, b: block::Block) -> bool {
let cpos = CPos(pos.x >> 4, pos.z >> 4);
let chunks = self.chunks.clone();
let mut chunk = chunks.entry(cpos).or_insert_with(|| Chunk::new(cpos));
if chunk.set_block(pos.x & 0xF, pos.y, pos.z & 0xF, b) {
if chunk.block_entities.contains_key(&pos) {
self.block_entity_actions
.0
.send(BlockEntityAction::Remove(pos))
.unwrap();
}
if block_entity::BlockEntityType::get_block_entity(b).is_some() {
self.block_entity_actions
.0
.send(BlockEntityAction::Create(pos))
.unwrap();
}
true
} else {
false
}
}
pub fn update_block(&self, pos: Position) {
for yy in -1..2 {
for zz in -1..2 {
for xx in -1..2 {
let bp = pos + (xx, yy, zz);
let current = self.get_block(bp);
let new = current.update_state(self, bp);
if current != new {
self.set_block_raw(bp, new);
}
self.set_dirty(bp.x >> 4, bp.y >> 4, bp.z >> 4);
self.update_light(bp, LightType::Block);
self.update_light(bp, LightType::Sky);
}
}
}
}
fn update_range(&self, x1: i32, y1: i32, z1: i32, x2: i32, y2: i32, z2: i32) {
for by in y1..y2 {
for bz in z1..z2 {
for bx in x1..x2 {
let bp = Position::new(bx, by, bz);
let current = self.get_block(bp);
let new = current.update_state(self, bp);
let sky_light = self.get_sky_light(bp);
let block_light = self.get_block_light(bp);
if current != new {
self.set_block_raw(bp, new);
// Restore old lighting
self.set_sky_light(bp, sky_light);
self.set_block_light(bp, block_light);
}
}
}
}
}
pub fn get_block(&self, pos: Position) -> block::Block {
match self.chunks.clone().get(&CPos(pos.x >> 4, pos.z >> 4)) {
Some(chunk) => chunk.get_block(pos.x & 0xF, pos.y, pos.z & 0xF),
None => block::Missing {},
}
}
fn set_block_light(&self, pos: Position, light: u8) {
let cpos = CPos(pos.x >> 4, pos.z >> 4);
let chunks = self.chunks.clone();
let mut chunk = chunks.entry(cpos).or_insert_with(|| Chunk::new(cpos));
chunk.set_block_light(pos.x & 0xF, pos.y, pos.z & 0xF, light);
}
pub fn get_block_light(&self, pos: Position) -> u8 {
match self.chunks.clone().get(&CPos(pos.x >> 4, pos.z >> 4)) {
Some(chunk) => chunk.get_block_light(pos.x & 0xF, pos.y, pos.z & 0xF),
None => 0,
}
}
fn set_sky_light(&self, pos: Position, light: u8) {
let cpos = CPos(pos.x >> 4, pos.z >> 4);
let chunks = self.chunks.clone();
let mut chunk = chunks.entry(cpos).or_insert_with(|| Chunk::new(cpos));
chunk.set_sky_light(pos.x & 0xF, pos.y, pos.z & 0xF, light);
}
pub fn get_sky_light(&self, pos: Position) -> u8 {
match self.chunks.clone().get(&CPos(pos.x >> 4, pos.z >> 4)) {
Some(chunk) => chunk.get_sky_light(pos.x & 0xF, pos.y, pos.z & 0xF),
None => 15,
}
}
fn update_light(&self, pos: Position, ty: LightType) {
self.light_updates.send(LightUpdate { ty, pos }).unwrap();
}
pub fn add_block_entity_action(&self, action: BlockEntityAction) {
self.block_entity_actions.0.send(action).unwrap();
}
#[allow(clippy::verbose_bit_mask)] // "llvm generates better code" for updates_performed & 0xFFF "on x86"
pub fn tick(&self, m: &mut ecs::Manager) {
let sign_info: ecs::Key<block_entity::sign::SignInfo> = m.get_key();
while let Ok(action) = self.block_entity_actions.1.try_recv() {
match action {
BlockEntityAction::Remove(pos) => {
if let Some(mut chunk) =
self.chunks.clone().get_mut(&CPos(pos.x >> 4, pos.z >> 4))
{
if let Some(entity) = chunk.block_entities.remove(&pos) {
m.remove_entity(entity);
}
}
}
BlockEntityAction::Create(pos) => {
if let Some(mut chunk) =
self.chunks.clone().get_mut(&CPos(pos.x >> 4, pos.z >> 4))
{
// Remove existing entity
if let Some(entity) = chunk.block_entities.remove(&pos) {
m.remove_entity(entity);
}
let block = chunk.get_block(pos.x & 0xF, pos.y, pos.z & 0xF);
if let Some(entity_type) =
block_entity::BlockEntityType::get_block_entity(block)
{
let entity = entity_type.create_entity(m, pos);
chunk.block_entities.insert(pos, entity);
}
}
}
BlockEntityAction::UpdateSignText(bx) => {
let (pos, line1, line2, line3, line4) = *bx;
if let Some(chunk) = self.chunks.clone().get(&CPos(pos.x >> 4, pos.z >> 4)) {
if let Some(entity) = chunk.block_entities.get(&pos) {
if let Some(sign) = m.get_component_mut(*entity, sign_info) {
sign.lines = [line1, line2, line3, line4];
sign.dirty = true;
}
}
}
}
}
}
}
// TODO: make use of "do_light_update"
#[allow(dead_code)]
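/// Propagates a single light update: take the brightest neighbouring light,
/// subtract the block's absorption, and if the stored value changed, mark the
/// surrounding chunks dirty and re-queue the neighbours (a flood-fill style
/// light algorithm).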
pub(crate) fn do_light_update(&self, update: LightUpdate) {
use std::cmp;
if update.pos.y < 0
|| update.pos.y > 255
|| !self.is_chunk_loaded(update.pos.x >> 4, update.pos.z >> 4)
{
return;
}
let block = self.get_block(update.pos).get_material();
// Find the brightest source of light nearby
let mut best = update.ty.get_light(self, update.pos);
let old = best;
for dir in Direction::all() {
let light = update.ty.get_light(self, update.pos.shift(dir));
if light > best {
best = light;
}
}
best = best.saturating_sub(cmp::max(1, block.absorbed_light));
// If the light from the block itself is brighter than the light passing through
// it use that.
if update.ty == LightType::Block && block.emitted_light != 0 {
best = cmp::max(best, block.emitted_light);
}
// Sky light doesn't decrease when going down at full brightness
if update.ty == LightType::Sky
&& block.absorbed_light == 0
&& update.ty.get_light(self, update.pos.shift(Direction::Up)) == 15
{
best = 15;
}
// Nothing to do, we are already at the right value
if best == old {
return;
}
// Use our new light value
update.ty.set_light(self, update.pos, best);
// Flag surrounding chunks as dirty
for yy in -1..2 {
for zz in -1..2 {
for xx in -1..2 {
let bp = update.pos + (xx, yy, zz);
self.set_dirty(bp.x >> 4, bp.y >> 4, bp.z >> 4);
}
}
}
// Update surrounding blocks
for dir in Direction::all() {
self.update_light(update.pos.shift(dir), update.ty);
}
}
pub fn copy_cloud_heightmap(&self, data: &mut [u8]) -> bool {
let mut dirty = false;
for mut c in self.chunks.clone().iter_mut() {
if c.heightmap_dirty {
dirty = true;
c.heightmap_dirty = false;
for xx in 0..16 {
for zz in 0..16 {
data[(((c.position.0 << 4) as usize + xx) & 0x1FF)
+ ((((c.position.1 << 4) as usize + zz) & 0x1FF) << 9)] =
c.heightmap[(zz << 4) | xx];
}
}
}
}
dirty
}
pub fn compute_render_list(&self, renderer: Arc<RwLock<render::Renderer>>) {
let start_rec = Instant::now();
// self.render_list.clone().write().clear(); // TODO: Sync with the main thread somehow!
// renderer.clone().read()
let mut valid_dirs = [false; 6];
for dir in Direction::all() {
let (ox, oy, oz) = dir.get_offset();
let dir_vec = cgmath::Vector3::new(ox as f32, oy as f32, oz as f32);
valid_dirs[dir.index()] = renderer.clone().read().view_vector.dot(dir_vec) > -0.9;
}
let start = (
((renderer.read().camera.pos.x as i32) >> 4),
((renderer.read().camera.pos.y as i32) >> 4),
((renderer.read().camera.pos.z as i32) >> 4),
);
let render_queue = Arc::new(RwLock::new(Vec::new()));
let mut process_queue = VecDeque::with_capacity(self.chunks.clone().len() * 16);
// debug!("processqueue size {}", self.chunks.len() * 16);
process_queue.push_front((Direction::Invalid, start));
let _diff = Instant::now().duration_since(start_rec);
let frustum = renderer.read().frustum;
let frame_id = renderer.read().frame_id;
self.do_render_queue(
Arc::new(RwLock::new(process_queue)),
frustum,
frame_id,
valid_dirs,
render_queue.clone(),
);
let render_list_write = self.render_list.clone();
let mut render_list_write = render_list_write.write();
render_list_write.clear();
render_list_write.extend(render_queue.read().iter());
// TODO: Improve the performance of the following by moving this to another thread!
/*
process_queue.par_iter().for_each(|(from, pos)| {
let (exists, cull) = if let Some((sec, rendered_on)) =
self.get_render_section_mut(pos.0, pos.1, pos.2)
{
if rendered_on == renderer.frame_id {
return;
}
if let Some(chunk) = self.chunks.clone().write().get_mut(&CPos(pos.0, pos.2)) {
chunk.sections_rendered_on[pos.1 as usize] = renderer.frame_id;
}
let min = cgmath::Point3::new(
pos.0 as f32 * 16.0,
-pos.1 as f32 * 16.0,
pos.2 as f32 * 16.0,
);
let bounds =
collision::Aabb3::new(min, min + cgmath::Vector3::new(16.0, -16.0, 16.0));
if renderer.frustum.contains(&bounds) == collision::Relation::Out
&& *from != Direction::Invalid
{
return;
}
(
sec.is_some(),
sec.map_or(chunk_builder::CullInfo::all_vis(), |v| v.clone().read().cull_info),
)
} else {
return;
};
if exists {
self.render_list.clone().write().push(*pos);
}
for dir in Direction::all() {
let (ox, oy, oz) = dir.get_offset();
let opos = (pos.0 + ox, pos.1 + oy, pos.2 + oz);
if let Some((_, rendered_on)) = self.get_render_section_mut(opos.0, opos.1, opos.2)
{
if rendered_on == renderer.frame_id {
continue;
}
if *from == Direction::Invalid
|| (valid_dirs[dir.index()] && cull.is_visible(*from, dir))
{
process_queue.push_back((dir.opposite(), opos));
}
}
}
});*/
/*while let Some((from, pos)) = process_queue.pop_front() { // TODO: Use par iters
let (exists, cull) = if let Some((sec, rendered_on)) =
self.get_render_section_mut(pos.0, pos.1, pos.2)
{
if rendered_on == renderer.frame_id {
continue;
}
if let Some(chunk) = self.chunks.clone().write().get_mut(&CPos(pos.0, pos.2)) {
chunk.sections_rendered_on[pos.1 as usize] = renderer.frame_id;
}
let min = cgmath::Point3::new(
pos.0 as f32 * 16.0,
-pos.1 as f32 * 16.0,
pos.2 as f32 * 16.0,
);
let bounds =
collision::Aabb3::new(min, min + cgmath::Vector3::new(16.0, -16.0, 16.0));
if renderer.frustum.contains(&bounds) == collision::Relation::Out
&& from != Direction::Invalid
{
continue;
}
(
sec.is_some(),
sec.map_or(chunk_builder::CullInfo::all_vis(), |v| v.clone().read().cull_info),
)
} else {
continue;
};
if exists {
self.render_list.clone().write().push(pos);
}
for dir in Direction::all() {
let (ox, oy, oz) = dir.get_offset();
let opos = (pos.0 + ox, pos.1 + oy, pos.2 + oz);
if let Some((_, rendered_on)) = self.get_render_section_mut(opos.0, opos.1, opos.2)
{
if rendered_on == renderer.frame_id {
continue;
}
if from == Direction::Invalid
|| (valid_dirs[dir.index()] && cull.is_visible(from, dir))
{
process_queue.push_back((dir.opposite(), opos));
}
}
}
}*/
}
#[allow(clippy::type_complexity)]
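/// One breadth-first pass over chunk sections: sections that pass the frustum
/// and cull checks are pushed onto `render_queue`, their unvisited neighbours
/// become the next frontier (`out`), and the recursion at the bottom processes
/// that frontier until it is empty.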
fn do_render_queue(
&self,
process_queue: Arc<RwLock<VecDeque<(Direction, (i32, i32, i32))>>>,
frustum: Frustum<f32>,
frame_id: u32,
valid_dirs: [bool; 6],
render_queue: Arc<RwLock<Vec<(i32, i32, i32)>>>,
) {
let out = Arc::new(RwLock::new(VecDeque::new()));
/*let tmp_renderer = renderer.clone();
let tmp_renderer = tmp_renderer.read();
let frame_id = tmp_renderer.frame_id.clone();*/
// let frame_id = renderer.clone().read().frame_id.clone();
// let frustum = renderer.clone().read().frustum.clone().read().as_ref().unwrap();
let tmp_frustum = frustum;
// debug!("rendering {} elems", process_queue.clone().read().len());
process_queue.read().iter().for_each(|(from, pos)| {
let (exists, cull) = if let Some((sec, rendered_on)) =
self.get_render_section_mut(pos.0, pos.1, pos.2)
{
if rendered_on == frame_id {
return;
}
if let Some(mut chunk) = self.chunks.clone().get_mut(&CPos(pos.0, pos.2)) {
chunk.sections_rendered_on[pos.1 as usize] = frame_id;
}
let min = cgmath::Point3::new(
pos.0 as f32 * 16.0,
-pos.1 as f32 * 16.0,
pos.2 as f32 * 16.0,
);
let bounds =
collision::Aabb3::new(min, min + cgmath::Vector3::new(16.0, -16.0, 16.0));
if tmp_frustum.contains(&bounds) == collision::Relation::Out
&& *from != Direction::Invalid
{
return;
}
(
sec.is_some(),
sec.map_or(chunk_builder::CullInfo::all_vis(), |v| v),
)
} else {
return;
};
if exists {
render_queue.clone().write().push(*pos);
}
for dir in Direction::all() {
let (ox, oy, oz) = dir.get_offset();
let opos = (pos.0 + ox, pos.1 + oy, pos.2 + oz);
if let Some((_, rendered_on)) = self.get_render_section_mut(opos.0, opos.1, opos.2)
{
if rendered_on == frame_id {
continue;
}
if *from == Direction::Invalid
|| (valid_dirs[dir.index()] && cull.is_visible(*from, dir))
{
out.clone().write().push_back((dir.opposite(), opos));
}
}
}
});
if !out.read().is_empty() {
self.do_render_queue(out, frustum, frame_id, valid_dirs, render_queue);
}
}
#[allow(clippy::type_complexity)]
pub fn get_render_list(&self) -> Vec<((i32, i32, i32), Arc<RwLock<render::ChunkBuffer>>)> {
self.render_list
.clone()
.read()
.iter()
// .par_iter()
.filter_map(|v| {
let chunks = self.chunks.clone();
let chunk = chunks.get(&CPos(v.0, v.2));
if let Some(chunk) = chunk {
if let Some(sec) = chunk.sections[v.1 as usize].as_ref() {
return Some((*v, sec.render_buffer.clone()));
}
}
None
})
.collect()
}
/*
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', src/world/mod.rs:414:62
stack backtrace:
0: rust_begin_unwind
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/std/src/panicking.rs:493:5
1: core::panicking::panic_fmt
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/core/src/panicking.rs:92:14
2: core::panicking::panic
at /rustc/53cb7b09b00cbea8754ffb78e7e3cb521cb8af4b/library/core/src/panicking.rs:50:5
3: core::option::Option<T>::unwrap
at /home/threadexception/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/option.rs:386:21
4: leafish::world::World::get_render_list::{{closure}}
at /home/threadexception/IdeaProjects/Leafish/src/world/mod.rs:414:29
5: core::iter::adapters::map::map_fold::{{closure}}
at /home/threadexception/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/adapters/map.rs:82:28
6: core::iter::traits::iterator::Iterator::fold
at /home/threadexception/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:2146:21
7: <core::iter::adapters::map::Map<I,F> as core::iter::traits::iterator::Iterator>::fold
at /home/threadexception/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/adapters/map.rs:122:9
8: core::iter::traits::iterator::Iterator::for_each
at /home/threadexception/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:776:9
9: <alloc::vec::Vec<T,A> as alloc::vec::spec_extend::SpecExtend<T,I>>::spec_extend
at /home/threadexception/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/vec/spec_extend.rs:40:17
10: <alloc::vec::Vec<T> as alloc::vec::spec_from_iter_nested::SpecFromIterNested<T,I>>::from_iter
at /home/threadexception/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/vec/spec_from_iter_nested.rs:56:9
11: <alloc::vec::Vec<T> as alloc::vec::spec_from_iter::SpecFromIter<T,I>>::from_iter
at /home/threadexception/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/vec/spec_from_iter.rs:36:9
12: <alloc::vec::Vec<T> as core::iter::traits::collect::FromIterator<T>>::from_iter
at /home/threadexception/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/alloc/src/vec/mod.rs:2404:9
13: core::iter::traits::iterator::Iterator::collect
at /home/threadexception/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:1788:9
14: leafish::world::World::get_render_list
at /home/threadexception/IdeaProjects/Leafish/src/world/mod.rs:411:9
15: leafish::chunk_builder::ChunkBuilder::tick
at /home/threadexception/IdeaProjects/Leafish/src/chunk_builder.rs:97:30
16: leafish::tick_all
at /home/threadexception/IdeaProjects/Leafish/src/main.rs:507:5
17: leafish::main::{{closure}}
at /home/threadexception/IdeaProjects/Leafish/src/main.rs:423:9
18: winit::platform_impl::platform::sticky_exit_callback
at /home/threadexception/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.25.0/src/platform_impl/linux/mod.rs:746:5
19: winit::platform_impl::platform::wayland::event_loop::EventLoop<T>::run_return
at /home/threadexception/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.25.0/src/platform_impl/linux/wayland/event_loop/mod.rs:354:13
20: winit::platform_impl::platform::wayland::event_loop::EventLoop<T>::run
at /home/threadexception/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.25.0/src/platform_impl/linux/wayland/event_loop/mod.rs:191:9
21: winit::platform_impl::platform::EventLoop<T>::run
at /home/threadexception/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.25.0/src/platform_impl/linux/mod.rs:662:56
22: winit::event_loop::EventLoop<T>::run
at /home/threadexception/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.25.0/src/event_loop.rs:154:9
23: leafish::main
at /home/threadexception/IdeaProjects/Leafish/src/main.rs:403:5
24: core::ops::function::FnOnce::call_once
at /home/threadexception/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/ops/function.rs:227:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Process finished with exit code 101
*/
/*
rendering 179 elems
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value', src/world/mod.rs:590:57
stack backtrace:
0: rust_begin_unwind
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/std/src/panicking.rs:515:5
1: core::panicking::panic_fmt
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/panicking.rs:92:14
2: core::panicking::panic
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/panicking.rs:50:5
3: core::option::Option<T>::unwrap
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/option.rs:388:21
4: leafish::world::World::get_render_list::{{closure}}
at /home/threadexception/IdeaProjects/Leafish/src/world/mod.rs:590:29
5: core::iter::adapters::map::map_fold::{{closure}}
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/iter/adapters/map.rs:82:28
6: core::iter::traits::iterator::Iterator::fold
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/iter/traits/iterator.rs:2112:21
7: <core::iter::adapters::map::Map<I,F> as core::iter::traits::iterator::Iterator>::fold
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/iter/adapters/map.rs:122:9
8: core::iter::traits::iterator::Iterator::for_each
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/iter/traits/iterator.rs:736:9
9: <alloc::vec::Vec<T,A> as alloc::vec::spec_extend::SpecExtend<T,I>>::spec_extend
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/alloc/src/vec/spec_extend.rs:40:17
10: <alloc::vec::Vec<T> as alloc::vec::spec_from_iter_nested::SpecFromIterNested<T,I>>::from_iter
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/alloc/src/vec/spec_from_iter_nested.rs:56:9
11: <alloc::vec::Vec<T> as alloc::vec::spec_from_iter::SpecFromIter<T,I>>::from_iter
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/alloc/src/vec/spec_from_iter.rs:33:9
12: <alloc::vec::Vec<T> as core::iter::traits::collect::FromIterator<T>>::from_iter
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/alloc/src/vec/mod.rs:2449:9
13: core::iter::traits::iterator::Iterator::collect
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/iter/traits/iterator.rs:1748:9
14: leafish::world::World::get_render_list
at /home/threadexception/IdeaProjects/Leafish/src/world/mod.rs:584:9
15: leafish::chunk_builder::ChunkBuilder::tick
at /home/threadexception/IdeaProjects/Leafish/src/chunk_builder.rs:96:30
16: leafish::tick_all
at /home/threadexception/IdeaProjects/Leafish/src/main.rs:526:9
17: leafish::main::{{closure}}
at /home/threadexception/IdeaProjects/Leafish/src/main.rs:437:9
18: winit::platform_impl::platform::sticky_exit_callback
at /home/threadexception/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.25.0/src/platform_impl/linux/mod.rs:746:5
19: winit::platform_impl::platform::wayland::event_loop::EventLoop<T>::run_return
at /home/threadexception/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.25.0/src/platform_impl/linux/wayland/event_loop/mod.rs:354:13
20: winit::platform_impl::platform::wayland::event_loop::EventLoop<T>::run
at /home/threadexception/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.25.0/src/platform_impl/linux/wayland/event_loop/mod.rs:191:9
21: winit::platform_impl::platform::EventLoop<T>::run
at /home/threadexception/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.25.0/src/platform_impl/linux/mod.rs:662:56
22: winit::event_loop::EventLoop<T>::run
at /home/threadexception/.cargo/registry/src/github.com-1ecc6299db9ec823/winit-0.25.0/src/event_loop.rs:154:9
23: leafish::main
at /home/threadexception/IdeaProjects/Leafish/src/main.rs:416:5
24: core::ops::function::FnOnce::call_once
at /rustc/a178d0322ce20e33eac124758e837cbd80a6f633/library/core/src/ops/function.rs:227:5
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
do next!
rendering 198 elems
Process finished with exit code 101
*/
/*
pub fn get_section_mut(&self, x: i32, y: i32, z: i32) -> Option<Section> {
if let Some(chunk) = self.chunks.clone().get(&CPos(x, z)) {
if let Some(sec) = chunk.sections[y as usize].as_ref() {
return Some(sec.clone());
}
}
None
}*/
// TODO: Improve the perf of this method as it is the MAIN bottleneck slowing down the program!
fn get_render_section_mut(&self, x: i32, y: i32, z: i32) -> Option<(Option<CullInfo>, u32)> {
if !(0..=15).contains(&y) {
return None;
}
if let Some(chunk) = self.chunks.clone().get(&CPos(x, z)) {
let rendered = &chunk.sections_rendered_on[y as usize];
if let Some(sec) = chunk.sections[y as usize].as_ref() {
return Some((Some(sec.cull_info), *rendered));
}
return Some((None, *rendered));
}
None
}
pub fn get_dirty_chunk_sections(&self) -> Vec<(i32, i32, i32)> {
let mut out = vec![];
for chunk in self.chunks.clone().iter() {
for sec in &chunk.sections {
if let Some(sec) = sec.as_ref() {
if !sec.building && sec.dirty {
out.push((chunk.position.0, sec.y as i32, chunk.position.1));
}
}
}
}
out
}
fn set_dirty(&self, x: i32, y: i32, z: i32) {
if let Some(mut chunk) = self.chunks.clone().get_mut(&CPos(x, z)) {
if let Some(mut sec) = chunk.sections.get_mut(y as usize).and_then(|v| v.as_mut()) {
sec.dirty = true;
}
}
}
pub fn is_section_dirty(&self, pos: (i32, i32, i32)) -> bool {
if let Some(chunk) = self.chunks.clone().get(&CPos(pos.0, pos.2)) {
if let Some(sec) = chunk.sections[pos.1 as usize].as_ref() {
return sec.dirty && !sec.building;
}
}
false
}
pub fn set_building_flag(&self, pos: (i32, i32, i32)) {
if let Some(mut chunk) = self.chunks.clone().get_mut(&CPos(pos.0, pos.2)) {
if let Some(mut sec) = chunk.sections[pos.1 as usize].as_mut() {
sec.building = true;
sec.dirty = false;
}
}
}
pub fn reset_building_flag(&self, pos: (i32, i32, i32)) {
if let Some(mut chunk) = self.chunks.clone().get_mut(&CPos(pos.0, pos.2)) {
if let Some(section) = chunk.sections[pos.1 as usize].as_mut() {
section.building = false;
}
}
}
pub fn flag_dirty_all(&self) {
for mut chunk in self.chunks.clone().iter_mut() {
for sec in &mut chunk.sections {
if let Some(sec) = sec.as_mut() {
sec.dirty = true;
}
}
}
}
pub fn capture_snapshot(&self, x: i32, y: i32, z: i32) -> Option<SectionSnapshot> {
// TODO: Improve performance!
let cx = x >> 4;
let cy = y >> 4;
let cz = z >> 4;
let chunks = self.chunks.clone();
let chunk = match chunks.get(&CPos(cx, cz)) {
Some(val) => val,
None => {
return None;
}
};
chunk.sections[cy as usize]
.as_ref()
.map(|sec| sec.capture_snapshot(chunk.biomes))
}
pub fn unload_chunk(&self, x: i32, z: i32, m: &mut ecs::Manager) {
if let Some(chunk) = self.chunks.clone().remove(&CPos(x, z)) {
for entity in chunk.1.block_entities.values() {
m.remove_entity(*entity);
}
}
}
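// `version` selects the on-wire chunk layout (17, 18 or 19), matching the
// load_chunk17/18/19 helpers below; these roughly track the 1.7 / 1.8 / 1.9+
// protocol generations.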
pub fn load_chunk(
&self,
x: i32,
z: i32,
new: bool,
skylight: bool,
read_biomes: bool,
mask: u16,
mask_add: u16,
data: &mut Cursor<Vec<u8>>,
version: u8,
) -> Result<(), protocol::Error> {
let additional_light_data = self.lighting_cache.clone().write().remove(&CPos(x, z));
let has_add_light = additional_light_data.is_some();
let cpos = CPos(x, z);
{
if new {
// TODO: Improve lighting with something similar to bixilon's light accessor!
self.chunks.clone().insert(cpos, Chunk::new(cpos));
} else if !self.chunks.clone().contains_key(&cpos) {
return Ok(());
}
let chunks = self.chunks.clone();
let chunk = &mut chunks.get_mut(&cpos).unwrap();
// Block type array - whole byte per block // 17
let mut block_types: [[u8; 4096]; 16] = [[0u8; 4096]; 16]; // 17
for (i, block_type) in block_types.iter_mut().enumerate() {
if chunk.sections[i].is_none() {
let mut fill_sky = chunk.sections.iter().skip(i).all(|v| v.is_none());
fill_sky &= (mask & !((1 << i) | ((1 << i) - 1))) == 0;
if !fill_sky || mask & (1 << i) != 0 {
chunk.sections[i] = Some(Section::new(i as u8, fill_sky));
}
}
if mask & (1 << i) == 0 {
continue;
}
if version == 17 {
data.read_exact(block_type)?;
} else if version == 18 {
self.prep_section_18(chunk, data, i);
} else if version == 19 {
self.prep_section_19(chunk, data, i);
}
let mut section = chunk.sections[i as usize].as_mut().unwrap();
section.dirty = true;
}
if version == 17 {
self.finish_17(chunk, mask, mask_add, skylight, data, block_types);
} else if version != 19 {
self.read_light(chunk, mask, skylight, data);
} else if has_add_light {
let mut additional_light_data = additional_light_data.unwrap();
self.load_light(
chunk,
additional_light_data.block_light_mask,
true,
additional_light_data.sky_light_mask,
&mut additional_light_data.arrays,
);
}
if new && read_biomes {
// read biomes is always true (as param) except for load_chunk_19
data.read_exact(&mut chunk.biomes)?;
}
chunk.calculate_heightmap();
}
self.dirty_chunks_by_bitmask(x, z, mask);
Ok(())
}
fn prep_section_19(&self, chunk: &mut Chunk, data: &mut Cursor<Vec<u8>>, section_id: usize) {
use crate::protocol::{LenPrefixed, Serializable, VarInt};
if self.protocol_version >= 451 {
let _block_count = data.read_u16::<byteorder::LittleEndian>().unwrap();
// TODO: use block_count
}
let section = chunk.sections[section_id].as_mut().unwrap();
let mut bit_size = data.read_u8().unwrap();
let mut mappings: HashMap<usize, block::Block, BuildHasherDefault<FNVHash>> =
HashMap::with_hasher(BuildHasherDefault::default());
if bit_size == 0 {
bit_size = 13;
} else {
let count = VarInt::read_from(data).unwrap().0;
for i in 0..count {
let id = VarInt::read_from(data).unwrap().0;
let bl = self
.id_map
.by_vanilla_id(id as usize, self.modded_block_ids.clone());
mappings.insert(i as usize, bl);
}
}
let bits = LenPrefixed::<VarInt, u64>::read_from(data).unwrap().data;
let padded = self.protocol_version >= 736;
let m = bit::Map::from_raw(bits, bit_size as usize, padded);
for bi in 0..4096 {
let id = m.get(bi);
section.blocks.set(
bi,
mappings
.get(&id)
.cloned()
// TODO: fix or_fun_call, but do not re-borrow self
.unwrap_or_else(|| {
self.id_map.by_vanilla_id(id, self.modded_block_ids.clone())
}),
);
// Spawn block entities
let b = section.blocks.get(bi);
if block_entity::BlockEntityType::get_block_entity(b).is_some() {
let pos = Position::new(
(bi & 0xF) as i32,
(bi >> 8) as i32,
((bi >> 4) & 0xF) as i32,
) + (
chunk.position.0 << 4,
(section_id << 4) as i32,
chunk.position.1 << 4,
);
if chunk.block_entities.contains_key(&pos) {
self.block_entity_actions
.0
.send(BlockEntityAction::Remove(pos))
.unwrap();
}
self.block_entity_actions
.0
.send(BlockEntityAction::Create(pos))
.unwrap();
}
}
if self.protocol_version >= 451 {
// Skylight in update skylight packet for 1.14+
} else {
data.read_exact(&mut section.block_light.data).unwrap();
data.read_exact(&mut section.sky_light.data).unwrap();
}
}
fn prep_section_18(&self, chunk: &mut Chunk, data: &mut Cursor<Vec<u8>>, section_id: usize) {
let section = chunk.sections[section_id].as_mut().unwrap();
for bi in 0..4096 {
let id = data.read_u16::<byteorder::LittleEndian>().unwrap();
section.blocks.set(
bi,
self.id_map
.by_vanilla_id(id as usize, self.modded_block_ids.clone()),
);
// Spawn block entities
let b = section.blocks.get(bi);
if block_entity::BlockEntityType::get_block_entity(b).is_some() {
let pos = Position::new(
(bi & 0xF) as i32,
(bi >> 8) as i32,
((bi >> 4) & 0xF) as i32,
) + (
chunk.position.0 << 4,
(section_id << 4) as i32,
chunk.position.1 << 4,
);
if chunk.block_entities.contains_key(&pos) {
self.block_entity_actions
.0
.send(BlockEntityAction::Remove(pos))
.unwrap();
}
self.block_entity_actions
.0
.send(BlockEntityAction::Create(pos))
.unwrap();
}
}
}
fn read_light(&self, chunk: &mut Chunk, mask: u16, skylight: bool, data: &mut Cursor<Vec<u8>>) {
// Block light array - half byte per block
for i in 0..16 {
if mask & (1 << i) == 0 {
continue;
}
let section = chunk.sections[i as usize].as_mut().unwrap();
data.read_exact(&mut section.block_light.data).unwrap();
}
// Sky light array - half byte per block - only if 'skylight' is true
if skylight {
for i in 0..16 {
if mask & (1 << i) == 0 {
continue;
}
let section = chunk.sections[i as usize].as_mut().unwrap();
data.read_exact(&mut section.sky_light.data).unwrap();
}
}
}
fn finish_17(
&self,
chunk: &mut Chunk,
mask: u16,
mask_add: u16,
skylight: bool,
data: &mut Cursor<Vec<u8>>,
block_types: [[u8; 4096]; 16],
) {
// Block metadata array - half byte per block
let mut block_meta: [nibble::Array; 16] = [
// TODO: cleanup this initialization
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
];
for (i, meta) in block_meta.iter_mut().enumerate() {
if mask & (1 << i) == 0 {
continue;
}
data.read_exact(&mut meta.data).unwrap();
}
self.read_light(chunk, mask, skylight, data);
// Add array - half byte per block - uses secondary bitmask
let mut block_add: [nibble::Array; 16] = [
// TODO: cleanup this initialization
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
nibble::Array::new(16 * 16 * 16),
];
for (i, add) in block_add.iter_mut().enumerate() {
if mask_add & (1 << i) == 0 {
continue;
}
data.read_exact(&mut add.data).unwrap();
}
// Now that we have the block types, metadata, and add, combine to initialize the blocks
for i in 0..16 {
if mask & (1 << i) == 0 {
continue;
}
let section = chunk.sections[i as usize].as_mut().unwrap();
for bi in 0..4096 {
let id = ((block_add[i].get(bi) as u16) << 12)
| ((block_types[i][bi] as u16) << 4)
| (block_meta[i].get(bi) as u16);
section.blocks.set(
bi,
self.id_map
.by_vanilla_id(id as usize, self.modded_block_ids.clone()),
);
// Spawn block entities
let b = section.blocks.get(bi);
if block_entity::BlockEntityType::get_block_entity(b).is_some() {
let pos = Position::new(
(bi & 0xF) as i32,
(bi >> 8) as i32,
((bi >> 4) & 0xF) as i32,
) + (
chunk.position.0 << 4,
(i << 4) as i32,
chunk.position.1 << 4,
);
if chunk.block_entities.contains_key(&pos) {
self.block_entity_actions
.0
.send(BlockEntityAction::Remove(pos))
.unwrap();
}
self.block_entity_actions
.0
.send(BlockEntityAction::Create(pos))
.unwrap();
}
}
}
}
/*
pub fn load_chunks(&self,
skylight: bool,
chunk_column_count: u16, // 17
data_length: i32, // 17
new: bool, // 18, 19
read_biomes: bool, // 19
chunk_metas: &[crate::protocol::packet::ChunkMeta], // 18
mask: u16, // 19
data: Vec<u8>) -> Result<(), protocol::Error> { // Vec<u8> | &[u8]
}*/
pub fn load_chunks18(
&self,
new: bool,
skylight: bool,
chunk_metas: &[crate::protocol::packet::ChunkMeta],
data: Vec<u8>,
) -> Result<(), protocol::Error> {
let mut data = std::io::Cursor::new(data);
for chunk_meta in chunk_metas {
let x = chunk_meta.x;
let z = chunk_meta.z;
let mask = chunk_meta.bitmask;
self.load_chunk18(x, z, new, skylight, mask, &mut data)?;
}
Ok(())
}
fn dirty_chunks_by_bitmask(&self, x: i32, z: i32, mask: u16) {
for i in 0..16 {
if mask & (1 << i) == 0 {
continue;
}
for pos in [
(-1, 0, 0),
(1, 0, 0),
(0, -1, 0),
(0, 1, 0),
(0, 0, -1),
(0, 0, 1),
]
.iter()
{
self.flag_section_dirty(x + pos.0, i as i32 + pos.1, z + pos.2);
}
self.update_range(
(x << 4) - 1,
(i << 4) - 1,
(z << 4) - 1,
(x << 4) + 17,
(i << 4) + 17,
(z << 4) + 17,
);
}
}
pub fn load_chunk18(
&self,
x: i32,
z: i32,
new: bool,
_skylight: bool, // unused!
mask: u16,
data: &mut std::io::Cursor<Vec<u8>>,
) -> Result<(), protocol::Error> {
self.load_chunk(x, z, new, true, new, mask, 0, data, 18)
}
pub fn load_chunks17(
&self,
chunk_column_count: u16,
data_length: i32,
skylight: bool,
data: &[u8],
) -> Result<(), protocol::Error> {
let compressed_chunk_data = &data[0..data_length as usize];
let metadata = &data[data_length as usize..];
let mut zlib = ZlibDecoder::new(std::io::Cursor::new(compressed_chunk_data.to_vec()));
let mut chunk_data = Vec::new();
zlib.read_to_end(&mut chunk_data)?;
let mut chunk_data = std::io::Cursor::new(chunk_data);
// Chunk metadata
let mut metadata = std::io::Cursor::new(metadata);
for _i in 0..chunk_column_count {
let x = metadata.read_i32::<byteorder::BigEndian>()?;
let z = metadata.read_i32::<byteorder::BigEndian>()?;
let mask = metadata.read_u16::<byteorder::BigEndian>()?;
let mask_add = metadata.read_u16::<byteorder::BigEndian>()?;
let new = true;
self.load_uncompressed_chunk17(x, z, new, skylight, mask, mask_add, &mut chunk_data)?;
}
Ok(())
}
pub fn load_chunk17(
&self,
x: i32,
z: i32,
new: bool,
mask: u16,
mask_add: u16,
compressed_data: Vec<u8>,
) -> Result<(), protocol::Error> {
let mut zlib = ZlibDecoder::new(std::io::Cursor::new(compressed_data.to_vec()));
let mut data = Vec::new();
zlib.read_to_end(&mut data)?;
let skylight = true;
self.load_uncompressed_chunk17(
x,
z,
new,
skylight,
mask,
mask_add,
&mut std::io::Cursor::new(data),
)
}
#[allow(clippy::needless_range_loop)]
fn load_uncompressed_chunk17(
&self,
x: i32,
z: i32,
new: bool,
skylight: bool,
mask: u16,
mask_add: u16,
data: &mut std::io::Cursor<Vec<u8>>,
) -> Result<(), protocol::Error> {
self.load_chunk(x, z, new, skylight, new, mask, mask_add, data, 17)
}
pub fn load_light_with_loc(
&self,
_x: i32,
_z: i32,
_block_light_mask: i32,
_sky_light: bool,
_sky_light_mask: i32,
_data: &mut Cursor<Vec<u8>>,
) {
// debug!("x {} z {}", x, z);
// TODO: Insert chunks with light data only or cache them until the real data arrives!
/*let cpos = CPos(x, z);
let chunks = self.chunks.clone();
let mut chunks = chunks.write();
let chunk = chunks.get_mut(&cpos).unwrap(); // TODO: Fix this panic!
self.load_light(chunk, block_light_mask, sky_light, sky_light_mask, data);*/
}
fn load_light(
&self,
chunk: &mut Chunk,
block_light_mask: i32,
sky_light: bool,
sky_light_mask: i32,
data: &mut Cursor<Vec<u8>>,
) {
for i in 0..16 {
if block_light_mask & (1 << i) == 0 {
continue;
}
if chunk.sections[i as usize].as_ref().is_none() {
chunk.sections[i as usize].replace(Section::new(i, false));
}
let section = chunk.sections[i as usize].as_mut().unwrap();
data.read_exact(&mut section.block_light.data).unwrap();
}
if sky_light {
for i in 0..16 {
if sky_light_mask & (1 << i) == 0 {
continue;
}
if chunk.sections[i as usize].as_ref().is_none() {
chunk.sections[i as usize].replace(Section::new(i, false));
}
let section = chunk.sections[i as usize].as_mut().unwrap();
data.read_exact(&mut section.sky_light.data).unwrap();
}
}
}
pub fn load_chunk19(
&self,
x: i32,
z: i32,
new: bool,
mask: u16,
data: Vec<u8>,
) -> Result<(), protocol::Error> {
self.load_chunk19_or_115(true, x, z, new, mask, data)
}
pub fn load_chunk115(
&self,
x: i32,
z: i32,
new: bool,
mask: u16,
data: Vec<u8>,
) -> Result<(), protocol::Error> {
self.load_chunk19_or_115(false, x, z, new, mask, data)
}
#[allow(clippy::or_fun_call)]
fn load_chunk19_or_115(
&self,
read_biomes: bool,
x: i32,
z: i32,
new: bool,
mask: u16,
data: Vec<u8>,
) -> Result<(), protocol::Error> {
self.load_chunk(
x,
z,
new,
true,
read_biomes,
mask,
0,
&mut Cursor::new(data),
19,
)
}
fn flag_section_dirty(&self, x: i32, y: i32, z: i32) {
if !(0..=15).contains(&y) {
return;
}
let cpos = CPos(x, z);
if let Some(mut chunk) = self.chunks.clone().get_mut(&cpos) {
if let Some(sec) = chunk.sections[y as usize].as_mut() {
sec.dirty = true;
}
}
}
}
impl block::WorldAccess for World {
fn get_block(&self, pos: Position) -> block::Block {
World::get_block(self, pos)
}
}
#[derive(PartialEq, Eq, Hash, Clone, Copy)]
pub struct CPos(pub i32, pub i32);
pub struct Chunk {
position: CPos,
pub(crate) sections: [Option<Section>; 16],
sections_rendered_on: [u32; 16],
biomes: [u8; 16 * 16],
heightmap: [u8; 16 * 16],
heightmap_dirty: bool,
block_entities: HashMap<Position, ecs::Entity, BuildHasherDefault<FNVHash>>,
}
impl Chunk {
fn new(pos: CPos) -> Chunk {
Chunk {
position: pos,
sections: [
None, None, None, None, None, None, None, None, None, None, None, None, None, None,
None, None,
],
sections_rendered_on: [0; 16],
biomes: [0; 16 * 16],
heightmap: [0; 16 * 16],
heightmap_dirty: true,
block_entities: HashMap::with_hasher(BuildHasherDefault::default()),
}
}
fn calculate_heightmap(&mut self) {
for x in 0..16 {
for z in 0..16 {
let idx = ((z << 4) | x) as usize;
for yy in 0..256 {
let sy = 255 - yy;
if let block::Air { .. } = self.get_block(x, sy, z) {
continue;
}
self.heightmap[idx] = sy as u8;
break;
}
}
}
self.heightmap_dirty = true;
}
fn set_block(&mut self, x: i32, y: i32, z: i32, b: block::Block) -> bool {
let s_idx = y >> 4;
if !(0..=15).contains(&s_idx) {
return false;
}
let s_idx = s_idx as usize;
if self.sections[s_idx].is_none() {
if let block::Air {} = b {
return false;
}
let fill_sky = self.sections.iter().skip(s_idx).all(|v| v.is_none());
self.sections[s_idx] = Some(Section::new(s_idx as u8, fill_sky));
}
{
let section = self.sections[s_idx as usize].as_mut().unwrap();
if !section.set_block(x, y & 0xF, z, b) {
return false;
}
}
let idx = ((z << 4) | x) as usize;
match self.heightmap[idx].cmp(&(y as u8)) {
Ordering::Less => {
self.heightmap[idx] = y as u8;
self.heightmap_dirty = true;
}
Ordering::Equal => {
// Find a new lowest
for yy in 0..y {
let sy = y - yy - 1;
if let block::Air { .. } = self.get_block(x, sy, z) {
continue;
}
self.heightmap[idx] = sy as u8;
break;
}
self.heightmap_dirty = true;
}
Ordering::Greater => (),
}
true
}
fn get_block(&self, x: i32, y: i32, z: i32) -> block::Block {
let s_idx = y >> 4;
if !(0..=15).contains(&s_idx) {
return block::Missing {};
}
match self.sections[s_idx as usize].as_ref() {
Some(sec) => sec.get_block(x, y & 0xF, z),
None => block::Air {},
}
}
fn get_block_light(&self, x: i32, y: i32, z: i32) -> u8 {
let s_idx = y >> 4;
if !(0..=15).contains(&s_idx) {
return 0;
}
match self.sections[s_idx as usize].as_ref() {
Some(sec) => sec.get_block_light(x, y & 0xF, z),
None => 0,
}
}
fn set_block_light(&mut self, x: i32, y: i32, z: i32, light: u8) {
let s_idx = y >> 4;
if !(0..=15).contains(&s_idx) {
return;
}
let s_idx = s_idx as usize;
if self.sections[s_idx].is_none() {
if light == 0 {
return;
}
let fill_sky = self.sections.iter().skip(s_idx).all(|v| v.is_none());
self.sections[s_idx] = Some(Section::new(s_idx as u8, fill_sky));
}
if let Some(sec) = self.sections[s_idx].as_mut() {
sec.set_block_light(x, y & 0xF, z, light)
}
}
fn get_sky_light(&self, x: i32, y: i32, z: i32) -> u8 {
let s_idx = y >> 4;
if !(0..=15).contains(&s_idx) {
return 15;
}
match self.sections[s_idx as usize].as_ref() {
Some(sec) => sec.get_sky_light(x, y & 0xF, z),
None => 15,
}
}
fn set_sky_light(&mut self, x: i32, y: i32, z: i32, light: u8) {
let s_idx = y >> 4;
if !(0..=15).contains(&s_idx) {
return;
}
let s_idx = s_idx as usize;
if self.sections[s_idx].is_none() {
if light == 15 {
return;
}
let fill_sky = self.sections.iter().skip(s_idx).all(|v| v.is_none());
self.sections[s_idx] = Some(Section::new(s_idx as u8, fill_sky));
}
if let Some(sec) = self.sections[s_idx as usize].as_mut() {
sec.set_sky_light(x, y & 0xF, z, light)
}
}
// TODO: make use of "get_biome"
#[allow(dead_code)]
fn get_biome(&self, x: i32, z: i32) -> biome::Biome {
biome::Biome::by_id(self.biomes[((z << 4) | x) as usize] as usize)
}
pub fn capture_snapshot(&self) -> ChunkSnapshot {
let mut snapshot_sections = [
None, None, None, None, None, None, None, None, None, None, None, None, None, None,
None, None,
];
for (i, section) in self.sections.iter().enumerate() {
if let Some(section) = section {
snapshot_sections[i] = Some(section.capture_snapshot(self.biomes));
}
}
ChunkSnapshot {
position: self.position,
sections: snapshot_sections,
biomes: self.biomes,
heightmap: self.heightmap,
}
}
}
pub struct ChunkSnapshot {
pub position: CPos,
pub sections: [Option<SectionSnapshot>; 16],
pub biomes: [u8; 16 * 16],
pub heightmap: [u8; 16 * 16],
}
pub struct Section {
pub cull_info: chunk_builder::CullInfo,
pub render_buffer: Arc<RwLock<render::ChunkBuffer>>,
y: u8,
blocks: storage::BlockStorage,
block_light: nibble::Array,
sky_light: nibble::Array,
dirty: bool,
building: bool,
}
impl Section {
fn new(y: u8, fill_sky: bool) -> Self {
let sky_light = if fill_sky {
nibble::Array::new_def(16 * 16 * 16, 0xF)
} else {
nibble::Array::new(16 * 16 * 16)
};
Section {
cull_info: chunk_builder::CullInfo::all_vis(),
render_buffer: Arc::new(RwLock::new(render::ChunkBuffer::new())),
y,
blocks: storage::BlockStorage::new(16 * 16 * 16),
block_light: nibble::Array::new(16 * 16 * 16),
sky_light,
dirty: false,
building: false,
}
}
pub fn capture_snapshot(&self, biomes: [u8; 16 * 16]) -> SectionSnapshot {
SectionSnapshot {
y: self.y,
blocks: self.blocks.clone(),
block_light: self.block_light.clone(),
sky_light: self.sky_light.clone(),
biomes,
}
}
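// Blocks inside a section are indexed as `(y << 8) | (z << 4) | x`,
// i.e. y * 256 + z * 16 + x within the 16x16x16 cube.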
fn get_block(&self, x: i32, y: i32, z: i32) -> block::Block {
self.blocks.get(((y << 8) | (z << 4) | x) as usize)
}
fn set_block(&mut self, x: i32, y: i32, z: i32, b: block::Block) -> bool {
if self.blocks.set(((y << 8) | (z << 4) | x) as usize, b) {
self.dirty = true;
self.set_sky_light(x, y, z, 0); // TODO: Do we have to set this every time?
self.set_block_light(x, y, z, 0);
true
} else {
false
}
}
fn get_block_light(&self, x: i32, y: i32, z: i32) -> u8 {
self.block_light.get(((y << 8) | (z << 4) | x) as usize)
}
fn set_block_light(&mut self, x: i32, y: i32, z: i32, l: u8) {
self.block_light.set(((y << 8) | (z << 4) | x) as usize, l);
}
fn get_sky_light(&self, x: i32, y: i32, z: i32) -> u8 {
self.sky_light.get(((y << 8) | (z << 4) | x) as usize)
}
fn set_sky_light(&mut self, x: i32, y: i32, z: i32, l: u8) {
self.sky_light.set(((y << 8) | (z << 4) | x) as usize, l);
}
}
#[derive(Clone)]
pub struct SectionSnapshot {
pub y: u8,
pub blocks: storage::BlockStorage,
pub block_light: nibble::Array,
pub sky_light: nibble::Array,
pub biomes: [u8; 16 * 16], // TODO: Remove this by using the chunk's biome!
}
lazy_static! {
static ref EMPTY_SECTION: SectionSnapshot = SectionSnapshot {
y: 255, // TODO: Check
blocks: storage::BlockStorage::new(16 * 16 * 16),
block_light: nibble::Array::new(16 * 16 * 16),
sky_light: nibble::Array::new_def(16 * 16 * 16, 0xF),
biomes: [0; 16 * 16], // TODO: Verify this!
};
}
impl SectionSnapshot {
pub fn get_block(&self, x: i32, y: i32, z: i32) -> block::Block {
self.blocks.get(((y << 8) | (z << 4) | x) as usize)
}
pub fn get_block_light(&self, x: i32, y: i32, z: i32) -> u8 {
self.block_light.get(((y << 8) | (z << 4) | x) as usize)
}
pub fn get_sky_light(&self, x: i32, y: i32, z: i32) -> u8 {
self.sky_light.get(((y << 8) | (z << 4) | x) as usize)
}
pub fn get_biome(&self, x: i32, z: i32) -> biome::Biome {
biome::Biome::by_id(self.biomes[((z << 4) | x) as usize] as usize)
}
}
// TODO: make use of "x: i32", "y: i32" and "z: i32"
#[allow(dead_code)]
pub struct ComposedSection {
sections: [Option<SectionSnapshot>; 27],
x: i32,
y: i32,
z: i32,
}
impl ComposedSection {
// NOTE: This only supports up to 15 blocks in expansion
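// The 27 snapshots form the 3x3x3 chunk-section neighbourhood around
// (x, y, z), indexed as (xo + 1) + (zo + 1) * 3 + (yo + 1) * 9.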
pub fn new(world: Arc<World>, x: i32, z: i32, y: i32, expand_by: u8) -> Self {
let chunk_lookup = world.chunks.clone();
let mut sections = [
None, None, None, None, None, None, None, None, None, None, None, None, None, None,
None, None, None, None, None, None, None, None, None, None, None, None, None,
];
for xo in -1..2 {
for zo in -1..2 {
let chunk = chunk_lookup.get(&CPos(x + xo, z + zo));
let chunk = chunk.as_ref();
for yo in -1..2 {
let section = if let Some(chunk) = chunk {
if y + yo != (y + yo) & 15 {
None
} else {
let section = &chunk.sections[(y + yo) as usize].as_ref();
if let Some(section) = section {
Some(section.capture_snapshot(chunk.biomes))
} else {
Some(EMPTY_SECTION.clone())
}
}
} else {
None
};
sections[((xo + 1) + (zo + 1) * 3 + (yo + 1) * 3 * 3) as usize] = section;
}
}
}
ComposedSection {
sections,
x: -(expand_by as i32),
y: -(expand_by as i32),
z: -(expand_by as i32),
}
}
pub fn get_block(&self, x: i32, y: i32, z: i32) -> block::Block {
let chunk_x = ComposedSection::cmp(x & !15, 0);
let chunk_z = ComposedSection::cmp(z & !15, 0);
let chunk_y = ComposedSection::cmp(y & !15, 0);
let section = self.sections
[((chunk_x + 1) + (chunk_z + 1) * 3 + (chunk_y + 1) * 3 * 3) as usize]
.as_ref();
let x = if x < 0 { 16 + x } else { x & 15 };
let y = if y < 0 { 16 + y } else { y & 15 };
let z = if z < 0 { 16 + z } else { z & 15 };
section.map_or(block::Missing {}, |s| s.get_block(x, y, z))
}
pub fn get_block_light(&self, x: i32, y: i32, z: i32) -> u8 {
let chunk_x = ComposedSection::cmp(x & !15, 0);
let chunk_z = ComposedSection::cmp(z & !15, 0);
let chunk_y = ComposedSection::cmp(y & !15, 0);
let section = self.sections
[((chunk_x + 1) + (chunk_z + 1) * 3 + (chunk_y + 1) * 3 * 3) as usize]
.as_ref();
let x = if x < 0 { 16 + x } else { x & 15 };
let y = if y < 0 { 16 + y } else { y & 15 };
let z = if z < 0 { 16 + z } else { z & 15 };
section.map_or(16, |s| s.get_block_light(x, y, z))
}
pub fn get_sky_light(&self, x: i32, y: i32, z: i32) -> u8 {
let chunk_x = ComposedSection::cmp(x & !15, 0);
let chunk_z = ComposedSection::cmp(z & !15, 0);
let chunk_y = ComposedSection::cmp(y & !15, 0);
let section = self.sections
[((chunk_x + 1) + (chunk_z + 1) * 3 + (chunk_y + 1) * 3 * 3) as usize]
.as_ref();
let x = if x < 0 { 16 + x } else { x & 15 };
let y = if y < 0 { 16 + y } else { y & 15 };
let z = if z < 0 { 16 + z } else { z & 15 };
section.map_or(16, |s| s.get_sky_light(x, y, z))
}
pub fn get_biome(&self, x: i32, z: i32) -> biome::Biome {
let chunk_x = ComposedSection::cmp(x & !15, 0);
let chunk_z = ComposedSection::cmp(z & !15, 0);
let section = self.sections[((chunk_x + 1) + (chunk_z + 1) * 3) as usize].as_ref();
let x = if x < 0 { 16 + x } else { x & 15 };
let z = if z < 0 { 16 + z } else { z & 15 };
section.map_or(Biome::by_id(0), |s| s.get_biome(x, z))
}
#[inline]
fn cmp(first: i32, second: i32) -> i32 {
// copied from rust's ordering enum's src code
// The order here is important to generate more optimal assembly.
match first.cmp(&second) {
Ordering::Less => -1,
Ordering::Equal => 0,
Ordering::Greater => 1,
}
}
}
|
{
if self.set_block_raw(pos, b) {
self.update_block(pos);
}
}
|
database.py
|
from sqllite import CDataBase as sqllite
from kademliaGetSet import CDataBase
import socket
from isolated_functions import *
instance_kade = None
class CSQLLite():
|
global instance_kade
if instance_kade is None:
node_identifier = socket.gethostbyname(socket.gethostname())
self.sqllite = sqllite()
self.nodes = ["3.113.39.120", "192.168.0.38", "192.168.56.1", "10.0.2.2", "10.0.2.15", "127.0.0.1", node_identifier]
self.kade = CDataBase()
self.kade.initiate()
instance_kade = self
else:
self.nodes = instance_kade.nodes
self.sqllite = instance_kade.sqllite
self.kade = instance_kade.kade
def save(self, key, value, announce=''):
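# The 'EXTERNAL' key holds a list of keys that must not be persisted locally;
# for any other announce prefix the value is stored locally (unless the key is
# listed under 'EXTERNAL' and the prefix is 'Account:') and, when announce is
# non-empty, also published to the Kademlia DHT under announce + key.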
if not isinstance(key, str):
key = str(key)
if announce == 'EXTERNAL':
_current = self.sqllite.get(announce)
if _current is None:
self.sqllite.set(key=announce, value=[key, ])
else:
_current.append(key)
_current = list(set(_current))
self.sqllite.set(key=announce, value=_current)
else:
_not_save_local = self.sqllite.get('EXTERNAL')
if _not_save_local is None: _not_save_local = []
if not (key in _not_save_local and announce == 'Account:'):
self.sqllite.set(key=announce + key, value=value)
if announce != '':
self.announce(announce + key, value)
return self.sqllite.get(announce + key)
def get(self, key):
if not isinstance(key, str): key = str(key)
return self.sqllite.get(key=key)
def announce(self, key, value):
print('KADEMLIA SET: ',key,' = ',self.kade.set(key=key, value=str(value)))
def look_at(self, key):
if not isinstance(key, str): key = str(key)
response = self.kade.get(key=key)
if response is not None:
#self.save(key, response)
try:
response = str2obj(response)
except:
pass
return response
def close(self):
self.sqllite.close()
def register_node(self, address):
if address not in self.nodes:
self.nodes.append(address)
def bootstrapNodes(self):
self.kade.bootstrap(self.nodes)
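# Illustrative usage sketch (calling pattern assumed from the methods above; not
# part of the original file):
#     db = CSQLLite()
#     db.save('alice', {'balance': 10}, announce='Account:')
#     print(db.look_at('Account:alice'))
#     db.close()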
|
def __init__(self):
|
skicka.go
|
//
// skicka.go
// Copyright(c)2014-2016 Google, Inc.
//
// Tool for transferring files to/from Google Drive and related operations.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
package main
import (
"bytes"
"crypto/aes"
"crypto/cipher"
"crypto/md5"
"crypto/rand"
"crypto/sha256"
"encoding/hex"
"encoding/json"
"flag"
"fmt"
"github.com/google/skicka/gdrive"
"golang.org/x/crypto/pbkdf2"
"golang.org/x/net/context"
"golang.org/x/oauth2"
"gopkg.in/gcfg.v1"
"io"
"io/ioutil"
"log"
"net/http"
"net/http/httptest"
"os"
"os/exec"
"path/filepath"
"regexp"
"runtime"
"strconv"
"strings"
"sync"
"sync/atomic"
"time"
)
const timeFormat = "2006-01-02T15:04:05.000000000Z07:00"
const encryptionSuffix = ".aes256"
const resumableUploadMinSize = 64 * 1024 * 1024
const passphraseEnvironmentVariable = "SKICKA_PASSPHRASE"
///////////////////////////////////////////////////////////////////////////
// Global Variables
type debugging bool
var (
gd *gdrive.GDrive
// The key is only set if encryption is needed (i.e. if -encrypt is
// provided for an upload, or if an encrypted file is encountered
// during 'download' or 'cat').
key []byte
debug debugging
verbose debugging
quiet bool
// Configuration read in from the skicka config file.
config struct {
Google struct {
ClientId string
ClientSecret string
// If set, is appended to all http requests via ?key=XXX.
ApiKey string
}
Encryption struct {
Salt string
Passphrase_hash string
Encrypted_key string
Encrypted_key_iv string
}
Upload struct {
Ignored_Regexp []string
Bytes_per_second_limit int
}
Download struct {
Bytes_per_second_limit int
}
}
// Various statistics gathered along the way. These all should be
// updated using atomic operations since we often have multiple threads
// working concurrently for uploads and downloads.
stats struct {
DiskReadBytes int64
DiskWriteBytes int64
UploadBytes int64
DownloadBytes int64
LocalFilesUpdated int64
DriveFilesUpdated int64
}
// Smaller files will be handled with multiple threads going at once;
// doing so improves bandwidth utilization since round-trips to the
// Drive APIs take a while. (However, we don't want to have too many
// workers; that would just lead to lots of 403 rate limit errors...)
nWorkers int
)
///////////////////////////////////////////////////////////////////////////
// Utility types
var authre = regexp.MustCompile("Authorization: Bearer [^\\s]*")
// sanitize attempts to remove sensitive values like authorization key
// values from debugging output so that it can be shared without also
// compromising the login credentials, etc.
func sanitize(s string) string {
if config.Google.ClientId != "" {
s = strings.Replace(s, config.Google.ClientId, "[***ClientId***]", -1)
}
if config.Google.ClientSecret != "" {
s = strings.Replace(s, config.Google.ClientSecret, "[***ClientSecret***]", -1)
}
if config.Google.ApiKey != "" {
s = strings.Replace(s, config.Google.ApiKey, "[***ApiKey***]", -1)
}
s = authre.ReplaceAllLiteralString(s, "Authorization: Bearer [***AuthToken***]")
return s
}
func debugNoPrint(s string, args ...interface{}) {
}
func debugPrint(s string, args ...interface{}) {
debug.Printf(s, args...)
}
func (d debugging) Printf(format string, args ...interface{}) {
if d {
log.Print(sanitize(fmt.Sprintf(format, args...)))
}
}
func message(format string, args ...interface{}) {
if !quiet {
log.Print(sanitize(fmt.Sprintf(format, args...)))
}
}
// byteCountingReader keeps track of how many bytes are actually read via
// Read() calls.
type byteCountingReader struct {
R io.Reader
bytesRead int64
}
func (bcr *byteCountingReader) Read(dst []byte) (int, error) {
read, err := bcr.R.Read(dst)
bcr.bytesRead += int64(read)
return read, err
}
///////////////////////////////////////////////////////////////////////////
// Small utility functions
// Utility function to decode hex-encoded bytes; treats any encoding errors
// as fatal errors (we assume that checkConfigValidity has already made
// sure the strings in the config file are reasonable.)
func decodeHexString(s string) []byte {
r, err := hex.DecodeString(s)
checkFatalError(err, "unable to decode hex string")
return r
}
// Returns a string that gives the given number of bytes with reasonable
// units. If 'fixedWidth' is true, the returned string will always be the same
// length, which makes it easier to line things up in columns.
func fmtbytes(n int64, fixedWidth bool) string {
if fixedWidth {
if n >= 1024*1024*1024*1024 {
return fmt.Sprintf("%6.2f TiB", float64(n)/(1024.*1024.*
1024.*1024.))
} else if n >= 1024*1024*1024 {
return fmt.Sprintf("%6.2f GiB", float64(n)/(1024.*1024.*
1024.))
} else if n > 1024*1024 {
return fmt.Sprintf("%6.2f MiB", float64(n)/(1024.*1024.))
} else if n > 1024 {
return fmt.Sprintf("%6.2f kiB", float64(n)/1024.)
} else {
return fmt.Sprintf("%6d B ", n)
}
} else {
if n >= 1024*1024*1024*1024 {
return fmt.Sprintf("%.2f TiB", float64(n)/(1024.*1024.*
1024.*1024.))
} else if n >= 1024*1024*1024 {
return fmt.Sprintf("%.2f GiB", float64(n)/(1024.*1024.*
1024.))
} else if n > 1024*1024 {
return fmt.Sprintf("%.2f MiB", float64(n)/(1024.*1024.))
} else if n > 1024 {
return fmt.Sprintf("%.2f kiB", float64(n)/1024.)
} else {
return fmt.Sprintf("%d B", n)
}
}
}
func fmtDuration(d time.Duration) string {
seconds := int(d.Seconds())
hours := seconds / 3600
minutes := (seconds % 3600) / 60
var str string
if hours > 0 {
str += fmt.Sprintf("%dh ", hours)
}
if minutes > 0 {
str += fmt.Sprintf("%dm ", minutes)
}
return str + fmt.Sprintf("%ds", seconds%60)
}
func normalizeModTime(modTime time.Time) time.Time {
// Google Drive supports millisecond resolution for modification time,
// but some filesystems (e.g., NTFS) support nanosecond resolution.
// We truncate the modification date to the nearest millisecond to avoid
// spurious differences when comparing file modification dates.
return modTime.UTC().Truncate(time.Millisecond)
}
// A few values that printFinalStats() uses to do its work
var startTime = time.Now()
var syncStartTime time.Time
var statsMutex sync.Mutex
var lastStatsTime = time.Now()
var lastStatsBytes int64
var maxActiveBytes int64
func updateActiveMemory() {
statsMutex.Lock()
defer statsMutex.Unlock()
var memstats runtime.MemStats
runtime.ReadMemStats(&memstats)
activeBytes := int64(memstats.Alloc)
if activeBytes > maxActiveBytes {
maxActiveBytes = activeBytes
}
}
// Called to print overall statistics after an upload or download is finished.
func printFinalStats() {
updateActiveMemory()
statsMutex.Lock()
defer statsMutex.Unlock()
syncTime := time.Now().Sub(syncStartTime)
message("Preparation time %s, sync time %s\n",
fmtDuration(syncStartTime.Sub(startTime)), fmtDuration(syncTime))
message("Updated %d Drive files, %d local files\n",
stats.DriveFilesUpdated, stats.LocalFilesUpdated)
message("%s read from disk, %s written to disk\n",
fmtbytes(stats.DiskReadBytes, false),
fmtbytes(stats.DiskWriteBytes, false))
message("%s uploaded (%s/s), %s downloaded (%s/s)\n",
fmtbytes(stats.UploadBytes, false),
fmtbytes(int64(float64(stats.UploadBytes)/syncTime.Seconds()),
false),
fmtbytes(stats.DownloadBytes, false),
fmtbytes(int64(float64(stats.DownloadBytes)/syncTime.Seconds()),
false))
message("%s peak memory used\n",
fmtbytes(maxActiveBytes, false))
}
// Return the MD5 hash of the file at the given path in the form of a
// string. If encryption is enabled, use the encrypted file contents when
// computing the hash.
func localFileMD5Contents(path string, encrypt bool, iv []byte) (string, error) {
contentsReader, _, err := getFileContentsReaderForUpload(path, encrypt, iv)
if contentsReader != nil {
defer contentsReader.Close()
}
if err != nil {
return "", err
}
md5 := md5.New()
n, err := io.Copy(md5, contentsReader)
atomic.AddInt64(&stats.DiskReadBytes, n)
if err != nil {
return "", err
}
return fmt.Sprintf("%x", md5.Sum(nil)), nil
}
// Returns an io.ReadCloser for given file, such that the bytes read are
// ready for upload: specifically, if encryption is enabled, the contents
// are encrypted with the given key and the initialization vector is
// prepended to the returned bytes. Otherwise, the contents of the file are
// returned directly.
func getFileContentsReaderForUpload(path string, encrypt bool,
iv []byte) (io.ReadCloser, int64, error) {
f, err := os.Open(path)
if err != nil {
return f, 0, err
}
stat, err := os.Stat(path)
if err != nil {
return nil, 0, err
}
fileSize := stat.Size()
if encrypt {
if key == nil {
key = decryptEncryptionKey()
}
r := makeEncrypterReader(key, iv, f)
// Prepend the initialization vector to the returned bytes.
r = io.MultiReader(bytes.NewReader(iv[:aes.BlockSize]), r)
readCloser := struct {
io.Reader
io.Closer
}{r, f}
return readCloser, fileSize + aes.BlockSize, nil
}
return f, fileSize, nil
}
///////////////////////////////////////////////////////////////////////////
// Encryption/decryption
// Encrypt the given plaintext using the given encryption key 'key' and
// initialization vector 'iv'. The initialization vector should be 16 bytes
// (the AES block-size), and should be randomly generated and unique for
// each file that's encrypted.
func encryptBytes(key []byte, iv []byte, plaintext []byte) []byte {
r, _ := ioutil.ReadAll(makeEncrypterReader(key, iv, bytes.NewReader(plaintext)))
return r
}
// Returns an io.Reader that encrypts the byte stream from the given io.Reader
// using the given key and initialization vector.
func makeEncrypterReader(key []byte, iv []byte, reader io.Reader) io.Reader
|
// Decrypt the given ciphertext using the given encryption key and
// initialization vector 'iv'.
func decryptBytes(key []byte, iv []byte, ciphertext []byte) []byte {
r, _ := ioutil.ReadAll(makeDecryptionReader(key, iv, bytes.NewReader(ciphertext)))
return r
}
func makeDecryptionReader(key []byte, iv []byte, reader io.Reader) io.Reader {
if key == nil {
printErrorAndExit(fmt.Errorf("uninitialized key in makeDecryptionReader()"))
}
block, err := aes.NewCipher(key)
checkFatalError(err, "unable to create AES cypher")
if len(iv) != aes.BlockSize {
printErrorAndExit(fmt.Errorf("IV length %d != aes.BlockSize %d", len(iv),
aes.BlockSize))
}
stream := cipher.NewCFBDecrypter(block, iv)
return &cipher.StreamReader{S: stream, R: reader}
}
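// Illustrative round-trip sketch, not part of the original file: given the same
// 32-byte key and 16-byte IV, decryptBytes inverts encryptBytes, e.g.
//
//	recovered := decryptBytes(key, iv, encryptBytes(key, iv, plaintext))
//
// leaves recovered equal to plaintext, since both directions use AES in CFB mode.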
// Return the given number of bytes of random values, using a
// cryptographically-strong random number source.
func getRandomBytes(n int) []byte {
bytes := make([]byte, n)
_, err := io.ReadFull(rand.Reader, bytes)
checkFatalError(err, "unable to get random bytes")
return bytes
}
// Create a new encryption key and encrypt it using the user-provided
// passphrase. Prints output to stdout that gives text to add to the
// ~/.skicka.config file to store the encryption key.
func generateKey() {
passphrase := os.Getenv(passphraseEnvironmentVariable)
if passphrase == "" {
printErrorAndExit(fmt.Errorf(passphraseEnvironmentVariable +
" environment variable not set."))
}
// Derive a 64-byte hash from the passphrase using PBKDF2 with 65536
// rounds of SHA256.
salt := getRandomBytes(32)
hash := pbkdf2.Key([]byte(passphrase), salt, 65536, 64, sha256.New)
if len(hash) != 64 {
printErrorAndExit(fmt.Errorf("incorrect key size returned by pbkdf2 %d", len(hash)))
}
// We'll store the first 32 bytes of the hash to use to confirm the
// correct passphrase is given on subsequent runs.
passHash := hash[:32]
// And we'll use the remaining 32 bytes as a key to encrypt the actual
// encryption key. (These bytes are *not* stored).
keyEncryptKey := hash[32:]
// Generate a random encryption key and encrypt it using the key
// derived from the passphrase.
key := getRandomBytes(32)
iv := getRandomBytes(16)
encryptedKey := encryptBytes(keyEncryptKey, iv, key)
fmt.Printf("; Add the following lines to the [encryption] section\n")
fmt.Printf("; of your ~/.skicka.config file.\n")
fmt.Printf("\tsalt=%s\n", hex.EncodeToString(salt))
fmt.Printf("\tpassphrase-hash=%s\n", hex.EncodeToString(passHash))
fmt.Printf("\tencrypted-key=%s\n", hex.EncodeToString(encryptedKey))
fmt.Printf("\tencrypted-key-iv=%s\n", hex.EncodeToString(iv))
}
// Decrypts the encrypted encryption key using values from the config file
// and the user's passphrase.
func decryptEncryptionKey() []byte {
if key != nil {
panic("key aready decrypted!")
}
salt := decodeHexString(config.Encryption.Salt)
passphraseHash := decodeHexString(config.Encryption.Passphrase_hash)
encryptedKey := decodeHexString(config.Encryption.Encrypted_key)
encryptedKeyIv := decodeHexString(config.Encryption.Encrypted_key_iv)
passphrase := os.Getenv(passphraseEnvironmentVariable)
if passphrase == "" {
fmt.Fprintf(os.Stderr, "skicka: "+passphraseEnvironmentVariable+
" environment variable not set")
os.Exit(1)
}
derivedKey := pbkdf2.Key([]byte(passphrase), salt, 65536, 64, sha256.New)
// Make sure the first 32 bytes of the derived key match the bytes stored
// when we first generated the key; if they don't, the user gave us
// the wrong passphrase.
if !bytes.Equal(derivedKey[:32], passphraseHash) {
fmt.Fprintf(os.Stderr, "skicka: incorrect passphrase")
os.Exit(1)
}
// Use the last 32 bytes of the derived key to decrypt the actual
// encryption key.
keyEncryptKey := derivedKey[32:]
return decryptBytes(keyEncryptKey, encryptedKeyIv, encryptedKey)
}
///////////////////////////////////////////////////////////////////////////
// Google Drive utility functions
// Returns the initialization vector (for encryption) for the given file.
// We store the initialization vector as a hex-encoded property in the
// file so that we don't need to download the file's contents to find the
// IV.
func getInitializationVector(driveFile *gdrive.File) ([]byte, error) {
ivhex, err := driveFile.GetProperty("IV")
if err != nil {
return nil, err
}
iv, err := hex.DecodeString(ivhex)
if err != nil {
return nil, err
}
if len(iv) != aes.BlockSize {
return nil, fmt.Errorf("unexpected length of IV %d", len(iv))
}
return iv, nil
}
func getPermissions(driveFile *gdrive.File) (os.FileMode, error) {
permStr, err := driveFile.GetProperty("Permissions")
if err != nil {
return 0, err
}
perm, err := strconv.ParseInt(permStr, 8, 16)
return os.FileMode(perm), err
}
///////////////////////////////////////////////////////////////////////////
// Error handling
func checkFatalError(err error, message string) {
if err != nil {
if message != "" {
printErrorAndExit(fmt.Errorf("%s: %v", message, err))
} else {
printErrorAndExit(err)
}
}
}
func addErrorAndPrintMessage(totalErrors *int32, message string, err error) {
fmt.Fprintf(os.Stderr, "skicka: "+message+": %s\n", err)
atomic.AddInt32(totalErrors, 1)
}
func printErrorAndExit(err error) {
fmt.Fprintf(os.Stderr, "\rskicka: %s\n", err)
os.Exit(1)
}
func printUsageAndExit() {
usage()
os.Exit(1)
}
///////////////////////////////////////////////////////////////////////////
// OAuth
const clientId = "139650692643-en68l7r28gmmnb4coiag0n61k9g4cr28.apps.googleusercontent.com"
func getOAuthClient(tokenCacheFilename string, tryBrowserAuth bool,
transport http.RoundTripper) (*http.Client, error) {
if config.Google.ApiKey != "" {
transport = addKeyTransport{transport: transport, key: config.Google.ApiKey}
}
oauthConfig := &oauth2.Config{
ClientID: clientId,
Endpoint: oauth2.Endpoint{
AuthURL: "https://accounts.google.com/o/oauth2/auth",
TokenURL: "https://accounts.google.com/o/oauth2/token",
},
RedirectURL: "urn:ietf:wg:oauth:2.0:oob",
Scopes: []string{"https://www.googleapis.com/auth/drive"},
}
if config.Google.ClientId != "" {
oauthConfig.ClientID = config.Google.ClientId
oauthConfig.ClientSecret = config.Google.ClientSecret
}
// Have the http.Client that oauth2 ends up returning use our
// http.RoundTripper (so that -dump-http, etc., all works.)
ctx := context.WithValue(oauth2.NoContext, oauth2.HTTPClient,
&http.Client{Transport: transport})
var err error
var token *oauth2.Token
// Try to read a token from the cache.
if token, err = readCachedToken(tokenCacheFilename, oauthConfig.ClientID); err != nil {
// If no token, or if the token isn't legit, have the user authorize.
if token, err = authorizeAndGetToken(oauthConfig, tryBrowserAuth); err != nil {
return nil, err
}
saveToken(tokenCacheFilename, token, oauthConfig.ClientID)
}
return oauthConfig.Client(ctx, token), nil
}
// Structure used for serializing oauth2.Tokens to disk. We also include
// the oauth2 client id that was used when the token was generated; this
// allows us to detect when reauthorization is necessary due to a change in
// client id.
type token struct {
ClientId string
oauth2.Token
}
func readCachedToken(tokenCacheFilename string, clientId string) (*oauth2.Token, error) {
b, err := ioutil.ReadFile(tokenCacheFilename)
if err != nil {
return nil, err
}
var t token
if err = json.Unmarshal(b, &t); err != nil {
return nil, err
}
if t.ClientId != clientId {
return nil, fmt.Errorf("token client id mismatch")
}
return &t.Token, nil
}
// Save the given oauth2.Token to disk so that the user doesn't have to
// reauthorize skicka next time.
func saveToken(tokenCacheFilename string, t *oauth2.Token, clientId string) {
tok := token{ClientId: clientId, Token: *t}
var err error
var b []byte
if b, err = json.Marshal(&tok); err == nil {
if err = ioutil.WriteFile(tokenCacheFilename, b, 0600); err == nil {
return
}
}
// Report the error but don't exit; we can continue along with the current
// command and the user will have to re-authorize next time.
fmt.Fprintf(os.Stderr, "skicka: %s: %s", tokenCacheFilename, err)
}
// Have the user authorize skicka and return the resulting token. tryBrowser
// controls whether the function tries to open a tab in a web browser or
// prints instructions to tell the user how to authorize manually.
func authorizeAndGetToken(oauthConfig *oauth2.Config, tryBrowser bool) (*oauth2.Token, error) {
var code string
var err error
if tryBrowser {
fmt.Printf("skicka: attempting to launch browser to authorize.\n")
fmt.Printf("(Re-run skicka with the -no-browser-auth option to authorize directly.)\n")
if code, err = codeFromWeb(oauthConfig); err != nil {
return nil, err
}
} else {
randState := fmt.Sprintf("st%d", time.Now().UnixNano())
url := oauthConfig.AuthCodeURL(randState)
fmt.Printf("Go to the following link in your browser:\n%v\n", url)
fmt.Printf("Enter verification code: ")
fmt.Scanln(&code)
}
return oauthConfig.Exchange(oauth2.NoContext, code)
}
// Get an authorization code by opening up the authorization page in a web
// browser.
func codeFromWeb(oauthConfig *oauth2.Config) (string, error) {
ch := make(chan string)
randState := fmt.Sprintf("st%d", time.Now().UnixNano())
// Launch a local web server to receive the authorization code.
ts := httptest.NewServer(http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
if req.URL.Path == "/favicon.ico" {
http.Error(rw, "", 404)
return
}
if req.FormValue("state") != randState {
log.Printf("State doesn't match: req = %#v", req)
http.Error(rw, "", 500)
return
}
if code := req.FormValue("code"); code != "" {
fmt.Fprintf(rw, "<h1>Success!</h1>Skicka is now authorized.")
rw.(http.Flusher).Flush()
ch <- code
return
}
http.Error(rw, "", 500)
}))
defer ts.Close()
oauthConfig.RedirectURL = ts.URL
url := oauthConfig.AuthCodeURL(randState)
errs := make(chan error)
go func() {
err := openURL(url)
errs <- err
}()
err := <-errs
if err == nil {
// The URL open was apparently successful; wait for our server to
// receive the code and send it back.
code := <-ch
return code, nil
}
return "", err
}
// Attempt to open the given URL in a web browser.
func openURL(url string) error {
try := []string{"xdg-open", "google-chrome", "open"}
for _, bin := range try {
if err := exec.Command(bin, url).Run(); err == nil {
return nil
}
}
return fmt.Errorf("Error opening URL in browser.")
}
///////////////////////////////////////////////////////////////////////////
// main (and its helpers)
// Create an empty configuration file for the user to use as a starting-point.
func createConfigFile(filename string) {
contents := `; Default .skicka.config file. See
; https://github.com/google/skicka/blob/master/README.md for more
; information about setting up skicka.
[google]
;Override the default application client id used by skicka.
;clientid=YOUR_GOOGLE_APP_CLIENT_ID
;clientsecret=YOUR_GOOGLE_APP_SECRET
;An API key may optionally be provided.
;apikey=YOUR_API_KEY
[encryption]
; Run 'skicka genkey' to generate an encryption key.
;salt=
;passphrase-hash=
;encrypted-key=
;encrypted-key-iv=
[upload]
; You may want to specify regular expressions to match local filenames
; that you want to be ignored by 'skicka upload'. Use one ignored-regexp
; line for each such regular expression.
;ignored-regexp="\\.o$"
;ignored-regexp=~$
;ignored-regexp="\\._"
;ignored-regexp="RECYCLE\\.BIN"
;ignored-regexp="Thumbs\\.db$"
;ignored-regexp="\\.git"
;ignored-regexp="\\.(mp3|wma|aiff)$"
;ignored-regexp="\\.~lock\\..*$"
;ignored-regexp="~\\$"
;ignored-regexp="\\.DS_Store$"
;ignored-regexp="desktop.ini"
;
; To limit upload bandwidth, you can set the maximum (average)
; bytes per second that will be used for uploads
;bytes-per-second-limit=524288 ; 512kB
`
// Don't overwrite an already-existing configuration file.
if _, err := os.Stat(filename); os.IsNotExist(err) {
err := ioutil.WriteFile(filename, []byte(contents), 0600)
if err != nil {
printErrorAndExit(fmt.Errorf("%s: %v", filename, err))
}
message("created configuration file %s.\n", filename)
} else {
printErrorAndExit(fmt.Errorf("%s: file already exists; "+
"leaving it alone.", filename))
}
}
func checkEncryptionConfig(value string, name string, bytes int) int {
if value == "" {
return 0
}
if num, err := hex.DecodeString(value); err != nil || len(num) != bytes {
fmt.Fprintf(os.Stderr, "skicka: missing or invalid "+
"[encryption]/%s value (expecting %d hex "+
"characters).\n", name, 2*bytes)
return 1
}
return 0
}
// Check that the configuration read from the config file isn't obviously
// missing needed entries so that we can give better error messages at startup
// while folks are first getting things set up.
func checkConfigValidity() {
nerrs := 0
if config.Google.ClientId == "YOUR_GOOGLE_APP_CLIENT_ID" {
config.Google.ClientId = ""
}
if config.Google.ClientSecret == "YOUR_GOOGLE_APP_SECRET" {
config.Google.ClientSecret = ""
}
// It's ok if the encryption stuff isn't present (if encryption
// isn't being used), but if it is present, it must be valid...
nerrs += checkEncryptionConfig(config.Encryption.Salt, "salt", 32)
nerrs += checkEncryptionConfig(config.Encryption.Passphrase_hash,
"passphrase-hash", 32)
nerrs += checkEncryptionConfig(config.Encryption.Encrypted_key,
"encrypted-key", 32)
nerrs += checkEncryptionConfig(config.Encryption.Encrypted_key_iv,
"encrypted-key-iv", 16)
if nerrs > 0 {
os.Exit(1)
}
}
func readConfigFile(filename string) {
if runtime.GOOS != "windows" {
if info, err := os.Stat(filename); err != nil {
printErrorAndExit(fmt.Errorf("%s: %v", filename, err))
} else if goperms := info.Mode() & ((1 << 6) - 1); goperms != 0 {
printErrorAndExit(fmt.Errorf("%s: permissions of configuration file "+
"allow group/other access. Your secrets are at risk.",
filename))
}
}
err := gcfg.ReadFileInto(&config, filename)
if err != nil {
printErrorAndExit(fmt.Errorf("%s: %v. (You may want to run \"skicka "+
"init\" to create an initial configuration file.)", filename, err))
}
checkConfigValidity()
}
func usage() {
fmt.Printf(
`skicka is a tool for working with files and folders on Google Drive.
See http://github.com/google/skicka/README.md for information about getting started.
usage: skicka [common options] <command> [command options]
Commands and their options are:
cat Print the contents of the Google Drive file to standard output.
Arguments: drive_path ...
download Recursively download either a single file, or all files from a
Google Drive folder to a local directory. If the corresponding
local file already exists and has the same contents as its
Google Drive file, the download is skipped.
Arguments: [-ignore-times] [-download-google-apps-files]
drive_path local_path
df Prints the total space used and amount of available space on
Google Drive.
du Print the space used by the Google Drive folder and its children.
Arguments: [drive_path ...]
fsck [EXPERIMENTAL/NEW] Use at your own risk.
Perform a number of consistency checks on files stored in Google
Drive, including verifying metadata and removing duplicate files
with the same name.
Arguments: [--trash-duplicates] [drive_path]
help Print this help text.
genkey Generate a new key for encrypting files.
init Create an initial ~/.skicka.config configuration file. (You
will need to edit it before using skicka; see comments in the
configuration file for details.)
ls List the files and directories in the given Google Drive folder.
Arguments: [-d, -l, -ll, -r] [drive_path ...],
where -l and -ll specify long (including sizes and update
times) and really long output (also including MD5 checksums),
respectively. The -r argument causes ls to recursively list
all files in the hierarchy rooted at the base directory, and
-d causes directories specified on the command line to be
listed as files (i.e., their contents aren't listed.)
mkdir Create a new directory (folder) at the given Google Drive path.
Arguments: [-p] drive_path ...,
where intermediate directories in the path are created if -p is
specified.
rm Remove a file or directory at the given Google Drive path.
Arguments: [-r, -s] drive_path ...,
where files and directories are recursively removed if -r is
specified and the Google Drive trash is skipped if -s is
specified. The default behavior is to fail if the drive path
specified is a directory and -r is not specified, and to send
files to the trash instead of permanently deleting them.
upload Uploads all files in the local directory and its children to the
given Google Drive path. Skips files that have already been
uploaded.
Arguments: [-ignore-times] [-encrypt] [-follow-symlinks <maxdepth>]
local_path drive_path
Options valid for both "upload" and "download":
-dry-run Don't actually upload or download, but print the paths of
all files that would be transferred.
-ignore-times Normally, skicka assumes that if the timestamp of a local
file matches the timestamp of the file on Drive and the
files have the same size, then it isn't necessary to
confirm that the file contents match. The -ignore-times
flag can be used to force checking file contents in this
case.
General options valid for all commands:
-config <filename> General skicka configuration file. Default: ~/.skicka.config.
-debug Enable debugging output.
-dump-http Dump http traffic.
-metadata-cache-file <filename>
File to store metadata about Google Drive contents.
Default: ~/.skicka.metadata.cache
-no-browser-auth Disables attempting to open the authorization URL in a web
browser when initially authorizing skicka to access Google Drive.
-quiet Suppress non-error messages.
-tokencache <filename> OAuth2 token cache file. Default: ~/.skicka.tokencache.json.
-verbose Enable verbose output.
`)
}
func shortUsage() {
fmt.Fprintf(os.Stderr, `usage: skicka [skicka options] <command> [command options]
Supported commands are:
cat Print the contents of the given file
download Download a file or folder hierarchy from Drive to the local disk
df Display free space on Drive
du Report disk usage for a folder hierarchy on Drive
fsck Check consistency of files in Drive and local metadata cache
genkey Generate a new encryption key
init Create an initial skicka configuration file
ls List the contents of a folder on Google Drive
mkdir Create a new folder or folder hierarchy on Drive
rm Remove a file or folder on Google Drive
upload Upload a local file or directory hierarchy to Drive
'skicka help' prints more detailed documentation.
`)
}
func userHomeDir() string {
if runtime.GOOS == "windows" {
home := os.Getenv("HOMEDRIVE") + os.Getenv("HOMEPATH")
if home == "" {
home = os.Getenv("USERPROFILE")
}
return home
}
return os.Getenv("HOME")
}
func main() {
home := userHomeDir()
tokenCacheFilename := flag.String("tokencache",
filepath.Join(home, ".skicka.tokencache.json"),
"OAuth2 token cache file")
configFilename := flag.String("config",
filepath.Join(home, ".skicka.config"),
"Configuration file")
metadataCacheFilename := flag.String("metadata-cache-file",
filepath.Join(home, "/.skicka.metadata.cache"),
"Filename for local cache of Google Drive file metadata")
nw := flag.Int("num-threads", 4, "Number of threads to use for uploads/downloads")
vb := flag.Bool("verbose", false, "Enable verbose output")
dbg := flag.Bool("debug", false, "Enable debugging output")
qt := flag.Bool("quiet", false, "Suppress non-error messages")
dumpHTTP := flag.Bool("dump-http", false, "Dump http traffic")
flakyHTTP := flag.Bool("flaky-http", false, "Add flakiness to http traffic")
noBrowserAuth := flag.Bool("no-browser-auth", false,
"Don't try launching browser for authorization")
flag.Usage = usage
flag.Parse()
if len(flag.Args()) == 0 {
shortUsage()
os.Exit(0)
}
nWorkers = *nw
debug = debugging(*dbg)
verbose = debugging(*vb || bool(debug))
quiet = *qt
cmd := flag.Arg(0)
// Commands that don't need the config file to be read or to use
// the cached OAuth2 token.
switch cmd {
case "genkey":
generateKey()
return
case "init":
createConfigFile(*configFilename)
return
case "help":
usage()
return
}
readConfigFile(*configFilename)
// Choose the appropriate callback function for the GDrive object to
// use for debugging output.
var dpf func(s string, args ...interface{})
if debug {
dpf = debugPrint
} else {
dpf = debugNoPrint
}
// Check this before creating the GDrive object so that we don't spend
// a lot of time updating the cache if we were just going to print the
// usage message.
if cmd != "cat" && cmd != "download" && cmd != "df" && cmd != "du" &&
cmd != "fsck" && cmd != "ls" && cmd != "mkdir" && cmd != "rm" &&
cmd != "upload" {
shortUsage()
os.Exit(1)
}
// Set up the basic http.Transport.
transport := http.DefaultTransport
if tr, ok := transport.(*http.Transport); ok {
// Increase the default number of open connections per destination host
// to be enough for the number of goroutines we run concurrently for
// uploads/downloads; this gives some benefit especially for uploading
// small files.
tr.MaxIdleConnsPerHost = 4
} else {
printErrorAndExit(fmt.Errorf("DefaultTransport not an *http.Transport?"))
}
if *flakyHTTP {
transport = newFlakyTransport(transport)
}
if *dumpHTTP {
transport = loggingTransport{transport: transport}
}
// And now upgrade to the OAuth Transport *http.Client.
client, err := getOAuthClient(*tokenCacheFilename, !*noBrowserAuth,
transport)
if err != nil {
printErrorAndExit(fmt.Errorf("error with OAuth2 Authorization: %v ", err))
}
// Update the current active memory statistics every half second.
ticker := time.NewTicker(500 * time.Millisecond)
go func() {
for {
<-ticker.C
updateActiveMemory()
}
}()
gd, err = gdrive.New(config.Upload.Bytes_per_second_limit,
config.Download.Bytes_per_second_limit, dpf, client,
*metadataCacheFilename, quiet)
if err != nil {
printErrorAndExit(fmt.Errorf("error creating Google Drive "+
"client: %v", err))
}
args := flag.Args()[1:]
errs := 0
switch cmd {
case "cat":
errs = cat(args)
case "download":
errs = download(args)
case "df":
errs = df(args)
case "du":
errs = du(args)
case "fsck":
errs = fsck(args, *metadataCacheFilename)
case "ls":
errs = ls(args)
case "mkdir":
errs = mkdir(args)
case "rm":
errs = rm(args)
case "upload":
errs = upload(args)
gd.UpdateMetadataCache(*metadataCacheFilename)
default:
errs = 1
}
os.Exit(errs)
}
|
{
if key == nil {
printErrorAndExit(fmt.Errorf("uninitialized key in makeEncrypterReader()"))
}
block, err := aes.NewCipher(key)
checkFatalError(err, "unable to create AES cypher")
if len(iv) != aes.BlockSize {
printErrorAndExit(fmt.Errorf("IV length %d != aes.BlockSize %d", len(iv),
aes.BlockSize))
}
stream := cipher.NewCFBEncrypter(block, iv)
return &cipher.StreamReader{S: stream, R: reader}
}
|
utils.go
|
package app
import (
"io/ioutil"
"os"
"regexp"
)
func listFiles(dir, pattern string) ([]os.FileInfo, error)
|
{
files, err := ioutil.ReadDir(dir)
if err != nil {
return nil, err
}
filteredFiles := []os.FileInfo{}
for _, file := range files {
if file.IsDir() {
continue
}
matched, err := regexp.MatchString(pattern, file.Name())
if err != nil {
return nil, err
}
if matched {
filteredFiles = append(filteredFiles, file)
}
}
return filteredFiles, nil
}
|
|
conf.py
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# morphforge documentation build configuration file, created by
# sphinx-quickstart on Fri Mar 23 14:01:08 2012.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys
import os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# To allow stuff to build on RTD
import sys
class MockType(type):
def __init__(cls, name, bases, dct ):
super(MockType, cls).__init__(name, bases, dct)
pass
def __getattr__(cls, name):
return Mock()
def __str__(cls):
return 'custom str for %s' % (cls.__name__,)
class Mock(object):
__metaclass__ = MockType
def __init__(self, *args, **kwargs):
pass
def __call__(self, *args, **kwargs):
return Mock()
#@classmethod
def __getattr__(cls, name):
if name in ('__file__', '__path__'):
return '/dev/null'
elif name[0] == name[0].upper():
mockType = type(name, (), {})
mockType.__module__ = __name__
return mockType
else:
return Mock()
def __getitem__(self, key):
return Mock()
def __add__(self, rhs):
return Mock()
def __sub__(self, rhs):
return Mock()
def
|
(self, rhs):
return Mock()
def __div__(self, rhs):
return Mock()
def __radd__(self, rhs):
return Mock()
def __rsub__(self, rhs):
return Mock()
def __rmul__(self, rhs):
return Mock()
def __rdiv__(self, rhs):
return Mock()
def __pow__(self, rhs):
return Mock()
MOCK_MODULES = ['numpy', 'pylab', 'scipy', 'mredoc', 'mreorg', 'quantities', 'matplotlib']
for mod_name in MOCK_MODULES:
sys.modules[mod_name] = Mock
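# Illustrative effect of the mocking above (assumed, not part of the original
# file): on Read the Docs "import numpy" now succeeds, and attribute access such
# as numpy.array returns another Mock instead of breaking the autodoc build.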
# URL to clean the Read-the-Docs build dir:
#https://readthedocs.org/wipe/morphforge/latest/
sys.path.append('../src/')
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.autosummary',
'sphinx.ext.doctest',
'sphinx.ext.todo',
'sphinx.ext.pngmath',
'sphinx.ext.ifconfig',
'sphinx.ext.viewcode',
'sphinx.ext.inheritance_diagram',
]
inheritance_graph_attrs = dict(rankdir="LR", size='"9.0, 8.0"',
fontsize=10, ratio='compress')
autodoc_default_flags =['undoc-members', 'members']
add_module_names = False
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'morphforge'
copyright = u'2012, Mike Hull'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '0.1-alpha'
# The full version, including alpha/beta/rc tags.
release = '0.1-alpha'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# .
html_title = "morphforge"
html_add_permalinks = False
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
html_show_sphinx = False
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
html_use_opensearch = False
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'morphforgedoc'
# -- Options for LaTeX output --------------------------------------------------
# The paper size ('letter' or 'a4').
#latex_paper_size = 'letter'
# The font size ('10pt', '11pt' or '12pt').
#latex_font_size = '10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
('index', 'morphforge.tex', u'morphforge Documentation',
u'Mike Hull', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Additional stuff for the LaTeX preamble.
#latex_preamble = ''
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
('index', 'morphforge', u'morphforge Documentation',
[u'Mike Hull'], 1)
]
html_theme = "haiku"
todo_include_todos=True
autosummary_generate = True
# AutoDoc:
def maybe_skip_member(app, what, name, obj, skip, options):
# Since we add 'toSWC', etc to MorphologyTree, we
# don't want this to show up in the documentation.
if 'members' in options:
if name.startswith('to') or name.startswith('from'):
return True
if name == "__weakref__":
return True
#print name
if name in ['__weakref__' ,'__dict__','__doc__','__module__']:
return True
return False
def setup(app):
app.connect('autodoc-skip-member', maybe_skip_member)
templates_path = ["_templates",]
rst_prolog = r"""
.. |MHThesis| replace:: :download:`Mike Hull's Ph.D Thesis </static/ThesisReducedToTools.pdf>`
"""
|
__mul__
|
DevNav.tsx
|
import { Box } from '@material-ui/core';
import { Link } from 'react-router-dom';
export default function DevNav() {
return (
<Box p={3} display="flex" flexDirection="column" alignItems="center">
<h1>You're in DevNav</h1>
{/* With react-router-dom it's not good practice to skip <Link />,
but RSuite already renders a link */}
<NavLink to="/">Dev nav</NavLink>
<NavLink to="/r/start">Go Room Start</NavLink>
<NavLink to="/r/write">Go Room Write</NavLink>
<NavLink to="/r/draw">Go Room Draw</NavLink>
<NavLink to="/r/lobby">Go Room Lobby</NavLink>
<NavLink to="/r">Go Room</NavLink>
<NavLink to="/home">Go home</NavLink>
<NavLink to="/t">Go to TESTS pages</NavLink>
</Box>
);
}
function NavLink({ to, children }: any) {
return (
<Box
p={1}
|
bgcolor="background.default"
m={1}
borderRadius={2}
border={1}
borderColor="primary.main">
<Link to={to} style={{ color: 'white' }}>
{children}
</Link>
</Box>
);
}
| |
index.js
|
const overlay = require('@pirxpilot/overlay');
const Popover = require('@pirxpilot/confirmation-popover');
const Emitter = require('component-emitter');
function id2el(id) {
return document.querySelector(`[data-tour-id="${id}"]`) || document.querySelector(id);
}
function coerce(selectorOrNode) {
if (typeof selectorOrNode !== 'string') {
// node or empty
return selectorOrNode;
}
const el = document.querySelector(selectorOrNode);
if (el.nodeName === 'TEMPLATE' && el.content) {
return el.content;
}
return el;
}
// parse HTML to create a list of steps - tour-id / tour-content need to match
function steps(container) {
const result = [];
container.querySelectorAll('[data-tour-content]').forEach(function(el) {
const id = el.dataset.tourContent;
const refEl = id2el(id);
const absent = el.dataset.contentAbsent !== undefined;
// only consider steps for which referenceEl is found
if (!refEl && !absent) {
return;
}
result.push({
id,
contentEl: el,
position: el.dataset.position || 'bottom',
delay: el.dataset.delay || 0,
absent,
refEl
});
});
return result;
}
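// Illustrative markup sketch (assumed, not part of the original file): a step is
// declared by pairing an anchor element with a matching content element, e.g.
//   <button data-tour-id="save">Save</button>
//   <div data-tour-content="save" data-position="right">Click here to save.</div>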
function createPopover(step) {
const self = this;
if (step.absent) {
step.refEl = id2el(step.id);
self.markStep(true);
}
self.popover = new Popover(step.contentEl.cloneNode(true));
self.popover.classname += ' tour-popover';
self.popover.el.classList.add('tour-popover');
self.updateNext();
self.popover
.cancel(self.labels.cancel)
.ok(self.labels.ok)
.focus('ok')
.on('show', () => self.emit('show', self.current))
.on('hide', () => self.emit('hide', self.current))
.on('cancel', () => self.end())
.on('ok', () => self.next())
.position(step.position)
.show(step.refEl);
}
class Tour extends Emitter {
static of(...args) {
return new Tour(...args);
}
constructor(container, { labels } = {}) {
super();
this.steps = steps(coerce(container));
this.current = 0;
this.labels = Object.assign({
ok: 'Next',
cancel: 'Close'
}, labels);
}
overlay(options) {
this._overlay = overlay(options);
this._overlay.el.classList.add('tour-overlay');
return this;
}
play(index) {
const self = this;
self.emit('begin');
if (self._overlay) {
self._overlay.show();
}
if (typeof index === 'number') {
self.current = index;
}
self.showStep();
}
// hides next button for last step
updateNext() {
this.popover.el.querySelector('.ok').classList.toggle('hidden', this.current + 1 >= this.steps.length);
}
// marks element associated with active step
markStep(on) {
const step = this.steps[this.current];
if (step) {
step.refEl.classList.toggle('tour-active-step', on);
}
}
hideStep() {
if (this.popover) {
this.popover.hide();
this.popover = undefined;
}
}
showStep() {
let step;
this.current %= this.steps.length;
step = this.steps[this.current];
if (!step) {
return;
}
if (!step.absent) {
this.markStep(true);
}
this.hideStep();
setTimeout(createPopover.bind(this, step), step.delay);
}
// called when user acted upon a suggestion in a Tour step
react(delay) {
const step = this.steps[this.current];
if (!step) {
return;
}
if (!this.popover) {
return;
}
if (this.popover.el.classList.contains('tour-reacted')) {
return;
}
if (typeof delay !== 'number') {
delay = step.delay;
|
setTimeout(function() {
popover.show(step.refEl);
popover.classname += ' tour-reacted';
popover.el.classList.add('tour-reacted');
}, delay);
}
next() {
this.markStep(false);
this.emit('next', ++this.current);
setTimeout(() => this.showStep(), 0);
}
end() {
this.markStep(false);
if (this._overlay) {
this._overlay.hide();
}
this.hideStep();
++this.current;
this.emit('end');
}
}
module.exports = Tour;
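// Hypothetical usage sketch based on the API above (names assumed, not part of
// the original file):
//   const tour = Tour.of(document.body, { labels: { ok: 'Got it' } });
//   tour.overlay({}).play(0);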
|
}
const popover = this.popover.hide();
|
joiner.go
|
// Copyright 2017 PingCAP, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// See the License for the specific language governing permissions and
// limitations under the License.
package executor
import (
"github.com/pingcap/tidb/expression"
"github.com/pingcap/tidb/plan"
"github.com/pingcap/tidb/sessionctx"
"github.com/pingcap/tidb/types"
"github.com/pingcap/tidb/util/chunk"
"github.com/pkg/errors"
)
var (
_ joiner = &semiJoiner{}
_ joiner = &antiSemiJoiner{}
_ joiner = &leftOuterSemiJoiner{}
_ joiner = &antiLeftOuterSemiJoiner{}
_ joiner = &leftOuterJoiner{}
_ joiner = &rightOuterJoiner{}
_ joiner = &innerJoiner{}
)
// joiner is used to generate join results according to the join type.
// A typical instruction flow is:
//
// hasMatch := false
// for innerIter.Current() != innerIter.End() {
// matched, err := j.tryToMatch(outer, innerIter, chk)
// // handle err
// hasMatch = hasMatch || matched
// }
// if !hasMatch {
// j.onMissMatch(outer)
// }
//
// NOTE: This interface is **not** thread-safe.
type joiner interface {
// tryToMatch tries to join an outer row with a batch of inner rows. When
// 'inners.Len != 0' but all the joined rows are filtered, the outer row is
// considered unmatched. Otherwise, the outer row is matched and some joined
// rows are appended to `chk`. The size of `chk` is limited to MaxChunkSize.
//
// NOTE: Callers need to call this function multiple times to consume all
// the inner rows for an outer row, and decide whether the outer row can be
// matched with at least one inner row.
tryToMatch(outer chunk.Row, inners chunk.Iterator, chk *chunk.Chunk) (bool, error)
// onMissMatch operates on the unmatched outer row according to the join
// type. An outer row is considered mismatched if:
// 1. it can not pass the filter on the outer table side.
// 2. there is no inner row with the same join key.
// 3. all the joined rows can not pass the filter on the join result.
//
// On these conditions, the caller calls this function to handle the
// unmatched outer rows according to the current join type:
// 1. 'SemiJoin': ignores the unmatched outer row.
// 2. 'AntiSemiJoin': appends the unmatched outer row to the result buffer.
// 3. 'LeftOuterSemiJoin': concats the unmatched outer row with 0 and
// appends it to the result buffer.
// 4. 'AntiLeftOuterSemiJoin': concats the unmatched outer row with 0 and
// appends it to the result buffer.
// 5. 'LeftOuterJoin': concats the unmatched outer row with a row of NULLs
// and appends it to the result buffer.
// 6. 'RightOuterJoin': concats the unmatched outer row with a row of NULLs
// and appends it to the result buffer.
// 7. 'InnerJoin': ignores the unmatched outer row.
onMissMatch(outer chunk.Row, chk *chunk.Chunk)
}
func newJoiner(ctx sessionctx.Context, joinType plan.JoinType,
outerIsRight bool, defaultInner []types.Datum, filter []expression.Expression,
lhsColTypes, rhsColTypes []*types.FieldType) joiner {
base := baseJoiner{
ctx: ctx,
conditions: filter,
outerIsRight: outerIsRight,
maxChunkSize: ctx.GetSessionVars().MaxChunkSize,
}
colTypes := make([]*types.FieldType, 0, len(lhsColTypes)+len(rhsColTypes))
colTypes = append(colTypes, lhsColTypes...)
colTypes = append(colTypes, rhsColTypes...)
base.selected = make([]bool, 0, chunk.InitialCapacity)
if joinType == plan.LeftOuterJoin || joinType == plan.RightOuterJoin {
innerColTypes := lhsColTypes
if !outerIsRight {
innerColTypes = rhsColTypes
}
base.initDefaultInner(innerColTypes, defaultInner)
}
switch joinType {
case plan.SemiJoin:
base.shallowRow = chunk.MutRowFromTypes(colTypes)
return &semiJoiner{base}
|
case plan.LeftOuterSemiJoin:
base.shallowRow = chunk.MutRowFromTypes(colTypes)
return &leftOuterSemiJoiner{base}
case plan.AntiLeftOuterSemiJoin:
base.shallowRow = chunk.MutRowFromTypes(colTypes)
return &antiLeftOuterSemiJoiner{base}
case plan.LeftOuterJoin:
base.chk = chunk.NewChunkWithCapacity(colTypes, ctx.GetSessionVars().MaxChunkSize)
return &leftOuterJoiner{base}
case plan.RightOuterJoin:
base.chk = chunk.NewChunkWithCapacity(colTypes, ctx.GetSessionVars().MaxChunkSize)
return &rightOuterJoiner{base}
case plan.InnerJoin:
base.chk = chunk.NewChunkWithCapacity(colTypes, ctx.GetSessionVars().MaxChunkSize)
return &innerJoiner{base}
}
panic("unsupported join type in func newJoiner()")
}
type baseJoiner struct {
ctx sessionctx.Context
conditions []expression.Expression
defaultInner chunk.Row
outerIsRight bool
chk *chunk.Chunk
shallowRow chunk.MutRow
selected []bool
maxChunkSize int
}
func (j *baseJoiner) initDefaultInner(innerTypes []*types.FieldType, defaultInner []types.Datum) {
mutableRow := chunk.MutRowFromTypes(innerTypes)
mutableRow.SetDatums(defaultInner[:len(innerTypes)]...)
j.defaultInner = mutableRow.ToRow()
}
func (j *baseJoiner) makeJoinRowToChunk(chk *chunk.Chunk, lhs, rhs chunk.Row) {
// Call AppendRow() first to increment the virtual rows.
// Fix: https://github.com/pingcap/tidb/issues/5771
chk.AppendRow(lhs)
chk.AppendPartialRow(lhs.Len(), rhs)
}
// makeShallowJoinRow shallow copies `inner` and `outer` into `shallowRow`.
func (j *baseJoiner) makeShallowJoinRow(isRightJoin bool, inner, outer chunk.Row) {
if !isRightJoin {
inner, outer = outer, inner
}
j.shallowRow.ShallowCopyPartialRow(0, inner)
j.shallowRow.ShallowCopyPartialRow(inner.Len(), outer)
}
func (j *baseJoiner) filter(input, output *chunk.Chunk, outerColsLen int) (bool, error) {
var err error
j.selected, err = expression.VectorizedFilter(j.ctx, j.conditions, chunk.NewIterator4Chunk(input), j.selected)
if err != nil {
return false, errors.Trace(err)
}
// Batch copies selected rows to output chunk.
innerColOffset, outerColOffset := 0, input.NumCols()-outerColsLen
if !j.outerIsRight {
innerColOffset, outerColOffset = outerColsLen, 0
}
return chunk.CopySelectedJoinRows(input, innerColOffset, outerColOffset, j.selected, output), nil
}
type semiJoiner struct {
baseJoiner
}
func (j *semiJoiner) tryToMatch(outer chunk.Row, inners chunk.Iterator, chk *chunk.Chunk) (matched bool, err error) {
if inners.Len() == 0 {
return false, nil
}
if len(j.conditions) == 0 {
chk.AppendPartialRow(0, outer)
inners.ReachEnd()
return true, nil
}
for inner := inners.Current(); inner != inners.End(); inner = inners.Next() {
j.makeShallowJoinRow(j.outerIsRight, inner, outer)
matched, err = expression.EvalBool(j.ctx, j.conditions, j.shallowRow.ToRow())
if err != nil {
return false, errors.Trace(err)
}
if matched {
chk.AppendPartialRow(0, outer)
inners.ReachEnd()
return true, nil
}
}
return false, nil
}
func (j *semiJoiner) onMissMatch(outer chunk.Row, chk *chunk.Chunk) {
}
type antiSemiJoiner struct {
baseJoiner
}
// tryToMatch implements joiner interface.
func (j *antiSemiJoiner) tryToMatch(outer chunk.Row, inners chunk.Iterator, chk *chunk.Chunk) (matched bool, err error) {
if inners.Len() == 0 {
return false, nil
}
if len(j.conditions) == 0 {
inners.ReachEnd()
return true, nil
}
for inner := inners.Current(); inner != inners.End(); inner = inners.Next() {
j.makeShallowJoinRow(j.outerIsRight, inner, outer)
matched, err = expression.EvalBool(j.ctx, j.conditions, j.shallowRow.ToRow())
if err != nil {
return false, errors.Trace(err)
}
if matched {
inners.ReachEnd()
return true, nil
}
}
return false, nil
}
func (j *antiSemiJoiner) onMissMatch(outer chunk.Row, chk *chunk.Chunk) {
chk.AppendRow(outer)
}
type leftOuterSemiJoiner struct {
baseJoiner
}
// tryToMatch implements joiner interface.
func (j *leftOuterSemiJoiner) tryToMatch(outer chunk.Row, inners chunk.Iterator, chk *chunk.Chunk) (matched bool, err error) {
if inners.Len() == 0 {
return false, nil
}
if len(j.conditions) == 0 {
j.onMatch(outer, chk)
inners.ReachEnd()
return true, nil
}
for inner := inners.Current(); inner != inners.End(); inner = inners.Next() {
j.makeShallowJoinRow(false, inner, outer)
matched, err = expression.EvalBool(j.ctx, j.conditions, j.shallowRow.ToRow())
if err != nil {
return false, errors.Trace(err)
}
if matched {
j.onMatch(outer, chk)
inners.ReachEnd()
return true, nil
}
}
return false, nil
}
func (j *leftOuterSemiJoiner) onMatch(outer chunk.Row, chk *chunk.Chunk) {
chk.AppendPartialRow(0, outer)
chk.AppendInt64(outer.Len(), 1)
}
func (j *leftOuterSemiJoiner) onMissMatch(outer chunk.Row, chk *chunk.Chunk) {
chk.AppendPartialRow(0, outer)
chk.AppendInt64(outer.Len(), 0)
}
type antiLeftOuterSemiJoiner struct {
baseJoiner
}
// tryToMatch implements joiner interface.
func (j *antiLeftOuterSemiJoiner) tryToMatch(outer chunk.Row, inners chunk.Iterator, chk *chunk.Chunk) (matched bool, err error) {
if inners.Len() == 0 {
return false, nil
}
if len(j.conditions) == 0 {
j.onMatch(outer, chk)
inners.ReachEnd()
return true, nil
}
for inner := inners.Current(); inner != inners.End(); inner = inners.Next() {
j.makeShallowJoinRow(false, inner, outer)
matched, err := expression.EvalBool(j.ctx, j.conditions, j.shallowRow.ToRow())
if err != nil {
return false, errors.Trace(err)
}
if matched {
j.onMatch(outer, chk)
inners.ReachEnd()
return true, nil
}
}
return false, nil
}
func (j *antiLeftOuterSemiJoiner) onMatch(outer chunk.Row, chk *chunk.Chunk) {
chk.AppendPartialRow(0, outer)
chk.AppendInt64(outer.Len(), 0)
}
func (j *antiLeftOuterSemiJoiner) onMissMatch(outer chunk.Row, chk *chunk.Chunk) {
chk.AppendPartialRow(0, outer)
chk.AppendInt64(outer.Len(), 1)
}
type leftOuterJoiner struct {
baseJoiner
}
// tryToMatch implements joiner interface.
func (j *leftOuterJoiner) tryToMatch(outer chunk.Row, inners chunk.Iterator, chk *chunk.Chunk) (bool, error) {
if inners.Len() == 0 {
return false, nil
}
j.chk.Reset()
chkForJoin := j.chk
if len(j.conditions) == 0 {
chkForJoin = chk
}
numToAppend := j.maxChunkSize - chk.NumRows()
for ; inners.Current() != inners.End() && numToAppend > 0; numToAppend-- {
j.makeJoinRowToChunk(chkForJoin, outer, inners.Current())
inners.Next()
}
if len(j.conditions) == 0 {
return true, nil
}
// reach here, chkForJoin is j.chk
matched, err := j.filter(chkForJoin, chk, outer.Len())
if err != nil {
return false, errors.Trace(err)
}
return matched, nil
}
func (j *leftOuterJoiner) onMissMatch(outer chunk.Row, chk *chunk.Chunk) {
chk.AppendPartialRow(0, outer)
chk.AppendPartialRow(outer.Len(), j.defaultInner)
}
type rightOuterJoiner struct {
baseJoiner
}
// tryToMatch implements joiner interface.
func (j *rightOuterJoiner) tryToMatch(outer chunk.Row, inners chunk.Iterator, chk *chunk.Chunk) (bool, error) {
if inners.Len() == 0 {
return false, nil
}
j.chk.Reset()
chkForJoin := j.chk
if len(j.conditions) == 0 {
chkForJoin = chk
}
numToAppend := j.maxChunkSize - chk.NumRows()
for ; inners.Current() != inners.End() && numToAppend > 0; numToAppend-- {
j.makeJoinRowToChunk(chkForJoin, inners.Current(), outer)
inners.Next()
}
if len(j.conditions) == 0 {
return true, nil
}
matched, err := j.filter(chkForJoin, chk, outer.Len())
if err != nil {
return false, errors.Trace(err)
}
return matched, nil
}
func (j *rightOuterJoiner) onMissMatch(outer chunk.Row, chk *chunk.Chunk) {
chk.AppendPartialRow(0, j.defaultInner)
chk.AppendPartialRow(j.defaultInner.Len(), outer)
}
type innerJoiner struct {
baseJoiner
}
// tryToMatch implements joiner interface.
func (j *innerJoiner) tryToMatch(outer chunk.Row, inners chunk.Iterator, chk *chunk.Chunk) (bool, error) {
if inners.Len() == 0 {
return false, nil
}
j.chk.Reset()
chkForJoin := j.chk
if len(j.conditions) == 0 {
chkForJoin = chk
}
inner, numToAppend := inners.Current(), j.maxChunkSize-chk.NumRows()
for ; inner != inners.End() && numToAppend > 0; inner, numToAppend = inners.Next(), numToAppend-1 {
if j.outerIsRight {
j.makeJoinRowToChunk(chkForJoin, inner, outer)
} else {
j.makeJoinRowToChunk(chkForJoin, outer, inner)
}
}
if len(j.conditions) == 0 {
return true, nil
}
// reach here, chkForJoin is j.chk
matched, err := j.filter(chkForJoin, chk, outer.Len())
if err != nil {
return false, errors.Trace(err)
}
return matched, nil
}
// onMissMatch implements joiner interface. An inner join emits nothing for an
// outer row that has no matching inner row, so this is intentionally a no-op.
func (j *innerJoiner) onMissMatch(outer chunk.Row, chk *chunk.Chunk) {
}
|
case plan.AntiSemiJoin:
base.shallowRow = chunk.MutRowFromTypes(colTypes)
return &antiSemiJoiner{base}
|
iterators4.rs
|
// iterators4.rs
pub fn factorial(num: u64) -> u64 {
// Complete this function to return the factorial of num
// Do not use:
// - return
// Try not to use:
// - imperative style loops (for, while)
// - additional variables
// For an extra challenge, don't use:
// - recursion
// Execute `rustlings hint iterators4` for hints.
if num > 1
|
else {
num
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn factorial_of_1() {
assert_eq!(1, factorial(1));
}
#[test]
fn factorial_of_2() {
assert_eq!(2, factorial(2));
}
#[test]
fn factorial_of_4() {
assert_eq!(24, factorial(4));
}
}
|
{
num * factorial(num - 1)
}
|
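The exercise comments above also pose an extra challenge: compute the factorial without `return`, imperative loops, extra variables, or recursion. A minimal sketch of that iterator-based variant (an alternative to the recursive middle shown above, not part of the original exercise text) uses only `RangeInclusive` and `Iterator::product`:

```rust
// Iterator-based factorial: the product of the range 1..=num.
// For num == 0 the range is empty and product() returns 1, matching 0! == 1.
pub fn factorial(num: u64) -> u64 {
    (1..=num).product()
}
```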
command_ext.rs
|
use crate::{ioctl, IntoStd, PtyMaster, Result};
use nix::{libc, unistd};
use std::{fs::OpenOptions, os::unix::fs::OpenOptionsExt};
use tokio::process::{Child, Command};
pub trait CommandExt {
fn spawn_with_pty(&mut self, pty_master: &PtyMaster) -> Result<Child>;
}
impl CommandExt for Command {
fn spawn_with_pty(&mut self, pty_master: &PtyMaster) -> Result<Child> {
let slave = OpenOptions::new()
.read(true)
.write(true)
.custom_flags(libc::O_NOCTTY)
.open(pty_master.slave_name())?;
self.stdin(slave.try_clone()?);
self.stdout(slave.try_clone()?);
self.stderr(slave.try_clone()?);
|
let _pid = unistd::setsid().into_std()?;
ioctl::tiocsctty(libc::STDIN_FILENO, 1).into_std()?;
Ok(())
});
}
self.spawn()
}
}
|
unsafe {
self.pre_exec(move || {
|
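As a hedged usage sketch (not shown in the original file), the extension trait above might be driven as follows; how a `PtyMaster` is obtained is not part of the sample and is assumed to be provided by the caller:

```rust
use tokio::process::{Child, Command};

use crate::{CommandExt, PtyMaster, Result};

// Spawn a shell whose stdio is wired to the PTY slave; the pre_exec hook shown
// above makes the PTY the controlling terminal of the new session.
fn spawn_shell(pty_master: &PtyMaster) -> Result<Child> {
    Command::new("bash").spawn_with_pty(pty_master)
}
```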
inputsTcpSplunkTcpToken.ts
|
// *** WARNING: this file was generated by the Pulumi Terraform Bridge (tfgen) Tool. ***
// *** Do not edit by hand unless you're certain you know what you are doing! ***
import * as pulumi from "@pulumi/pulumi";
import { input as inputs, output as outputs } from "./types";
import * as utilities from "./utilities";
/**
* ## # Resource: splunk.InputsTcpSplunkTcpToken
*
* Manage receiver access using tokens.
*
* ## Example Usage
*
* ```typescript
* import * as pulumi from "@pulumi/pulumi";
* import * as splunk from "@pulumi/splunk";
*
* const tcpSplunkTcpToken = new splunk.InputsTcpSplunkTcpToken("tcp_splunk_tcp_token", {
* token: "D66C45B3-7C28-48A1-A13A-027914146501",
* });
* ```
*/
export class InputsTcpSplunkTcpToken extends pulumi.CustomResource {
/**
* Get an existing InputsTcpSplunkTcpToken resource's state with the given name, ID, and optional extra
* properties used to qualify the lookup.
*
* @param name The _unique_ name of the resulting resource.
* @param id The _unique_ provider ID of the resource to lookup.
|
* @param state Any extra arguments used during the lookup.
* @param opts Optional settings to control the behavior of the CustomResource.
*/
public static get(name: string, id: pulumi.Input<pulumi.ID>, state?: InputsTcpSplunkTcpTokenState, opts?: pulumi.CustomResourceOptions): InputsTcpSplunkTcpToken {
return new InputsTcpSplunkTcpToken(name, <any>state, { ...opts, id: id });
}
/** @internal */
public static readonly __pulumiType = 'splunk:index/inputsTcpSplunkTcpToken:InputsTcpSplunkTcpToken';
/**
* Returns true if the given object is an instance of InputsTcpSplunkTcpToken. This is designed to work even
* when multiple copies of the Pulumi SDK have been loaded into the same process.
*/
public static isInstance(obj: any): obj is InputsTcpSplunkTcpToken {
if (obj === undefined || obj === null) {
return false;
}
return obj['__pulumiType'] === InputsTcpSplunkTcpToken.__pulumiType;
}
/**
* The app/user context that is the namespace for the resource
*/
public readonly acl!: pulumi.Output<outputs.InputsTcpSplunkTcpTokenAcl>;
/**
* Required. Name for the token to create.
*/
public readonly name!: pulumi.Output<string>;
/**
* Optional. Token value to use. If unspecified, a token is generated automatically.
*/
public readonly token!: pulumi.Output<string>;
/**
* Create an InputsTcpSplunkTcpToken resource with the given unique name, arguments, and options.
*
* @param name The _unique_ name of the resource.
* @param args The arguments to use to populate this resource's properties.
* @param opts A bag of options that control this resource's behavior.
*/
constructor(name: string, args?: InputsTcpSplunkTcpTokenArgs, opts?: pulumi.CustomResourceOptions)
constructor(name: string, argsOrState?: InputsTcpSplunkTcpTokenArgs | InputsTcpSplunkTcpTokenState, opts?: pulumi.CustomResourceOptions) {
let inputs: pulumi.Inputs = {};
opts = opts || {};
if (opts.id) {
const state = argsOrState as InputsTcpSplunkTcpTokenState | undefined;
inputs["acl"] = state ? state.acl : undefined;
inputs["name"] = state ? state.name : undefined;
inputs["token"] = state ? state.token : undefined;
} else {
const args = argsOrState as InputsTcpSplunkTcpTokenArgs | undefined;
inputs["acl"] = args ? args.acl : undefined;
inputs["name"] = args ? args.name : undefined;
inputs["token"] = args ? args.token : undefined;
}
if (!opts.version) {
opts = pulumi.mergeOptions(opts, { version: utilities.getVersion()});
}
super(InputsTcpSplunkTcpToken.__pulumiType, name, inputs, opts);
}
}
/**
* Input properties used for looking up and filtering InputsTcpSplunkTcpToken resources.
*/
export interface InputsTcpSplunkTcpTokenState {
/**
* The app/user context that is the namespace for the resource
*/
readonly acl?: pulumi.Input<inputs.InputsTcpSplunkTcpTokenAcl>;
/**
* Required. Name for the token to create.
*/
readonly name?: pulumi.Input<string>;
/**
* Optional. Token value to use. If unspecified, a token is generated automatically.
*/
readonly token?: pulumi.Input<string>;
}
/**
* The set of arguments for constructing an InputsTcpSplunkTcpToken resource.
*/
export interface InputsTcpSplunkTcpTokenArgs {
/**
* The app/user context that is the namespace for the resource
*/
readonly acl?: pulumi.Input<inputs.InputsTcpSplunkTcpTokenAcl>;
/**
* Required. Name for the token to create.
*/
readonly name?: pulumi.Input<string>;
/**
* Optional. Token value to use. If unspecified, a token is generated automatically.
*/
readonly token?: pulumi.Input<string>;
}
| |
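Besides constructing new resources as in the doc-comment example above, the static `get` documented earlier can adopt an existing token into a program. A hedged sketch (not part of the generated file; the ID string is a placeholder):

```typescript
import * as splunk from "@pulumi/splunk";

// Look up an existing receiver token by its provider ID instead of creating one.
const adopted = splunk.InputsTcpSplunkTcpToken.get(
    "existing-token",
    "placeholder-provider-id", // assumed/placeholder ID of the existing resource
);
export const adoptedTokenValue = adopted.token;
```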
__init__.py
|
from flask import Blueprint
from flask_restful import Api
from .get_topic_schema import GetTopicSchemaResource
from .get_topic_names import GetTopicNamesResource
rest_topic_bp = Blueprint("rest_topic", __name__)
rest_topic_api = Api(rest_topic_bp, prefix="/api")
|
rest_topic_api.add_resource(GetTopicSchemaResource, GetTopicSchemaResource.API_PATH)
rest_topic_api.add_resource(GetTopicNamesResource, GetTopicNamesResource.API_PATH)
|
|
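A hedged sketch (not part of the original module) of how this blueprint might be wired into an application; the import path for `rest_topic_bp` is assumed:

```python
from flask import Flask

from rest_topic import rest_topic_bp  # hypothetical import path for the package above

app = Flask(__name__)
app.register_blueprint(rest_topic_bp)
# GetTopicSchemaResource and GetTopicNamesResource are now served under the
# "/api" prefix configured on rest_topic_api.
```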
forms.py
|
#
# Copyright (c) 2013-2015 Wind River Systems, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# vim: tabstop=4 shiftwidth=4 softtabstop=4
import logging
from cgtsclient import exc
from django.core.urlresolvers import reverse # noqa
from django import shortcuts
from django.utils.translation import ugettext_lazy as _
from horizon import exceptions
from horizon import forms
from horizon import messages
from starlingx_dashboard.api import sysinv
LOG = logging.getLogger(__name__)
class UpdateCpuFunctions(forms.SelfHandlingForm):
host = forms.CharField(label=_("host"),
required=False,
widget=forms.widgets.HiddenInput)
host_id = forms.CharField(label=_("host_id"),
required=False,
widget=forms.widgets.HiddenInput)
platform = forms.CharField(
label=_("------------------------ Function ------------------------"),
required=False,
widget=forms.TextInput(attrs={'readonly': 'readonly'}))
platform_processor0 = forms.DynamicIntegerField(
label=_("# of Platform Physical Cores on Processor 0:"),
min_value=0, max_value=99,
required=False)
platform_processor1 = forms.DynamicIntegerField(
label=_("# of Platform Physical Cores on Processor 1:"),
min_value=0, max_value=99,
required=False)
platform_processor2 = forms.DynamicIntegerField(
label=_("# of Platform Physical Cores on Processor 2:"),
min_value=0, max_value=99,
required=False)
platform_processor3 = forms.DynamicIntegerField(
label=_("# of Platform Physical Cores on Processor 3:"),
min_value=0, max_value=99,
required=False)
vswitch = forms.CharField(
label=_("------------------------ Function ------------------------"),
required=False,
widget=forms.TextInput(attrs={'readonly': 'readonly'}))
num_cores_on_processor0 = forms.DynamicIntegerField(
label=_("# of vSwitch Physical Cores on Processor 0:"),
min_value=0, max_value=99,
required=False)
num_cores_on_processor1 = forms.DynamicIntegerField(
label=_("# of vSwitch Physical Cores on Processor 1:"),
min_value=0, max_value=99,
required=False)
num_cores_on_processor2 = forms.DynamicIntegerField(
label=_("# of vSwitch Physical Cores on Processor 2:"),
min_value=0, max_value=99,
required=False)
num_cores_on_processor3 = forms.DynamicIntegerField(
label=_("# of vSwitch Physical Cores on Processor 3:"),
min_value=0, max_value=99,
required=False)
shared_vcpu = forms.CharField(
label=_("------------------------ Function ------------------------"),
required=False,
widget=forms.TextInput(attrs={'readonly': 'readonly'}))
num_shared_on_processor0 = forms.DynamicIntegerField(
label=_("# of Shared Physical Cores on Processor 0:"),
min_value=0, max_value=99,
required=False)
num_shared_on_processor1 = forms.DynamicIntegerField(
label=_("# of Shared Physical Cores on Processor 1:"),
min_value=0, max_value=99,
required=False)
num_shared_on_processor2 = forms.DynamicIntegerField(
label=_("# of Shared Physical Cores on Processor 2:"),
min_value=0, max_value=99,
required=False)
num_shared_on_processor3 = forms.DynamicIntegerField(
label=_("# of Shared Physical Cores on Processor 3:"),
min_value=0, max_value=99,
required=False)
failure_url = 'horizon:admin:inventory:detail'
def __init__(self, *args, **kwargs):
super(UpdateCpuFunctions, self).__init__(*args, **kwargs)
self.host = kwargs['initial']['host']
if kwargs['initial']['platform_processor0'] == 99: # No Processor
self.fields[
'platform_processor0'].widget = forms.widgets.HiddenInput()
else:
avail_socket_cores = self.host.physical_cores.get(0, 0)
self.fields['platform_processor0'].set_max_value(
avail_socket_cores)
self.fields[
'platform_processor0'].help_text = \
"Processor 0 has %s physical cores." % avail_socket_cores
if kwargs['initial']['platform_processor1'] == 99: # No Processor
self.fields[
'platform_processor1'].widget = forms.widgets.HiddenInput()
else:
avail_socket_cores = self.host.physical_cores.get(1, 0)
self.fields['platform_processor1'].set_max_value(
avail_socket_cores)
self.fields[
'platform_processor1'].help_text =\
"Processor 1 has %s physical cores." % avail_socket_cores
if kwargs['initial']['platform_processor2'] == 99: # No Processor
self.fields[
'platform_processor2'].widget = forms.widgets.HiddenInput()
else:
avail_socket_cores = self.host.physical_cores.get(2, 0)
self.fields['platform_processor2'].set_max_value(
avail_socket_cores)
self.fields[
'platform_processor2'].help_text = \
"Processor 2 has %s physical cores." % avail_socket_cores
if kwargs['initial']['platform_processor3'] == 99: # No Processor
self.fields[
'platform_processor3'].widget = forms.widgets.HiddenInput()
else:
avail_socket_cores = self.host.physical_cores.get(3, 0)
self.fields['platform_processor3'].set_max_value(
avail_socket_cores)
self.fields[
'platform_processor3'].help_text = \
"Processor 3 has %s physical cores." % avail_socket_cores
if 'compute' not in self.host.subfunctions:
self.fields['vswitch'].widget = forms.widgets.HiddenInput()
self.fields[
'num_cores_on_processor0'].widget = forms.widgets.HiddenInput()
self.fields[
'num_cores_on_processor1'].widget = forms.widgets.HiddenInput()
self.fields[
'num_cores_on_processor2'].widget = forms.widgets.HiddenInput()
self.fields[
'num_cores_on_processor3'].widget = forms.widgets.HiddenInput()
else:
if kwargs['initial'][
'num_cores_on_processor0'] == 99: # No Processor
self.fields[
'num_cores_on_processor0'].widget =\
forms.widgets.HiddenInput()
else:
avail_socket_cores = self.host.physical_cores.get(0, 0)
self.fields[
'num_cores_on_processor0'].set_max_value(
avail_socket_cores)
self.fields[
'num_cores_on_processor0'].help_text = \
"Processor 0 has %s physical cores." % avail_socket_cores
if kwargs['initial'][
'num_cores_on_processor1'] == 99: # No Processor
self.fields[
'num_cores_on_processor1'].widget =\
forms.widgets.HiddenInput()
else:
avail_socket_cores = self.host.physical_cores.get(1, 0)
self.fields[
'num_cores_on_processor1'].set_max_value(
avail_socket_cores)
self.fields[
'num_cores_on_processor1'].help_text =\
"Processor 1 has %s physical cores." % avail_socket_cores
if kwargs['initial'][
'num_cores_on_processor2'] == 99: # No Processor
self.fields[
'num_cores_on_processor2'].widget =\
forms.widgets.HiddenInput()
else:
avail_socket_cores = self.host.physical_cores.get(2, 0)
self.fields[
'num_cores_on_processor2'].set_max_value(
avail_socket_cores)
self.fields[
'num_cores_on_processor2'].help_text =\
"Processor 2 has %s physical cores." % avail_socket_cores
if kwargs['initial'][
'num_cores_on_processor3'] == 99: # No Processor
self.fields[
'num_cores_on_processor3'].widget =\
forms.widgets.HiddenInput()
else:
avail_socket_cores = self.host.physical_cores.get(3, 0)
self.fields[
'num_cores_on_processor3'].set_max_value(
avail_socket_cores)
self.fields[
'num_cores_on_processor3'].help_text =\
"Processor 3 has %s physical cores." % avail_socket_cores
for s in range(0, 4):
processor = 'num_shared_on_processor{0}'.format(s)
if ('compute' not in self.host.subfunctions or
kwargs['initial'][processor] == 99): # No Processor
self.fields[processor].widget = forms.widgets.HiddenInput()
else:
self.fields[processor].set_max_value(1)
self.fields[processor].help_text =\
"Each processor can have at most one shared core."
def clean(self):
cleaned_data = super(UpdateCpuFunctions, self).clean()
# host_id = cleaned_data.get('host_id')
try:
cleaned_data['platform_processor0'] = str(
cleaned_data['platform_processor0'])
cleaned_data['platform_processor1'] = str(
cleaned_data['platform_processor1'])
cleaned_data['platform_processor2'] = str(
cleaned_data['platform_processor2'])
cleaned_data['platform_processor3'] = str(
cleaned_data['platform_processor3'])
cleaned_data['num_cores_on_processor0'] = str(
cleaned_data['num_cores_on_processor0'])
cleaned_data['num_cores_on_processor1'] = str(
cleaned_data['num_cores_on_processor1'])
cleaned_data['num_cores_on_processor2'] = str(
cleaned_data['num_cores_on_processor2'])
cleaned_data['num_cores_on_processor3'] = str(
cleaned_data['num_cores_on_processor3'])
cleaned_data['num_shared_on_processor0'] = str(
cleaned_data['num_shared_on_processor0'])
cleaned_data['num_shared_on_processor1'] = str(
cleaned_data['num_shared_on_processor1'])
cleaned_data['num_shared_on_processor2'] = str(
cleaned_data['num_shared_on_processor2'])
cleaned_data['num_shared_on_processor3'] = str(
cleaned_data['num_shared_on_processor3'])
num_platform_cores = {}
num_platform_cores[0] = cleaned_data.get('platform_processor0',
'None')
num_platform_cores[1] = cleaned_data.get('platform_processor1',
'None')
num_platform_cores[2] = cleaned_data.get('platform_processor2',
'None')
num_platform_cores[3] = cleaned_data.get('platform_processor3',
'None')
num_vswitch_cores = {}
num_vswitch_cores[0] = cleaned_data.get('num_cores_on_processor0',
'None')
num_vswitch_cores[1] = cleaned_data.get('num_cores_on_processor1',
'None')
num_vswitch_cores[2] = cleaned_data.get('num_cores_on_processor2',
'None')
num_vswitch_cores[3] = cleaned_data.get('num_cores_on_processor3',
'None')
num_shared_on_map = {}
num_shared_on_map[0] = cleaned_data.get('num_shared_on_processor0',
'None')
num_shared_on_map[1] = cleaned_data.get('num_shared_on_processor1',
'None')
num_shared_on_map[2] = cleaned_data.get('num_shared_on_processor2',
'None')
num_shared_on_map[3] = cleaned_data.get('num_shared_on_processor3',
'None')
if ('None' in num_platform_cores.values() or
'None' in num_vswitch_cores.values() or
'None' in num_shared_on_map.values()):
raise forms.ValidationError(_("Invalid entry."))
except Exception as e:
LOG.error(e)
raise forms.ValidationError(_("Invalid entry."))
# Only the vswitch function is allowed to be modified here.
cleaned_data['function'] = 'vswitch'
# NOTE: shared_vcpu can be changed
return cleaned_data
def handle(self, request, data):
host_id = data['host_id']
del data['host_id']
del data['host']
try:
host = sysinv.host_get(self.request, host_id)
cpudata = {}
sharedcpudata = {}
platformcpudata = {}
for key, val in data.items():
if 'num_cores_on_processor' in key or 'function' in key:
if key not in self.fields:
cpudata[key] = val
elif not type(self.fields[key].widget) is\
forms.widgets.HiddenInput:
cpudata[key] = val
if 'platform_processor' in key:
update_key = 'num_cores_on_processor' + key[-1:]
if key not in self.fields:
platformcpudata[update_key] = val
elif not type(self.fields[key].widget) is\
forms.widgets.HiddenInput:
platformcpudata[update_key] = val
if 'num_shared_on_processor' in key:
key2 = key.replace('shared', 'cores')
if key not in self.fields:
sharedcpudata[key2] = val
elif not type(self.fields[key].widget) is\
forms.widgets.HiddenInput:
sharedcpudata[key2] = val
sharedcpudata['function'] = 'shared'
platformcpudata['function'] = 'platform'
sysinv.host_cpus_modify(request, host.uuid,
platformcpudata,
cpudata,
sharedcpudata)
msg = _('CPU Assignments were successfully updated.')
LOG.debug(msg)
messages.success(request, msg)
return self.host.cpus
except exc.ClientException as ce:
# Display REST API error message on UI
messages.error(request, ce)
LOG.error(ce)
# Redirect to failure page
redirect = reverse(self.failure_url, args=[host_id])
return shortcuts.redirect(redirect)
except Exception as e:
LOG.exception(e)
msg = _('Failed to update CPU Assignments.')
LOG.info(msg)
redirect = reverse(self.failure_url, args=[host_id])
exceptions.handle(request, msg, redirect=redirect)
class AddCpuProfile(forms.SelfHandlingForm):
host_id = forms.CharField(widget=forms.widgets.HiddenInput)
profilename = forms.CharField(label=_("Cpu Profile Name"),
required=True)
failure_url = 'horizon:admin:inventory:detail'
def __init__(self, *args, **kwargs):
super(AddCpuProfile, self).__init__(*args, **kwargs)
def clean(self):
|
def handle(self, request, data):
cpuProfileName = data['profilename']
try:
cpuProfile = sysinv.host_cpuprofile_create(request, **data)
msg = _(
'Cpu Profile "%s" was successfully created.') % cpuProfileName
LOG.debug(msg)
messages.success(request, msg)
return cpuProfile
except exc.ClientException as ce:
# Display REST API error message on UI
messages.error(request, ce)
LOG.error(ce)
# Redirect to failure page
redirect = reverse(self.failure_url, args=[data['host_id']])
return shortcuts.redirect(redirect)
except Exception:
msg = _('Failed to create cpu profile "%s".') % cpuProfileName
LOG.info(msg)
redirect = reverse(self.failure_url,
args=[data['host_id']])
exceptions.handle(request, msg, redirect=redirect)
|
cleaned_data = super(AddCpuProfile, self).clean()
# host_id = cleaned_data.get('host_id')
return cleaned_data
|
tokens_test.py
|
from unittest import mock
import pytest
from oauthlib.common import Request as OAuthRequest
from h.oauth.tokens import BearerToken
class TestBearerToken:
@pytest.mark.parametrize(
"attr",
[
"request_validator",
"token_generator",
"expires_in",
"refresh_token_generator",
"refresh_token_expires_in",
],
)
def test_init_sets_instance_vars(self, attr):
value = mock.Mock()
token = BearerToken(**{attr: value})
assert getattr(token, attr) == value
def test_create_token_sets_refresh_token_expires_in(self, oauth_request):
value = mock.Mock()
token = BearerToken(
request_validator=mock.Mock(), refresh_token_expires_in=value
)
assert oauth_request.extra_credentials is None
token.create_token(oauth_request)
assert oauth_request.extra_credentials.get("refresh_token_expires_in") == value
|
token = BearerToken(
request_validator=mock.Mock(), refresh_token_expires_in=value
)
oauth_request.extra_credentials = {"foo": "bar"}
token.create_token(oauth_request)
assert oauth_request.extra_credentials.get("refresh_token_expires_in") == value
assert oauth_request.extra_credentials.get("foo") == "bar"
@pytest.fixture
def oauth_request(self):
return OAuthRequest("/")
|
def test_create_token_does_not_override_extras(self, oauth_request):
value = mock.Mock()
|
function.rs
|
internal_methods::{InternalObjectMethods, ORDINARY_INTERNAL_METHODS},
JsObject,
},
Context, JsResult, JsValue,
};
/// Definitions of the internal object methods for function objects.
///
/// More information:
/// - [ECMAScript reference][spec]
///
/// [spec]: https://tc39.es/ecma262/#sec-ecmascript-function-objects
pub(crate) static FUNCTION_INTERNAL_METHODS: InternalObjectMethods = InternalObjectMethods {
__call__: Some(function_call),
__construct__: None,
..ORDINARY_INTERNAL_METHODS
};
pub(crate) static CONSTRUCTOR_INTERNAL_METHODS: InternalObjectMethods = InternalObjectMethods {
__call__: Some(function_call),
__construct__: Some(function_construct),
..ORDINARY_INTERNAL_METHODS
};
/// Call this object.
///
/// # Panics
///
/// Panics if the object is currently mutably borrowed.
// <https://tc39.es/ecma262/#sec-prepareforordinarycall>
// <https://tc39.es/ecma262/#sec-ecmascript-function-objects-call-thisargument-argumentslist>
#[track_caller]
#[inline]
fn function_call(
obj: &JsObject,
this: &JsValue,
args: &[JsValue],
context: &mut Context,
) -> JsResult<JsValue> {
obj.call_internal(this, args, context)
}
/// Construct an instance of this object with the specified arguments.
///
/// # Panics
///
/// Panics if the object is currently mutably borrowed.
// <https://tc39.es/ecma262/#sec-ecmascript-function-objects-construct-argumentslist-newtarget>
#[track_caller]
#[inline]
fn function_construct(
obj: &JsObject,
args: &[JsValue],
new_target: &JsValue,
context: &mut Context,
) -> JsResult<JsValue> {
obj.construct_internal(args, new_target, context)
}
|
use crate::{
object::{
|
|
test_iconic_matcher.py
|
#!/usr/bin/env python
from nipy.testing import assert_equal, assert_almost_equal, assert_raises
import numpy as np
from nipy.neurospin.register.iconic_matcher import IconicMatcher
class Image(object):
"""
Empty object to easily create image objects independently of any I/O package.
"""
def __init__(self, array, toworld=None, voxsize=[1, 1, 1]):
self.array = array
self.voxsize = np.asarray(voxsize)
if toworld is None:
toworld = np.diag(np.concatenate((self.voxsize, [1])))
self.toworld = toworld
def make_data_uint8(dx=100, dy=100, dz=50):
return (256*(np.random.rand(dx, dy, dz) - np.random.rand())).astype('uint8')
def make_data_int16(dx=100, dy=100, dz=50):
return (256*(np.random.rand(dx, dy, dz) - np.random.rand())).astype('int16')
def make_data_float64(dx=100, dy=100, dz=50):
return (256*(np.random.rand(dx, dy, dz) - np.random.rand())).astype('float64')
def _test_clamping(I, thI=0.0, clI=256):
IM = IconicMatcher(I.array, I.array, I.toworld, I.toworld, thI, thI, source_bins=clI, target_bins=clI)
Ic = IM.source_clamped
Ic2 = IM.target_clamped[1:I.array.shape[0]+1,1:I.array.shape[1]+1,1:I.array.shape[2]+1].squeeze()
assert_equal(Ic, Ic2)
dyn = Ic.max() + 1
assert_equal(dyn, IM.joint_hist.shape[0])
assert_equal(dyn, IM.joint_hist.shape[1])
assert_equal(dyn, IM.source_hist.shape[0])
assert_equal(dyn, IM.target_hist.shape[0])
def test_clamping_uint8():
I = Image(make_data_uint8())
_test_clamping(I)
def test_clamping_uint8_nonstd():
I = Image(make_data_uint8())
_test_clamping(I, 10, 165)
def test_clamping_int16():
I = Image(make_data_int16())
_test_clamping(I)
def test_clamping_int16_nonstd():
I = Image(make_data_int16())
_test_clamping(I, 10, 165)
def test_clamping_float64():
I = Image(make_data_float64())
_test_clamping(I)
def
|
():
I = Image(make_data_float64())
_test_clamping(I, 10, 165)
def _test_similarity_measure(simi, val):
I = Image(make_data_int16())
J = Image(I.array.copy())
IM = IconicMatcher(I.array, J.array, I.toworld, J.toworld)
IM.set_field_of_view(subsampling=[2,1,3])
IM.set_similarity(simi)
assert_almost_equal(IM.eval(np.eye(4)), val)
def test_correlation_coefficient():
_test_similarity_measure('cc', 1.0)
def test_correlation_ratio():
_test_similarity_measure('cr', 1.0)
def test_normalized_mutual_information():
_test_similarity_measure('nmi', 1.0)
def test_explore():
I = Image(make_data_int16())
J = Image(make_data_int16())
IM = IconicMatcher(I.array, J.array, I.toworld, J.toworld)
T = np.eye(4)
T[0:3,3] = np.random.rand(3)
simi, params = IM.explore(ux=[-1,0,1],uy=[-1,0,1])
def test_iconic():
""" Test the iconic class.
"""
I = Image(make_data_int16())
J = Image(I.array.copy())
IM = IconicMatcher(I.array, J.array, I.toworld, J.toworld)
assert_raises(ValueError, IM.set_field_of_view, subsampling=[0,1,3])
if __name__ == "__main__":
import nose
nose.run(argv=['', __file__])
|
test_clamping_float64_nonstd
|
transaction_repo.go
|
package inmemory
import (
"github.com/mniak/Alkanoid/domain"
)
type _TransactionRepository struct {
data map[int]domain.Transaction
maxId int
}
func NewTransactionRepository() domain.TransactionRepository
|
func (r *_TransactionRepository) Save(acc domain.Transaction) (int, error) {
if acc.ID == 0 {
r.maxId++
acc.ID = r.maxId
} else if acc.ID > r.maxId {
r.maxId = acc.ID
}
r.data[acc.ID] = acc
return acc.ID, nil
}
func (r *_TransactionRepository) Load(id int) (domain.Transaction, error) {
acc, ok := r.data[id]
if !ok {
return domain.Transaction{}, domain.ErrNotFound
}
return acc, nil
}
|
{
return &_TransactionRepository{
data: make(map[int]domain.Transaction),
}
}
|
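A hedged usage sketch (not part of the original file) exercising the repository above; the import path for the `inmemory` package is assumed, only the `ID` field of `domain.Transaction` is used, and the `domain.TransactionRepository` interface is assumed to expose the `Save` and `Load` methods shown above:

```go
package main

import (
	"fmt"

	"github.com/mniak/Alkanoid/domain"
	"github.com/mniak/Alkanoid/inmemory" // assumed location of the package above
)

func main() {
	repo := inmemory.NewTransactionRepository()

	// A zero-ID transaction gets the next auto-incremented ID.
	id, _ := repo.Save(domain.Transaction{})
	fmt.Println("saved with id", id)

	// Loading an ID that was never saved returns domain.ErrNotFound.
	if _, err := repo.Load(id + 1); err == domain.ErrNotFound {
		fmt.Println("unknown id:", err)
	}
}
```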