code (string, lengths 22 to 1.05M) | apis (list, lengths 1 to 3.31k) | extract_api (string, lengths 75 to 3.25M) |
---|---|---|
from typing import Callable, TypeVar, List
T = TypeVar('T')
class Registrable(object):
    reg_list = dict()
    @classmethod
    def register(cls, class_name: str) -> Callable:
        def register_inner(class_type: T) -> None:
            cls.reg_list[class_name] = class_type
        return register_inner
    @classmethod
    def list_available(cls) -> List[str]:
        return list(cls.reg_list.keys())
    @classmethod
    def by_name(cls, class_name: str) -> T:
        return cls.reg_list.get(class_name, None)
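# A minimal usage sketch (hypothetical Tokenizer classes, not part of the original sample):
# register a class under a name via the returned registrar and look it up again by name.
# Note that register_inner returns None, so applying register() directly as a class decorator
# would rebind the decorated name to None even though the registry entry is still stored.
class Tokenizer(Registrable):
    pass
class WhitespaceTokenizer(Tokenizer):
    pass
Registrable.register("whitespace")(WhitespaceTokenizer)
print(Registrable.list_available())      # ['whitespace']
print(Registrable.by_name("whitespace"))  # <class '__main__.WhitespaceTokenizer'>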
|
[
"typing.TypeVar"
] |
[((48, 60), 'typing.TypeVar', 'TypeVar', (['"""T"""'], {}), "('T')\n", (55, 60), False, 'from typing import Callable, TypeVar, List\n')]
|
#!/usr/bin/env python
# coding: utf-8
# # Self-Driving Car Engineer Nanodegree
#
#
# ## Project: **Finding Lane Lines on the Road**
# ***
# In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
#
# Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
#
# In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [writeup template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the IPython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
#
# ---
# Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
#
# **Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
#
# ---
# **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
#
# ---
#
# <figure>
# <img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
# <figcaption>
# <p></p>
# <p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
# </figcaption>
# </figure>
# <p></p>
# <figure>
# <img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
# <figcaption>
# <p></p>
# <p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
# </figcaption>
# </figure>
# **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
# ## Import Packages
# In[1]:
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
# ## Read in an Image
# In[2]:
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
# ## Ideas for Lane Detection Pipeline
# **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
#
# `cv2.inRange()` for color selection
# `cv2.fillPoly()` for region selection
# `cv2.line()` to draw lines on an image given endpoints
# `cv2.addWeighted()` to coadd / overlay two images
# `cv2.cvtColor()` to grayscale or change color
# `cv2.imwrite()` to output images to file
# `cv2.bitwise_and()` to apply a mask to an image
#
# **Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
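# Illustrative sketch (not part of the original notebook): one way `cv2.inRange()` could be
# used for the color-selection step listed above, keeping only white-ish and yellow-ish pixels
# before edge detection. The threshold values below are assumptions to be tuned, not values
# taken from this project.
def select_lane_colors(rgb_img):
    hsv = cv2.cvtColor(rgb_img, cv2.COLOR_RGB2HSV)
    white_mask = cv2.inRange(rgb_img, np.array([200, 200, 200], dtype=np.uint8),
                             np.array([255, 255, 255], dtype=np.uint8))
    yellow_mask = cv2.inRange(hsv, np.array([15, 80, 120], dtype=np.uint8),
                              np.array([35, 255, 255], dtype=np.uint8))
    mask = cv2.bitwise_or(white_mask, yellow_mask)
    return cv2.bitwise_and(rgb_img, rgb_img, mask=mask)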
# ## Helper Functions
# Below are some helper functions to help get you started. They should look familiar from the lesson!
# In[3]:
import math
def grayscale(img):
    """Applies the Grayscale transform
    This will return an image with only one color channel
    but NOTE: to see the returned image as grayscale
    (assuming your grayscaled image is called 'gray')
    you should call plt.imshow(gray, cmap='gray')"""
    return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    # Or use BGR2GRAY if you read an image with cv2.imread()
    # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def canny(img, low_threshold, high_threshold):
    """Applies the Canny transform"""
    return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
    """Applies a Gaussian Noise kernel"""
    return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
    """
    Applies an image mask.
    Only keeps the region of the image defined by the polygon
    formed from `vertices`. The rest of the image is set to black.
    `vertices` should be a numpy array of integer points.
    """
    # defining a blank mask to start with
    mask = np.zeros_like(img)
    # defining a 3 channel or 1 channel color to fill the mask with depending on the input image
    if len(img.shape) > 2:
        channel_count = img.shape[2]  # i.e. 3 or 4 depending on your image
        ignore_mask_color = (255,) * channel_count
    else:
        ignore_mask_color = 255
    # filling pixels inside the polygon defined by "vertices" with the fill color
    cv2.fillPoly(mask, vertices, ignore_mask_color)
    # returning the image only where mask pixels are nonzero
    masked_image = cv2.bitwise_and(img, mask)
    return masked_image
def draw_lines_new(img, lines, color=[255, 0, 0], thickness=6):
    """
    NOTE: this is the function you might want to use as a starting point once you want to
    average/extrapolate the line segments you detect to map out the full
    extent of the lane (going from the result shown in raw-lines-example.mp4
    to that shown in P1_example.mp4).
    Think about things like separating line segments by their
    slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
    line vs. the right line. Then, you can average the position of each of
    the lines and extrapolate to the top and bottom of the lane.
    This function draws `lines` with `color` and `thickness`.
    Lines are drawn on the image inplace (mutates the image).
    If you want to make the lines semi-transparent, think about combining
    this function with the weighted_img() function below
    """
    ## create an empty array with all the line slopes
    all_slopes = np.zeros((len(lines)))
    ## create an empty array for left lines
    left_line_slope = []
    ## create an empty array for right lines
    right_line_slope = []
    # keep each line slope in the array
    for index, line in enumerate(lines):
        for x1, y1, x2, y2 in line:
            all_slopes[index] = (y2 - y1) / (x2 - x1)
    # get all left line slopes (positive slope)
    left_line_slope = all_slopes[all_slopes > 0]
    # get all right line slopes (negative slope)
    right_line_slope = all_slopes[all_slopes < 0]
    ## mean value of left slope and right slope
    m_l = left_line_slope.mean()
    m_r = right_line_slope.mean()
    # Create empty lists for all the left points and right points
    final_x4_l = []
    final_x3_l = []
    final_x4_r = []
    final_x3_r = []
    ## fixed y-coordinates for the top and bottom points
    y4 = 320
    y3 = img.shape[0]
    ## Go through each line to calculate the left/right top x-coordinates
    ## and the left/right bottom x-coordinates
    for index, line in enumerate(lines):
        for x1, y1, x2, y2 in line:
            m = (y2 - y1) / (x2 - x1)
            if m > 0:
                final_x4_l.append(int(((x1 + (y4 - y1) / m_l) + (x2 + (y4 - y2) / m_l)) / 2))
                final_x3_l.append(int(((x1 + (y3 - y1) / m_l) + (x2 + (y3 - y2) / m_l)) / 2))
            else:
                final_x4_r.append(int(((x1 + (y4 - y1) / m_r) + (x2 + (y4 - y2) / m_r)) / 2))
                final_x3_r.append(int(((x1 + (y3 - y1) / m_r) + (x2 + (y3 - y2) / m_r)) / 2))
    try:
        ## taking the average of each point list
        x4_l = int(sum(final_x4_l) / len(final_x4_l))
        x4_r = int(sum(final_x4_r) / len(final_x4_r))
        x3_l = int(sum(final_x3_l) / len(final_x3_l))
        x3_r = int(sum(final_x3_r) / len(final_x3_r))
        ## Draw the left line and right line
        cv2.line(img, (x4_l, y4), (x3_l, y3), color, thickness)
        cv2.line(img, (x4_r, y4), (x3_r, y3), color, thickness)
    except:
        pass
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
    """
    `img` should be the output of a Canny transform.
    Returns an image with hough lines drawn.
    """
    lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
    line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
    draw_lines_new(line_img, lines)
    return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., γ=0.):
    """
    `img` is the output of the hough_lines(), An image with lines drawn on it.
    Should be a blank image (all black) with lines drawn on it.
    `initial_img` should be the image before any processing.
    The result image is computed as follows:
    initial_img * α + img * β + γ
    NOTE: initial_img and img must be the same shape!
    """
    return cv2.addWeighted(initial_img, α, img, β, γ)
# ## Test Images
#
# Build your pipeline to work on the images in the directory "test_images"
# **You should make sure your pipeline works well on these images before you try the videos.**
# In[4]:
import os
os.listdir("test_images/")
# ## Build a Lane Finding Pipeline
#
#
# Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
#
# Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
# In[18]:
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
def preprocess_image(image_path):
    image = mpimg.imread(image_path)
    gray_image = grayscale(image)
    blured_image = gaussian_blur(gray_image, 5)
    canny_image = canny(gray_image, low_threshold=100, high_threshold=170)
    vertices = np.array([[(80, image.shape[0]), (450, 320), (490, 320), (image.shape[1], image.shape[0])]], dtype=np.int32)
    roi_image = region_of_interest(canny_image, vertices)
    hough_img = hough_lines(roi_image, rho=2, theta=np.pi/180, threshold=50, min_line_len=100, max_line_gap=160)
    final_img = weighted_img(hough_img, image, α=0.8, β=1., γ=0.)
    return final_img
def process_test_images(source_folder, destination_folder):
    ## create destination folder if not present
    if not os.path.exists(destination_folder):
        os.makedirs(destination_folder)
    ## Get all input files from the source folder
    list_test_files = os.listdir(source_folder)
    ## process all the input files
    for file in list_test_files:
        output = preprocess_image(source_folder + '/' + file)
        cv2.imwrite(destination_folder + '/' + file, cv2.cvtColor(output, cv2.COLOR_RGB2BGR))
process_test_images('test_images', 'test_images_output')
# In[19]:
# In[20]:
os.listdir('test_images')
# In[21]:
# Checking in an image
plt.figure(figsize=(15,8))
plt.subplot(121)
image = mpimg.imread('test_images/solidYellowCurve.jpg')
plt.imshow(image)
plt.title('Original image')
plt.subplot(122)
image = mpimg.imread('test_images_output/whiteCarLaneSwitch.jpg')
plt.imshow(image)
plt.title('Output image')
plt.show()
# ## Test on Videos
#
# You know what's cooler than drawing lanes over images? Drawing lanes over video!
#
# We can test our solution on two provided videos:
#
# `solidWhiteRight.mp4`
#
# `solidYellowLeft.mp4`
#
# **Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
#
# **If you get an error that looks like this:**
# ```
# NeedDownloadError: Need ffmpeg exe.
# You can download it by calling:
# imageio.plugins.ffmpeg.download()
# ```
# **Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
# In[9]:
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
# In[10]:
def process_image(image):
    # NOTE: The output you return should be a color image (3 channel) for processing video below
    # TODO: put your pipeline here,
    # you should return the final output (image where lines are drawn on lanes)
    gray_image = grayscale(image)
    blured_image = gaussian_blur(gray_image, 5)
    canny_image = canny(gray_image, low_threshold=100, high_threshold=170)
    vertices = np.array([[(80, image.shape[0]), (450, 320), (490, 320), (image.shape[1], image.shape[0])]], dtype=np.int32)
    roi_image = region_of_interest(canny_image, vertices)
    hough_img = hough_lines(roi_image, rho=2, theta=np.pi/180, threshold=50, min_line_len=100, max_line_gap=160)
    result = weighted_img(hough_img, image, α=0.8, β=1., γ=0.)
    return result
# Let's try the one with the solid white lane on the right first ...
# In[11]:
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
white_clip.write_videofile(white_output, audio=False)
# ## Improve the draw_lines() function
#
# **At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
#
# **Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
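# One possible way to get the single solid line per side described above (a hedged sketch,
# not the approach used in draw_lines_new earlier): gather segment endpoints by slope sign,
# fit one line per side as x = f(y) with np.polyfit, and evaluate it at the bottom of the
# image and at the top of the region of interest (y = 320 here, matching this pipeline).
def fit_lane_line(points_x, points_y, y_bottom, y_top=320):
    poly = np.polyfit(points_y, points_x, deg=1)  # fit x as a function of y
    x_bottom = int(np.polyval(poly, y_bottom))
    x_top = int(np.polyval(poly, y_top))
    return (x_bottom, y_bottom), (x_top, y_top)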
# Now for the one with the solid yellow lane on the left. This one's more tricky!
# In[13]:
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
yellow_clip.write_videofile(yellow_output, audio=False)
def process_image(image):
    # NOTE: The output you return should be a color image (3 channel) for processing video below
    # TODO: put your pipeline here,
    # you should return the final output (image where lines are drawn on lanes)
    cv2.imwrite('image_test.jpg', image)
    gray_image = grayscale(image)
    blured_image = gaussian_blur(gray_image, 5)
    canny_image = canny(gray_image, low_threshold=100, high_threshold=170)
    cv2.imwrite('image_test_canny.jpg', canny_image)
    x_size = image.shape[1]
    y_size = image.shape[0]
    left_bottom = (80, y_size)
    left_top = (x_size / 2 - 50, y_size / 2 + 50)
    right_bottom = (x_size - 80, y_size)
    right_top = (x_size / 2 + 50, y_size / 2 + 50)
    # vertices = np.array([[left_bottom, left_top, right_top, right_bottom]], dtype=np.int32)
    # vertices = np.array([[(280,image.shape[0]),(450, 320), (490, 320), (image.shape[1],image.shape[0])]], dtype=np.int32)
    vertices = np.array([[(300, 680), (620, 460), (720, 460), (1085, 673)]], dtype=np.int32)
    roi_image = region_of_interest(canny_image, vertices)
    try:
        hough_img = hough_lines(roi_image, rho=2, theta=np.pi/180, threshold=50, min_line_len=100, max_line_gap=160)
        result = weighted_img(hough_img, image, α=0.8, β=1., γ=0.)
        return result
    except:
        return image
# In[16]:
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
challenge_clip.write_videofile(challenge_output, audio=False)
|
[
"matplotlib.image.imread",
"numpy.array",
"matplotlib.pyplot.imshow",
"os.path.exists",
"os.listdir",
"cv2.line",
"cv2.addWeighted",
"moviepy.editor.VideoFileClip",
"cv2.fillPoly",
"matplotlib.pyplot.subplot",
"cv2.cvtColor",
"matplotlib.pyplot.title",
"cv2.GaussianBlur",
"cv2.Canny",
"matplotlib.pyplot.show",
"cv2.imwrite",
"os.makedirs",
"cv2.bitwise_and",
"matplotlib.pyplot.figure",
"numpy.zeros",
"numpy.zeros_like"
] |
[((3572, 3619), 'matplotlib.image.imread', 'mpimg.imread', (['"""test_images/solidWhiteRight.jpg"""'], {}), "('test_images/solidWhiteRight.jpg')\n", (3584, 3619), True, 'import matplotlib.image as mpimg\n'), ((3729, 3746), 'matplotlib.pyplot.imshow', 'plt.imshow', (['image'], {}), '(image)\n', (3739, 3746), True, 'import matplotlib.pyplot as plt\n'), ((10679, 10705), 'os.listdir', 'os.listdir', (['"""test_images/"""'], {}), "('test_images/')\n", (10689, 10705), False, 'import os\n'), ((12431, 12456), 'os.listdir', 'os.listdir', (['"""test_images"""'], {}), "('test_images')\n", (12441, 12456), False, 'import os\n'), ((12494, 12521), 'matplotlib.pyplot.figure', 'plt.figure', ([], {'figsize': '(15, 8)'}), '(figsize=(15, 8))\n', (12504, 12521), True, 'import matplotlib.pyplot as plt\n'), ((12521, 12537), 'matplotlib.pyplot.subplot', 'plt.subplot', (['(121)'], {}), '(121)\n', (12532, 12537), True, 'import matplotlib.pyplot as plt\n'), ((12546, 12594), 'matplotlib.image.imread', 'mpimg.imread', (['"""test_images/solidYellowCurve.jpg"""'], {}), "('test_images/solidYellowCurve.jpg')\n", (12558, 12594), True, 'import matplotlib.image as mpimg\n'), ((12595, 12612), 'matplotlib.pyplot.imshow', 'plt.imshow', (['image'], {}), '(image)\n', (12605, 12612), True, 'import matplotlib.pyplot as plt\n'), ((12613, 12640), 'matplotlib.pyplot.title', 'plt.title', (['"""Original image"""'], {}), "('Original image')\n", (12622, 12640), True, 'import matplotlib.pyplot as plt\n'), ((12642, 12658), 'matplotlib.pyplot.subplot', 'plt.subplot', (['(122)'], {}), '(122)\n', (12653, 12658), True, 'import matplotlib.pyplot as plt\n'), ((12667, 12724), 'matplotlib.image.imread', 'mpimg.imread', (['"""test_images_output/whiteCarLaneSwitch.jpg"""'], {}), "('test_images_output/whiteCarLaneSwitch.jpg')\n", (12679, 12724), True, 'import matplotlib.image as mpimg\n'), ((12725, 12742), 'matplotlib.pyplot.imshow', 'plt.imshow', (['image'], {}), '(image)\n', (12735, 12742), True, 'import matplotlib.pyplot as plt\n'), ((12743, 12768), 'matplotlib.pyplot.title', 'plt.title', (['"""Output image"""'], {}), "('Output image')\n", (12752, 12768), True, 'import matplotlib.pyplot as plt\n'), ((12769, 12779), 'matplotlib.pyplot.show', 'plt.show', ([], {}), '()\n', (12777, 12779), True, 'import matplotlib.pyplot as plt\n'), ((15152, 15200), 'moviepy.editor.VideoFileClip', 'VideoFileClip', (['"""test_videos/solidWhiteRight.mp4"""'], {}), "('test_videos/solidWhiteRight.mp4')\n", (15165, 15200), False, 'from moviepy.editor import VideoFileClip\n'), ((16952, 17000), 'moviepy.editor.VideoFileClip', 'VideoFileClip', (['"""test_videos/solidYellowLeft.mp4"""'], {}), "('test_videos/solidYellowLeft.mp4')\n", (16965, 17000), False, 'from moviepy.editor import VideoFileClip\n'), ((18950, 18992), 'moviepy.editor.VideoFileClip', 'VideoFileClip', (['"""test_videos/challenge.mp4"""'], {}), "('test_videos/challenge.mp4')\n", (18963, 18992), False, 'from moviepy.editor import VideoFileClip\n'), ((4914, 4951), 'cv2.cvtColor', 'cv2.cvtColor', (['img', 'cv2.COLOR_RGB2GRAY'], {}), '(img, cv2.COLOR_RGB2GRAY)\n', (4926, 4951), False, 'import cv2\n'), ((5165, 5210), 'cv2.Canny', 'cv2.Canny', (['img', 'low_threshold', 'high_threshold'], {}), '(img, low_threshold, high_threshold)\n', (5174, 5210), False, 'import cv2\n'), ((5302, 5354), 'cv2.GaussianBlur', 'cv2.GaussianBlur', (['img', '(kernel_size, kernel_size)', '(0)'], {}), '(img, (kernel_size, kernel_size), 0)\n', (5318, 5354), False, 'import cv2\n'), ((5682, 5700), 'numpy.zeros_like', 'np.zeros_like', (['img'], {}), 
'(img)\n', (5695, 5700), True, 'import numpy as np\n'), ((6099, 6146), 'cv2.fillPoly', 'cv2.fillPoly', (['mask', 'vertices', 'ignore_mask_color'], {}), '(mask, vertices, ignore_mask_color)\n', (6111, 6146), False, 'import cv2\n'), ((6231, 6257), 'cv2.bitwise_and', 'cv2.bitwise_and', (['img', 'mask'], {}), '(img, mask)\n', (6246, 6257), False, 'import cv2\n'), ((9824, 9881), 'numpy.zeros', 'np.zeros', (['(img.shape[0], img.shape[1], 3)'], {'dtype': 'np.uint8'}), '((img.shape[0], img.shape[1], 3), dtype=np.uint8)\n', (9832, 9881), True, 'import numpy as np\n'), ((10420, 10462), 'cv2.addWeighted', 'cv2.addWeighted', (['initial_img', 'α', 'img', 'β', 'γ'], {}), '(initial_img, α, img, β, γ)\n', (10435, 10462), False, 'import cv2\n'), ((11229, 11253), 'matplotlib.image.imread', 'mpimg.imread', (['image_path'], {}), '(image_path)\n', (11241, 11253), True, 'import matplotlib.image as mpimg\n'), ((11426, 11538), 'numpy.array', 'np.array', (['[[(80, image.shape[0]), (450, 320), (490, 320), (image.shape[1], image.\n shape[0])]]'], {'dtype': 'np.int32'}), '([[(80, image.shape[0]), (450, 320), (490, 320), (image.shape[1],\n image.shape[0])]], dtype=np.int32)\n', (11434, 11538), True, 'import numpy as np\n'), ((12073, 12098), 'os.listdir', 'os.listdir', (['source_folder'], {}), '(source_folder)\n', (12083, 12098), False, 'import os\n'), ((14204, 14316), 'numpy.array', 'np.array', (['[[(80, image.shape[0]), (450, 320), (490, 320), (image.shape[1], image.\n shape[0])]]'], {'dtype': 'np.int32'}), '([[(80, image.shape[0]), (450, 320), (490, 320), (image.shape[1],\n image.shape[0])]], dtype=np.int32)\n', (14212, 14316), True, 'import numpy as np\n'), ((17347, 17383), 'cv2.imwrite', 'cv2.imwrite', (['"""image_test.jpg"""', 'image'], {}), "('image_test.jpg', image)\n", (17358, 17383), False, 'import cv2\n'), ((17544, 17592), 'cv2.imwrite', 'cv2.imwrite', (['"""image_test_canny.jpg"""', 'canny_image'], {}), "('image_test_canny.jpg', canny_image)\n", (17555, 17592), False, 'import cv2\n'), ((18062, 18139), 'numpy.array', 'np.array', (['[[(300, 680), (620, 460), (720, 460), (1085, 673)]]'], {'dtype': 'np.int32'}), '([[(300, 680), (620, 460), (720, 460), (1085, 673)]], dtype=np.int32)\n', (18070, 18139), True, 'import numpy as np\n'), ((9339, 9394), 'cv2.line', 'cv2.line', (['img', '(x4_l, y4)', '(x3_l, y3)', 'color', 'thickness'], {}), '(img, (x4_l, y4), (x3_l, y3), color, thickness)\n', (9347, 9394), False, 'import cv2\n'), ((9405, 9460), 'cv2.line', 'cv2.line', (['img', '(x4_r, y4)', '(x3_r, y3)', 'color', 'thickness'], {}), '(img, (x4_r, y4), (x3_r, y3), color, thickness)\n', (9413, 9460), False, 'import cv2\n'), ((9742, 9754), 'numpy.array', 'np.array', (['[]'], {}), '([])\n', (9750, 9754), True, 'import numpy as np\n'), ((11920, 11954), 'os.path.exists', 'os.path.exists', (['destination_folder'], {}), '(destination_folder)\n', (11934, 11954), False, 'import os\n'), ((11964, 11995), 'os.makedirs', 'os.makedirs', (['destination_folder'], {}), '(destination_folder)\n', (11975, 11995), False, 'import os\n'), ((12283, 12322), 'cv2.cvtColor', 'cv2.cvtColor', (['output', 'cv2.COLOR_RGB2BGR'], {}), '(output, cv2.COLOR_RGB2BGR)\n', (12295, 12322), False, 'import cv2\n')]
|
from django import forms
from zentral.core.probes.forms import BaseCreateProbeForm
from zentral.utils.forms import validate_sha256
from .probes import (OsqueryProbe, OsqueryComplianceProbe,
OsqueryDistributedQueryProbe, OsqueryFileCarveProbe,
OsqueryFIMProbe)
# OsqueryProbe
class DiscoveryForm(forms.Form):
    query = forms.CharField(widget=forms.Textarea(attrs={'rows': 5}))
    def get_item_d(self):
        return self.cleaned_data["query"]
    @staticmethod
    def get_initial(discovery):
        return {"query": discovery}
class QueryForm(forms.Form):
    query = forms.CharField(widget=forms.Textarea(attrs={'rows': 5}))
    description = forms.CharField(required=False,
                                  help_text="Description of what this query does. Can be left empty",
                                  widget=forms.Textarea(attrs={'rows': 3}))
    value = forms.CharField(required=False,
                            help_text="Why is this query relevant. Can be left empty",
                            widget=forms.Textarea(attrs={'rows': 3}))
    removed = forms.BooleanField(label='Include {"action": "removed"} results?',
                                 help_text='If False, only {"action": "added"} results will be in the logs',
                                 initial=True,
                                 required=False)
    snapshot = forms.BooleanField(label='Run this query in "snapshot" mode?',
                                  help_text=('If True, osquery will not store differentials '
                                             'and will not emulate an event stream'),
                                  initial=False,
                                  required=False)
    interval = forms.IntegerField(min_value=10,  # 10 seconds
                                  max_value=2678400,  # 31 days
                                  initial=3600)
    shard = forms.IntegerField(min_value=1, max_value=100, required=False,
                              help_text="Restrict this query to a percentage (1-100) of target hosts")
    def clean_removed(self):
        remove = self.cleaned_data.get("removed")
        if not remove:
            remove = False
        return remove
    def clean_snapshot(self):
        snapshot = self.cleaned_data.get("snapshot")
        if not snapshot:
            snapshot = False
        return snapshot
    def clean_description(self):
        description = self.cleaned_data.get("description")
        if not description:
            return None
        else:
            return description
    def clean_value(self):
        value = self.cleaned_data.get("value")
        if not value:
            return None
        else:
            return value
    def clean(self):
        cleaned_data = super().clean()
        removed = cleaned_data["removed"]
        snapshot = cleaned_data["snapshot"]
        if removed and snapshot:
            raise forms.ValidationError('{"action": "removed"} results are not available in "snapshot" mode')
        return cleaned_data
    def get_item_d(self):
        return {f: v for f, v in self.cleaned_data.items() if v is not None}
    @staticmethod
    def get_initial(query):
        initial = {}
        for attr in ("query", "description", "value", "interval", "removed", "shard"):
            value = getattr(query, attr, None)
            if value is not None:
                initial[attr] = value
        return initial
class CreateProbeForm(BaseCreateProbeForm, QueryForm):
    model = OsqueryProbe
    field_order = ("name", "query", "description", "value", "removed", "snapshot", "interval", "shard")
    def get_body(self):
        return {"queries": [self.get_item_d()]}
# OsqueryComplianceProbe
class PreferenceFileForm(forms.Form):
    rel_path = forms.CharField(label="Relative path")
    type = forms.ChoiceField(label='Location',
                             choices=(('USERS', '/Users/%/Library/Preferences/'),
                                      ('GLOBAL', '/Library/Preferences/')))
    description = forms.CharField(required=False,
                                  widget=forms.Textarea(attrs={'rows': 3}))
    interval = forms.IntegerField(min_value=10,  # 10 seconds
                                  max_value=2678400,  # 31 days
                                  initial=3600)
    def clean_description(self):
        description = self.cleaned_data.get("description")
        if not description:
            return None
        else:
            return description
    def get_item_d(self):
        return {f: v for f, v in self.cleaned_data.items() if v is not None}
    @staticmethod
    def get_initial(query):
        initial = {}
        for attr in ("rel_path", "type", "description", "interval"):
            value = getattr(query, attr, None)
            if value is not None:
                initial[attr] = value
        return initial
class KeyForm(forms.Form):
    key = forms.CharField()
    test = forms.ChoiceField(choices=(('EQ', ' = '),
                                       ('INT_LTE', 'integer ≤'),
                                       ('INT_GTE', 'integer ≥'),
                                       ('INT_GTE_LTE', '≤ integer ≤')),
                             initial='STR',
                             widget=forms.Select(attrs={'class': 'key-test-sel'}))
    arg_l = forms.CharField(required=False)
    arg_r = forms.CharField(required=True)
    def clean(self):
        cd = self.cleaned_data
        test = cd.get('test')
        arg_l = cd.get('arg_l')
        arg_r = cd.get('arg_r')
        if test and test != 'EQ':
            if arg_r:
                try:
                    cd['arg_r'] = int(arg_r)
                except ValueError:
                    self.add_error('arg_r', 'not an integer')
            if test == 'INT_GTE_LTE':
                if arg_l is None:
                    self.add_error('arg_l', 'missing value')
                else:
                    try:
                        cd['arg_l'] = int(arg_l)
                    except ValueError:
                        self.add_error('arg_l', 'not an integer')
        return cd
class BaseKeyFormSet(forms.BaseFormSet):
    def clean(self):
        """Checks that no two keys are the same"""
        if any(self.errors):
            # Don't bother validating the formset unless each form is valid on its own
            return
        keys = []
        for form in self.forms:
            key = form.cleaned_data['key']
            if key in keys:
                raise forms.ValidationError("Articles in a set must have distinct titles.")
            keys.append(key)
    def get_keys(self):
        keys = []
        for kcd in self.cleaned_data:
            if not kcd.get("DELETE"):
                k = {'key': kcd['key']}
                test = kcd['test']
                arg_r = kcd['arg_r']
                if test == 'EQ':
                    k['value'] = arg_r
                elif test == 'INT_LTE':
                    k['max_value'] = arg_r
                elif test == 'INT_GTE':
                    k['min_value'] = arg_r
                else:
                    k['min_value'] = kcd['arg_l']
                    k['max_value'] = arg_r
                keys.append(k)
        return sorted(keys, key=lambda k: k['key'])
    @staticmethod
    def get_initial(preference_file):
        initial = []
        for k in preference_file.keys:
            key = {'key': k.key}
            if k.value is not None:
                key['arg_r'] = k.value
                key['test'] = 'EQ'
            else:
                min_value = k.min_value
                max_value = k.max_value
                if min_value is not None and max_value is not None:
                    key['test'] = 'INT_GTE_LTE'
                    key['arg_l'] = min_value
                    key['arg_r'] = max_value
                elif min_value is not None:
                    key['test'] = 'INT_GTE'
                    key['arg_r'] = min_value
                elif max_value is not None:
                    key['test'] = 'INT_LTE'
                    key['arg_r'] = max_value
            initial.append(key)
        return sorted(initial, key=lambda d: d['key'])
KeyFormSet = forms.formset_factory(KeyForm,
                                 formset=BaseKeyFormSet,
                                 min_num=1, max_num=10, extra=0, can_delete=True)
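# Hedged usage sketch (illustrative only, not part of the original module): binding the
# KeyFormSet defined above to POST-style data, including Django's formset management-form
# fields, and folding each row's test choice into min/max values with get_keys().
#
#     data = {
#         "form-TOTAL_FORMS": "2", "form-INITIAL_FORMS": "0",
#         "form-MIN_NUM_FORMS": "1", "form-MAX_NUM_FORMS": "10",
#         "form-0-key": "AllowList", "form-0-test": "EQ", "form-0-arg_r": "1",
#         "form-1-key": "Timeout", "form-1-test": "INT_GTE_LTE",
#         "form-1-arg_l": "10", "form-1-arg_r": "300",
#     }
#     formset = KeyFormSet(data)
#     if formset.is_valid():
#         keys = formset.get_keys()
#         # [{'key': 'AllowList', 'value': '1'},
#         #  {'key': 'Timeout', 'min_value': 10, 'max_value': 300}]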
class FileChecksumForm(forms.Form):
    path = forms.CharField()
    sha256 = forms.CharField(validators=[validate_sha256],
                             help_text="The result of shasum -a 256 /path/to/file")
    description = forms.CharField(required=False,
                                  widget=forms.Textarea(attrs={'rows': 3}))
    interval = forms.IntegerField(min_value=10,  # 10 seconds
                                  max_value=2678400,  # 31 days
                                  initial=3600)
    def clean_description(self):
        description = self.cleaned_data.get("description")
        if not description:
            return None
        else:
            return description
    def get_item_d(self):
        return {f: v for f, v in self.cleaned_data.items() if v is not None}
    @staticmethod
    def get_initial(file_checksum):
        initial = {}
        for field in ("path", "sha256", "description", "interval"):
            val = getattr(file_checksum, field, None)
            if val:
                initial[field] = val
        return initial
class CreateComplianceProbeForm(BaseCreateProbeForm):
    model = OsqueryComplianceProbe
    def get_body(self):
        return {}
# OsqueryDistributedQueryProbe
class DistributedQueryForm(forms.Form):
    query = forms.CharField(widget=forms.Textarea(attrs={'class': 'form-control',
                                                      'rows': 5}))
    def get_body(self):
        return {'distributed_query': self.cleaned_data['query']}
class CreateDistributedQueryProbeForm(BaseCreateProbeForm, DistributedQueryForm):
    model = OsqueryDistributedQueryProbe
    field_order = ("name", "query")
# OsqueryFileCarveProbe
class FileCarveForm(forms.Form):
    path = forms.CharField(help_text="Example: /Users/%/Downloads/%.jpg or /etc/hosts")
    def get_body(self):
        return {'path': self.cleaned_data['path']}
class CreateFileCarveProbeForm(BaseCreateProbeForm, FileCarveForm):
    model = OsqueryFileCarveProbe
    field_order = ("name", "path")
# FIM probes
class FilePathForm(forms.Form):
    file_path = forms.CharField(help_text="Example: /Users/%/Library or /Users/%/Library/ or /Users/%/Library/%%")
    file_access = forms.BooleanField(label="Observe file access events ?", initial=False, required=False,
                                      help_text="File accesses on Linux using inotify may induce "
                                                "unexpected and unwanted performance reduction.")
    def clean_file_access(self):
        file_access = self.cleaned_data.get("file_access")
        if not file_access:
            file_access = False
        return file_access
    def get_item_d(self):
        return self.cleaned_data
    @staticmethod
    def get_initial(file_path):
        return {"file_path": file_path.file_path,
                "file_access": file_path.file_access}
class CreateFIMProbeForm(BaseCreateProbeForm, FilePathForm):
    model = OsqueryFIMProbe
    field_order = ("name", "file_path", "file_access")
    def get_body(self):
        return {'file_paths': [self.get_item_d()]}
|
[
"django.forms.BooleanField",
"django.forms.CharField",
"django.forms.Select",
"django.forms.ValidationError",
"django.forms.ChoiceField",
"django.forms.IntegerField",
"django.forms.Textarea",
"django.forms.formset_factory"
] |
[((8279, 8387), 'django.forms.formset_factory', 'forms.formset_factory', (['KeyForm'], {'formset': 'BaseKeyFormSet', 'min_num': '(1)', 'max_num': '(10)', 'extra': '(0)', 'can_delete': '(True)'}), '(KeyForm, formset=BaseKeyFormSet, min_num=1, max_num=\n 10, extra=0, can_delete=True)\n', (8300, 8387), False, 'from django import forms\n'), ((1124, 1309), 'django.forms.BooleanField', 'forms.BooleanField', ([], {'label': '"""Include {"action": "removed"} results?"""', 'help_text': '"""If False, only {"action": "added"} results will be in the logs"""', 'initial': '(True)', 'required': '(False)'}), '(label=\'Include {"action": "removed"} results?\',\n help_text=\n \'If False, only {"action": "added"} results will be in the logs\',\n initial=True, required=False)\n', (1142, 1309), False, 'from django import forms\n'), ((1411, 1610), 'django.forms.BooleanField', 'forms.BooleanField', ([], {'label': '"""Run this query in "snapshot" mode?"""', 'help_text': '"""If True, osquery will not store differentials and will not emulate an event stream"""', 'initial': '(False)', 'required': '(False)'}), '(label=\'Run this query in "snapshot" mode?\', help_text=\n \'If True, osquery will not store differentials and will not emulate an event stream\'\n , initial=False, required=False)\n', (1429, 1610), False, 'from django import forms\n'), ((1768, 1833), 'django.forms.IntegerField', 'forms.IntegerField', ([], {'min_value': '(10)', 'max_value': '(2678400)', 'initial': '(3600)'}), '(min_value=10, max_value=2678400, initial=3600)\n', (1786, 1833), False, 'from django import forms\n'), ((1939, 2079), 'django.forms.IntegerField', 'forms.IntegerField', ([], {'min_value': '(1)', 'max_value': '(100)', 'required': '(False)', 'help_text': '"""Restrict this query to a percentage (1-100) of target hosts"""'}), "(min_value=1, max_value=100, required=False, help_text=\n 'Restrict this query to a percentage (1-100) of target hosts')\n", (1957, 2079), False, 'from django import forms\n'), ((3830, 3868), 'django.forms.CharField', 'forms.CharField', ([], {'label': '"""Relative path"""'}), "(label='Relative path')\n", (3845, 3868), False, 'from django import forms\n'), ((3880, 4010), 'django.forms.ChoiceField', 'forms.ChoiceField', ([], {'label': '"""Location"""', 'choices': "(('USERS', '/Users/%/Library/Preferences/'), ('GLOBAL',\n '/Library/Preferences/'))"}), "(label='Location', choices=(('USERS',\n '/Users/%/Library/Preferences/'), ('GLOBAL', '/Library/Preferences/')))\n", (3897, 4010), False, 'from django import forms\n'), ((4215, 4280), 'django.forms.IntegerField', 'forms.IntegerField', ([], {'min_value': '(10)', 'max_value': '(2678400)', 'initial': '(3600)'}), '(min_value=10, max_value=2678400, initial=3600)\n', (4233, 4280), False, 'from django import forms\n'), ((4986, 5003), 'django.forms.CharField', 'forms.CharField', ([], {}), '()\n', (5001, 5003), False, 'from django import forms\n'), ((5395, 5426), 'django.forms.CharField', 'forms.CharField', ([], {'required': '(False)'}), '(required=False)\n', (5410, 5426), False, 'from django import forms\n'), ((5439, 5469), 'django.forms.CharField', 'forms.CharField', ([], {'required': '(True)'}), '(required=True)\n', (5454, 5469), False, 'from django import forms\n'), ((8502, 8519), 'django.forms.CharField', 'forms.CharField', ([], {}), '()\n', (8517, 8519), False, 'from django import forms\n'), ((8533, 8638), 'django.forms.CharField', 'forms.CharField', ([], {'validators': '[validate_sha256]', 'help_text': '"""The result of shasum -a 256 /path/to/file"""'}), 
"(validators=[validate_sha256], help_text=\n 'The result of shasum -a 256 /path/to/file')\n", (8548, 8638), False, 'from django import forms\n'), ((8804, 8869), 'django.forms.IntegerField', 'forms.IntegerField', ([], {'min_value': '(10)', 'max_value': '(2678400)', 'initial': '(3600)'}), '(min_value=10, max_value=2678400, initial=3600)\n', (8822, 8869), False, 'from django import forms\n'), ((10219, 10295), 'django.forms.CharField', 'forms.CharField', ([], {'help_text': '"""Example: /Users/%/Downloads/%.jpg or /etc/hosts"""'}), "(help_text='Example: /Users/%/Downloads/%.jpg or /etc/hosts')\n", (10234, 10295), False, 'from django import forms\n'), ((10576, 10679), 'django.forms.CharField', 'forms.CharField', ([], {'help_text': '"""Example: /Users/%/Library or /Users/%/Library/ or /Users/%/Library/%%"""'}), "(help_text=\n 'Example: /Users/%/Library or /Users/%/Library/ or /Users/%/Library/%%')\n", (10591, 10679), False, 'from django import forms\n'), ((10693, 10902), 'django.forms.BooleanField', 'forms.BooleanField', ([], {'label': '"""Observe file access events ?"""', 'initial': '(False)', 'required': '(False)', 'help_text': '"""File accesses on Linux using inotify may induce unexpected and unwanted performance reduction."""'}), "(label='Observe file access events ?', initial=False,\n required=False, help_text=\n 'File accesses on Linux using inotify may induce unexpected and unwanted performance reduction.'\n )\n", (10711, 10902), False, 'from django import forms\n'), ((389, 422), 'django.forms.Textarea', 'forms.Textarea', ([], {'attrs': "{'rows': 5}"}), "(attrs={'rows': 5})\n", (403, 422), False, 'from django import forms\n'), ((646, 679), 'django.forms.Textarea', 'forms.Textarea', ([], {'attrs': "{'rows': 5}"}), "(attrs={'rows': 5})\n", (660, 679), False, 'from django import forms\n'), ((874, 907), 'django.forms.Textarea', 'forms.Textarea', ([], {'attrs': "{'rows': 3}"}), "(attrs={'rows': 3})\n", (888, 907), False, 'from django import forms\n'), ((1075, 1108), 'django.forms.Textarea', 'forms.Textarea', ([], {'attrs': "{'rows': 3}"}), "(attrs={'rows': 3})\n", (1089, 1108), False, 'from django import forms\n'), ((2968, 3064), 'django.forms.ValidationError', 'forms.ValidationError', (['"""{"action": "removed"} results are not available in "snapshot" mode"""'], {}), '(\n \'{"action": "removed"} results are not available in "snapshot" mode\')\n', (2989, 3064), False, 'from django import forms\n'), ((4165, 4198), 'django.forms.Textarea', 'forms.Textarea', ([], {'attrs': "{'rows': 3}"}), "(attrs={'rows': 3})\n", (4179, 4198), False, 'from django import forms\n'), ((5336, 5381), 'django.forms.Select', 'forms.Select', ([], {'attrs': "{'class': 'key-test-sel'}"}), "(attrs={'class': 'key-test-sel'})\n", (5348, 5381), False, 'from django import forms\n'), ((8754, 8787), 'django.forms.Textarea', 'forms.Textarea', ([], {'attrs': "{'rows': 3}"}), "(attrs={'rows': 3})\n", (8768, 8787), False, 'from django import forms\n'), ((9779, 9837), 'django.forms.Textarea', 'forms.Textarea', ([], {'attrs': "{'class': 'form-control', 'rows': 5}"}), "(attrs={'class': 'form-control', 'rows': 5})\n", (9793, 9837), False, 'from django import forms\n'), ((6581, 6650), 'django.forms.ValidationError', 'forms.ValidationError', (['"""Articles in a set must have distinct titles."""'], {}), "('Articles in a set must have distinct titles.')\n", (6602, 6650), False, 'from django import forms\n')]
|
import numpy
from scipy.spatial import distance
import matplotlib.pyplot as plt
import math
import matplotlib.ticker as mtick
freqs = [20, 25, 31.5, 40, 50, 63, 80, 100, 125, 160, 200, 250, 315, 400, 500, 630, 800, 1000, 1250, 1600, 2000, 2500, 3150, 4000, 5000, 6300, 8000, 10000, 12500]
def cosine_distance(a, b, weight=None):
    # NOTE: the weight argument is accepted (and zipped below as wi) but never applied
    # to the sums, so this function computes an unweighted cosine distance.
    assert len(a) == len(b)
    if weight is None:
        weight = [1.0] * len(a)
    ab_sum, a_sum, b_sum = 0, 0, 0
    for ai, bi, wi in zip(a, b, weight):
        ab_sum += ai * bi
        a_sum += ai * ai
        b_sum += bi * bi
    return 1 - ab_sum / math.sqrt(a_sum * b_sum)
# from scipy
def _validate_weights(w, dtype=numpy.double):
    w = _validate_vector(w, dtype=dtype)
    if numpy.any(w < 0):
        raise ValueError("Input weights should be all non-negative")
    return w
# from scipy
def _validate_vector(u, dtype=None):
    # XXX Is order='c' really necessary?
    u = numpy.asarray(u, dtype=dtype, order='c').squeeze()
    # Ensure values such as u=1 and u=[1] still return 1-D arrays.
    u = numpy.atleast_1d(u)
    if u.ndim > 1:
        raise ValueError("Input vector should be 1-D.")
    return u
# from scipy
def dist_cosine(u, v, w=None):
    u = _validate_vector(u)
    v = _validate_vector(v)
    if w is not None:
        w = _validate_weights(w)
    uv = numpy.average(u * v, weights=w)
    uu = numpy.average(numpy.square(u), weights=w)
    vv = numpy.average(numpy.square(v), weights=w)
    dist = 1.0 - uv / numpy.sqrt(uu * vv)
    return dist
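# Quick sanity check (illustrative values, not taken from the measurement data): the
# re-implemented dist_cosine above should agree with scipy's weighted distance.cosine.
_u = [1.0, 2.0, 3.0]
_v = [2.0, 1.0, 3.0]
_w = [0.5, 1.0, 2.0]
assert abs(dist_cosine(_u, _v, w=_w) - distance.cosine(_u, _v, w=_w)) < 1e-12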
def autocolor(bar):
    for col in bar:
        if col.get_height() > 0.995:
            col.set_color('r')
trigger = [40.49, 39.14, 34.47, 30.5, 39.54, 31.98, 38.37, 43.84, 36.09, 43.72, 40.55, 39.25, 39.15, 38.36, 38.3, 36.58,
39.9, 47.76, 51.64, 37.2, 44.89, 46.6, 51.08, 37.77, 28, 29.59, 30.25, 23.16, 25.74]
weight = [0.04,0.04,0.04,0.04,0.04,0.04,0.04,0.14,0.14,0.14,0.14,0.14,0.14,0.14,0.14,0.14,0.14,0.14,0.14, 0.24, 0.41,
0.60, 0.80, 0.94, 1.0, 0.94, 0.80, 0.60, 0.41]
ref_spectrum = numpy.genfromtxt('test/test2_far.csv', delimiter=',', skip_header=1, usecols=range(5, 34))
test1_spectrum = numpy.genfromtxt('test/test1_near.csv', delimiter=',', skip_header=1, usecols=range(5, 34))
test2_spectrum = numpy.genfromtxt('test/test2_far_far.csv', delimiter=',', skip_header=1, usecols=range(5, 34))
test3_spectrum = numpy.genfromtxt('test/test_background.csv', delimiter=',', skip_header=1, usecols=range(5, 34))
dist0 = numpy.ones(len(ref_spectrum)) - [distance.cosine(trigger, ref_spectrum[idfreq], w=weight) for idfreq in range(len(ref_spectrum))]
dist1 = numpy.ones(len(ref_spectrum)) - [distance.cosine(trigger, test1_spectrum[idfreq], w=weight) for idfreq in range(len(ref_spectrum))]
dist2 = numpy.ones(len(ref_spectrum)) - [distance.cosine(trigger, test2_spectrum[idfreq], w=weight) for idfreq in range(len(ref_spectrum))]
dist3 = numpy.ones(len(ref_spectrum)) - [distance.cosine(trigger, test3_spectrum[idfreq], w=weight) for idfreq in range(len(ref_spectrum))]
dist0_bis = numpy.ones(len(ref_spectrum)) - [dist_cosine(trigger, ref_spectrum[idfreq], w=weight) for idfreq in range(len(ref_spectrum))]
#print(numpy.around(dist0_bis - dist0, 3))
ref_spectrum = numpy.rot90(ref_spectrum)
test1_spectrum = numpy.rot90(test1_spectrum)
test2_spectrum = numpy.rot90(test2_spectrum)
test3_spectrum = numpy.rot90(test3_spectrum)
fig, axes = plt.subplots(nrows=4, ncols=3, constrained_layout=True)
gs = axes[0, 0].get_gridspec()
axes[0, 1].imshow(ref_spectrum)
autocolor(axes[0, 2].bar(numpy.arange(len(dist0)), dist0))
axes[1, 1].imshow(test1_spectrum)
autocolor(axes[1, 2].bar(numpy.arange(len(dist1)), dist1))
axes[2, 1].imshow(test2_spectrum)
autocolor(axes[2, 2].bar(numpy.arange(len(dist2)), dist2))
axes[3, 1].imshow(test3_spectrum)
axes[3, 2].bar(numpy.arange(len(dist2)), dist3)
for ax in axes[0:, 0]:
    ax.remove()
axbig = fig.add_subplot(gs[0:, 0])
axbig.set_title("Spectrum trigger")
axbig.imshow(numpy.rot90([trigger]))
for i in range(len(axes)):
    axes[i, 2].set_ylim([0.95, 1.0])
    axes[i, 1].set_yticks(range(len(freqs))[::5])
    axes[i, 1].set_yticklabels([str(ylab) + " Hz" for ylab in freqs[::5]][::-1])
    axes[i, 1].set_xticks(range(len(ref_spectrum[0]))[::20])
    axes[i, 1].set_xticklabels([str(xlabel) + " s" for xlabel in numpy.arange(0, 10, 0.125)][::20])
    axes[i, 2].set_xticks(range(len(ref_spectrum[0]))[::20])
    axes[i, 2].set_xticklabels([str(xlabel) + " s" for xlabel in numpy.arange(0, 10, 0.125)][::20])
    axes[i, 2].set_ylabel("Cosine similarity (%)")
    axes[i, 2].yaxis.set_major_formatter(mtick.PercentFormatter(1.0))
    axes[i, 1].set_title("Spectrogram " + str(i) + " (dB)")
axbig.set_yticks(range(len(freqs)))
axbig.set_yticklabels([str(ylab) + " Hz" for ylab in freqs][::-1])
axbig.tick_params(
    axis='x',           # changes apply to the x-axis
    which='both',       # both major and minor ticks are affected
    bottom=False,       # ticks along the bottom edge are off
    top=False,          # ticks along the top edge are off
    labelbottom=False)  # labels along the bottom edge are off
plt.show()
|
[
"scipy.spatial.distance.cosine",
"numpy.sqrt",
"matplotlib.pyplot.show",
"numpy.average",
"matplotlib.ticker.PercentFormatter",
"math.sqrt",
"numpy.asarray",
"numpy.any",
"numpy.square",
"numpy.rot90",
"matplotlib.pyplot.subplots",
"numpy.arange",
"numpy.atleast_1d"
] |
[((3222, 3247), 'numpy.rot90', 'numpy.rot90', (['ref_spectrum'], {}), '(ref_spectrum)\n', (3233, 3247), False, 'import numpy\n'), ((3266, 3293), 'numpy.rot90', 'numpy.rot90', (['test1_spectrum'], {}), '(test1_spectrum)\n', (3277, 3293), False, 'import numpy\n'), ((3312, 3339), 'numpy.rot90', 'numpy.rot90', (['test2_spectrum'], {}), '(test2_spectrum)\n', (3323, 3339), False, 'import numpy\n'), ((3358, 3385), 'numpy.rot90', 'numpy.rot90', (['test3_spectrum'], {}), '(test3_spectrum)\n', (3369, 3385), False, 'import numpy\n'), ((3399, 3454), 'matplotlib.pyplot.subplots', 'plt.subplots', ([], {'nrows': '(4)', 'ncols': '(3)', 'constrained_layout': '(True)'}), '(nrows=4, ncols=3, constrained_layout=True)\n', (3411, 3454), True, 'import matplotlib.pyplot as plt\n'), ((5139, 5149), 'matplotlib.pyplot.show', 'plt.show', ([], {}), '()\n', (5147, 5149), True, 'import matplotlib.pyplot as plt\n'), ((726, 742), 'numpy.any', 'numpy.any', (['(w < 0)'], {}), '(w < 0)\n', (735, 742), False, 'import numpy\n'), ((1052, 1071), 'numpy.atleast_1d', 'numpy.atleast_1d', (['u'], {}), '(u)\n', (1068, 1071), False, 'import numpy\n'), ((1325, 1356), 'numpy.average', 'numpy.average', (['(u * v)'], {'weights': 'w'}), '(u * v, weights=w)\n', (1338, 1356), False, 'import numpy\n'), ((3981, 4003), 'numpy.rot90', 'numpy.rot90', (['[trigger]'], {}), '([trigger])\n', (3992, 4003), False, 'import numpy\n'), ((1380, 1395), 'numpy.square', 'numpy.square', (['u'], {}), '(u)\n', (1392, 1395), False, 'import numpy\n'), ((1431, 1446), 'numpy.square', 'numpy.square', (['v'], {}), '(v)\n', (1443, 1446), False, 'import numpy\n'), ((2506, 2562), 'scipy.spatial.distance.cosine', 'distance.cosine', (['trigger', 'ref_spectrum[idfreq]'], {'w': 'weight'}), '(trigger, ref_spectrum[idfreq], w=weight)\n', (2521, 2562), False, 'from scipy.spatial import distance\n'), ((2644, 2702), 'scipy.spatial.distance.cosine', 'distance.cosine', (['trigger', 'test1_spectrum[idfreq]'], {'w': 'weight'}), '(trigger, test1_spectrum[idfreq], w=weight)\n', (2659, 2702), False, 'from scipy.spatial import distance\n'), ((2784, 2842), 'scipy.spatial.distance.cosine', 'distance.cosine', (['trigger', 'test2_spectrum[idfreq]'], {'w': 'weight'}), '(trigger, test2_spectrum[idfreq], w=weight)\n', (2799, 2842), False, 'from scipy.spatial import distance\n'), ((2924, 2982), 'scipy.spatial.distance.cosine', 'distance.cosine', (['trigger', 'test3_spectrum[idfreq]'], {'w': 'weight'}), '(trigger, test3_spectrum[idfreq], w=weight)\n', (2939, 2982), False, 'from scipy.spatial import distance\n'), ((4629, 4656), 'matplotlib.ticker.PercentFormatter', 'mtick.PercentFormatter', (['(1.0)'], {}), '(1.0)\n', (4651, 4656), True, 'import matplotlib.ticker as mtick\n'), ((593, 617), 'math.sqrt', 'math.sqrt', (['(a_sum * b_sum)'], {}), '(a_sum * b_sum)\n', (602, 617), False, 'import math\n'), ((926, 966), 'numpy.asarray', 'numpy.asarray', (['u'], {'dtype': 'dtype', 'order': '"""c"""'}), "(u, dtype=dtype, order='c')\n", (939, 966), False, 'import numpy\n'), ((1481, 1500), 'numpy.sqrt', 'numpy.sqrt', (['(uu * vv)'], {}), '(uu * vv)\n', (1491, 1500), False, 'import numpy\n'), ((4334, 4360), 'numpy.arange', 'numpy.arange', (['(0)', '(10)', '(0.125)'], {}), '(0, 10, 0.125)\n', (4346, 4360), False, 'import numpy\n'), ((4502, 4528), 'numpy.arange', 'numpy.arange', (['(0)', '(10)', '(0.125)'], {}), '(0, 10, 0.125)\n', (4514, 4528), False, 'import numpy\n')]
|
from django.test.utils import override_settings
from hc.api.models import Channel
from hc.test import BaseTestCase
@override_settings(PD_VENDOR_KEY="foo")
class AddPdConnectTestCase(BaseTestCase):
    def setUp(self):
        super().setUp()
        self.url = "/projects/%s/add_pdc/" % self.project.code
    def test_it_works(self):
        session = self.client.session
        session["pd"] = "1234567890AB"  # 12 characters
        session.save()
        self.client.login(username="<EMAIL>", password="password")
        url = self.url + "1234567890AB/?service_key=123"
        r = self.client.get(url, follow=True)
        self.assertRedirects(r, self.channels_url)
        c = Channel.objects.get()
        self.assertEqual(c.kind, "pd")
        self.assertEqual(c.pd_service_key, "123")
        self.assertEqual(c.project, self.project)
    @override_settings(PD_VENDOR_KEY=None)
    def test_it_requires_vendor_key(self):
        self.client.login(username="<EMAIL>", password="password")
        r = self.client.get(self.url)
        self.assertEqual(r.status_code, 404)
    @override_settings(PD_ENABLED=False)
    def test_it_requires_pd_enabled(self):
        self.client.login(username="<EMAIL>", password="password")
        r = self.client.get(self.url)
        self.assertEqual(r.status_code, 404)
    def test_it_requires_rw_access(self):
        self.bobs_membership.rw = False
        self.bobs_membership.save()
        self.client.login(username="<EMAIL>", password="password")
        r = self.client.get(self.url)
        self.assertEqual(r.status_code, 403)
|
[
"hc.api.models.Channel.objects.get",
"django.test.utils.override_settings"
] |
[((118, 156), 'django.test.utils.override_settings', 'override_settings', ([], {'PD_VENDOR_KEY': '"""foo"""'}), "(PD_VENDOR_KEY='foo')\n", (135, 156), False, 'from django.test.utils import override_settings\n'), ((856, 893), 'django.test.utils.override_settings', 'override_settings', ([], {'PD_VENDOR_KEY': 'None'}), '(PD_VENDOR_KEY=None)\n', (873, 893), False, 'from django.test.utils import override_settings\n'), ((1094, 1129), 'django.test.utils.override_settings', 'override_settings', ([], {'PD_ENABLED': '(False)'}), '(PD_ENABLED=False)\n', (1111, 1129), False, 'from django.test.utils import override_settings\n'), ((689, 710), 'hc.api.models.Channel.objects.get', 'Channel.objects.get', ([], {}), '()\n', (708, 710), False, 'from hc.api.models import Channel\n')]
|
import sys
sys.setrecursionlimit(3000)
def check(rs, cs):
    # depth-first flood fill over cells marked 1; 2 marks a visited cell
    table[rs][cs] = 2
    if (rs, cs) == (rg, cg): return True
    if rs > 0 and table[rs - 1][cs] == 1 and check(rs - 1, cs):
        return True
    if cs > 0 and table[rs][cs - 1] == 1 and check(rs, cs - 1):
        return True
    if rs < r - 1 and table[rs + 1][cs] == 1 and check(rs + 1, cs):
        return True
    if cs < c - 1 and table[rs][cs + 1] == 1 and check(rs, cs + 1):
        return True
    return False
r, c = map(int, input().split())
table = [[0] * c for _ in range(r)]
rs, cs = map(lambda x:int(x) - 1, input().split())
rg, cg = map(lambda x:int(x) - 1, input().split())
n = int(input())
draw = [list(map(int, input().split())) for _ in range(n)]
for ri, ci, hi, wi in draw:
    ri -= 1
    ci -= 1
    for i in range(ri, ri + hi):
        for j in range(ci, ci + wi):
            table[i][j] = 1
if table[rs][cs] != 1 or table[rg][cg] != 1:
    print('NO')
else:
    print('YES' if check(rs, cs) else 'NO')
|
[
"sys.setrecursionlimit"
] |
[((11, 38), 'sys.setrecursionlimit', 'sys.setrecursionlimit', (['(3000)'], {}), '(3000)\n', (32, 38), False, 'import sys\n')]
|
import json
import logging
from unittest import mock, TestCase
from bullet_train import BulletTrain
import os
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
TEST_API_URL = 'https://test.bullet-train.io/api'
TEST_IDENTIFIER = 'test-identity'
TEST_FEATURE = 'test-feature'
class MockResponse:
    def __init__(self, data, status_code):
        self.json_data = json.loads(data)
        self.status_code = status_code
    def json(self):
        return self.json_data
def mock_response(filename, *args, status=200, **kwargs):
    print('Hit URL %s with params' % args[0], kwargs.get('params'))
    dir_path = os.path.dirname(os.path.realpath(__file__))
    with open(os.path.join(dir_path, filename), 'rt') as f:
        return MockResponse(f.read(), status)
def mocked_get_specific_feature_flag_enabled(*args, **kwargs):
    return mock_response('data/get-flag-for-specific-feature-enabled.json', *args, **kwargs)
def mocked_get_specific_feature_flag_disabled(*args, **kwargs):
    return mock_response('data/get-flag-for-specific-feature-disabled.json', *args, **kwargs)
def mocked_get_specific_feature_flag_not_found(*args, **kwargs):
    return mock_response('data/not-found.json', *args, status=404, **kwargs)
def mocked_get_value(*args, **kwargs):
    return mock_response('data/get-value-for-specific-feature.json', *args, **kwargs)
def mocked_get_identity_flags_with_trait(*args, **kwargs):
    return mock_response('data/get-identity-flags-with-trait.json', *args, **kwargs)
def mocked_get_identity_flags_without_trait(*args, **kwargs):
    return mock_response('data/get-identity-flags-without-trait.json', *args, **kwargs)
class BulletTrainTestCase(TestCase):
test_environment_key = 'test-env-key'
def setUp(self) -> None:
self.bt = BulletTrain(environment_id=self.test_environment_key, api=TEST_API_URL)
@mock.patch('bullet_train.bullet_train.requests.get', side_effect=mocked_get_specific_feature_flag_enabled)
def test_has_feature_returns_true_if_feature_returned(self, mock_get):
# When
result = self.bt.has_feature(TEST_FEATURE)
# Then
assert result
@mock.patch('bullet_train.bullet_train.requests.get', side_effect=mocked_get_specific_feature_flag_not_found)
def test_has_feature_returns_false_if_feature_not_returned(self, mock_get):
# When
result = self.bt.has_feature(TEST_FEATURE)
# Then
assert not result
@mock.patch('bullet_train.bullet_train.requests.get', side_effect=mocked_get_specific_feature_flag_enabled)
def test_feature_enabled_returns_true_if_feature_enabled(self, mock_get):
# When
result = self.bt.feature_enabled(TEST_FEATURE)
# Then
assert result
@mock.patch('bullet_train.bullet_train.requests.get', side_effect=mocked_get_specific_feature_flag_disabled)
def test_feature_enabled_returns_true_if_feature_disabled(self, mock_get):
# When
result = self.bt.feature_enabled(TEST_FEATURE)
# Then
assert not result
@mock.patch('bullet_train.bullet_train.requests.get', side_effect=mocked_get_value)
def test_get_value_returns_value_for_environment_if_feature_exists(self, mock_get):
# When
result = self.bt.get_value(TEST_FEATURE)
# Then
assert result == 'Test value'
@mock.patch('bullet_train.bullet_train.requests.get', side_effect=mocked_get_specific_feature_flag_not_found)
def test_get_value_returns_None_for_environment_if_feature_does_not_exist(self, mock_get):
# When
result = self.bt.get_value(TEST_FEATURE)
# Then
assert result is None
@mock.patch('bullet_train.bullet_train.requests.get', side_effect=mocked_get_identity_flags_with_trait)
def test_get_trait_returns_trait_value_if_trait_key_exists(self, mock_get):
# When
result = self.bt.get_trait('trait_key', TEST_IDENTIFIER)
# Then
assert result == 'trait_value'
@mock.patch('bullet_train.bullet_train.requests.get', side_effect=mocked_get_identity_flags_without_trait)
def test_get_trait_returns_None_if_trait_key_does_not_exist(self, mock_get):
# When
result = self.bt.get_trait('trait_key', TEST_IDENTIFIER)
# Then
assert result is None
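# --- Hedged illustration (not part of the original test module) ---
# The tests above patch `requests.get` with a `side_effect` so every call is
# answered from a canned JSON fixture. The sketch below shows the technique in
# isolation; `demo_side_effect_mocking` and the URL are made-up names.
def demo_side_effect_mocking():
    import json
    import requests
    from unittest import mock

    canned = json.dumps({'enabled': True})

    # Every requests.get call inside the block is routed to the lambda, which
    # returns a MockResponse just like mock_response() does for the tests.
    with mock.patch('requests.get',
                    side_effect=lambda *args, **kwargs: MockResponse(canned, 200)):
        response = requests.get('https://example.invalid/flags')
        assert response.status_code == 200
        assert response.json() == {'enabled': True}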
|
[
"logging.getLogger",
"json.loads",
"bullet_train.BulletTrain",
"os.path.join",
"os.path.realpath",
"unittest.mock.patch"
] |
[((121, 148), 'logging.getLogger', 'logging.getLogger', (['__name__'], {}), '(__name__)\n', (138, 148), False, 'import logging\n'), ((1878, 1989), 'unittest.mock.patch', 'mock.patch', (['"""bullet_train.bullet_train.requests.get"""'], {'side_effect': 'mocked_get_specific_feature_flag_enabled'}), "('bullet_train.bullet_train.requests.get', side_effect=\n mocked_get_specific_feature_flag_enabled)\n", (1888, 1989), False, 'from unittest import mock, TestCase\n'), ((2170, 2283), 'unittest.mock.patch', 'mock.patch', (['"""bullet_train.bullet_train.requests.get"""'], {'side_effect': 'mocked_get_specific_feature_flag_not_found'}), "('bullet_train.bullet_train.requests.get', side_effect=\n mocked_get_specific_feature_flag_not_found)\n", (2180, 2283), False, 'from unittest import mock, TestCase\n'), ((2473, 2584), 'unittest.mock.patch', 'mock.patch', (['"""bullet_train.bullet_train.requests.get"""'], {'side_effect': 'mocked_get_specific_feature_flag_enabled'}), "('bullet_train.bullet_train.requests.get', side_effect=\n mocked_get_specific_feature_flag_enabled)\n", (2483, 2584), False, 'from unittest import mock, TestCase\n'), ((2772, 2884), 'unittest.mock.patch', 'mock.patch', (['"""bullet_train.bullet_train.requests.get"""'], {'side_effect': 'mocked_get_specific_feature_flag_disabled'}), "('bullet_train.bullet_train.requests.get', side_effect=\n mocked_get_specific_feature_flag_disabled)\n", (2782, 2884), False, 'from unittest import mock, TestCase\n'), ((3077, 3164), 'unittest.mock.patch', 'mock.patch', (['"""bullet_train.bullet_train.requests.get"""'], {'side_effect': 'mocked_get_value'}), "('bullet_train.bullet_train.requests.get', side_effect=\n mocked_get_value)\n", (3087, 3164), False, 'from unittest import mock, TestCase\n'), ((3372, 3485), 'unittest.mock.patch', 'mock.patch', (['"""bullet_train.bullet_train.requests.get"""'], {'side_effect': 'mocked_get_specific_feature_flag_not_found'}), "('bullet_train.bullet_train.requests.get', side_effect=\n mocked_get_specific_feature_flag_not_found)\n", (3382, 3485), False, 'from unittest import mock, TestCase\n'), ((3692, 3799), 'unittest.mock.patch', 'mock.patch', (['"""bullet_train.bullet_train.requests.get"""'], {'side_effect': 'mocked_get_identity_flags_with_trait'}), "('bullet_train.bullet_train.requests.get', side_effect=\n mocked_get_identity_flags_with_trait)\n", (3702, 3799), False, 'from unittest import mock, TestCase\n'), ((4016, 4126), 'unittest.mock.patch', 'mock.patch', (['"""bullet_train.bullet_train.requests.get"""'], {'side_effect': 'mocked_get_identity_flags_without_trait'}), "('bullet_train.bullet_train.requests.get', side_effect=\n mocked_get_identity_flags_without_trait)\n", (4026, 4126), False, 'from unittest import mock, TestCase\n'), ((384, 400), 'json.loads', 'json.loads', (['data'], {}), '(data)\n', (394, 400), False, 'import json\n'), ((650, 676), 'os.path.realpath', 'os.path.realpath', (['__file__'], {}), '(__file__)\n', (666, 676), False, 'import os\n'), ((1800, 1871), 'bullet_train.BulletTrain', 'BulletTrain', ([], {'environment_id': 'self.test_environment_key', 'api': 'TEST_API_URL'}), '(environment_id=self.test_environment_key, api=TEST_API_URL)\n', (1811, 1871), False, 'from bullet_train import BulletTrain\n'), ((692, 724), 'os.path.join', 'os.path.join', (['dir_path', 'filename'], {}), '(dir_path, filename)\n', (704, 724), False, 'import os\n')]
|
"""
This module contains entry points for command-line utilities provided by Plim package.
"""
import sys
import os
import argparse
import codecs
from pkg_resources import get_distribution
from pkg_resources import EntryPoint
from mako.template import Template
from mako.lookup import TemplateLookup
from .util import PY3K
def plimc(args=None, stdout=None):
"""This is the `plimc` command line utility
:param args: list of command-line arguments. If None, then ``sys.argv[1:]`` will be used.
:type args: list or None
:param stdout: file-like object representing stdout. If None, then ``sys.stdout`` will be used.
Custom stdout is used for testing purposes.
:type stdout: None or a file-like object
"""
# Parse arguments
# ------------------------------------
cli_parser = argparse.ArgumentParser(description='Compile plim source files into mako files.')
cli_parser.add_argument('source', help="path to source plim template")
cli_parser.add_argument('-o', '--output', help="write result to FILE.")
cli_parser.add_argument('-e', '--encoding', default='utf-8', help="content encoding")
cli_parser.add_argument('-p', '--preprocessor', default='plim:preprocessor',
help="Preprocessor instance that will be used for parsing the template")
cli_parser.add_argument('-H', '--html', action='store_true', help="Render HTML output instead of Mako template")
cli_parser.add_argument('-V', '--version', action='version',
version='Plim {}'.format(get_distribution("Plim").version))
if args is None:
args = sys.argv[1:]
args = cli_parser.parse_args(args)
# Get custom preprocessor, if specified
# -------------------------------------
preprocessor_path = args.preprocessor
# Add an empty string path, so modules located at the current working dir
# are reachable and considered in the first place (see issue #32).
sys.path.insert(0, '')
preprocessor = EntryPoint.parse('x={}'.format(preprocessor_path)).load(False)
# Render to html, if requested
# ----------------------------
if args.html:
root_dir = os.path.dirname(os.path.abspath(args.source))
template_file = os.path.basename(args.source)
lookup = TemplateLookup(directories=[root_dir],
input_encoding=args.encoding,
output_encoding=args.encoding,
preprocessor=preprocessor)
content = lookup.get_template(template_file).render_unicode()
else:
with codecs.open(args.source, 'rb', args.encoding) as fd:
content = preprocessor(fd.read())
# Output
# ------------------------------------
if args.output is None:
if stdout is None:
stdout = PY3K and sys.stdout.buffer or sys.stdout
fd = stdout
content = codecs.encode(content, 'utf-8')
else:
fd = codecs.open(args.output, 'wb', args.encoding)
try:
fd.write(content)
finally:
fd.close()
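# --- Hedged usage sketch (not part of the original module) ---
# Based on the argparse setup above, shell usage would look roughly like
#   plimc page.plim -o page.mako      # compile Plim source to Mako
#   plimc page.plim -H -o page.html   # render straight to HTML
# (the file names are made up). The helper below drives the same entry point
# from Python via the "custom stdout" testing hook described in the docstring;
# `demo_plimc_to_bytes` is a hypothetical name, not part of Plim.
def demo_plimc_to_bytes(source_path):
    """Compile `source_path` with plimc and return the result as bytes."""
    import io

    class _KeepOpenBuffer(io.BytesIO):
        def close(self):
            # plimc() closes its output in a finally block; keep the buffer
            # readable so getvalue() still works afterwards.
            pass

    buf = _KeepOpenBuffer()
    plimc([source_path], stdout=buf)
    return buf.getvalue()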
|
[
"sys.path.insert",
"argparse.ArgumentParser",
"os.path.basename",
"mako.lookup.TemplateLookup",
"os.path.abspath",
"codecs.open",
"codecs.encode",
"pkg_resources.get_distribution"
] |
[((832, 918), 'argparse.ArgumentParser', 'argparse.ArgumentParser', ([], {'description': '"""Compile plim source files into mako files."""'}), "(description=\n 'Compile plim source files into mako files.')\n", (855, 918), False, 'import argparse\n'), ((1980, 2002), 'sys.path.insert', 'sys.path.insert', (['(0)', '""""""'], {}), "(0, '')\n", (1995, 2002), False, 'import sys\n'), ((2263, 2292), 'os.path.basename', 'os.path.basename', (['args.source'], {}), '(args.source)\n', (2279, 2292), False, 'import os\n'), ((2310, 2440), 'mako.lookup.TemplateLookup', 'TemplateLookup', ([], {'directories': '[root_dir]', 'input_encoding': 'args.encoding', 'output_encoding': 'args.encoding', 'preprocessor': 'preprocessor'}), '(directories=[root_dir], input_encoding=args.encoding,\n output_encoding=args.encoding, preprocessor=preprocessor)\n', (2324, 2440), False, 'from mako.lookup import TemplateLookup\n'), ((2937, 2968), 'codecs.encode', 'codecs.encode', (['content', '"""utf-8"""'], {}), "(content, 'utf-8')\n", (2950, 2968), False, 'import codecs\n'), ((2992, 3037), 'codecs.open', 'codecs.open', (['args.output', '"""wb"""', 'args.encoding'], {}), "(args.output, 'wb', args.encoding)\n", (3003, 3037), False, 'import codecs\n'), ((2209, 2237), 'os.path.abspath', 'os.path.abspath', (['args.source'], {}), '(args.source)\n', (2224, 2237), False, 'import os\n'), ((2626, 2671), 'codecs.open', 'codecs.open', (['args.source', '"""rb"""', 'args.encoding'], {}), "(args.source, 'rb', args.encoding)\n", (2637, 2671), False, 'import codecs\n'), ((1572, 1596), 'pkg_resources.get_distribution', 'get_distribution', (['"""Plim"""'], {}), "('Plim')\n", (1588, 1596), False, 'from pkg_resources import get_distribution\n')]
|
def load_model(model_path, device_type='cuda'):
import torch
from viclassifier.utils import dev_opt
device = dev_opt.usingDevice(device_type)
model = torch.load(model_path, map_location=device)
model.to(device)
    # eval() disables BatchNormalization and Dropout at inference time
model.eval()
return model
def predict(model, image_path, idx_to_class=None, is_show=False, device_type='cuda'):
import torch
from PIL import Image, ImageDraw, ImageFont
from viclassifier.utils import dev_opt
from viclassifier.utils import trans_gen
device = dev_opt.usingDevice(device_type)
model.eval().to(device)
transform = trans_gen.genTrans('test')
image = Image.open(image_path).convert('RGB')
image_tensor = transform(image)
    # PyTorch's view() works like numpy's reshape(): it lays the tensor's data
    # out row-major in the shape you request. The new tensor shares memory with
    # the original, so changing one also changes the other. Prefer clone() and
    # then view(); clone() is also recorded in the computation graph, so
    # gradients flowing back to the copy also reach the source tensor
    # (see demo_view_vs_clone() below).
image_tensor_view = image_tensor.view(1, 3, 224, 224).to(device)
with torch.no_grad():
out = model(image_tensor_view)
ps = torch.exp(out)
topk, topclass = ps.topk(1, dim=1)
# print("Prediction : ", idx_to_class[topclass.cpu().numpy()[0][0]],
# ", Score: ", topk.cpu().numpy()[0][0])
if is_show:
text = str(topclass.cpu().numpy()[0][0]) + " " + str(topk.cpu().numpy()[0][0])
if idx_to_class is not None:
text = idx_to_class[topclass.cpu().numpy()[0][0]] + " " + str(topk.cpu().numpy()[0][0])
draw = ImageDraw.Draw(image)
font = ImageFont.truetype('arial.ttf', 36)
draw.text((0, 0), text, (255, 0, 0), font=font)
image.show()
label = topclass.cpu().numpy()[0][0]
if idx_to_class is not None:
label = idx_to_class[label]
return label, topk.cpu().numpy()[0][0]
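# --- Hedged illustration (not part of the original module) ---
# A tiny, self-contained check of the view()/clone() note inside predict():
# a view shares storage with its source tensor, a clone does not.
# `demo_view_vs_clone` is a made-up name.
def demo_view_vs_clone():
    import torch
    t = torch.zeros(4)
    shared = t.view(2, 2)                   # same underlying storage as t
    independent = t.clone().view(2, 2)      # own storage (still autograd-tracked)
    t[0] = 1.0
    assert shared[0, 0].item() == 1.0       # change is visible through the view
    assert independent[0, 0].item() == 0.0  # the clone is unaffected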
if __name__ == "__main__":
import os, sys
viclassifier_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
print(viclassifier_dir)
sys.path.append(viclassifier_dir)
model = load_model('D:\\myai\\projects\\tmp\\git\\viclassifier\\tmps\\model.pth')
print(model)
image_path = r'C:\xxx\xxx.jpg'
    # ### Swapping keys and values in a Python dict ###
    # d1 = {'a': 1, 'b': 2, 'c': 3}
    # # with a plain loop
    # d2 = {}
    # for key, value in d1.items():
    #     d2[value] = key
    #
    # # with a dict comprehension
    # d2 = {k: v for v, k in d1.items()}
    #
    # # with zip
    # d2 = dict(zip(d1.values(), d1.keys()))
class_to_idx = {'bad': 0, 'good': 1}
idx_to_class = {k: v for v, k in class_to_idx.items()}
predict(model, image_path, idx_to_class, is_show=False, device_type='cuda')
|
[
"PIL.Image.open",
"viclassifier.utils.trans_gen.genTrans",
"torch.load",
"PIL.ImageFont.truetype",
"torch.exp",
"PIL.ImageDraw.Draw",
"os.path.abspath",
"torch.no_grad",
"sys.path.append",
"viclassifier.utils.dev_opt.usingDevice"
] |
[((122, 154), 'viclassifier.utils.dev_opt.usingDevice', 'dev_opt.usingDevice', (['device_type'], {}), '(device_type)\n', (141, 154), False, 'from viclassifier.utils import dev_opt\n'), ((167, 210), 'torch.load', 'torch.load', (['model_path'], {'map_location': 'device'}), '(model_path, map_location=device)\n', (177, 210), False, 'import torch\n'), ((563, 595), 'viclassifier.utils.dev_opt.usingDevice', 'dev_opt.usingDevice', (['device_type'], {}), '(device_type)\n', (582, 595), False, 'from viclassifier.utils import dev_opt\n'), ((641, 667), 'viclassifier.utils.trans_gen.genTrans', 'trans_gen.genTrans', (['"""test"""'], {}), "('test')\n", (659, 667), False, 'from viclassifier.utils import trans_gen\n'), ((2057, 2090), 'sys.path.append', 'sys.path.append', (['viclassifier_dir'], {}), '(viclassifier_dir)\n', (2072, 2090), False, 'import os, sys\n'), ((1066, 1081), 'torch.no_grad', 'torch.no_grad', ([], {}), '()\n', (1079, 1081), False, 'import torch\n'), ((1135, 1149), 'torch.exp', 'torch.exp', (['out'], {}), '(out)\n', (1144, 1149), False, 'import torch\n'), ((1580, 1601), 'PIL.ImageDraw.Draw', 'ImageDraw.Draw', (['image'], {}), '(image)\n', (1594, 1601), False, 'from PIL import Image, ImageDraw, ImageFont\n'), ((1617, 1652), 'PIL.ImageFont.truetype', 'ImageFont.truetype', (['"""arial.ttf"""', '(36)'], {}), "('arial.ttf', 36)\n", (1635, 1652), False, 'from PIL import Image, ImageDraw, ImageFont\n'), ((680, 702), 'PIL.Image.open', 'Image.open', (['image_path'], {}), '(image_path)\n', (690, 702), False, 'from PIL import Image, ImageDraw, ImageFont\n'), ((1997, 2022), 'os.path.abspath', 'os.path.abspath', (['__file__'], {}), '(__file__)\n', (2012, 2022), False, 'import os, sys\n')]
|
# -*- coding: utf-8 -*-
"""
Created on Sat May 21 17:05:48 2022
@author: <NAME>
"""
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, balanced_accuracy_score, confusion_matrix
from ibllib.atlas import BrainRegions
from joblib import load
from model_functions import load_channel_data, load_trained_model
import matplotlib.pyplot as plt
import seaborn as sns
br = BrainRegions()
# Settings
FEATURES = ['psd_delta', 'psd_theta', 'psd_alpha', 'psd_beta', 'psd_gamma', 'rms_ap', 'rms_lf',
'spike_rate', 'axial_um', 'x', 'y', 'depth']
# Load in data
chan_volt = load_channel_data()
# chan_volt = pd.read_parquet("/home/sebastian/Downloads/FlatIron/tables/channels_voltage_features.pqt")
chan_volt = chan_volt.loc[~chan_volt['rms_ap'].isnull()] # remove NaNs
# 31d8dfb1-71fd-4c53-9229-7cd48bee07e4 64d04585-67e7-4320-baad-8d4589fd18f7
if True:
test = chan_volt.loc[['31d8dfb1-71fd-4c53-9229-7cd48bee07e4', '64d04585-67e7-4320-baad-8d4589fd18f7'], : ]
else:
test = chan_volt
feature_arr = test[FEATURES].to_numpy()
regions = test['cosmos_acronyms'].values
# Load model
clf = load_trained_model('channels', 'cosmos')
# Decode brain regions
print('Decoding brain regions..')
predictions = clf.predict(feature_arr)
probs = clf.predict_proba(feature_arr)
# histogram of prediction certainties (maximum class probability per channel)
certainties = probs.max(1)
plt.hist(certainties)
plt.close()
# calibration plot: how certain correct versus incorrect predictions are
plt.hist(certainties[regions == predictions], label='Correct predictions')
plt.hist(certainties[regions != predictions], label='Wrong predictions')
plt.title("Model calibration", size=24)
plt.legend(frameon=False, fontsize=16)
plt.ylabel("Occurences", size=21)
plt.xlabel("Prob for predicted region", size=21)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
sns.despine()
plt.tight_layout()
plt.savefig("/home/sebastian/Pictures/calibration")
plt.close()
# compute accuracy and balanced accuracy for our highly imbalanced dataset
acc = accuracy_score(regions, predictions)
bacc = balanced_accuracy_score(regions, predictions)
print(f'Accuracy: {acc*100:.1f}%')
print(f'Balanced accuracy: {bacc*100:.1f}%')
# compute confusion matrix
names = np.unique(np.append(regions, predictions))
cm = confusion_matrix(regions, predictions, labels=names)
cm = cm / cm.sum(1)[:, None]
cm_copy = cm.copy()
# list the top n confusion-matrix entries (both correct and mistaken)
n = 10
np.max(cm[~np.isnan(cm)])
cm[np.isnan(cm)] = 0
for i in range(n):
ind = np.unravel_index(np.argmax(cm, axis=None), cm.shape)
if ind[0] != ind[1]:
print("Top {} classification, mistake: {} gets classified as {}".format(i+1, names[ind[0]], names[ind[1]]))
else:
print("Top {} classification, success: {} gets classified as {}".format(i+1, names[ind[0]], names[ind[1]]))
cm[ind] = 0
# plot confusion matrix
plt.imshow(cm_copy)
plt.yticks(range(len(names)), names)
plt.xticks(range(len(names)), names, rotation='65')
plt.show()
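# --- Hedged illustration (not part of the original analysis script) ---
# The loop above repeatedly zeroes the argmax of the row-normalised confusion
# matrix to list its largest entries. The helper below shows the same trick on
# a tiny toy matrix; `demo_topn_confusions` is a made-up name.
def demo_topn_confusions():
    import numpy as np
    toy = np.array([[8., 2.],
                    [1., 9.]])
    toy = toy / toy.sum(1)[:, None]   # row-normalise: each row sums to 1
    order = []
    work = toy.copy()
    for _ in range(work.size):
        ind = np.unravel_index(np.argmax(work, axis=None), work.shape)
        order.append(ind)
        work[ind] = 0                 # zero it so the next argmax is found
    return order                      # [(1, 1), (0, 0), (0, 1), (1, 0)]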
|
[
"matplotlib.pyplot.hist",
"sklearn.metrics.balanced_accuracy_score",
"matplotlib.pyplot.ylabel",
"matplotlib.pyplot.imshow",
"seaborn.despine",
"model_functions.load_channel_data",
"matplotlib.pyplot.xlabel",
"matplotlib.pyplot.close",
"matplotlib.pyplot.yticks",
"sklearn.metrics.confusion_matrix",
"matplotlib.pyplot.savefig",
"matplotlib.pyplot.xticks",
"numpy.argmax",
"numpy.isnan",
"matplotlib.pyplot.title",
"sklearn.metrics.accuracy_score",
"matplotlib.pyplot.legend",
"matplotlib.pyplot.show",
"model_functions.load_trained_model",
"numpy.append",
"ibllib.atlas.BrainRegions",
"matplotlib.pyplot.tight_layout"
] |
[((450, 464), 'ibllib.atlas.BrainRegions', 'BrainRegions', ([], {}), '()\n', (462, 464), False, 'from ibllib.atlas import BrainRegions\n'), ((658, 677), 'model_functions.load_channel_data', 'load_channel_data', ([], {}), '()\n', (675, 677), False, 'from model_functions import load_channel_data, load_trained_model\n'), ((1180, 1220), 'model_functions.load_trained_model', 'load_trained_model', (['"""channels"""', '"""cosmos"""'], {}), "('channels', 'cosmos')\n", (1198, 1220), False, 'from model_functions import load_channel_data, load_trained_model\n'), ((1423, 1444), 'matplotlib.pyplot.hist', 'plt.hist', (['certainties'], {}), '(certainties)\n', (1431, 1444), True, 'import matplotlib.pyplot as plt\n'), ((1445, 1456), 'matplotlib.pyplot.close', 'plt.close', ([], {}), '()\n', (1454, 1456), True, 'import matplotlib.pyplot as plt\n'), ((1535, 1609), 'matplotlib.pyplot.hist', 'plt.hist', (['certainties[regions == predictions]'], {'label': '"""Correct predictions"""'}), "(certainties[regions == predictions], label='Correct predictions')\n", (1543, 1609), True, 'import matplotlib.pyplot as plt\n'), ((1610, 1682), 'matplotlib.pyplot.hist', 'plt.hist', (['certainties[regions != predictions]'], {'label': '"""Wrong predictions"""'}), "(certainties[regions != predictions], label='Wrong predictions')\n", (1618, 1682), True, 'import matplotlib.pyplot as plt\n'), ((1683, 1722), 'matplotlib.pyplot.title', 'plt.title', (['"""Model calibration"""'], {'size': '(24)'}), "('Model calibration', size=24)\n", (1692, 1722), True, 'import matplotlib.pyplot as plt\n'), ((1723, 1761), 'matplotlib.pyplot.legend', 'plt.legend', ([], {'frameon': '(False)', 'fontsize': '(16)'}), '(frameon=False, fontsize=16)\n', (1733, 1761), True, 'import matplotlib.pyplot as plt\n'), ((1762, 1795), 'matplotlib.pyplot.ylabel', 'plt.ylabel', (['"""Occurences"""'], {'size': '(21)'}), "('Occurences', size=21)\n", (1772, 1795), True, 'import matplotlib.pyplot as plt\n'), ((1796, 1844), 'matplotlib.pyplot.xlabel', 'plt.xlabel', (['"""Prob for predicted region"""'], {'size': '(21)'}), "('Prob for predicted region', size=21)\n", (1806, 1844), True, 'import matplotlib.pyplot as plt\n'), ((1845, 1868), 'matplotlib.pyplot.xticks', 'plt.xticks', ([], {'fontsize': '(14)'}), '(fontsize=14)\n', (1855, 1868), True, 'import matplotlib.pyplot as plt\n'), ((1869, 1892), 'matplotlib.pyplot.yticks', 'plt.yticks', ([], {'fontsize': '(14)'}), '(fontsize=14)\n', (1879, 1892), True, 'import matplotlib.pyplot as plt\n'), ((1894, 1907), 'seaborn.despine', 'sns.despine', ([], {}), '()\n', (1905, 1907), True, 'import seaborn as sns\n'), ((1908, 1926), 'matplotlib.pyplot.tight_layout', 'plt.tight_layout', ([], {}), '()\n', (1924, 1926), True, 'import matplotlib.pyplot as plt\n'), ((1927, 1978), 'matplotlib.pyplot.savefig', 'plt.savefig', (['"""/home/sebastian/Pictures/calibration"""'], {}), "('/home/sebastian/Pictures/calibration')\n", (1938, 1978), True, 'import matplotlib.pyplot as plt\n'), ((1979, 1990), 'matplotlib.pyplot.close', 'plt.close', ([], {}), '()\n', (1988, 1990), True, 'import matplotlib.pyplot as plt\n'), ((2064, 2100), 'sklearn.metrics.accuracy_score', 'accuracy_score', (['regions', 'predictions'], {}), '(regions, predictions)\n', (2078, 2100), False, 'from sklearn.metrics import accuracy_score, balanced_accuracy_score, confusion_matrix\n'), ((2108, 2153), 'sklearn.metrics.balanced_accuracy_score', 'balanced_accuracy_score', (['regions', 'predictions'], {}), '(regions, predictions)\n', (2131, 2153), False, 'from sklearn.metrics import 
accuracy_score, balanced_accuracy_score, confusion_matrix\n'), ((2319, 2371), 'sklearn.metrics.confusion_matrix', 'confusion_matrix', (['regions', 'predictions'], {'labels': 'names'}), '(regions, predictions, labels=names)\n', (2335, 2371), False, 'from sklearn.metrics import accuracy_score, balanced_accuracy_score, confusion_matrix\n'), ((2896, 2915), 'matplotlib.pyplot.imshow', 'plt.imshow', (['cm_copy'], {}), '(cm_copy)\n', (2906, 2915), True, 'import matplotlib.pyplot as plt\n'), ((3005, 3015), 'matplotlib.pyplot.show', 'plt.show', ([], {}), '()\n', (3013, 3015), True, 'import matplotlib.pyplot as plt\n'), ((2281, 2312), 'numpy.append', 'np.append', (['regions', 'predictions'], {}), '(regions, predictions)\n', (2290, 2312), True, 'import numpy as np\n'), ((2488, 2500), 'numpy.isnan', 'np.isnan', (['cm'], {}), '(cm)\n', (2496, 2500), True, 'import numpy as np\n'), ((2552, 2576), 'numpy.argmax', 'np.argmax', (['cm'], {'axis': 'None'}), '(cm, axis=None)\n', (2561, 2576), True, 'import numpy as np\n'), ((2470, 2482), 'numpy.isnan', 'np.isnan', (['cm'], {}), '(cm)\n', (2478, 2482), True, 'import numpy as np\n')]
|
#!/usr/bin/env python
import pytest
from pyxenon_snippets import directory_listing_recursive
def test_directory_listing_recursive():
directory_listing_recursive.run_example()
|
[
"pyxenon_snippets.directory_listing_recursive.run_example"
] |
[((141, 182), 'pyxenon_snippets.directory_listing_recursive.run_example', 'directory_listing_recursive.run_example', ([], {}), '()\n', (180, 182), False, 'from pyxenon_snippets import directory_listing_recursive\n')]
|
import sys
read = sys.stdin.buffer.read
readline = sys.stdin.buffer.readline
readlines = sys.stdin.buffer.readlines
sys.setrecursionlimit(10 ** 7)
from itertools import product
n, m = map(int, readline().split())
inf = float('inf')
dp = [inf] * (2 ** n)
dp[0] = 0
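# dp[mask] = minimum total cost of a selection of options whose combined 'Y'
# positions are exactly the bits set in `mask`; dp[0] = 0, everything else inf.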
for _ in range(m):
s, c = readline().rstrip().decode().split()
c = int(c)
bit = [0] * n
for i, ss in enumerate(s):
if ss == 'Y':
bit[i] = 1
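    # v is the n-bit binary expansion of i (most significant digit first), so i
    # enumerates every coverage mask; zipping v[::-1] with `bit` ORs this
    # option's coverage into i, giving the new mask `num`.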
for i, v in enumerate(product([0, 1], repeat=n)):
if dp[i] != inf:
num = 0
for index, (x, y) in enumerate(zip(v[::-1], bit)):
if x == 1 or y == 1:
num += 2 ** index
dp[num] = min(dp[num], dp[i] + c)
print(-1 if dp[-1] == inf else dp[-1])
|
[
"sys.setrecursionlimit",
"itertools.product"
] |
[((116, 146), 'sys.setrecursionlimit', 'sys.setrecursionlimit', (['(10 ** 7)'], {}), '(10 ** 7)\n', (137, 146), False, 'import sys\n'), ((468, 493), 'itertools.product', 'product', (['[0, 1]'], {'repeat': 'n'}), '([0, 1], repeat=n)\n', (475, 493), False, 'from itertools import product\n')]
|
# -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import warnings
from typing import Awaitable, Callable, Dict, Optional, Sequence, Tuple, Union
from google.api_core import gapic_v1
from google.api_core import grpc_helpers_async
from google.auth import credentials as ga_credentials # type: ignore
from google.auth.transport.grpc import SslCredentials # type: ignore
import grpc # type: ignore
from grpc.experimental import aio # type: ignore
from google.cloud.dlp_v2.types import dlp
from google.protobuf import empty_pb2 # type: ignore
from .base import DlpServiceTransport, DEFAULT_CLIENT_INFO
from .grpc import DlpServiceGrpcTransport
class DlpServiceGrpcAsyncIOTransport(DlpServiceTransport):
"""gRPC AsyncIO backend transport for DlpService.
The Cloud Data Loss Prevention (DLP) API is a service that
allows clients to detect the presence of Personally Identifiable
Information (PII) and other privacy-sensitive data in user-
supplied, unstructured data streams, like text blocks or images.
The service also includes methods for sensitive data redaction
and scheduling of data scans on Google Cloud Platform based data
sets.
To learn more about concepts and find how-to guides see
https://cloud.google.com/dlp/docs/.
This class defines the same methods as the primary client, so the
primary client can load the underlying transport implementation
and call it.
It sends protocol buffers over the wire using gRPC (which is built on
top of HTTP/2); the ``grpcio`` package must be installed.
"""
_grpc_channel: aio.Channel
_stubs: Dict[str, Callable] = {}
@classmethod
def create_channel(
cls,
host: str = "dlp.googleapis.com",
credentials: ga_credentials.Credentials = None,
credentials_file: Optional[str] = None,
scopes: Optional[Sequence[str]] = None,
quota_project_id: Optional[str] = None,
**kwargs,
) -> aio.Channel:
"""Create and return a gRPC AsyncIO channel object.
Args:
host (Optional[str]): The host for the channel to use.
credentials (Optional[~.Credentials]): The
authorization credentials to attach to requests. These
credentials identify this application to the service. If
none are specified, the client will attempt to ascertain
the credentials from the environment.
credentials_file (Optional[str]): A file with credentials that can
be loaded with :func:`google.auth.load_credentials_from_file`.
This argument is ignored if ``channel`` is provided.
            scopes (Optional[Sequence[str]]): An optional list of scopes needed for this
service. These are only used when credentials are not specified and
are passed to :func:`google.auth.default`.
quota_project_id (Optional[str]): An optional project to use for billing
and quota.
kwargs (Optional[dict]): Keyword arguments, which are passed to the
channel creation.
Returns:
aio.Channel: A gRPC AsyncIO channel object.
"""
return grpc_helpers_async.create_channel(
host,
credentials=credentials,
credentials_file=credentials_file,
quota_project_id=quota_project_id,
default_scopes=cls.AUTH_SCOPES,
scopes=scopes,
default_host=cls.DEFAULT_HOST,
**kwargs,
)
def __init__(
self,
*,
host: str = "dlp.googleapis.com",
credentials: ga_credentials.Credentials = None,
credentials_file: Optional[str] = None,
scopes: Optional[Sequence[str]] = None,
channel: aio.Channel = None,
api_mtls_endpoint: str = None,
client_cert_source: Callable[[], Tuple[bytes, bytes]] = None,
ssl_channel_credentials: grpc.ChannelCredentials = None,
client_cert_source_for_mtls: Callable[[], Tuple[bytes, bytes]] = None,
quota_project_id=None,
client_info: gapic_v1.client_info.ClientInfo = DEFAULT_CLIENT_INFO,
always_use_jwt_access: Optional[bool] = False,
) -> None:
"""Instantiate the transport.
Args:
host (Optional[str]):
The hostname to connect to.
credentials (Optional[google.auth.credentials.Credentials]): The
authorization credentials to attach to requests. These
credentials identify the application to the service; if none
are specified, the client will attempt to ascertain the
credentials from the environment.
This argument is ignored if ``channel`` is provided.
credentials_file (Optional[str]): A file with credentials that can
be loaded with :func:`google.auth.load_credentials_from_file`.
This argument is ignored if ``channel`` is provided.
            scopes (Optional[Sequence[str]]): An optional list of scopes needed for this
service. These are only used when credentials are not specified and
are passed to :func:`google.auth.default`.
channel (Optional[aio.Channel]): A ``Channel`` instance through
which to make calls.
api_mtls_endpoint (Optional[str]): Deprecated. The mutual TLS endpoint.
If provided, it overrides the ``host`` argument and tries to create
a mutual TLS channel with client SSL credentials from
``client_cert_source`` or application default SSL credentials.
client_cert_source (Optional[Callable[[], Tuple[bytes, bytes]]]):
Deprecated. A callback to provide client SSL certificate bytes and
private key bytes, both in PEM format. It is ignored if
``api_mtls_endpoint`` is None.
ssl_channel_credentials (grpc.ChannelCredentials): SSL credentials
for the grpc channel. It is ignored if ``channel`` is provided.
client_cert_source_for_mtls (Optional[Callable[[], Tuple[bytes, bytes]]]):
A callback to provide client certificate bytes and private key bytes,
both in PEM format. It is used to configure a mutual TLS channel. It is
ignored if ``channel`` or ``ssl_channel_credentials`` is provided.
quota_project_id (Optional[str]): An optional project to use for billing
and quota.
client_info (google.api_core.gapic_v1.client_info.ClientInfo):
The client info used to send a user-agent string along with
API requests. If ``None``, then default info will be used.
Generally, you only need to set this if you're developing
your own client library.
always_use_jwt_access (Optional[bool]): Whether self signed JWT should
be used for service account credentials.
Raises:
google.auth.exceptions.MutualTlsChannelError: If mutual TLS transport
creation failed for any reason.
google.api_core.exceptions.DuplicateCredentialArgs: If both ``credentials``
and ``credentials_file`` are passed.
"""
self._grpc_channel = None
self._ssl_channel_credentials = ssl_channel_credentials
self._stubs: Dict[str, Callable] = {}
if api_mtls_endpoint:
warnings.warn("api_mtls_endpoint is deprecated", DeprecationWarning)
if client_cert_source:
warnings.warn("client_cert_source is deprecated", DeprecationWarning)
if channel:
# Ignore credentials if a channel was passed.
credentials = False
# If a channel was explicitly provided, set it.
self._grpc_channel = channel
self._ssl_channel_credentials = None
else:
if api_mtls_endpoint:
host = api_mtls_endpoint
# Create SSL credentials with client_cert_source or application
# default SSL credentials.
if client_cert_source:
cert, key = client_cert_source()
self._ssl_channel_credentials = grpc.ssl_channel_credentials(
certificate_chain=cert, private_key=key
)
else:
self._ssl_channel_credentials = SslCredentials().ssl_credentials
else:
if client_cert_source_for_mtls and not ssl_channel_credentials:
cert, key = client_cert_source_for_mtls()
self._ssl_channel_credentials = grpc.ssl_channel_credentials(
certificate_chain=cert, private_key=key
)
# The base transport sets the host, credentials and scopes
super().__init__(
host=host,
credentials=credentials,
credentials_file=credentials_file,
scopes=scopes,
quota_project_id=quota_project_id,
client_info=client_info,
always_use_jwt_access=always_use_jwt_access,
)
if not self._grpc_channel:
self._grpc_channel = type(self).create_channel(
self._host,
credentials=self._credentials,
credentials_file=credentials_file,
scopes=self._scopes,
ssl_credentials=self._ssl_channel_credentials,
quota_project_id=quota_project_id,
options=[
("grpc.max_send_message_length", -1),
("grpc.max_receive_message_length", -1),
],
)
# Wrap messages. This must be done after self._grpc_channel exists
self._prep_wrapped_messages(client_info)
@property
def grpc_channel(self) -> aio.Channel:
"""Create the channel designed to connect to this service.
This property caches on the instance; repeated calls return
the same channel.
"""
# Return the channel from cache.
return self._grpc_channel
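    # --- Hedged usage note (not part of the generated transport) ---
    # This transport is normally wired up by the async client rather than used
    # directly; very roughly (names outside this module are assumptions):
    #
    #   transport = DlpServiceGrpcAsyncIOTransport(host="dlp.googleapis.com")
    #   client = DlpServiceAsyncClient(transport=transport)
    #   response = await client.inspect_content(request=request)
    #
    # Each RPC property below builds its gRPC stub lazily on first access and
    # caches it in self._stubs, so repeated calls reuse one channel and stub.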
@property
def inspect_content(
self,
) -> Callable[[dlp.InspectContentRequest], Awaitable[dlp.InspectContentResponse]]:
r"""Return a callable for the inspect content method over gRPC.
Finds potentially sensitive info in content.
This method has limits on input size, processing time,
and output size.
When no InfoTypes or CustomInfoTypes are specified in
this request, the system will automatically choose what
detectors to run. By default this may be all types, but
may change over time as detectors are updated.
        For how-to guides, see
        https://cloud.google.com/dlp/docs/inspecting-images and
        https://cloud.google.com/dlp/docs/inspecting-text.
Returns:
Callable[[~.InspectContentRequest],
Awaitable[~.InspectContentResponse]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "inspect_content" not in self._stubs:
self._stubs["inspect_content"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/InspectContent",
request_serializer=dlp.InspectContentRequest.serialize,
response_deserializer=dlp.InspectContentResponse.deserialize,
)
return self._stubs["inspect_content"]
@property
def redact_image(
self,
) -> Callable[[dlp.RedactImageRequest], Awaitable[dlp.RedactImageResponse]]:
r"""Return a callable for the redact image method over gRPC.
Redacts potentially sensitive info from an image.
This method has limits on input size, processing time,
and output size. See
https://cloud.google.com/dlp/docs/redacting-sensitive-
data-images to learn more.
When no InfoTypes or CustomInfoTypes are specified in
this request, the system will automatically choose what
detectors to run. By default this may be all types, but
may change over time as detectors are updated.
Returns:
Callable[[~.RedactImageRequest],
Awaitable[~.RedactImageResponse]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "redact_image" not in self._stubs:
self._stubs["redact_image"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/RedactImage",
request_serializer=dlp.RedactImageRequest.serialize,
response_deserializer=dlp.RedactImageResponse.deserialize,
)
return self._stubs["redact_image"]
@property
def deidentify_content(
self,
) -> Callable[
[dlp.DeidentifyContentRequest], Awaitable[dlp.DeidentifyContentResponse]
]:
r"""Return a callable for the deidentify content method over gRPC.
De-identifies potentially sensitive info from a
ContentItem. This method has limits on input size and
output size. See
https://cloud.google.com/dlp/docs/deidentify-sensitive-
data to learn more.
When no InfoTypes or CustomInfoTypes are specified in
this request, the system will automatically choose what
detectors to run. By default this may be all types, but
may change over time as detectors are updated.
Returns:
Callable[[~.DeidentifyContentRequest],
Awaitable[~.DeidentifyContentResponse]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "deidentify_content" not in self._stubs:
self._stubs["deidentify_content"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/DeidentifyContent",
request_serializer=dlp.DeidentifyContentRequest.serialize,
response_deserializer=dlp.DeidentifyContentResponse.deserialize,
)
return self._stubs["deidentify_content"]
@property
def reidentify_content(
self,
) -> Callable[
[dlp.ReidentifyContentRequest], Awaitable[dlp.ReidentifyContentResponse]
]:
r"""Return a callable for the reidentify content method over gRPC.
Re-identifies content that has been de-identified. See
https://cloud.google.com/dlp/docs/pseudonymization#re-identification_in_free_text_code_example
to learn more.
Returns:
Callable[[~.ReidentifyContentRequest],
Awaitable[~.ReidentifyContentResponse]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "reidentify_content" not in self._stubs:
self._stubs["reidentify_content"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/ReidentifyContent",
request_serializer=dlp.ReidentifyContentRequest.serialize,
response_deserializer=dlp.ReidentifyContentResponse.deserialize,
)
return self._stubs["reidentify_content"]
@property
def list_info_types(
self,
) -> Callable[[dlp.ListInfoTypesRequest], Awaitable[dlp.ListInfoTypesResponse]]:
r"""Return a callable for the list info types method over gRPC.
Returns a list of the sensitive information types
that the DLP API supports. See
https://cloud.google.com/dlp/docs/infotypes-reference to
learn more.
Returns:
Callable[[~.ListInfoTypesRequest],
Awaitable[~.ListInfoTypesResponse]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "list_info_types" not in self._stubs:
self._stubs["list_info_types"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/ListInfoTypes",
request_serializer=dlp.ListInfoTypesRequest.serialize,
response_deserializer=dlp.ListInfoTypesResponse.deserialize,
)
return self._stubs["list_info_types"]
@property
def create_inspect_template(
self,
) -> Callable[[dlp.CreateInspectTemplateRequest], Awaitable[dlp.InspectTemplate]]:
r"""Return a callable for the create inspect template method over gRPC.
Creates an InspectTemplate for re-using frequently
used configuration for inspecting content, images, and
storage. See https://cloud.google.com/dlp/docs/creating-
templates to learn more.
Returns:
Callable[[~.CreateInspectTemplateRequest],
Awaitable[~.InspectTemplate]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "create_inspect_template" not in self._stubs:
self._stubs["create_inspect_template"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/CreateInspectTemplate",
request_serializer=dlp.CreateInspectTemplateRequest.serialize,
response_deserializer=dlp.InspectTemplate.deserialize,
)
return self._stubs["create_inspect_template"]
@property
def update_inspect_template(
self,
) -> Callable[[dlp.UpdateInspectTemplateRequest], Awaitable[dlp.InspectTemplate]]:
r"""Return a callable for the update inspect template method over gRPC.
Updates the InspectTemplate.
See https://cloud.google.com/dlp/docs/creating-templates
to learn more.
Returns:
Callable[[~.UpdateInspectTemplateRequest],
Awaitable[~.InspectTemplate]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "update_inspect_template" not in self._stubs:
self._stubs["update_inspect_template"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/UpdateInspectTemplate",
request_serializer=dlp.UpdateInspectTemplateRequest.serialize,
response_deserializer=dlp.InspectTemplate.deserialize,
)
return self._stubs["update_inspect_template"]
@property
def get_inspect_template(
self,
) -> Callable[[dlp.GetInspectTemplateRequest], Awaitable[dlp.InspectTemplate]]:
r"""Return a callable for the get inspect template method over gRPC.
Gets an InspectTemplate.
See https://cloud.google.com/dlp/docs/creating-templates
to learn more.
Returns:
Callable[[~.GetInspectTemplateRequest],
Awaitable[~.InspectTemplate]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "get_inspect_template" not in self._stubs:
self._stubs["get_inspect_template"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/GetInspectTemplate",
request_serializer=dlp.GetInspectTemplateRequest.serialize,
response_deserializer=dlp.InspectTemplate.deserialize,
)
return self._stubs["get_inspect_template"]
@property
def list_inspect_templates(
self,
) -> Callable[
[dlp.ListInspectTemplatesRequest], Awaitable[dlp.ListInspectTemplatesResponse]
]:
r"""Return a callable for the list inspect templates method over gRPC.
Lists InspectTemplates.
See https://cloud.google.com/dlp/docs/creating-templates
to learn more.
Returns:
Callable[[~.ListInspectTemplatesRequest],
Awaitable[~.ListInspectTemplatesResponse]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "list_inspect_templates" not in self._stubs:
self._stubs["list_inspect_templates"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/ListInspectTemplates",
request_serializer=dlp.ListInspectTemplatesRequest.serialize,
response_deserializer=dlp.ListInspectTemplatesResponse.deserialize,
)
return self._stubs["list_inspect_templates"]
@property
def delete_inspect_template(
self,
) -> Callable[[dlp.DeleteInspectTemplateRequest], Awaitable[empty_pb2.Empty]]:
r"""Return a callable for the delete inspect template method over gRPC.
Deletes an InspectTemplate.
See https://cloud.google.com/dlp/docs/creating-templates
to learn more.
Returns:
Callable[[~.DeleteInspectTemplateRequest],
Awaitable[~.Empty]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "delete_inspect_template" not in self._stubs:
self._stubs["delete_inspect_template"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/DeleteInspectTemplate",
request_serializer=dlp.DeleteInspectTemplateRequest.serialize,
response_deserializer=empty_pb2.Empty.FromString,
)
return self._stubs["delete_inspect_template"]
@property
def create_deidentify_template(
self,
) -> Callable[
[dlp.CreateDeidentifyTemplateRequest], Awaitable[dlp.DeidentifyTemplate]
]:
r"""Return a callable for the create deidentify template method over gRPC.
Creates a DeidentifyTemplate for re-using frequently
used configuration for de-identifying content, images,
and storage. See
https://cloud.google.com/dlp/docs/creating-templates-
deid to learn more.
Returns:
Callable[[~.CreateDeidentifyTemplateRequest],
Awaitable[~.DeidentifyTemplate]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "create_deidentify_template" not in self._stubs:
self._stubs["create_deidentify_template"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/CreateDeidentifyTemplate",
request_serializer=dlp.CreateDeidentifyTemplateRequest.serialize,
response_deserializer=dlp.DeidentifyTemplate.deserialize,
)
return self._stubs["create_deidentify_template"]
@property
def update_deidentify_template(
self,
) -> Callable[
[dlp.UpdateDeidentifyTemplateRequest], Awaitable[dlp.DeidentifyTemplate]
]:
r"""Return a callable for the update deidentify template method over gRPC.
Updates the DeidentifyTemplate.
See https://cloud.google.com/dlp/docs/creating-
templates-deid to learn more.
Returns:
Callable[[~.UpdateDeidentifyTemplateRequest],
Awaitable[~.DeidentifyTemplate]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "update_deidentify_template" not in self._stubs:
self._stubs["update_deidentify_template"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/UpdateDeidentifyTemplate",
request_serializer=dlp.UpdateDeidentifyTemplateRequest.serialize,
response_deserializer=dlp.DeidentifyTemplate.deserialize,
)
return self._stubs["update_deidentify_template"]
@property
def get_deidentify_template(
self,
) -> Callable[
[dlp.GetDeidentifyTemplateRequest], Awaitable[dlp.DeidentifyTemplate]
]:
r"""Return a callable for the get deidentify template method over gRPC.
Gets a DeidentifyTemplate.
See https://cloud.google.com/dlp/docs/creating-
templates-deid to learn more.
Returns:
Callable[[~.GetDeidentifyTemplateRequest],
Awaitable[~.DeidentifyTemplate]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "get_deidentify_template" not in self._stubs:
self._stubs["get_deidentify_template"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/GetDeidentifyTemplate",
request_serializer=dlp.GetDeidentifyTemplateRequest.serialize,
response_deserializer=dlp.DeidentifyTemplate.deserialize,
)
return self._stubs["get_deidentify_template"]
@property
def list_deidentify_templates(
self,
) -> Callable[
[dlp.ListDeidentifyTemplatesRequest],
Awaitable[dlp.ListDeidentifyTemplatesResponse],
]:
r"""Return a callable for the list deidentify templates method over gRPC.
Lists DeidentifyTemplates.
See https://cloud.google.com/dlp/docs/creating-
templates-deid to learn more.
Returns:
Callable[[~.ListDeidentifyTemplatesRequest],
Awaitable[~.ListDeidentifyTemplatesResponse]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "list_deidentify_templates" not in self._stubs:
self._stubs["list_deidentify_templates"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/ListDeidentifyTemplates",
request_serializer=dlp.ListDeidentifyTemplatesRequest.serialize,
response_deserializer=dlp.ListDeidentifyTemplatesResponse.deserialize,
)
return self._stubs["list_deidentify_templates"]
@property
def delete_deidentify_template(
self,
) -> Callable[[dlp.DeleteDeidentifyTemplateRequest], Awaitable[empty_pb2.Empty]]:
r"""Return a callable for the delete deidentify template method over gRPC.
Deletes a DeidentifyTemplate.
See https://cloud.google.com/dlp/docs/creating-
templates-deid to learn more.
Returns:
Callable[[~.DeleteDeidentifyTemplateRequest],
Awaitable[~.Empty]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "delete_deidentify_template" not in self._stubs:
self._stubs["delete_deidentify_template"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/DeleteDeidentifyTemplate",
request_serializer=dlp.DeleteDeidentifyTemplateRequest.serialize,
response_deserializer=empty_pb2.Empty.FromString,
)
return self._stubs["delete_deidentify_template"]
@property
def create_job_trigger(
self,
) -> Callable[[dlp.CreateJobTriggerRequest], Awaitable[dlp.JobTrigger]]:
r"""Return a callable for the create job trigger method over gRPC.
Creates a job trigger to run DLP actions such as
scanning storage for sensitive information on a set
schedule. See
https://cloud.google.com/dlp/docs/creating-job-triggers
to learn more.
Returns:
Callable[[~.CreateJobTriggerRequest],
Awaitable[~.JobTrigger]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "create_job_trigger" not in self._stubs:
self._stubs["create_job_trigger"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/CreateJobTrigger",
request_serializer=dlp.CreateJobTriggerRequest.serialize,
response_deserializer=dlp.JobTrigger.deserialize,
)
return self._stubs["create_job_trigger"]
@property
def update_job_trigger(
self,
) -> Callable[[dlp.UpdateJobTriggerRequest], Awaitable[dlp.JobTrigger]]:
r"""Return a callable for the update job trigger method over gRPC.
Updates a job trigger.
See https://cloud.google.com/dlp/docs/creating-job-
triggers to learn more.
Returns:
Callable[[~.UpdateJobTriggerRequest],
Awaitable[~.JobTrigger]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "update_job_trigger" not in self._stubs:
self._stubs["update_job_trigger"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/UpdateJobTrigger",
request_serializer=dlp.UpdateJobTriggerRequest.serialize,
response_deserializer=dlp.JobTrigger.deserialize,
)
return self._stubs["update_job_trigger"]
@property
def hybrid_inspect_job_trigger(
self,
) -> Callable[
[dlp.HybridInspectJobTriggerRequest], Awaitable[dlp.HybridInspectResponse]
]:
r"""Return a callable for the hybrid inspect job trigger method over gRPC.
Inspect hybrid content and store findings to a
trigger. The inspection will be processed
asynchronously. To review the findings monitor the jobs
within the trigger.
Returns:
Callable[[~.HybridInspectJobTriggerRequest],
Awaitable[~.HybridInspectResponse]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "hybrid_inspect_job_trigger" not in self._stubs:
self._stubs["hybrid_inspect_job_trigger"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/HybridInspectJobTrigger",
request_serializer=dlp.HybridInspectJobTriggerRequest.serialize,
response_deserializer=dlp.HybridInspectResponse.deserialize,
)
return self._stubs["hybrid_inspect_job_trigger"]
@property
def get_job_trigger(
self,
) -> Callable[[dlp.GetJobTriggerRequest], Awaitable[dlp.JobTrigger]]:
r"""Return a callable for the get job trigger method over gRPC.
Gets a job trigger.
See https://cloud.google.com/dlp/docs/creating-job-
triggers to learn more.
Returns:
Callable[[~.GetJobTriggerRequest],
Awaitable[~.JobTrigger]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "get_job_trigger" not in self._stubs:
self._stubs["get_job_trigger"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/GetJobTrigger",
request_serializer=dlp.GetJobTriggerRequest.serialize,
response_deserializer=dlp.JobTrigger.deserialize,
)
return self._stubs["get_job_trigger"]
@property
def list_job_triggers(
self,
) -> Callable[[dlp.ListJobTriggersRequest], Awaitable[dlp.ListJobTriggersResponse]]:
r"""Return a callable for the list job triggers method over gRPC.
Lists job triggers.
See https://cloud.google.com/dlp/docs/creating-job-
triggers to learn more.
Returns:
Callable[[~.ListJobTriggersRequest],
Awaitable[~.ListJobTriggersResponse]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "list_job_triggers" not in self._stubs:
self._stubs["list_job_triggers"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/ListJobTriggers",
request_serializer=dlp.ListJobTriggersRequest.serialize,
response_deserializer=dlp.ListJobTriggersResponse.deserialize,
)
return self._stubs["list_job_triggers"]
@property
def delete_job_trigger(
self,
) -> Callable[[dlp.DeleteJobTriggerRequest], Awaitable[empty_pb2.Empty]]:
r"""Return a callable for the delete job trigger method over gRPC.
Deletes a job trigger.
See https://cloud.google.com/dlp/docs/creating-job-
triggers to learn more.
Returns:
Callable[[~.DeleteJobTriggerRequest],
Awaitable[~.Empty]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "delete_job_trigger" not in self._stubs:
self._stubs["delete_job_trigger"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/DeleteJobTrigger",
request_serializer=dlp.DeleteJobTriggerRequest.serialize,
response_deserializer=empty_pb2.Empty.FromString,
)
return self._stubs["delete_job_trigger"]
@property
def activate_job_trigger(
self,
) -> Callable[[dlp.ActivateJobTriggerRequest], Awaitable[dlp.DlpJob]]:
r"""Return a callable for the activate job trigger method over gRPC.
        Activate a job trigger. Causes the immediate execution
        of a trigger instead of waiting on the trigger event to
        occur.
Returns:
Callable[[~.ActivateJobTriggerRequest],
Awaitable[~.DlpJob]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "activate_job_trigger" not in self._stubs:
self._stubs["activate_job_trigger"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/ActivateJobTrigger",
request_serializer=dlp.ActivateJobTriggerRequest.serialize,
response_deserializer=dlp.DlpJob.deserialize,
)
return self._stubs["activate_job_trigger"]
@property
def create_dlp_job(
self,
) -> Callable[[dlp.CreateDlpJobRequest], Awaitable[dlp.DlpJob]]:
r"""Return a callable for the create dlp job method over gRPC.
Creates a new job to inspect storage or calculate
risk metrics. See
https://cloud.google.com/dlp/docs/inspecting-storage and
https://cloud.google.com/dlp/docs/compute-risk-analysis
to learn more.
When no InfoTypes or CustomInfoTypes are specified in
inspect jobs, the system will automatically choose what
detectors to run. By default this may be all types, but
may change over time as detectors are updated.
Returns:
Callable[[~.CreateDlpJobRequest],
Awaitable[~.DlpJob]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "create_dlp_job" not in self._stubs:
self._stubs["create_dlp_job"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/CreateDlpJob",
request_serializer=dlp.CreateDlpJobRequest.serialize,
response_deserializer=dlp.DlpJob.deserialize,
)
return self._stubs["create_dlp_job"]
@property
def list_dlp_jobs(
self,
) -> Callable[[dlp.ListDlpJobsRequest], Awaitable[dlp.ListDlpJobsResponse]]:
r"""Return a callable for the list dlp jobs method over gRPC.
Lists DlpJobs that match the specified filter in the
request. See
https://cloud.google.com/dlp/docs/inspecting-storage and
https://cloud.google.com/dlp/docs/compute-risk-analysis
to learn more.
Returns:
Callable[[~.ListDlpJobsRequest],
Awaitable[~.ListDlpJobsResponse]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "list_dlp_jobs" not in self._stubs:
self._stubs["list_dlp_jobs"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/ListDlpJobs",
request_serializer=dlp.ListDlpJobsRequest.serialize,
response_deserializer=dlp.ListDlpJobsResponse.deserialize,
)
return self._stubs["list_dlp_jobs"]
@property
def get_dlp_job(self) -> Callable[[dlp.GetDlpJobRequest], Awaitable[dlp.DlpJob]]:
r"""Return a callable for the get dlp job method over gRPC.
Gets the latest state of a long-running DlpJob.
See https://cloud.google.com/dlp/docs/inspecting-storage
and https://cloud.google.com/dlp/docs/compute-risk-
analysis to learn more.
Returns:
Callable[[~.GetDlpJobRequest],
Awaitable[~.DlpJob]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "get_dlp_job" not in self._stubs:
self._stubs["get_dlp_job"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/GetDlpJob",
request_serializer=dlp.GetDlpJobRequest.serialize,
response_deserializer=dlp.DlpJob.deserialize,
)
return self._stubs["get_dlp_job"]
@property
def delete_dlp_job(
self,
) -> Callable[[dlp.DeleteDlpJobRequest], Awaitable[empty_pb2.Empty]]:
r"""Return a callable for the delete dlp job method over gRPC.
Deletes a long-running DlpJob. This method indicates
that the client is no longer interested in the DlpJob
result. The job will be cancelled if possible.
See https://cloud.google.com/dlp/docs/inspecting-storage
and https://cloud.google.com/dlp/docs/compute-risk-
analysis to learn more.
Returns:
Callable[[~.DeleteDlpJobRequest],
Awaitable[~.Empty]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "delete_dlp_job" not in self._stubs:
self._stubs["delete_dlp_job"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/DeleteDlpJob",
request_serializer=dlp.DeleteDlpJobRequest.serialize,
response_deserializer=empty_pb2.Empty.FromString,
)
return self._stubs["delete_dlp_job"]
@property
def cancel_dlp_job(
self,
) -> Callable[[dlp.CancelDlpJobRequest], Awaitable[empty_pb2.Empty]]:
r"""Return a callable for the cancel dlp job method over gRPC.
Starts asynchronous cancellation on a long-running
DlpJob. The server makes a best effort to cancel the
DlpJob, but success is not guaranteed.
See https://cloud.google.com/dlp/docs/inspecting-storage
and https://cloud.google.com/dlp/docs/compute-risk-
analysis to learn more.
Returns:
Callable[[~.CancelDlpJobRequest],
Awaitable[~.Empty]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "cancel_dlp_job" not in self._stubs:
self._stubs["cancel_dlp_job"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/CancelDlpJob",
request_serializer=dlp.CancelDlpJobRequest.serialize,
response_deserializer=empty_pb2.Empty.FromString,
)
return self._stubs["cancel_dlp_job"]
@property
def create_stored_info_type(
self,
) -> Callable[[dlp.CreateStoredInfoTypeRequest], Awaitable[dlp.StoredInfoType]]:
r"""Return a callable for the create stored info type method over gRPC.
Creates a pre-built stored infoType to be used for
inspection. See
https://cloud.google.com/dlp/docs/creating-stored-
infotypes to learn more.
Returns:
Callable[[~.CreateStoredInfoTypeRequest],
Awaitable[~.StoredInfoType]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "create_stored_info_type" not in self._stubs:
self._stubs["create_stored_info_type"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/CreateStoredInfoType",
request_serializer=dlp.CreateStoredInfoTypeRequest.serialize,
response_deserializer=dlp.StoredInfoType.deserialize,
)
return self._stubs["create_stored_info_type"]
@property
def update_stored_info_type(
self,
) -> Callable[[dlp.UpdateStoredInfoTypeRequest], Awaitable[dlp.StoredInfoType]]:
r"""Return a callable for the update stored info type method over gRPC.
Updates the stored infoType by creating a new
version. The existing version will continue to be used
until the new version is ready. See
https://cloud.google.com/dlp/docs/creating-stored-
infotypes to learn more.
Returns:
Callable[[~.UpdateStoredInfoTypeRequest],
Awaitable[~.StoredInfoType]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "update_stored_info_type" not in self._stubs:
self._stubs["update_stored_info_type"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/UpdateStoredInfoType",
request_serializer=dlp.UpdateStoredInfoTypeRequest.serialize,
response_deserializer=dlp.StoredInfoType.deserialize,
)
return self._stubs["update_stored_info_type"]
@property
def get_stored_info_type(
self,
) -> Callable[[dlp.GetStoredInfoTypeRequest], Awaitable[dlp.StoredInfoType]]:
r"""Return a callable for the get stored info type method over gRPC.
Gets a stored infoType.
See https://cloud.google.com/dlp/docs/creating-stored-
infotypes to learn more.
Returns:
Callable[[~.GetStoredInfoTypeRequest],
Awaitable[~.StoredInfoType]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "get_stored_info_type" not in self._stubs:
self._stubs["get_stored_info_type"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/GetStoredInfoType",
request_serializer=dlp.GetStoredInfoTypeRequest.serialize,
response_deserializer=dlp.StoredInfoType.deserialize,
)
return self._stubs["get_stored_info_type"]
@property
def list_stored_info_types(
self,
) -> Callable[
[dlp.ListStoredInfoTypesRequest], Awaitable[dlp.ListStoredInfoTypesResponse]
]:
r"""Return a callable for the list stored info types method over gRPC.
Lists stored infoTypes.
See https://cloud.google.com/dlp/docs/creating-stored-
infotypes to learn more.
Returns:
Callable[[~.ListStoredInfoTypesRequest],
Awaitable[~.ListStoredInfoTypesResponse]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "list_stored_info_types" not in self._stubs:
self._stubs["list_stored_info_types"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/ListStoredInfoTypes",
request_serializer=dlp.ListStoredInfoTypesRequest.serialize,
response_deserializer=dlp.ListStoredInfoTypesResponse.deserialize,
)
return self._stubs["list_stored_info_types"]
@property
def delete_stored_info_type(
self,
) -> Callable[[dlp.DeleteStoredInfoTypeRequest], Awaitable[empty_pb2.Empty]]:
r"""Return a callable for the delete stored info type method over gRPC.
Deletes a stored infoType.
See https://cloud.google.com/dlp/docs/creating-stored-
infotypes to learn more.
Returns:
Callable[[~.DeleteStoredInfoTypeRequest],
Awaitable[~.Empty]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "delete_stored_info_type" not in self._stubs:
self._stubs["delete_stored_info_type"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/DeleteStoredInfoType",
request_serializer=dlp.DeleteStoredInfoTypeRequest.serialize,
response_deserializer=empty_pb2.Empty.FromString,
)
return self._stubs["delete_stored_info_type"]
@property
def hybrid_inspect_dlp_job(
self,
) -> Callable[
[dlp.HybridInspectDlpJobRequest], Awaitable[dlp.HybridInspectResponse]
]:
r"""Return a callable for the hybrid inspect dlp job method over gRPC.
Inspect hybrid content and store findings to a job.
To review the findings, inspect the job. Inspection will
occur asynchronously.
Returns:
Callable[[~.HybridInspectDlpJobRequest],
Awaitable[~.HybridInspectResponse]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "hybrid_inspect_dlp_job" not in self._stubs:
self._stubs["hybrid_inspect_dlp_job"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/HybridInspectDlpJob",
request_serializer=dlp.HybridInspectDlpJobRequest.serialize,
response_deserializer=dlp.HybridInspectResponse.deserialize,
)
return self._stubs["hybrid_inspect_dlp_job"]
@property
def finish_dlp_job(
self,
) -> Callable[[dlp.FinishDlpJobRequest], Awaitable[empty_pb2.Empty]]:
r"""Return a callable for the finish dlp job method over gRPC.
Finish a running hybrid DlpJob. Triggers the
finalization steps and running of any enabled actions
that have not yet run.
Returns:
Callable[[~.FinishDlpJobRequest],
Awaitable[~.Empty]]:
A function that, when called, will call the underlying RPC
on the server.
"""
# Generate a "stub function" on-the-fly which will actually make
# the request.
# gRPC handles serialization and deserialization, so we just need
# to pass in the functions for each.
if "finish_dlp_job" not in self._stubs:
self._stubs["finish_dlp_job"] = self.grpc_channel.unary_unary(
"/google.privacy.dlp.v2.DlpService/FinishDlpJob",
request_serializer=dlp.FinishDlpJobRequest.serialize,
response_deserializer=empty_pb2.Empty.FromString,
)
return self._stubs["finish_dlp_job"]
def close(self):
return self.grpc_channel.close()
__all__ = ("DlpServiceGrpcAsyncIOTransport",)
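# A minimal usage sketch of the transport above, assuming the module's own
# imports (the `dlp` types package) are in scope. `transport` and `parent`
# are placeholder names for illustration only; in practice the higher-level
# DlpServiceAsyncClient wraps this transport for you.
async def _example_create_dlp_job(transport, parent: str):
    # Each stub property returns a callable that issues the unary-unary RPC
    # and yields an awaitable response message.
    request = dlp.CreateDlpJobRequest(parent=parent)
    return await transport.create_dlp_job(request)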
|
[
"warnings.warn",
"google.api_core.grpc_helpers_async.create_channel",
"grpc.ssl_channel_credentials",
"google.auth.transport.grpc.SslCredentials"
] |
[((3783, 4018), 'google.api_core.grpc_helpers_async.create_channel', 'grpc_helpers_async.create_channel', (['host'], {'credentials': 'credentials', 'credentials_file': 'credentials_file', 'quota_project_id': 'quota_project_id', 'default_scopes': 'cls.AUTH_SCOPES', 'scopes': 'scopes', 'default_host': 'cls.DEFAULT_HOST'}), '(host, credentials=credentials,\n credentials_file=credentials_file, quota_project_id=quota_project_id,\n default_scopes=cls.AUTH_SCOPES, scopes=scopes, default_host=cls.\n DEFAULT_HOST, **kwargs)\n', (3816, 4018), False, 'from google.api_core import grpc_helpers_async\n'), ((8111, 8179), 'warnings.warn', 'warnings.warn', (['"""api_mtls_endpoint is deprecated"""', 'DeprecationWarning'], {}), "('api_mtls_endpoint is deprecated', DeprecationWarning)\n", (8124, 8179), False, 'import warnings\n'), ((8223, 8292), 'warnings.warn', 'warnings.warn', (['"""client_cert_source is deprecated"""', 'DeprecationWarning'], {}), "('client_cert_source is deprecated', DeprecationWarning)\n", (8236, 8292), False, 'import warnings\n'), ((8911, 8980), 'grpc.ssl_channel_credentials', 'grpc.ssl_channel_credentials', ([], {'certificate_chain': 'cert', 'private_key': 'key'}), '(certificate_chain=cert, private_key=key)\n', (8939, 8980), False, 'import grpc\n'), ((9347, 9416), 'grpc.ssl_channel_credentials', 'grpc.ssl_channel_credentials', ([], {'certificate_chain': 'cert', 'private_key': 'key'}), '(certificate_chain=cert, private_key=key)\n', (9375, 9416), False, 'import grpc\n'), ((9101, 9117), 'google.auth.transport.grpc.SslCredentials', 'SslCredentials', ([], {}), '()\n', (9115, 9117), False, 'from google.auth.transport.grpc import SslCredentials\n')]
|
import struct
import numpy as np
import pandas as pd
df_train = pd.read_csv('../data/train_data.csv')
df_valid = pd.read_csv('../data/valid_data.csv')
df_test = pd.read_csv('../data/test_data.csv')
with open('result.dat', 'rb') as f:
N, = struct.unpack('i', f.read(4))
no_dims, = struct.unpack('i', f.read(4))
print(N, no_dims)
mappedX = struct.unpack('{}d'.format(N * no_dims), f.read(8 * N * no_dims))
mappedX = np.array(mappedX).reshape((N, no_dims))
print(mappedX)
tsne_train = mappedX[:len(df_train)]
tsne_valid = mappedX[len(df_train):len(df_train)+len(df_valid)]
tsne_test = mappedX[len(df_train)+len(df_valid):]
assert(len(tsne_train) == len(df_train))
assert(len(tsne_valid) == len(df_valid))
assert(len(tsne_test) == len(df_test))
save_path = '../data/tsne_{}d_30p.npz'.format(no_dims)
np.savez(save_path, train=tsne_train, valid=tsne_valid, test=tsne_test)
print('Saved: {}'.format(save_path))
# landmarks, = struct.unpack('{}i'.format(N), f.read(4 * N))
# costs, = struct.unpack('{}d'.format(N), f.read(8 * N))
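# A minimal round-trip check of the file written above (illustrative only;
# it assumes `save_path` still points at the .npz written by np.savez).
with np.load(save_path) as npz:
    assert npz['train'].shape == (len(df_train), no_dims)
    assert npz['valid'].shape == (len(df_valid), no_dims)
    assert npz['test'].shape == (len(df_test), no_dims)
    print('Round-trip OK:', {k: npz[k].shape for k in npz.files})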
|
[
"numpy.array",
"numpy.savez",
"pandas.read_csv"
] |
[((65, 102), 'pandas.read_csv', 'pd.read_csv', (['"""../data/train_data.csv"""'], {}), "('../data/train_data.csv')\n", (76, 102), True, 'import pandas as pd\n'), ((114, 151), 'pandas.read_csv', 'pd.read_csv', (['"""../data/valid_data.csv"""'], {}), "('../data/valid_data.csv')\n", (125, 151), True, 'import pandas as pd\n'), ((162, 198), 'pandas.read_csv', 'pd.read_csv', (['"""../data/test_data.csv"""'], {}), "('../data/test_data.csv')\n", (173, 198), True, 'import pandas as pd\n'), ((858, 929), 'numpy.savez', 'np.savez', (['save_path'], {'train': 'tsne_train', 'valid': 'tsne_valid', 'test': 'tsne_test'}), '(save_path, train=tsne_train, valid=tsne_valid, test=tsne_test)\n', (866, 929), True, 'import numpy as np\n'), ((437, 454), 'numpy.array', 'np.array', (['mappedX'], {}), '(mappedX)\n', (445, 454), True, 'import numpy as np\n')]
|
from django.conf.urls.defaults import patterns, url, include
# Uncomment the next two lines to enable the admin:
from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns(
'',
(r'^log/', include('requestlog.urls')),
(r'^admin/', include(admin.site.urls)),
# Pass anything that doesn't match on to the mrs app
url(r'^',
include('moca.mrs.urls')),
)
from django.conf import settings
if settings.DEBUG:
urlpatterns += patterns(
'',
(r'^static/(?P<path>.*)$',
'django.views.static.serve',
{'document_root': settings.MEDIA_ROOT}),
)
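# For reference, a hedged sketch of the equivalent routing on Django >= 2.0,
# where patterns() no longer exists (names below mirror the ones above):
#
#   from django.urls import include, path, re_path
#   urlpatterns = [
#       path('log/', include('requestlog.urls')),
#       path('admin/', admin.site.urls),
#       re_path(r'^', include('moca.mrs.urls')),
#   ]
#   if settings.DEBUG:
#       from django.conf.urls.static import static
#       urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)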
|
[
"django.conf.urls.defaults.patterns",
"django.conf.urls.defaults.include",
"django.contrib.admin.autodiscover"
] |
[((147, 167), 'django.contrib.admin.autodiscover', 'admin.autodiscover', ([], {}), '()\n', (165, 167), False, 'from django.contrib import admin\n'), ((472, 585), 'django.conf.urls.defaults.patterns', 'patterns', (['""""""', "('^static/(?P<path>.*)$', 'django.views.static.serve', {'document_root':\n settings.MEDIA_ROOT})"], {}), "('', ('^static/(?P<path>.*)$', 'django.views.static.serve', {\n 'document_root': settings.MEDIA_ROOT}))\n", (480, 585), False, 'from django.conf.urls.defaults import patterns, url, include\n'), ((217, 243), 'django.conf.urls.defaults.include', 'include', (['"""requestlog.urls"""'], {}), "('requestlog.urls')\n", (224, 243), False, 'from django.conf.urls.defaults import patterns, url, include\n'), ((263, 287), 'django.conf.urls.defaults.include', 'include', (['admin.site.urls'], {}), '(admin.site.urls)\n', (270, 287), False, 'from django.conf.urls.defaults import patterns, url, include\n'), ((370, 394), 'django.conf.urls.defaults.include', 'include', (['"""moca.mrs.urls"""'], {}), "('moca.mrs.urls')\n", (377, 394), False, 'from django.conf.urls.defaults import patterns, url, include\n')]
|
"""
File contains handler for ReferenceDataRequest
"""
import asyncio
import uuid
from typing import Dict
from typing import List
from .base_handler import HandlerBase
from .base_request import RequestBase
from .requests import Subscription
from .utils.blp_name import RESPONSE_ERROR
from .utils.log import get_logger
# pylint: disable=ungrouped-imports
try:
import blpapi
except ImportError:
from async_blp.utils import env_test as blpapi
LOGGER = get_logger()
class RequestHandler(HandlerBase):
"""
    Handler receives response events from Bloomberg on another thread
    and puts them into the request queue. Each handler opens its own session,
    sends requests and processes incoming responses.
"""
def __init__(self,
session_options: blpapi.SessionOptions,
loop: asyncio.AbstractEventLoop = None):
super().__init__(session_options, loop)
local_methods = {
blpapi.Event.RESPONSE: self._response_handler,
blpapi.Event.PARTIAL_RESPONSE: self._partial_response_handler,
# according to BLPAPI-Core-Developer-Guide section 10.1,
# REQUEST_STATUS event is send only with RequestFailure messages
blpapi.Event.REQUEST_STATUS: self._raise_exception,
}
self._method_map.update(local_methods)
async def send_requests(self, requests: List[RequestBase]):
"""
        Send requests to Bloomberg.
        Wait until the session is started and the required service is opened,
        then send the requests.
"""
await self.session_started.wait()
for request in requests:
corr_id = blpapi.CorrelationId(uuid.uuid4())
self._current_requests[corr_id] = request
# wait until the necessary service is opened
service = await self._get_service(request.service_name)
blp_request = request.create(service)
self._session.sendRequest(blp_request, correlationId=corr_id)
LOGGER.debug('%s: request send:\n%s',
self.__class__.__name__,
blp_request)
@classmethod
def _is_error_msg(cls, msg: blpapi.Message) -> bool:
"""
Return True if msg contains responseError element. It indicates errors
such as lost connection, request limit reached etc.
"""
if msg.hasElement(RESPONSE_ERROR):
LOGGER.debug('%s: error message received:\n%s',
cls.__name__,
msg)
return True
return False
def _partial_response_handler(self, event_: blpapi.Event):
"""
Process blpapi.Event.PARTIAL_RESPONSE events. Send all valid messages
from the given event to the requests with the corresponding
correlation id
"""
for msg in event_:
if self._is_error_msg(msg):
self._close_requests(msg.correlationIds())
continue
for cor_id in msg.correlationIds():
request = self._current_requests[cor_id]
request.send_queue_message(msg)
def _response_handler(self, event_: blpapi.Event):
"""
Process blpapi.Event.RESPONSE events. This is the last event for the
corresponding requests, therefore after processing all messages
from the event, None will be send to the corresponding requests.
"""
self._partial_response_handler(event_)
for msg in event_:
self._close_requests(msg.correlationIds())
class SubscriptionHandler(HandlerBase):
"""
    Handler receives response events from Bloomberg on another thread
    and puts them into the request queue. Each handler opens its own session.
    Used for handling subscription requests and responses.
"""
def __init__(self,
session_options: blpapi.SessionOptions,
loop: asyncio.AbstractEventLoop = None):
super().__init__(session_options, loop)
# only for typing
self._current_requests: Dict[blpapi.CorrelationId, Subscription] = {}
local_methods = {
blpapi.Event.SUBSCRIPTION_STATUS: self._subscriber_status_handler,
blpapi.Event.SUBSCRIPTION_DATA: self._subscriber_data_handler,
}
self._method_map.update(local_methods)
def _subscriber_data_handler(self, event_: blpapi.Event):
"""
Redirect data to the request queue.
"""
for msg in event_:
for cor_id in msg.correlationIds():
self._current_requests[cor_id].send_queue_message(msg)
def _subscriber_status_handler(self, event_: blpapi.Event):
"""
Raise exception if something goes wrong
"""
for msg in event_:
if msg.asElement().name() not in ("SubscriptionStarted",
"SubscriptionStreamsActivated",
):
self._raise_exception(msg)
async def subscribe(self, subscriptions: List[Subscription]):
"""
        Send subscriptions to Bloomberg.
        Wait until the session is started, then send the subscriptions.
"""
await self.session_started.wait()
for subscription in subscriptions:
corr_id = blpapi.CorrelationId(str(uuid.uuid4()))
self._current_requests[corr_id] = subscription
blp_subscription = subscription.create_subscription(corr_id)
self._session.subscribe(blp_subscription)
LOGGER.debug('%s: subscription send:\n%s',
self.__class__.__name__,
blp_subscription)
async def read_subscribers(self, security_id: str = None):
"""
        Check which subscription data has already arrived from Bloomberg.
"""
if security_id is None:
tasks = [asyncio.create_task(request.process())
for request in self._current_requests.values()]
else:
tasks = [asyncio.create_task(request.process())
for request in self._current_requests.values() if
security_id in request.securities]
requests_result = await asyncio.gather(*tasks)
return requests_result
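# A minimal usage sketch of RequestHandler, assuming a running event loop and
# concrete RequestBase instances built elsewhere; the helper below is purely
# illustrative and not part of the public API.
async def _example_send(session_options: blpapi.SessionOptions,
                        requests: List[RequestBase]):
    handler = RequestHandler(session_options)
    # session_started is set by HandlerBase once the Bloomberg session is up,
    # so send_requests() can simply be awaited.
    await handler.send_requests(requests)
    return handler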
|
[
"uuid.uuid4",
"asyncio.gather"
] |
[((6291, 6313), 'asyncio.gather', 'asyncio.gather', (['*tasks'], {}), '(*tasks)\n', (6305, 6313), False, 'import asyncio\n'), ((1691, 1703), 'uuid.uuid4', 'uuid.uuid4', ([], {}), '()\n', (1701, 1703), False, 'import uuid\n'), ((5398, 5410), 'uuid.uuid4', 'uuid.uuid4', ([], {}), '()\n', (5408, 5410), False, 'import uuid\n')]
|
from math import floor
import pandas as pd
def filter_param_cd(df, code):
    """Return df with param values masked where the matching *_cd column
    does not contain `code` (e.g. 'A' for approved data).
"""
approved_df = df.copy()
params = [param.strip('_cd') for param in df.columns if param.endswith('_cd')]
for param in params:
#filter out records where param_cd doesn't contain 'A' for approved.
approved_df[param].where(approved_df[param + '_cd'].str.contains(code), inplace=True)
# drop any rows where all params are nan and return
#return approved_df.dropna(axis=0, how='all', subset=params)
return approved_df
def interp_to_freq(df, freq=15, interp_limit=120, fields=None):
"""
    WARNING: for now this only works on one site at a time.
    TODO: review this function further.
    Args:
        df (DataFrame): a dataframe with a datetime index
        freq (int): output frequency in minutes
        interp_limit (int): maximum gap to interpolate over, in minutes
Returns:
DataFrame
"""
#XXX assumes no? multiindex
df = df.copy()
if type(df) == pd.core.series.Series:
df = df.to_frame()
#df.reset_index(level=0, inplace=True)
limit = floor(interp_limit/freq)
freq_str = '{}min'.format(freq)
start = df.index[0]
end = df.index[-1]
new_index = pd.date_range(start=start, end=end, periods=None, freq=freq_str)
#new_index = new_index.union(df.index)
new_df = pd.DataFrame(index=new_index)
new_df = new_df.merge(df, how='outer', left_index=True, right_index=True)
#new_df = pd.merge(df, new_df, how='outer', left_index=True, right_index=True)
#this resampling eould be more efficient
out_df = new_df.interpolate(method='time',limit=limit, limit_direction='both').asfreq(freq_str)
out_df = out_df.resample('{}T'.format(freq)).asfreq()
out_df.index.name = 'datetime'
return out_df
#out_df.set_index('site_no', append=True, inplace=True)
#return out_df.reorder_levels(['site_no','datetime'])
def fill_iv_w_dv(iv_df, dv_df, freq='15min', col='00060'):
"""Fill gaps in an instantaneous discharge record with daily average estimates
Args:
iv_df (DataFrame): instantaneous discharge record
dv_df (DataFrame): Average daily discharge record.
        freq (str): pandas frequency string of the iv record (default '15min')
Returns:
DataFrame: filled-in discharge record
"""
#double brackets makes this a dataframe
dv_df.rename(axis='columns',
mapper={'00060_Mean':'00060'},
inplace=True)
#limit ffill to one day or 96 samples at 15min intervals
updating_field = dv_df[[col]].asfreq(freq).ffill(limit=96)
iv_df.update(updating_field, overwrite=False)
#return update_merge(iv_df, updating_field, na_only=True)
return iv_df
#This function may be deprecated once pandas.update support joins besides left.
def update_merge(left, right, na_only=False, on=None):
    """Merge `right` into `left` on the index, updating overlapping columns with values from `right`.
Args:
left (DataFrame): original data
right (DataFrame): updated data
na_only (bool): if True, only update na values
TODO: na_only
"""
df = left.merge(right, how='outer',
left_index=True, right_index=True)
# check for column overlap and resolve update
for column in df.columns:
#if duplicated column, use the value from right
if column[-2:] == '_x':
name = column[:-2] # find column name
if na_only:
df[name] = df[name+'_x'].fillna(df[name+'_y'])
else:
df[name+'_x'].update(df[name+'_y'])
df[name] = df[name+'_x']
df.drop([name + '_x', name + '_y'], axis=1, inplace=True)
return df
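# A minimal, synthetic example of interp_to_freq(); the column name '00060'
# simply mirrors the discharge parameter code used elsewhere in this module.
if __name__ == '__main__':
    idx = pd.date_range('2020-01-01', periods=4, freq='30min')
    iv = pd.DataFrame({'00060': [1.0, 2.0, None, 4.0]}, index=idx)
    # Resample the 30-minute record to 15 minutes, interpolating gaps of up
    # to 120 minutes.
    print(interp_to_freq(iv, freq=15, interp_limit=120))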
|
[
"pandas.DataFrame",
"pandas.date_range",
"math.floor"
] |
[((1141, 1167), 'math.floor', 'floor', (['(interp_limit / freq)'], {}), '(interp_limit / freq)\n', (1146, 1167), False, 'from math import floor\n'), ((1267, 1331), 'pandas.date_range', 'pd.date_range', ([], {'start': 'start', 'end': 'end', 'periods': 'None', 'freq': 'freq_str'}), '(start=start, end=end, periods=None, freq=freq_str)\n', (1280, 1331), True, 'import pandas as pd\n'), ((1392, 1421), 'pandas.DataFrame', 'pd.DataFrame', ([], {'index': 'new_index'}), '(index=new_index)\n', (1404, 1421), True, 'import pandas as pd\n')]
|
import json
import requests
import pandas as pd
import websocket
# Get Alpaca API Credential
endpoint = "https://data.alpaca.markets/v2"
headers = json.loads(open("key.txt", 'r').read())
def hist_data(symbols, start="2021-01-01", timeframe="1Hour", limit=50, end=""):
"""
    returns historical bar data for a collection of symbols
    symbols should be an iterable of tickers, e.g. symbols = ["MSFT", "AMZN", "GOOG"]
"""
df_data_tickers = {}
for symbol in symbols:
bar_url = endpoint + "/stocks/{}/bars".format(symbol)
params = {"start":start, "limit" :limit, "timeframe":timeframe}
data = {"bars": [], "next_page_token":'', "symbol":symbol}
while True:
r = requests.get(bar_url, headers = headers, params = params)
r = r.json()
if r["next_page_token"] == None:
data["bars"]+=r["bars"]
break
else:
params["page_token"] = r["next_page_token"]
data["bars"]+=r["bars"]
data["next_page_token"] = r["next_page_token"]
df_data = pd.DataFrame(data["bars"])
df_data.rename({"t":"time","o":"open","h":"high","l":"low","c":"close","v":"volume"},axis=1, inplace=True)
df_data["time"] = pd.to_datetime(df_data["time"])
df_data.set_index("time",inplace=True)
df_data.index = df_data.index.tz_convert("America/Indiana/Petersburg")
df_data_tickers[symbol] = df_data
return df_data_tickers
def get_historical_data(ticker_list, start_date, end_date=None, limit=10000, timeframe="1Day"):
"""
    returns historical bar data for a list of ticker symbols
    ticker_list should be an iterable of tickers, e.g. ticker_list = ["MSFT", "AMZN", "GOOG"]
* timeframe - Timeframe for the aggregation. Available values are: `1Min`, `1Hour`, `1Day`
https://alpaca.markets/docs/api-documentation/api-v2/market-data/alpaca-data-api-v2/historical/#bars
"""
df_data_tickers = {}
for symbol in ticker_list:
bar_url = endpoint + "/stocks/{}/bars".format(symbol)
params = {"start":start_date, "end": end_date, "limit": limit, "timeframe":timeframe}
data = {"bars": [], "next_page_token": '', "symbol": symbol}
# r = requests.get(bar_url, headers=headers, params=params)
# r = r.json()
# data["bars"] += r["bars"]
while True:
r = requests.get(bar_url, headers=headers, params=params)
r = r.json()
try:
if r["next_page_token"] == None:
data["bars"] += r["bars"]
break
else:
params["page_token"] = r["next_page_token"]
data["bars"] += r["bars"]
data["next_page_token"] = r["next_page_token"]
except:
break
# Create a DataFrame for the data["bars"] of each stock
df_data = pd.DataFrame(data["bars"])
df_data.rename({"t":"time","o":"open","h":"high","l":"low","c":"close","v":"volume"},axis=1, inplace=True)
try:
df_data["time"] = pd.to_datetime(df_data["time"])
df_data.set_index("time",inplace=True)
df_data.index = df_data.index.tz_convert("America/New_York")
df_data_tickers[symbol] = df_data
except:
pass
print("---- Created for [{}]".format(symbol))
return df_data_tickers
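# A minimal usage sketch; it assumes key.txt holds valid Alpaca data-API
# headers and that network access is available.
if __name__ == "__main__":
    data = get_historical_data(["AAPL", "MSFT"], "2021-01-01", "2021-06-30",
                               timeframe="1Day")
    for ticker, bars in data.items():
        print(ticker, bars.shape)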
|
[
"pandas.DataFrame",
"requests.get",
"pandas.to_datetime"
] |
[((1232, 1258), 'pandas.DataFrame', 'pd.DataFrame', (["data['bars']"], {}), "(data['bars'])\n", (1244, 1258), True, 'import pandas as pd\n'), ((1402, 1433), 'pandas.to_datetime', 'pd.to_datetime', (["df_data['time']"], {}), "(df_data['time'])\n", (1416, 1433), True, 'import pandas as pd\n'), ((3161, 3187), 'pandas.DataFrame', 'pd.DataFrame', (["data['bars']"], {}), "(data['bars'])\n", (3173, 3187), True, 'import pandas as pd\n'), ((792, 845), 'requests.get', 'requests.get', (['bar_url'], {'headers': 'headers', 'params': 'params'}), '(bar_url, headers=headers, params=params)\n', (804, 845), False, 'import requests\n'), ((2607, 2660), 'requests.get', 'requests.get', (['bar_url'], {'headers': 'headers', 'params': 'params'}), '(bar_url, headers=headers, params=params)\n', (2619, 2660), False, 'import requests\n'), ((3349, 3380), 'pandas.to_datetime', 'pd.to_datetime', (["df_data['time']"], {}), "(df_data['time'])\n", (3363, 3380), True, 'import pandas as pd\n')]
|
import asyncio
from yaps.api import protocol
from yaps.utils.log import Log
SLEEP_SLOT_TIME = 1 # In seconds.
class State:
PING_PONG = 1
PING_PONG_1_MISS = 2
PING_PONG_2_MISS = 3
class Subscription:
"""
Abstraction for handling a subscription.
This class has utilites that lets it increment a counter and indicate
if it has timed out or not.
It can also send new data to the subscriber.
"""
def __init__(self,
topic: str,
reader: asyncio.StreamReader,
writer: asyncio.StreamWriter):
self._time = 0
self._state = State.PING_PONG
self._reader = reader
self._writer = writer
self._alive = True
self._set_identifier(topic)
    async def start_idle(self) -> None:
        """ Puts the task into idle sleep and counts up a timer.
When the timer reaches timeout, timed_out() will return True.
"""
while self._alive:
# Go idle so other tasks can run.
await asyncio.sleep(SLEEP_SLOT_TIME)
# Update timer.
self._time += SLEEP_SLOT_TIME
self.die()
def _next_state(self) -> bool:
""" Advances to the next state. Returns true if the subscription
should be kept alive, and false if it should die.
"""
alive = True
if self._state == State.PING_PONG:
self._state = State.PING_PONG_1_MISS
elif self._state == State.PING_PONG_1_MISS:
self._state = State.PING_PONG_2_MISS
elif self._state == State.PING_PONG_2_MISS:
alive = False
return alive
async def ping(self) -> None:
""" Pings the subscriber and waits for a PONG back.
If the subscriber doesn't pong back, the subscription is closed.
"""
await protocol.send_packet(self._writer, protocol.Commands.PING)
Log.debug(f'Ping {self}')
pong = await protocol.read_packet(self._reader)
if await protocol.async_cmd_ok(pong, protocol.Commands.PONG):
# If PONG, reset timer.
self._time = 0
else:
Log.err(f'Bad ping! {self._alive} -> {self._state}')
# If no PONG, advance to next state, and potentially close.
alive = self._next_state()
if not alive:
self.die()
async def new_data(self, message: str) -> bool:
""" Sends the new data to the subscriber.
            Returns true if successful, false if not.
"""
send_ok = True
try:
# Send new data to subscriber
await protocol.send_packet(self._writer,
protocol.Commands.NEW_DATA,
data=message.encode('utf-8'))
# Wait for SUBSCRIBE_ACK
response = await protocol.read_packet(self._reader)
except (BrokenPipeError, ConnectionResetError):
send_ok = False
if send_ok:
# If no ACK is recieved, close the connection.
if not await protocol.async_cmd_ok(response,
protocol.Commands.NEW_DATA_ACK,
self._writer):
send_ok = False
if not send_ok:
self.die()
# Reset timer.
self._time = 0
return send_ok
def timed_out(self):
return self._time > protocol.PING_PONG_TIMEOUT
def is_dead(self) -> bool:
return not self._alive
def die(self) -> None:
if not self._alive:
return
self._alive = False
Log.debug(f'Subscription died {self}')
def _set_identifier(self, topic: str) -> None:
""" Sets the identification of the subscription.
This consists of:
1. Topic
                2. File descriptor number from reader/writer stream.
"""
self.topic = topic
try:
self.fd = self._writer.get_extra_info('socket').fileno()
except AttributeError:
# Streams are incorrect
Log.err(f'Incorrect streams to subscription to {self.topic}')
self.fd = None
def __repr__(self):
return f'| ID:{self.fd} Topic: {self.topic} |'
def __lt__(self, other):
        # Order subscriptions by how long they have been idle.
        return self._time < other._time
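# A minimal life-cycle sketch (illustrative only): the owning server would
# create the subscription from an accepted connection, keep start_idle()
# running as a task, and ping()/new_data() until the subscription dies.
async def _example_lifecycle(topic: str,
                             reader: asyncio.StreamReader,
                             writer: asyncio.StreamWriter) -> None:
    sub = Subscription(topic, reader, writer)
    idle = asyncio.create_task(sub.start_idle())
    while not sub.is_dead():
        if sub.timed_out():
            await sub.ping()
        await asyncio.sleep(SLEEP_SLOT_TIME)
    await idle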
|
[
"yaps.api.protocol.async_cmd_ok",
"yaps.utils.log.Log.err",
"yaps.api.protocol.read_packet",
"yaps.api.protocol.send_packet",
"asyncio.sleep",
"yaps.utils.log.Log.debug"
] |
[((1954, 1979), 'yaps.utils.log.Log.debug', 'Log.debug', (['f"""Ping {self}"""'], {}), "(f'Ping {self}')\n", (1963, 1979), False, 'from yaps.utils.log import Log\n'), ((3717, 3755), 'yaps.utils.log.Log.debug', 'Log.debug', (['f"""Subscription died {self}"""'], {}), "(f'Subscription died {self}')\n", (3726, 3755), False, 'from yaps.utils.log import Log\n'), ((1887, 1945), 'yaps.api.protocol.send_packet', 'protocol.send_packet', (['self._writer', 'protocol.Commands.PING'], {}), '(self._writer, protocol.Commands.PING)\n', (1907, 1945), False, 'from yaps.api import protocol\n'), ((2002, 2036), 'yaps.api.protocol.read_packet', 'protocol.read_packet', (['self._reader'], {}), '(self._reader)\n', (2022, 2036), False, 'from yaps.api import protocol\n'), ((2054, 2105), 'yaps.api.protocol.async_cmd_ok', 'protocol.async_cmd_ok', (['pong', 'protocol.Commands.PONG'], {}), '(pong, protocol.Commands.PONG)\n', (2075, 2105), False, 'from yaps.api import protocol\n'), ((2196, 2248), 'yaps.utils.log.Log.err', 'Log.err', (['f"""Bad ping! {self._alive} -> {self._state}"""'], {}), "(f'Bad ping! {self._alive} -> {self._state}')\n", (2203, 2248), False, 'from yaps.utils.log import Log\n'), ((1070, 1100), 'asyncio.sleep', 'asyncio.sleep', (['SLEEP_SLOT_TIME'], {}), '(SLEEP_SLOT_TIME)\n', (1083, 1100), False, 'import asyncio\n'), ((2915, 2949), 'yaps.api.protocol.read_packet', 'protocol.read_packet', (['self._reader'], {}), '(self._reader)\n', (2935, 2949), False, 'from yaps.api import protocol\n'), ((4181, 4242), 'yaps.utils.log.Log.err', 'Log.err', (['f"""Incorrect streams to subscription to {self.topic}"""'], {}), "(f'Incorrect streams to subscription to {self.topic}')\n", (4188, 4242), False, 'from yaps.utils.log import Log\n'), ((3139, 3216), 'yaps.api.protocol.async_cmd_ok', 'protocol.async_cmd_ok', (['response', 'protocol.Commands.NEW_DATA_ACK', 'self._writer'], {}), '(response, protocol.Commands.NEW_DATA_ACK, self._writer)\n', (3160, 3216), False, 'from yaps.api import protocol\n')]
|
#!/cygdrive/c/Python27/python.exe
# <NAME>, Ph.D.
# Swint-Kruse Laboratory
# Physician Scientist Training Program
# University of Kansas Medical Center
# This code is adapted from the example available at
# http://pandasplotting.blogspot.com/2012/04/added-kde-to-scatter-matrix-diagonals.html
# Creates a scatterplot matrix (off-diagonals) with a kernal density estimate (KDE)
# of the distribution of (univariate) data on the diagonal
import numpy as np
import matplotlib.pyplot as plt
import pandas
import sys
infile=sys.argv[1]
outfile=sys.argv[2]
maindata = pandas.read_csv(infile, sep="\t")
plt.rcParams['patch.facecolor'] = 'k' # Make the markers black
# Plot
ax = pandas.tools.plotting.scatter_matrix(maindata, alpha=0.1, marker='k.', figsize=(8,8), diagonal='kde', range_padding=0.1)
# Give a small inter-plot spacing
plt.subplots_adjust(wspace=.05, hspace=.05)
#Save the figure
plt.savefig(outfile, dpi=600)
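# Note: pandas.tools.plotting was deprecated and later removed; on newer pandas
# the same call is pandas.plotting.scatter_matrix(...) with the same keywords.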
|
[
"pandas.tools.plotting.scatter_matrix",
"matplotlib.pyplot.subplots_adjust",
"pandas.read_csv",
"matplotlib.pyplot.savefig"
] |
[((568, 601), 'pandas.read_csv', 'pandas.read_csv', (['infile'], {'sep': '"""\t"""'}), "(infile, sep='\\t')\n", (583, 601), False, 'import pandas\n'), ((679, 804), 'pandas.tools.plotting.scatter_matrix', 'pandas.tools.plotting.scatter_matrix', (['maindata'], {'alpha': '(0.1)', 'marker': '"""k."""', 'figsize': '(8, 8)', 'diagonal': '"""kde"""', 'range_padding': '(0.1)'}), "(maindata, alpha=0.1, marker='k.',\n figsize=(8, 8), diagonal='kde', range_padding=0.1)\n", (715, 804), False, 'import pandas\n'), ((835, 880), 'matplotlib.pyplot.subplots_adjust', 'plt.subplots_adjust', ([], {'wspace': '(0.05)', 'hspace': '(0.05)'}), '(wspace=0.05, hspace=0.05)\n', (854, 880), True, 'import matplotlib.pyplot as plt\n'), ((897, 926), 'matplotlib.pyplot.savefig', 'plt.savefig', (['outfile'], {'dpi': '(600)'}), '(outfile, dpi=600)\n', (908, 926), True, 'import matplotlib.pyplot as plt\n')]
|
import pygame;
import numpy as np;
from math import sin, cos;
pygame.init();
width, height, depth = 640, 480, 800;
camera = [width // 2, height // 2, depth];
units_x, units_y, units_z = 8, 8, 8;
scale_x, scale_y, scale_z = width / units_x, height / units_y, depth / units_z;
screen = pygame.display.set_mode((width, height));
pygame.display.set_caption("3D perspective projection test");
pygame.key.set_repeat(100, 50);
def scale(p):
""" scale a point by the number of pixels per unit in each direction """
return p[0] * scale_x, p[1] * scale_y, p[2] * scale_z;
def translate_to_screen(p):
""" convert from projected cartesian coordinates to canvas coordinates """
return p[0] + width // 2, height // 2 - p[1];
def project(p):
""" project a point onto the 2D plane """
proj_x = (camera[2] * (p[0] - camera[0])) / (camera[2] + p[2]) + camera[0];
proj_y = (camera[2] * (p[1] - camera[1])) / (camera[2] + p[2]) + camera[1];
return proj_x, proj_y;
def rproj(a, tx, ty, tz):
rotation = rot_mat_x(tx).dot(rot_mat_y(ty)).dot(rot_mat_z(tz));
sub = np.array([a]) - np.array([camera]);
d = list(sub.dot(rotation)[0]);
e = width, height, depth;
return e[2] / d[2] * d[0] + e[0], e[2] / d[2] * d[1] + e[1];
def screen_point(p):
""" convert a point in 3D cartesian space to a point in 2D canvas space """
return translate_to_screen(project(scale(p)));
def project_triangle(tri):
""" return the screen coordinates of a triangle """
angs = (tx, ty, tz);
return rproj(tri[0], *angs), rproj(tri[1], *angs), rproj(tri[2], *angs);
## return screen_point(tri[0]), screen_point(tri[1]), screen_point(tri[2]);
def project_line(line):
""" return the screen coordinates of a line """
return screen_point(line[0]), screen_point(line[1]);
def rot_mat_x(theta):
return np.array([
[1, 0, 0],
[0, cos(theta), -sin(theta)],
[0, sin(theta), cos(theta)],
]);
def rot_mat_y(theta):
return np.array([
[cos(theta), 0, sin(theta)],
[0, 1, 0],
[-sin(theta), 0, cos(theta)],
]);
def rot_mat_z(theta):
return np.array([
[cos(theta), -sin(theta), 0],
[sin(theta), cos(theta), 0],
[0, 0, 1],
]);
triangle = ((1, 1, 1), (2, 2, 2), (1, 2, 1));
x_axis = ((-2, 0, 0), (2, 0, 0));
y_axis = ((0, -2, 0), (0, 2, 0));
z_axis = ((0, 0, -2), (0, 0, 2));
tx, ty, tz = 0, 0, 0;
clock = pygame.time.Clock();
running = True;
while running:
screen.fill((255, 255, 200));
proj_triangle = project_triangle(triangle);
pygame.draw.polygon(screen, (255, 0, 200), proj_triangle);
pygame.draw.polygon(screen, (0, 0, 0), proj_triangle, 1);
pygame.draw.rect(screen, (255, 0, 0), (*proj_triangle[0], 10, 10));
pygame.draw.rect(screen, (0, 255, 0), (*proj_triangle[1], 10, 10));
pygame.draw.rect(screen, (0, 0, 255), (*proj_triangle[2], 10, 10));
## proj_ax, proj_ay, proj_az = project_line(x_axis), project_line(y_axis), project_line(z_axis);
## pygame.draw.line(screen, (255, 0, 0), proj_ax[0], proj_ax[1], 1);
## pygame.draw.line(screen, (0, 255, 0), proj_ay[0], proj_ay[1], 1);
## pygame.draw.line(screen, (0, 0, 255), proj_az[0], proj_az[1], 1);
pygame.display.flip();
for event in pygame.event.get():
if event.type == pygame.QUIT:
running = False;
break;
elif event.type == pygame.KEYDOWN:
if event.key == pygame.K_LEFT:
#camera[0] -= 25;
## camera = list(np.array([camera]).dot(rot_mat_y(0.2).dot(rot_mat_z(0.1)))[0]);
tx += 0.1;
elif event.key == pygame.K_RIGHT:
#camera[0] += 25;
## camera = list(np.array([camera]).dot(rot_mat_z(-0.1))[0]);
tx -= 0.1;
elif event.key == pygame.K_UP:
ty += 0.1;
elif event.key == pygame.K_DOWN:
ty -= 0.1;
elif event.key == pygame.K_SPACE:
print(camera);
elif event.key == pygame.K_ESCAPE:
running = False;
break;
clock.tick(30);
pygame.quit();
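## A quick, hedged sanity check for the rotation matrices above. rproj()
## multiplies a ROW vector on the left (v.dot(R)), so rot_mat_z(pi/2) applied
## that way sends the x axis to -y rather than +y:
##   np.allclose(np.array([1, 0, 0]).dot(rot_mat_z(np.pi / 2)), [0, -1, 0])  # ~True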
|
[
"pygame.key.set_repeat",
"pygame.display.set_caption",
"pygame.draw.polygon",
"pygame.init",
"pygame.quit",
"pygame.event.get",
"pygame.display.set_mode",
"pygame.display.flip",
"math.cos",
"numpy.array",
"pygame.draw.rect",
"pygame.time.Clock",
"math.sin"
] |
[((62, 75), 'pygame.init', 'pygame.init', ([], {}), '()\n', (73, 75), False, 'import pygame\n'), ((286, 326), 'pygame.display.set_mode', 'pygame.display.set_mode', (['(width, height)'], {}), '((width, height))\n', (309, 326), False, 'import pygame\n'), ((328, 388), 'pygame.display.set_caption', 'pygame.display.set_caption', (['"""3D perspective projection test"""'], {}), "('3D perspective projection test')\n", (354, 388), False, 'import pygame\n'), ((390, 420), 'pygame.key.set_repeat', 'pygame.key.set_repeat', (['(100)', '(50)'], {}), '(100, 50)\n', (411, 420), False, 'import pygame\n'), ((2407, 2426), 'pygame.time.Clock', 'pygame.time.Clock', ([], {}), '()\n', (2424, 2426), False, 'import pygame\n'), ((4128, 4141), 'pygame.quit', 'pygame.quit', ([], {}), '()\n', (4139, 4141), False, 'import pygame\n'), ((2546, 2603), 'pygame.draw.polygon', 'pygame.draw.polygon', (['screen', '(255, 0, 200)', 'proj_triangle'], {}), '(screen, (255, 0, 200), proj_triangle)\n', (2565, 2603), False, 'import pygame\n'), ((2609, 2665), 'pygame.draw.polygon', 'pygame.draw.polygon', (['screen', '(0, 0, 0)', 'proj_triangle', '(1)'], {}), '(screen, (0, 0, 0), proj_triangle, 1)\n', (2628, 2665), False, 'import pygame\n'), ((2671, 2737), 'pygame.draw.rect', 'pygame.draw.rect', (['screen', '(255, 0, 0)', '(*proj_triangle[0], 10, 10)'], {}), '(screen, (255, 0, 0), (*proj_triangle[0], 10, 10))\n', (2687, 2737), False, 'import pygame\n'), ((2743, 2809), 'pygame.draw.rect', 'pygame.draw.rect', (['screen', '(0, 255, 0)', '(*proj_triangle[1], 10, 10)'], {}), '(screen, (0, 255, 0), (*proj_triangle[1], 10, 10))\n', (2759, 2809), False, 'import pygame\n'), ((2815, 2881), 'pygame.draw.rect', 'pygame.draw.rect', (['screen', '(0, 0, 255)', '(*proj_triangle[2], 10, 10)'], {}), '(screen, (0, 0, 255), (*proj_triangle[2], 10, 10))\n', (2831, 2881), False, 'import pygame\n'), ((3203, 3224), 'pygame.display.flip', 'pygame.display.flip', ([], {}), '()\n', (3222, 3224), False, 'import pygame\n'), ((3248, 3266), 'pygame.event.get', 'pygame.event.get', ([], {}), '()\n', (3264, 3266), False, 'import pygame\n'), ((1070, 1083), 'numpy.array', 'np.array', (['[a]'], {}), '([a])\n', (1078, 1083), True, 'import numpy as np\n'), ((1086, 1104), 'numpy.array', 'np.array', (['[camera]'], {}), '([camera])\n', (1094, 1104), True, 'import numpy as np\n'), ((1861, 1871), 'math.cos', 'cos', (['theta'], {}), '(theta)\n', (1864, 1871), False, 'from math import sin, cos\n'), ((1899, 1909), 'math.sin', 'sin', (['theta'], {}), '(theta)\n', (1902, 1909), False, 'from math import sin, cos\n'), ((1911, 1921), 'math.cos', 'cos', (['theta'], {}), '(theta)\n', (1914, 1921), False, 'from math import sin, cos\n'), ((1986, 1996), 'math.cos', 'cos', (['theta'], {}), '(theta)\n', (1989, 1996), False, 'from math import sin, cos\n'), ((2001, 2011), 'math.sin', 'sin', (['theta'], {}), '(theta)\n', (2004, 2011), False, 'from math import sin, cos\n'), ((2058, 2068), 'math.cos', 'cos', (['theta'], {}), '(theta)\n', (2061, 2068), False, 'from math import sin, cos\n'), ((2133, 2143), 'math.cos', 'cos', (['theta'], {}), '(theta)\n', (2136, 2143), False, 'from math import sin, cos\n'), ((2171, 2181), 'math.sin', 'sin', (['theta'], {}), '(theta)\n', (2174, 2181), False, 'from math import sin, cos\n'), ((2183, 2193), 'math.cos', 'cos', (['theta'], {}), '(theta)\n', (2186, 2193), False, 'from math import sin, cos\n'), ((1874, 1884), 'math.sin', 'sin', (['theta'], {}), '(theta)\n', (1877, 1884), False, 'from math import sin, cos\n'), ((2043, 2053), 'math.sin', 'sin', (['theta'], {}), 
'(theta)\n', (2046, 2053), False, 'from math import sin, cos\n'), ((2146, 2156), 'math.sin', 'sin', (['theta'], {}), '(theta)\n', (2149, 2156), False, 'from math import sin, cos\n')]
|
from API.db import db
from datetime import datetime
from passlib.apps import custom_app_context as pwd_context
class User(db.Model):
    __tablename__ = "users"
    # NOTE: the User table declared no columns in the original file; the
    # definitions below are assumed so json(), get_cards() and get_spent() can work.
    id = db.Column(db.Integer, primary_key=True, autoincrement=True)
    username = db.Column(db.String(255), nullable=False, unique=True)
    password = db.Column(db.String(255), nullable=False)
    limit = db.Column(db.Float, nullable=False, default=0)
    user_limit = db.Column(db.Float, nullable=False, default=0)
    cards = db.relationship("Card", back_populates="user", lazy="dynamic")
def __init__(self, username, password):
self.username = username
self.password = pwd_context.hash(password)
self.limit = 0
self.user_limit = 0
def verify_password(self, password):
return pwd_context.verify(password, self.password)
def json(self):
return {"id": self.id, "username": self.username, "password": str(self.password), "limit": self.limit,
"user limit": self.user_limit, "spent_limit": self.get_spent()}
def save_in_db(self):
db.session.add(self)
db.session.commit()
@classmethod
def get_by_id(cls, user_id):
return cls.query.filter_by(id=user_id).first()
@classmethod
def get_by_username(cls, username):
return cls.query.filter_by(username=username).first()
def get_limit(self):
return self.limit
def get_user_limit(self):
return self.user_limit
def set_limit(self, limit):
self.limit = limit
def set_user_limit(self, limit):
self.user_limit = limit
def delete(self):
db.session.delete(self)
db.session.commit()
def get_cards(self):
return self.cards.order_by(Card.spent_limit).all()
def get_spent(self):
return sum(x.spent_limit for x in self.cards.all())
class Card(db.Model):
__tablename__ = "card"
id = db.Column(db.Integer, primary_key=True, autoincrement=True)
name = db.Column(db.String(255), nullable=False)
number = db.Column(db.String(16), nullable=False)
ccv = db.Column(db.String(3), nullable=False)
due_date = db.Column(db.DateTime, nullable=False, default=datetime.utcnow)
expiration_date = db.Column(db.DateTime, nullable=False, default=datetime.utcnow)
limit = db.Column(db.Float, nullable=False)
spent_limit = db.Column(db.Float, nullable=False)
user_id = db.Column(db.Integer, db.ForeignKey("users.id"), nullable=False)
user = db.relationship("User", back_populates="cards")
def __init__(self, username, name, number, ccv, due_date, expiration_date, limit):
# Parse strings to convert to to datetime
datetime_due_date = datetime.strptime(due_date, "%Y/%m/%d")
datetime_expiration_date = datetime.strptime(expiration_date, "%Y/%m/%d")
self.user_id = User.query.filter_by(username=username).first().id
self.name = name
        self.number = number
self.ccv = ccv
self.due_date = datetime_due_date
self.expiration_date = datetime_expiration_date
self.limit = limit
self.spent_limit = 0
def json(self):
return {"id": self.id, "name": self.name, "number": self.number, "ccv": self.ccv,
"due_date": str(self.due_date), "expiration_date": str(self.expiration_date), "limit": self.limit,
"spent limit": self.spent_limit, "user_id": self.user_id}
def save_in_db(self):
db.session.add(self)
db.session.commit()
@classmethod
def get_by_number(cls, number):
return cls.query.filter_by(number=number).first()
def get_limit(self):
return self.limit
def get_spent_limit(self):
return self.spent_limit
def set_spent_limit(self, spent_limit):
self.spent_limit = spent_limit
def delete(self):
user = User.query.filter_by(id=self.user_id).first()
user.set_limit(user.limit-self.limit)
db.session.delete(self)
db.session.commit()
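# A minimal usage sketch, assuming a Flask app context and an initialised
# API.db session; the credentials and card details below are placeholders.
def _example_create_user_and_card():
    user = User("alice", "s3cret")
    user.save_in_db()
    card = Card("alice", "visa", "4111111111111111", "123",
                "2025/01/10", "2027/01/10", limit=1000.0)
    card.save_in_db()
    return user.json(), card.json()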
|
[
"passlib.apps.custom_app_context.hash",
"API.db.db.ForeignKey",
"datetime.datetime.strptime",
"API.db.db.session.commit",
"API.db.db.session.add",
"API.db.db.String",
"API.db.db.relationship",
"passlib.apps.custom_app_context.verify",
"API.db.db.session.delete",
"API.db.db.Column"
] |
[((1579, 1638), 'API.db.db.Column', 'db.Column', (['db.Integer'], {'primary_key': '(True)', 'autoincrement': '(True)'}), '(db.Integer, primary_key=True, autoincrement=True)\n', (1588, 1638), False, 'from API.db import db\n'), ((1811, 1874), 'API.db.db.Column', 'db.Column', (['db.DateTime'], {'nullable': '(False)', 'default': 'datetime.utcnow'}), '(db.DateTime, nullable=False, default=datetime.utcnow)\n', (1820, 1874), False, 'from API.db import db\n'), ((1897, 1960), 'API.db.db.Column', 'db.Column', (['db.DateTime'], {'nullable': '(False)', 'default': 'datetime.utcnow'}), '(db.DateTime, nullable=False, default=datetime.utcnow)\n', (1906, 1960), False, 'from API.db import db\n'), ((1973, 2008), 'API.db.db.Column', 'db.Column', (['db.Float'], {'nullable': '(False)'}), '(db.Float, nullable=False)\n', (1982, 2008), False, 'from API.db import db\n'), ((2027, 2062), 'API.db.db.Column', 'db.Column', (['db.Float'], {'nullable': '(False)'}), '(db.Float, nullable=False)\n', (2036, 2062), False, 'from API.db import db\n'), ((2154, 2201), 'API.db.db.relationship', 'db.relationship', (['"""User"""'], {'back_populates': '"""cards"""'}), "('User', back_populates='cards')\n", (2169, 2201), False, 'from API.db import db\n'), ((266, 292), 'passlib.apps.custom_app_context.hash', 'pwd_context.hash', (['password'], {}), '(password)\n', (282, 292), True, 'from passlib.apps import custom_app_context as pwd_context\n'), ((401, 444), 'passlib.apps.custom_app_context.verify', 'pwd_context.verify', (['password', 'self.password'], {}), '(password, self.password)\n', (419, 444), True, 'from passlib.apps import custom_app_context as pwd_context\n'), ((692, 712), 'API.db.db.session.add', 'db.session.add', (['self'], {}), '(self)\n', (706, 712), False, 'from API.db import db\n'), ((721, 740), 'API.db.db.session.commit', 'db.session.commit', ([], {}), '()\n', (738, 740), False, 'from API.db import db\n'), ((1294, 1317), 'API.db.db.session.delete', 'db.session.delete', (['self'], {}), '(self)\n', (1311, 1317), False, 'from API.db import db\n'), ((1326, 1345), 'API.db.db.session.commit', 'db.session.commit', ([], {}), '()\n', (1343, 1345), False, 'from API.db import db\n'), ((1660, 1674), 'API.db.db.String', 'db.String', (['(255)'], {}), '(255)\n', (1669, 1674), False, 'from API.db import db\n'), ((1715, 1728), 'API.db.db.String', 'db.String', (['(16)'], {}), '(16)\n', (1724, 1728), False, 'from API.db import db\n'), ((1766, 1778), 'API.db.db.String', 'db.String', (['(3)'], {}), '(3)\n', (1775, 1778), False, 'from API.db import db\n'), ((2100, 2125), 'API.db.db.ForeignKey', 'db.ForeignKey', (['"""users.id"""'], {}), "('users.id')\n", (2113, 2125), False, 'from API.db import db\n'), ((2368, 2407), 'datetime.datetime.strptime', 'datetime.strptime', (['due_date', '"""%Y/%m/%d"""'], {}), "(due_date, '%Y/%m/%d')\n", (2385, 2407), False, 'from datetime import datetime\n'), ((2443, 2489), 'datetime.datetime.strptime', 'datetime.strptime', (['expiration_date', '"""%Y/%m/%d"""'], {}), "(expiration_date, '%Y/%m/%d')\n", (2460, 2489), False, 'from datetime import datetime\n'), ((3132, 3152), 'API.db.db.session.add', 'db.session.add', (['self'], {}), '(self)\n', (3146, 3152), False, 'from API.db import db\n'), ((3161, 3180), 'API.db.db.session.commit', 'db.session.commit', ([], {}), '()\n', (3178, 3180), False, 'from API.db import db\n'), ((3631, 3654), 'API.db.db.session.delete', 'db.session.delete', (['self'], {}), '(self)\n', (3648, 3654), False, 'from API.db import db\n'), ((3663, 3682), 'API.db.db.session.commit', 
'db.session.commit', ([], {}), '()\n', (3680, 3682), False, 'from API.db import db\n')]
|
import unittest
from Dice import Dice
class TestDice(unittest.TestCase):
def setUp(self):
self.sides = 8
self.dice = Dice(self.sides)
def test_roll(self):
for i in range(1000):
self.assertLessEqual(self.dice.roll(), self.sides)
def test_error(self):
self.assertRaises(ValueError, Dice, 0)
if __name__ == '__main__': # pragma no cover
unittest.main()
|
[
"unittest.main",
"Dice.Dice"
] |
[((402, 417), 'unittest.main', 'unittest.main', ([], {}), '()\n', (415, 417), False, 'import unittest\n'), ((140, 156), 'Dice.Dice', 'Dice', (['self.sides'], {}), '(self.sides)\n', (144, 156), False, 'from Dice import Dice\n')]
|
"""Command line tool to extract meaningful health info from accelerometer data."""
import accelerometer.accUtils
import argparse
import collections
import datetime
import accelerometer.device
import json
import os
import accelerometer.summariseEpoch
import sys
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from filter_data import data_filter
from import_npy import import_npy
def main():
"""
Application entry point responsible for parsing command line requests
"""
parser = argparse.ArgumentParser(
description="""A tool to extract physical activity information from
raw accelerometer files.""", add_help=True
)
# required
parser.add_argument('rawFile', metavar='input file', type=str,
help="""the <.cwa/.cwa.gz> file to process
(e.g. sample.cwa.gz). If the file path contains
                            spaces, it must be enclosed in quote marks
(e.g. \"../My Documents/sample.cwa\")
""")
#optional inputs
parser.add_argument('--startTime',
metavar='e.g. 1991-01-01T23:59', default=None,
type=str2date, help="""removes data before this
time in the final analysis
(default : %(default)s)""")
parser.add_argument('--endTime',
metavar='e.g 1991-01-01T23:59', default=None,
type=str2date, help="""removes data after this
time in the final analysis
(default : %(default)s)""")
parser.add_argument('--timeSeriesDateColumn',
metavar='True/False', default=False, type=str2bool,
help="""adds a date/time column to the timeSeries
file, so acceleration and imputation values can be
compared easily. This increases output filesize
(default : %(default)s)""")
parser.add_argument('--processRawFile',
metavar='True/False', default=True, type=str2bool,
help="""False will skip processing of the .cwa file
(the epoch.csv file must already exist for this to
work) (default : %(default)s)""")
parser.add_argument('--epochPeriod',
metavar='length', default=30, type=int,
help="""length in seconds of a single epoch (default
: %(default)ss, must be an integer)""")
parser.add_argument('--sampleRate',
metavar='Hz, or samples/second', default=100,
type=int, help="""resample data to n Hz (default
: %(default)ss, must be an integer)""")
parser.add_argument('--useAbs',
metavar='useAbs', default=False, type=str2bool,
help="""use abs(VM) instead of trunc(VM)
(default : %(default)s)""")
parser.add_argument('--skipFiltering',
metavar='True/False', default=False, type=str2bool,
help="""Skip filtering stage
(default : %(default)s)""")
# calibration parameters
parser.add_argument('--skipCalibration',
metavar='True/False', default=False, type=str2bool,
help="""skip calibration? (default : %(default)s)""")
parser.add_argument('--calOffset',
metavar=('x', 'y', 'z'),default=[0.0, 0.0, 0.0],
type=float, nargs=3,
help="""accelerometer calibration offset (default :
%(default)s)""")
parser.add_argument('--calSlope',
metavar=('x', 'y', 'z'), default=[1.0, 1.0, 1.0],
type=float, nargs=3,
help="""accelerometer calibration slope linking
offset to temperature (default : %(default)s)""")
parser.add_argument('--calTemp',
metavar=('x', 'y', 'z'), default=[0.0, 0.0, 0.0],
type=float, nargs=3,
help="""mean temperature in degrees Celsius of
stationary data for calibration
(default : %(default)s)""")
parser.add_argument('--meanTemp',
metavar="temp", default=20.0, type=float,
help="""mean calibration temperature in degrees
Celsius (default : %(default)s)""")
parser.add_argument('--stationaryStd',
metavar='mg', default=13, type=int,
help="""stationary mg threshold (default
: %(default)s mg))""")
parser.add_argument('--calibrationSphereCriteria',
metavar='mg', default=0.3, type=float,
help="""calibration sphere threshold (default
: %(default)s mg))""")
# activity parameters
parser.add_argument('--mgMVPA',
metavar="mg", default=100, type=int,
help="""MVPA threshold (default : %(default)s)""")
parser.add_argument('--mgVPA',
metavar="mg", default=425, type=int,
help="""VPA threshold (default : %(default)s)""")
# calling helper processess and conducting multi-threadings
parser.add_argument('--rawDataParser',
metavar="rawDataParser", default="AxivityAx3Epochs",
type=str,
help="""file containing a java program to process
raw .cwa binary file, must end with .class (omitted)
(default : %(default)s)""")
parser.add_argument('--javaHeapSpace',
metavar="amount in MB", default="", type=str,
help="""amount of heap space allocated to the java
                            subprocesses, useful for limiting RAM usage (default
: unlimited)""")
# activity classification arguments
parser.add_argument('--activityClassification',
metavar='True/False', default=True, type=str2bool,
help="""Use pre-trained random forest to predict
activity type
(default : %(default)s)""")
parser.add_argument('--activityModel', type=str,
default="activityModels/doherty2018.tar",
help="""trained activity model .tar file""")
parser.add_argument('--rawOutput',
metavar='True/False', default=False, type=str2bool,
help="""output calibrated and resampled raw data to
a .csv.gz file? NOTE: requires ~50MB per day.
(default : %(default)s)""")
parser.add_argument('--npyOutput',
metavar='True/False', default=True, type=str2bool,
help="""output calibrated and resampled raw data to
.npy file? NOTE: requires ~60MB per day.
(default : %(default)s)""")
parser.add_argument('--fftOutput',
metavar='True/False', default=False, type=str2bool,
help="""output FFT epochs to a .csv file? NOTE:
requires ~0.1GB per day. (default : %(default)s)""")
# optional outputs
parser.add_argument('--outputFolder', metavar='filename',default="",
help="""folder for all of the output files, \
unless specified using other options""")
parser.add_argument('--summaryFolder', metavar='filename',default="",
help="folder for -summary.json summary stats")
parser.add_argument('--epochFolder', metavar='filename', default="",
help="""folder -epoch.csv.gz - must be an existing
file if "-processRawFile" is set to False""")
parser.add_argument('--timeSeriesFolder', metavar='filename', default="",
help="folder for -timeSeries.csv.gz file")
parser.add_argument('--nonWearFolder', metavar='filename',default="",
help="folder for -nonWearBouts.csv.gz file")
parser.add_argument('--stationaryFolder', metavar='filename', default="",
help="folder -stationaryPoints.csv.gz file")
parser.add_argument('--rawFolder', metavar='filename', default="",
help="folder for raw .csv.gz file")
parser.add_argument('--verbose',
metavar='True/False', default=False, type=str2bool,
help="""enable verbose logging? (default :
%(default)s)""")
parser.add_argument('--deleteIntermediateFiles',
metavar='True/False', default=True, type=str2bool,
help="""True will remove extra "helper" files created
by the program (default : %(default)s)""")
parser.add_argument('--intensityDistribution',
metavar='True/False', default=False, type=str2bool,
help="""Save intensity distribution
(default : %(default)s)""")
#
# check that enough command line arguments are entered
#
if len(sys.argv) < 2:
msg = "\nInvalid input, please enter at least 1 parameter, e.g."
msg += "\npython accProcess.py data/sample.cwa.gz \n"
accelerometer.accUtils.toScreen(msg)
parser.print_help()
sys.exit(-1)
processingStartTime = datetime.datetime.now()
args = parser.parse_args()
##########################
# check input/output files/dirs exist and validate input args
##########################
if args.processRawFile is False:
#! TODO: this breaks for .cwa.gz files
if len(args.rawFile.split('.')) < 2:
args.rawFile += ".cwa" # TODO edge case since we still need a name?
elif not os.path.isfile(args.rawFile):
if args.rawFile:
print("error: specified file " + args.rawFile + " does not exist. Exiting..")
else:
print("error: no file specified. Exiting..")
sys.exit(-2)
# get file extension
rawFilePath, rawFileName = os.path.split(args.rawFile)
rawFileName = rawFileName.split('.')[0] # remove any extension
# check target output folders exist
for path in [args.summaryFolder, args.nonWearFolder, args.epochFolder,
args.stationaryFolder, args.timeSeriesFolder, args.outputFolder]:
if len(path) > 0 and not os.access(path, os.F_OK):
print("error: " + path + " is not a valid directory")
sys.exit(-3)
# assign output file names
if args.outputFolder == "" and rawFilePath != "":
args.outputFolder = rawFilePath + '/'
if args.summaryFolder == "":
args.summaryFolder = args.outputFolder
if args.nonWearFolder == "":
args.nonWearFolder = args.outputFolder
if args.epochFolder == "":
args.epochFolder = args.outputFolder
if args.stationaryFolder == "":
args.stationaryFolder = args.outputFolder
if args.timeSeriesFolder == "":
args.timeSeriesFolder = args.outputFolder
if args.rawFolder == "":
args.rawFolder = args.outputFolder
args.summaryFile = args.summaryFolder + rawFileName + "-summary.json"
args.nonWearFile = args.nonWearFolder + rawFileName + "-nonWearBouts.csv.gz"
args.epochFile = args.epochFolder + rawFileName + "-epoch.csv.gz"
args.stationaryFile = args.stationaryFolder + rawFileName + "-stationaryPoints.csv"
args.tsFile = args.timeSeriesFolder + rawFileName + "-timeSeries.csv.gz"
args.rawOutputFile = args.rawFolder + rawFileName + ".csv.gz"
args.npyOutputFile = args.rawFolder + rawFileName + ".npy"
# check user specified end time is not before start time
if args.startTime and args.endTime:
if args.startTime >= args.endTime:
print("start and end time arguments are invalid!")
print("startTime:", args.startTime.strftime("%Y-%m-%dT%H:%M"))
print("endTime:", args.endTime.strftime("%Y-%m-%dT%H:%M"))
sys.exit(-4)
# print processing options to screen
print("processing file " + args.rawFile + "' with these arguments:\n")
for key, value in sorted(vars(args).items()):
if not (isinstance(value, str) and len(value)==0):
print(key.ljust(15), ':', value)
print("\n")
##########################
# start processing file
##########################
summary = {}
# now process the .CWA file
if args.processRawFile:
summary['file-name'] = args.rawFile
accelerometer.device.processRawFileToEpoch(args.rawFile, args.epochFile,
args.stationaryFile, summary, skipCalibration=args.skipCalibration,
stationaryStd=args.stationaryStd, xIntercept=args.calOffset[0],
yIntercept=args.calOffset[1], zIntercept=args.calOffset[2],
xSlope=args.calSlope[0], ySlope=args.calSlope[1],
zSlope=args.calSlope[2], xTemp=args.calTemp[0],
yTemp=args.calTemp[1], zTemp=args.calTemp[2],
meanTemp=args.meanTemp, rawDataParser=args.rawDataParser,
javaHeapSpace=args.javaHeapSpace, skipFiltering=args.skipFiltering,
sampleRate=args.sampleRate, epochPeriod=args.epochPeriod,
useAbs=args.useAbs, activityClassification=args.activityClassification,
rawOutput=args.rawOutput, rawOutputFile=args.rawOutputFile,
npyOutput=args.npyOutput, npyOutputFile=args.npyOutputFile,
fftOutput=args.fftOutput, startTime=args.startTime,
endTime=args.endTime, verbose=args.verbose)
print(args.rawFile)
else:
summary['file-name'] = args.epochFile
data, time = import_npy(args.rawFile)
# Place your code here
##########################
# remove helper files and close program
##########################
if args.deleteIntermediateFiles:
try:
os.remove(args.stationaryFile)
os.remove(args.epochFile)
os.remove(args.rawFile[:-4] + '.npy')
except:
accelerometer.accUtils.toScreen('could not delete helper file')
# finally, print out processing summary message
processingEndTime = datetime.datetime.now()
processingTime = (processingEndTime - processingStartTime).total_seconds()
accelerometer.accUtils.toScreen("in total, processing took " + \
str(processingTime) + " seconds")
def str2bool(v):
"""
Used to parse true/false values from the command line. E.g. "True" -> True
"""
return v.lower() in ("yes", "true", "t", "1")
def str2date(v):
"""
Used to parse date values from the command line. E.g. "1994-11-30T12:00" -> time.datetime
"""
eg = "1994-11-30T12:00" # example date
if v.count("-")!=eg.count("-"):
print("ERROR: not enough dashes in date")
elif v.count("T")!=eg.count("T"):
print("ERROR: no T seperator in date")
elif v.count(":")!=eg.count(":"):
print("ERROR: no ':' seperator in date")
elif len(v.split("-")[0])!=4:
print("ERROR: year in date must be 4 numbers")
elif len(v.split("-")[1])!=2 and len(v.split("-")[1])!=1:
print("ERROR: month in date must be 1-2 numbers")
elif len(v.split("-")[2].split("T")[0])!=2 and len(v.split("-")[2].split("T")[0])!=1:
print("ERROR: day in date must be 1-2 numbers")
else:
return pd.datetime.strptime(v, "%Y-%m-%dT%H:%M")
print("please change your input date:")
print('"'+v+'"')
print("to match the example date format:")
print('"'+eg+'"')
raise ValueError("date in incorrect format")
if __name__ == '__main__':
main() # Standard boilerplate to call the main() function to begin the program.
|
[
"argparse.ArgumentParser",
"os.access",
"os.path.split",
"os.path.isfile",
"datetime.datetime.now",
"pandas.datetime.strptime",
"sys.exit",
"import_npy.import_npy",
"os.remove"
] |
[((520, 677), 'argparse.ArgumentParser', 'argparse.ArgumentParser', ([], {'description': '"""A tool to extract physical activity information from\n raw accelerometer files."""', 'add_help': '(True)'}), '(description=\n """A tool to extract physical activity information from\n raw accelerometer files."""\n , add_help=True)\n', (543, 677), False, 'import argparse\n'), ((10327, 10350), 'datetime.datetime.now', 'datetime.datetime.now', ([], {}), '()\n', (10348, 10350), False, 'import datetime\n'), ((11027, 11054), 'os.path.split', 'os.path.split', (['args.rawFile'], {}), '(args.rawFile)\n', (11040, 11054), False, 'import os\n'), ((14647, 14671), 'import_npy.import_npy', 'import_npy', (['args.rawFile'], {}), '(args.rawFile)\n', (14657, 14671), False, 'from import_npy import import_npy\n'), ((15165, 15188), 'datetime.datetime.now', 'datetime.datetime.now', ([], {}), '()\n', (15186, 15188), False, 'import datetime\n'), ((10288, 10300), 'sys.exit', 'sys.exit', (['(-1)'], {}), '(-1)\n', (10296, 10300), False, 'import sys\n'), ((10734, 10762), 'os.path.isfile', 'os.path.isfile', (['args.rawFile'], {}), '(args.rawFile)\n', (10748, 10762), False, 'import os\n'), ((10958, 10970), 'sys.exit', 'sys.exit', (['(-2)'], {}), '(-2)\n', (10966, 10970), False, 'import sys\n'), ((11458, 11470), 'sys.exit', 'sys.exit', (['(-3)'], {}), '(-3)\n', (11466, 11470), False, 'import sys\n'), ((12967, 12979), 'sys.exit', 'sys.exit', (['(-4)'], {}), '(-4)\n', (12975, 12979), False, 'import sys\n'), ((14878, 14908), 'os.remove', 'os.remove', (['args.stationaryFile'], {}), '(args.stationaryFile)\n', (14887, 14908), False, 'import os\n'), ((14921, 14946), 'os.remove', 'os.remove', (['args.epochFile'], {}), '(args.epochFile)\n', (14930, 14946), False, 'import os\n'), ((14959, 14996), 'os.remove', 'os.remove', (["(args.rawFile[:-4] + '.npy')"], {}), "(args.rawFile[:-4] + '.npy')\n", (14968, 14996), False, 'import os\n'), ((11354, 11378), 'os.access', 'os.access', (['path', 'os.F_OK'], {}), '(path, os.F_OK)\n', (11363, 11378), False, 'import os\n'), ((16356, 16397), 'pandas.datetime.strptime', 'pd.datetime.strptime', (['v', '"""%Y-%m-%dT%H:%M"""'], {}), "(v, '%Y-%m-%dT%H:%M')\n", (16376, 16397), True, 'import pandas as pd\n')]
|
from django.shortcuts import render
from .models import VetsInfoTable
# Create your views here.
def home(request):
context = {
"name": "Home"
}
return render(request, 'index.html', context)
def view_vets(request):
obj = VetsInfoTable.objects.all()
context = {
"vets_data": obj
}
return render(request, 'vets/vets.html', context)
|
[
"django.shortcuts.render"
] |
[((176, 214), 'django.shortcuts.render', 'render', (['request', '"""index.html"""', 'context'], {}), "(request, 'index.html', context)\n", (182, 214), False, 'from django.shortcuts import render\n'), ((339, 381), 'django.shortcuts.render', 'render', (['request', '"""vets/vets.html"""', 'context'], {}), "(request, 'vets/vets.html', context)\n", (345, 381), False, 'from django.shortcuts import render\n')]
|
import sys
import math
import random
# Figure out what we should name our output file, and how big it should be
if len(sys.argv) != 3: # Make sure we get a file argument, and only that
print("Incorrect number of arguments found, should be \"generate <file> 10^<x>\"")
for i in range(10):
with open("./gen/%s%d" % (sys.argv[1], i), "w") as file:
for x in range(pow(10, int(sys.argv[2]))):
xNum = random.randint(1, 10000)
yNum = random.randint(1, 10000)
file.write("%d %d\n" % (xNum, yNum))
|
[
"random.randint"
] |
[((425, 449), 'random.randint', 'random.randint', (['(1)', '(10000)'], {}), '(1, 10000)\n', (439, 449), False, 'import random\n'), ((469, 493), 'random.randint', 'random.randint', (['(1)', '(10000)'], {}), '(1, 10000)\n', (483, 493), False, 'import random\n')]
|
import pytest
from easydict import EasyDict
import numpy as np
import gym
from copy import deepcopy
from ding.envs.env import check_array_space, check_different_memory, check_all, demonstrate_correct_procedure
from ding.envs.env.tests import DemoEnv
@pytest.mark.unittest
def test_an_implemented_env():
demo_env = DemoEnv({})
check_all(demo_env)
demonstrate_correct_procedure(DemoEnv)
@pytest.mark.unittest
def test_check_array_space():
seq_array = (np.array([1, 2, 3], dtype=np.int64), np.array([4., 5., 6.], dtype=np.float32))
seq_space = [gym.spaces.Box(low=0, high=10, shape=(3, ), dtype=np.int64) for _ in range(2)]
with pytest.raises(AssertionError):
check_array_space(seq_array, seq_space, 'test_sequence')
dict_array = {'a': np.array([1, 2, 3], dtype=np.int64), 'b': np.array([4., 5., 6.], dtype=np.float32)}
int_box = gym.spaces.Box(low=0, high=10, shape=(3, ), dtype=np.int64)
dict_space = {'a': deepcopy(int_box), 'b': deepcopy(int_box)}
with pytest.raises(AssertionError):
check_array_space(dict_array, dict_space, 'test_dict')
with pytest.raises(TypeError):
check_array_space(1, dict_space, 'test_type_error')
@pytest.mark.unittest
def test_check_different_memory():
int_seq = np.array([1, 2, 3], dtype=np.int64)
seq_array1 = (int_seq, np.array([4., 5., 6.], dtype=np.float32))
seq_array2 = (int_seq, np.array([4., 5., 6.], dtype=np.float32))
with pytest.raises(AssertionError):
check_different_memory(seq_array1, seq_array2, -1)
dict_array1 = {'a': np.array([4., 5., 6.], dtype=np.float32), 'b': int_seq}
dict_array2 = {'a': np.array([4., 5., 6.], dtype=np.float32), 'b': int_seq}
with pytest.raises(AssertionError):
check_different_memory(dict_array1, dict_array2, -1)
with pytest.raises(AssertionError):
check_different_memory(1, dict_array1, -1)
with pytest.raises(TypeError):
check_different_memory(1, 2, -1)
|
[
"ding.envs.env.tests.DemoEnv",
"ding.envs.env.demonstrate_correct_procedure",
"gym.spaces.Box",
"numpy.array",
"pytest.raises",
"ding.envs.env.check_array_space",
"ding.envs.env.check_different_memory",
"copy.deepcopy",
"ding.envs.env.check_all"
] |
[((321, 332), 'ding.envs.env.tests.DemoEnv', 'DemoEnv', (['{}'], {}), '({})\n', (328, 332), False, 'from ding.envs.env.tests import DemoEnv\n'), ((337, 356), 'ding.envs.env.check_all', 'check_all', (['demo_env'], {}), '(demo_env)\n', (346, 356), False, 'from ding.envs.env import check_array_space, check_different_memory, check_all, demonstrate_correct_procedure\n'), ((361, 399), 'ding.envs.env.demonstrate_correct_procedure', 'demonstrate_correct_procedure', (['DemoEnv'], {}), '(DemoEnv)\n', (390, 399), False, 'from ding.envs.env import check_array_space, check_different_memory, check_all, demonstrate_correct_procedure\n'), ((873, 931), 'gym.spaces.Box', 'gym.spaces.Box', ([], {'low': '(0)', 'high': '(10)', 'shape': '(3,)', 'dtype': 'np.int64'}), '(low=0, high=10, shape=(3,), dtype=np.int64)\n', (887, 931), False, 'import gym\n'), ((1271, 1306), 'numpy.array', 'np.array', (['[1, 2, 3]'], {'dtype': 'np.int64'}), '([1, 2, 3], dtype=np.int64)\n', (1279, 1306), True, 'import numpy as np\n'), ((471, 506), 'numpy.array', 'np.array', (['[1, 2, 3]'], {'dtype': 'np.int64'}), '([1, 2, 3], dtype=np.int64)\n', (479, 506), True, 'import numpy as np\n'), ((508, 551), 'numpy.array', 'np.array', (['[4.0, 5.0, 6.0]'], {'dtype': 'np.float32'}), '([4.0, 5.0, 6.0], dtype=np.float32)\n', (516, 551), True, 'import numpy as np\n'), ((567, 625), 'gym.spaces.Box', 'gym.spaces.Box', ([], {'low': '(0)', 'high': '(10)', 'shape': '(3,)', 'dtype': 'np.int64'}), '(low=0, high=10, shape=(3,), dtype=np.int64)\n', (581, 625), False, 'import gym\n'), ((655, 684), 'pytest.raises', 'pytest.raises', (['AssertionError'], {}), '(AssertionError)\n', (668, 684), False, 'import pytest\n'), ((694, 750), 'ding.envs.env.check_array_space', 'check_array_space', (['seq_array', 'seq_space', '"""test_sequence"""'], {}), "(seq_array, seq_space, 'test_sequence')\n", (711, 750), False, 'from ding.envs.env import check_array_space, check_different_memory, check_all, demonstrate_correct_procedure\n'), ((775, 810), 'numpy.array', 'np.array', (['[1, 2, 3]'], {'dtype': 'np.int64'}), '([1, 2, 3], dtype=np.int64)\n', (783, 810), True, 'import numpy as np\n'), ((817, 860), 'numpy.array', 'np.array', (['[4.0, 5.0, 6.0]'], {'dtype': 'np.float32'}), '([4.0, 5.0, 6.0], dtype=np.float32)\n', (825, 860), True, 'import numpy as np\n'), ((956, 973), 'copy.deepcopy', 'deepcopy', (['int_box'], {}), '(int_box)\n', (964, 973), False, 'from copy import deepcopy\n'), ((980, 997), 'copy.deepcopy', 'deepcopy', (['int_box'], {}), '(int_box)\n', (988, 997), False, 'from copy import deepcopy\n'), ((1008, 1037), 'pytest.raises', 'pytest.raises', (['AssertionError'], {}), '(AssertionError)\n', (1021, 1037), False, 'import pytest\n'), ((1047, 1101), 'ding.envs.env.check_array_space', 'check_array_space', (['dict_array', 'dict_space', '"""test_dict"""'], {}), "(dict_array, dict_space, 'test_dict')\n", (1064, 1101), False, 'from ding.envs.env import check_array_space, check_different_memory, check_all, demonstrate_correct_procedure\n'), ((1112, 1136), 'pytest.raises', 'pytest.raises', (['TypeError'], {}), '(TypeError)\n', (1125, 1136), False, 'import pytest\n'), ((1146, 1197), 'ding.envs.env.check_array_space', 'check_array_space', (['(1)', 'dict_space', '"""test_type_error"""'], {}), "(1, dict_space, 'test_type_error')\n", (1163, 1197), False, 'from ding.envs.env import check_array_space, check_different_memory, check_all, demonstrate_correct_procedure\n'), ((1334, 1377), 'numpy.array', 'np.array', (['[4.0, 5.0, 6.0]'], {'dtype': 'np.float32'}), '([4.0, 5.0, 6.0], 
dtype=np.float32)\n', (1342, 1377), True, 'import numpy as np\n'), ((1403, 1446), 'numpy.array', 'np.array', (['[4.0, 5.0, 6.0]'], {'dtype': 'np.float32'}), '([4.0, 5.0, 6.0], dtype=np.float32)\n', (1411, 1446), True, 'import numpy as np\n'), ((1454, 1483), 'pytest.raises', 'pytest.raises', (['AssertionError'], {}), '(AssertionError)\n', (1467, 1483), False, 'import pytest\n'), ((1493, 1543), 'ding.envs.env.check_different_memory', 'check_different_memory', (['seq_array1', 'seq_array2', '(-1)'], {}), '(seq_array1, seq_array2, -1)\n', (1515, 1543), False, 'from ding.envs.env import check_array_space, check_different_memory, check_all, demonstrate_correct_procedure\n'), ((1569, 1612), 'numpy.array', 'np.array', (['[4.0, 5.0, 6.0]'], {'dtype': 'np.float32'}), '([4.0, 5.0, 6.0], dtype=np.float32)\n', (1577, 1612), True, 'import numpy as np\n'), ((1649, 1692), 'numpy.array', 'np.array', (['[4.0, 5.0, 6.0]'], {'dtype': 'np.float32'}), '([4.0, 5.0, 6.0], dtype=np.float32)\n', (1657, 1692), True, 'import numpy as np\n'), ((1714, 1743), 'pytest.raises', 'pytest.raises', (['AssertionError'], {}), '(AssertionError)\n', (1727, 1743), False, 'import pytest\n'), ((1753, 1805), 'ding.envs.env.check_different_memory', 'check_different_memory', (['dict_array1', 'dict_array2', '(-1)'], {}), '(dict_array1, dict_array2, -1)\n', (1775, 1805), False, 'from ding.envs.env import check_array_space, check_different_memory, check_all, demonstrate_correct_procedure\n'), ((1816, 1845), 'pytest.raises', 'pytest.raises', (['AssertionError'], {}), '(AssertionError)\n', (1829, 1845), False, 'import pytest\n'), ((1855, 1897), 'ding.envs.env.check_different_memory', 'check_different_memory', (['(1)', 'dict_array1', '(-1)'], {}), '(1, dict_array1, -1)\n', (1877, 1897), False, 'from ding.envs.env import check_array_space, check_different_memory, check_all, demonstrate_correct_procedure\n'), ((1907, 1931), 'pytest.raises', 'pytest.raises', (['TypeError'], {}), '(TypeError)\n', (1920, 1931), False, 'import pytest\n'), ((1941, 1973), 'ding.envs.env.check_different_memory', 'check_different_memory', (['(1)', '(2)', '(-1)'], {}), '(1, 2, -1)\n', (1963, 1973), False, 'from ding.envs.env import check_array_space, check_different_memory, check_all, demonstrate_correct_procedure\n')]
|
import random
from floodsystem.utils import sorted_by_key # noqa
from floodsystem.geo import stations_by_distance, stations_within_radius, rivers_with_station, stations_by_river,rivers_by_station_number
from floodsystem.stationdata import build_station_list
'''def test_geo():
#Task 1A
#does the function give an output & if it's a list:
out = build_station_list()
assert type(out) == list
#checking that list is a reasonable length
assert len(out) >1700
assert len(out) <2500'''
#Task 1B
def test_stations_by_distance():
stations = build_station_list()
p = (52.2053, 0.1218)#putting in Cambridge value from task
out = stations_by_distance(stations, p)
#check that list is returned
assert type(out) == list
#check that items are tuples
assert type(out[0]) == tuple
#check that first station is Jesus Lock
assert out[0] == ('Cambridge Jes<NAME>', 'Cambridge', 0.840237595667494)
#check that furthest station is Penberth
assert out[-1] == ('Penberth', 'Penberth', 467.53431870130544)
#Task 1C
def test_stations_within_radius():
stations = build_station_list()
out = stations_within_radius(stations, (52.2053, 0.1218), 10)
#check that list is returned
assert type(out) == list
#checking first value, which is checking the sorting, too
assert out[0] == 'Bin Brook'
#checking length of list
assert len(out) == 11
#Task 1D
def test_rivers_with_station():
stations = build_station_list()
out = rivers_with_station(stations)
#check that out is a set
assert type(out) == set
#check that each item in list is a string
out = list(out)
assert type(out[0]) == str
#check that out is of a reasonable length - no. of stations might change in the future?
assert len(out) > 900
assert len(out) < 1000
#checking for duplicates
#if set(out) is shorter than list (out), then there are duplicates
assert len(out) == len(set(out))
def test_stations_by_rivers():
    stations = build_station_list()
out = stations_by_river(stations)
#check that output is a dictionary
assert type(out) == dict
#check number of stations listed for Aire:
aire = out['River Aire']
assert len(aire) ==24
#check that it's a list
assert type(out['River Thames']) == list
#Task1E
def test_rivers_by_station_number():
stations = build_station_list()
N = random.randint(0,9)
out = rivers_by_station_number(stations, N)
#check that output is a list
assert type(out)==list
#check that items are tuples
#assert(type(out[0])) == tuple
#check that list is of length N
assert len(out) == N
#checking that list is sorted by number of stations
ret = sorted_by_key(out, 1, reverse=True)#sorting the list by decreasing distance from p (the 3rd thing in tuple - distance_p)
assert ret == out
|
[
"floodsystem.geo.rivers_with_station",
"floodsystem.geo.stations_by_distance",
"floodsystem.geo.stations_within_radius",
"floodsystem.utils.sorted_by_key",
"floodsystem.geo.stations_by_river",
"floodsystem.geo.rivers_by_station_number",
"floodsystem.stationdata.build_station_list",
"random.randint"
] |
[((579, 599), 'floodsystem.stationdata.build_station_list', 'build_station_list', ([], {}), '()\n', (597, 599), False, 'from floodsystem.stationdata import build_station_list\n'), ((673, 706), 'floodsystem.geo.stations_by_distance', 'stations_by_distance', (['stations', 'p'], {}), '(stations, p)\n', (693, 706), False, 'from floodsystem.geo import stations_by_distance, stations_within_radius, rivers_with_station, stations_by_river, rivers_by_station_number\n'), ((1140, 1160), 'floodsystem.stationdata.build_station_list', 'build_station_list', ([], {}), '()\n', (1158, 1160), False, 'from floodsystem.stationdata import build_station_list\n'), ((1175, 1230), 'floodsystem.geo.stations_within_radius', 'stations_within_radius', (['stations', '(52.2053, 0.1218)', '(10)'], {}), '(stations, (52.2053, 0.1218), 10)\n', (1197, 1230), False, 'from floodsystem.geo import stations_by_distance, stations_within_radius, rivers_with_station, stations_by_river, rivers_by_station_number\n'), ((1523, 1543), 'floodsystem.stationdata.build_station_list', 'build_station_list', ([], {}), '()\n', (1541, 1543), False, 'from floodsystem.stationdata import build_station_list\n'), ((1554, 1583), 'floodsystem.geo.rivers_with_station', 'rivers_with_station', (['stations'], {}), '(stations)\n', (1573, 1583), False, 'from floodsystem.geo import stations_by_distance, stations_within_radius, rivers_with_station, stations_by_river, rivers_by_station_number\n'), ((2096, 2123), 'floodsystem.geo.stations_by_river', 'stations_by_river', (['stations'], {}), '(stations)\n', (2113, 2123), False, 'from floodsystem.geo import stations_by_distance, stations_within_radius, rivers_with_station, stations_by_river, rivers_by_station_number\n'), ((2431, 2451), 'floodsystem.stationdata.build_station_list', 'build_station_list', ([], {}), '()\n', (2449, 2451), False, 'from floodsystem.stationdata import build_station_list\n'), ((2460, 2480), 'random.randint', 'random.randint', (['(0)', '(9)'], {}), '(0, 9)\n', (2474, 2480), False, 'import random\n'), ((2490, 2527), 'floodsystem.geo.rivers_by_station_number', 'rivers_by_station_number', (['stations', 'N'], {}), '(stations, N)\n', (2514, 2527), False, 'from floodsystem.geo import stations_by_distance, stations_within_radius, rivers_with_station, stations_by_river, rivers_by_station_number\n'), ((2783, 2818), 'floodsystem.utils.sorted_by_key', 'sorted_by_key', (['out', '(1)'], {'reverse': '(True)'}), '(out, 1, reverse=True)\n', (2796, 2818), False, 'from floodsystem.utils import sorted_by_key\n')]
|
# -*- coding: utf-8 -*-
"""
Authors: <NAME>
UNESCO-IHE 2017
Contact: <EMAIL>
Repository: https://github.com/wateraccounting/wa
Module: Generator/Sheet3
"""
import os
def Create(Dir_Basin, Basin, Simulation, Dir_Basin_CSV_a, Dir_Basin_CSV_b):
"""
This functions create the monthly and yearly sheet 3 in pdf format, based on the csv files.
Parameters
----------
Dir_Basin : str
Path to all the output data of the Basin
Basin : str
Name of the basin
Simulation : int
Defines the simulation
Dir_Basin_CSV_a : str
Data path pointing to the CSV output files for sheet a
Dir_Basin_CSV_b : str
Data path pointing to the CSV output files for sheet b
"""
# import wa module
from watools.Sheets import create_sheet3
# Create output folder for PDF files
Dir_Basin_PDF = os.path.join(Dir_Basin, "Simulations", "Simulation_%d" %Simulation, "PDF")
if not os.path.exists(Dir_Basin_PDF):
os.mkdir(Dir_Basin_PDF)
# Create output filename for PDFs
FileName_Splitted = Dir_Basin_CSV_a.split('_')
Year = str(FileName_Splitted[-1].split('.')[0])
outFile_a = os.path.join(Dir_Basin_PDF,'Sheet3a_Sim%s_%s_%s.pdf' %(Simulation, Basin, Year))
outFile_b = os.path.join(Dir_Basin_PDF,'Sheet3b_Sim%s_%s_%s.pdf' %(Simulation, Basin, Year))
# Create PDFs
sheet3a_fh, sheet3b_fh = create_sheet3(Basin, str(Year), ['km3/year', 'kg/ha/year', 'kg/m3'], [Dir_Basin_CSV_a, Dir_Basin_CSV_b], [outFile_a, outFile_b])
return()
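# Hedged usage sketch (illustrative only, not part of the original module): the basin
# name, simulation index and CSV paths below are hypothetical placeholders showing how
# Create() is meant to be called; the year in the PDF names is parsed from the CSV name.
def _example_create_sheet3():
    Create(Dir_Basin='/data/basins/ExampleBasin',
           Basin='ExampleBasin',
           Simulation=1,
           Dir_Basin_CSV_a='/data/basins/ExampleBasin/CSV/Sheet3a_2010.csv',
           Dir_Basin_CSV_b='/data/basins/ExampleBasin/CSV/Sheet3b_2010.csv')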
|
[
"os.path.exists",
"os.path.join",
"os.mkdir"
] |
[((866, 941), 'os.path.join', 'os.path.join', (['Dir_Basin', '"""Simulations"""', "('Simulation_%d' % Simulation)", '"""PDF"""'], {}), "(Dir_Basin, 'Simulations', 'Simulation_%d' % Simulation, 'PDF')\n", (878, 941), False, 'import os\n'), ((1173, 1259), 'os.path.join', 'os.path.join', (['Dir_Basin_PDF', "('Sheet3a_Sim%s_%s_%s.pdf' % (Simulation, Basin, Year))"], {}), "(Dir_Basin_PDF, 'Sheet3a_Sim%s_%s_%s.pdf' % (Simulation, Basin,\n Year))\n", (1185, 1259), False, 'import os\n'), ((1270, 1356), 'os.path.join', 'os.path.join', (['Dir_Basin_PDF', "('Sheet3b_Sim%s_%s_%s.pdf' % (Simulation, Basin, Year))"], {}), "(Dir_Basin_PDF, 'Sheet3b_Sim%s_%s_%s.pdf' % (Simulation, Basin,\n Year))\n", (1282, 1356), False, 'import os\n'), ((952, 981), 'os.path.exists', 'os.path.exists', (['Dir_Basin_PDF'], {}), '(Dir_Basin_PDF)\n', (966, 981), False, 'import os\n'), ((991, 1014), 'os.mkdir', 'os.mkdir', (['Dir_Basin_PDF'], {}), '(Dir_Basin_PDF)\n', (999, 1014), False, 'import os\n')]
|
from flask import Blueprint
from apps.auth.business.wxlogin import WxLoginBusiness
from apps.auth.extentions import validation, parse_json_form
from library.api.render import json_detail_render
wxlogin = Blueprint("wxlogin", __name__)
@wxlogin.route('/', methods=['POST'])
@validation('POST:wx_user_code')
def wxuser_index_handler():
"""
@api {post} /v1/wxlogin/ 登录 微信
@apiName WxLogin
@apiGroup 用户
@apiDescription 登录微信
@apiParam {string} user_code 用户编码
@apiParamExample {json} Request-Example:
{
"user_code":"j2qL3QjNXXwa_4A0WJFDNJyPEx88HTHytARgRbr176g"
}
@apiSuccessExample {json} Success-Response:
HTTP/1.1 200 OK
{
"code": 0,
"data": {
"token": "<PASSWORD>"
},
"message": ""
}
"""
user_code = parse_json_form('wx_user_code')
ret, data, msg = WxLoginBusiness.get_user(user_code[0])
return json_detail_render(ret, data, msg)
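# Hedged client-side sketch (illustrative, not part of this blueprint): calling the
# endpoint documented above with the `requests` library; the base URL and user_code
# value are hypothetical placeholders.
def _example_wxlogin_call():
    import requests
    resp = requests.post(
        'http://localhost:5000/v1/wxlogin/',
        json={'user_code': '<wx_user_code_from_the_mini_program>'},
    )
    # expected shape: {"code": 0, "data": {"token": "..."}, "message": ""}
    return resp.json()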
|
[
"apps.auth.extentions.parse_json_form",
"apps.auth.business.wxlogin.WxLoginBusiness.get_user",
"apps.auth.extentions.validation",
"library.api.render.json_detail_render",
"flask.Blueprint"
] |
[((206, 236), 'flask.Blueprint', 'Blueprint', (['"""wxlogin"""', '__name__'], {}), "('wxlogin', __name__)\n", (215, 236), False, 'from flask import Blueprint\n'), ((278, 309), 'apps.auth.extentions.validation', 'validation', (['"""POST:wx_user_code"""'], {}), "('POST:wx_user_code')\n", (288, 309), False, 'from apps.auth.extentions import validation, parse_json_form\n'), ((813, 844), 'apps.auth.extentions.parse_json_form', 'parse_json_form', (['"""wx_user_code"""'], {}), "('wx_user_code')\n", (828, 844), False, 'from apps.auth.extentions import validation, parse_json_form\n'), ((867, 905), 'apps.auth.business.wxlogin.WxLoginBusiness.get_user', 'WxLoginBusiness.get_user', (['user_code[0]'], {}), '(user_code[0])\n', (891, 905), False, 'from apps.auth.business.wxlogin import WxLoginBusiness\n'), ((918, 952), 'library.api.render.json_detail_render', 'json_detail_render', (['ret', 'data', 'msg'], {}), '(ret, data, msg)\n', (936, 952), False, 'from library.api.render import json_detail_render\n')]
|
from flask_socketio import SocketIO
NOTI = 'notification'
class MySocket():
def __init__(self, app, async_mode):
self.socketio = SocketIO(app, async_mode=async_mode)
def get_socketio(self):
return self.socketio
def noti_emit(self, msg, room=None):
if room:
self.socketio.emit(NOTI, {'data': msg}, namespace='/noti', room=room)
else:
self.socketio.emit(NOTI, {'data': msg}, namespace='/noti')
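# Hedged usage sketch (illustrative, not part of the original module): wiring MySocket
# into a Flask app; the async mode, message text and room name are assumptions.
if __name__ == '__main__':
    from flask import Flask
    demo_app = Flask(__name__)
    ws = MySocket(demo_app, async_mode='threading')   # 'threading' needs no extra deps
    ws.noti_emit('job finished')                       # broadcast on the /noti namespace
    ws.noti_emit('job finished', room='user_42')       # or target a single room
    ws.get_socketio().run(demo_app)                    # start the socket-enabled dev server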
|
[
"flask_socketio.SocketIO"
] |
[((146, 182), 'flask_socketio.SocketIO', 'SocketIO', (['app'], {'async_mode': 'async_mode'}), '(app, async_mode=async_mode)\n', (154, 182), False, 'from flask_socketio import SocketIO\n')]
|
import pytest
from weaverbird.backends.sql_translator.metadata import SqlQueryMetadataManager
from weaverbird.backends.sql_translator.steps import translate_filter
from weaverbird.backends.sql_translator.types import SQLQuery
from weaverbird.pipeline.conditions import ComparisonCondition
from weaverbird.pipeline.steps import FilterStep
def test_translate_filter(mocker):
step = FilterStep(
name='filter', condition=ComparisonCondition(column='amount', operator='eq', value=10)
)
query = SQLQuery(
query_name='SELECT_STEP_0',
transformed_query='WITH SELECT_STEP_0 AS (SELECT TOTO, TATA FROM products)',
selection_query='SELECT TOTO, TATA FROM SELECT_STEP_0',
metadata_manager=SqlQueryMetadataManager(
tables_metadata={'table1': {'toto': 'text', 'tata': 'int'}},
),
)
mocker.patch(
'weaverbird.backends.sql_translator.steps.utils.query_transformation.apply_condition',
return_value='SELECT TOTO, TATA FROM SELECT_STEP_0 WHERE amount = 10',
)
res = translate_filter(step, query, index=1)
assert (
res.transformed_query
== 'WITH SELECT_STEP_0 AS (SELECT TOTO, TATA FROM products), FILTER_STEP_1 AS (SELECT TOTO, TATA FROM '
'SELECT_STEP_0 WHERE amount = 10)'
)
assert res.selection_query == 'SELECT TOTO, TATA FROM FILTER_STEP_1'
def test_translate_filter_error(mocker):
step = FilterStep(
name='filter', condition=ComparisonCondition(column='amount', operator='eq', value=10)
)
query = SQLQuery(
query_name='SELECT_STEP_0',
transformed_query='WITH SELECT_STEP_0 AS (SELECT * FROM products), SELECT * FROM SELECT_STEP_0',
selection_query='SELECT * FROM SELECT_STEP_0',
metadata_manager=SqlQueryMetadataManager(
tables_metadata={'table1': {'toto': 'text', 'tata': 'int'}},
),
)
mocker.patch(
'weaverbird.backends.sql_translator.steps.filter.apply_condition',
side_effect=NotImplementedError,
)
with pytest.raises(NotImplementedError):
translate_filter(step, query, index=1)
|
[
"weaverbird.backends.sql_translator.metadata.SqlQueryMetadataManager",
"weaverbird.pipeline.conditions.ComparisonCondition",
"pytest.raises",
"weaverbird.backends.sql_translator.steps.translate_filter"
] |
[((1055, 1093), 'weaverbird.backends.sql_translator.steps.translate_filter', 'translate_filter', (['step', 'query'], {'index': '(1)'}), '(step, query, index=1)\n', (1071, 1093), False, 'from weaverbird.backends.sql_translator.steps import translate_filter\n'), ((2045, 2079), 'pytest.raises', 'pytest.raises', (['NotImplementedError'], {}), '(NotImplementedError)\n', (2058, 2079), False, 'import pytest\n'), ((2089, 2127), 'weaverbird.backends.sql_translator.steps.translate_filter', 'translate_filter', (['step', 'query'], {'index': '(1)'}), '(step, query, index=1)\n', (2105, 2127), False, 'from weaverbird.backends.sql_translator.steps import translate_filter\n'), ((432, 493), 'weaverbird.pipeline.conditions.ComparisonCondition', 'ComparisonCondition', ([], {'column': '"""amount"""', 'operator': '"""eq"""', 'value': '(10)'}), "(column='amount', operator='eq', value=10)\n", (451, 493), False, 'from weaverbird.pipeline.conditions import ComparisonCondition\n'), ((732, 820), 'weaverbird.backends.sql_translator.metadata.SqlQueryMetadataManager', 'SqlQueryMetadataManager', ([], {'tables_metadata': "{'table1': {'toto': 'text', 'tata': 'int'}}"}), "(tables_metadata={'table1': {'toto': 'text', 'tata':\n 'int'}})\n", (755, 820), False, 'from weaverbird.backends.sql_translator.metadata import SqlQueryMetadataManager\n'), ((1470, 1531), 'weaverbird.pipeline.conditions.ComparisonCondition', 'ComparisonCondition', ([], {'column': '"""amount"""', 'operator': '"""eq"""', 'value': '(10)'}), "(column='amount', operator='eq', value=10)\n", (1489, 1531), False, 'from weaverbird.pipeline.conditions import ComparisonCondition\n'), ((1781, 1869), 'weaverbird.backends.sql_translator.metadata.SqlQueryMetadataManager', 'SqlQueryMetadataManager', ([], {'tables_metadata': "{'table1': {'toto': 'text', 'tata': 'int'}}"}), "(tables_metadata={'table1': {'toto': 'text', 'tata':\n 'int'}})\n", (1804, 1869), False, 'from weaverbird.backends.sql_translator.metadata import SqlQueryMetadataManager\n')]
|
import json
import sqlite3
import typing
from typing import Optional, Dict
from copy import deepcopy
from contextlib import suppress
from bson.objectid import ObjectId
from pymongo import MongoClient
from pymongo.errors import ConfigurationError
class MongoStorageException(Exception):
pass
class Mongo:
def __init__(self, connection_string, *, collection=None, db_name=None):
if collection is None:
raise MongoStorageException('`collection` is required')
self._driver = MongoClient(connection_string)
with suppress(ConfigurationError):
db = self._driver.get_database(db_name)
if db is None:
raise MongoStorageException(
'You have to specify database name '
'either in connection string or as `db_name` parameter')
self._collection = db.get_collection(collection)
def create(self, data: dict, auto_generate_id=False):
data = deepcopy(data)
if "id" in data:
data["_id"] = data["id"]
del data["id"]
elif not auto_generate_id:
raise MongoStorageException("`id` required")
object_id = self._collection.insert_one(data).inserted_id
if isinstance(object_id, ObjectId):
object_id = str(object_id)
return object_id
def get(self, object_id, fail_if_not_exists=False):
query = object_id if isinstance(object_id, dict) else {"_id": object_id}
data = self._collection.find_one(query)
if fail_if_not_exists and not data:
raise MongoStorageException(f"Object not found. Query: {query}")
if data:
data["id"] = data["_id"]
del data["_id"]
return data
def get_all(self, query):
for document in self._collection.find(query):
document["id"] = document["_id"]
del document["_id"]
yield document
def save(self, data: dict):
object_id = data.get("id")
if object_id:
data["_id"] = object_id
del data["id"]
query = {"_id": object_id}
result = self._collection.update_one(query, {"$set": data})
if result.matched_count == 0:
raise MongoStorageException(f"Object `{object_id}` not found")
data["id"] = object_id
else:
object_id = str(self._collection.insert_one(data).inserted_id)
if isinstance(object_id, ObjectId):
object_id = str(object_id)
data["id"] = object_id
return object_id
def delete(self, object_id, check_deleted_count=True):
result = self._collection.delete_one({"_id": object_id})
if check_deleted_count and result.deleted_count != 1:
raise MongoStorageException(f"Object `{object_id}` not found")
def close(self):
self._driver.close()
class Sqlite:
def __init__(self, dirpath: str, *, collection: str = None, db_name: str = None) -> None:
db_filepath = f'{dirpath}/{db_name}'
self._connection = sqlite3.connect(db_filepath)
self._table_name = collection
self.__execute_sql(
f'CREATE TABLE IF NOT EXISTS {collection}'
'(id INTEGER PRIMARY KEY AUTOINCREMENT, data TEXT)'
)
def __execute_sql(self, sql: str, *args) -> sqlite3.Cursor:
sql = sql.strip().upper()
cursor = self._connection.execute(sql, args)
if not sql.startswith('SELECT'):
self._connection.commit()
return cursor
def create(self, data: dict, auto_generate_id: bool = False) -> int:
table = self._table_name
if auto_generate_id:
if 'id' in data:
del data['id']
sql = f'INSERT INTO {table}(data) VALUES(?)'
cursor = self.__execute_sql(sql, json.dumps(data))
else:
object_id = data['id']
if object_id <= 0:
raise Exception(f'Invalid id `{object_id}`. Id must be >= 0')
cursor = self.__execute_sql(f'INSERT INTO {table}(id, data) VALUES(?, ?)',
object_id, json.dumps(data))
return cursor.lastrowid
def get(self, object_id: int, fail_if_not_exists: bool = False) -> Dict:
query = object_id if isinstance(object_id, dict) else {'id': object_id}
objects = self.get_all(query)
result = list(objects)
if fail_if_not_exists and not result:
raise Exception(f'Object not found. Query: {query}')
return result and result[0]
def get_all(self, query: Optional[Dict] = None) -> typing.Generator:
if query is None:
query = {}
table = self._table_name
args = tuple()
if 'id' in query:
sql = f'SELECT id, data FROM {table} WHERE id=?'
args = (query['id'],)
else:
sql = f'SELECT id, data FROM {table}'
cursor = self.__execute_sql(sql, *args)
objects = cursor.fetchall()
if not objects:
return tuple()
for object_id, json_data in objects:
data = json.loads(json_data)
data['id'] = object_id
if self._check_query(data, query):
yield data
def _check_query(self, document: dict, query: dict) -> bool:
matched = True
for k, query_value in query.items():
key_value = document
for name in k.split('.'):
key_value = key_value.get(name) if isinstance(key_value, dict) else None
if key_value is None:
break
if isinstance(query_value, dict):
matched = self._check_query(key_value, query_value)
else:
matched = key_value == query_value
if not matched:
return False
return matched
def save(self, data: dict) -> int:
object_id = data['id']
if not object_id:
object_id = self.create(data, auto_generate_id=True)
else:
table = self._table_name
sql = f'UPDATE {table} SET data=? WHERE id=?'
cursor = self.__execute_sql(sql, json.dumps(data), object_id)
if cursor.rowcount == 0:
raise Exception(f'Object `{object_id}` not found')
return object_id
def delete(self, object_id: int, check_deleted_count: bool = True) -> None:
table = self._table_name
sql = f'DELETE FROM {table} WHERE id=?'
cursor = self.__execute_sql(sql, object_id)
if check_deleted_count and cursor.rowcount != 1:
raise Exception(f'Object `{object_id}` not found')
def close(self) -> None:
self._connection.commit()
self._connection.close()
|
[
"json.loads",
"sqlite3.connect",
"json.dumps",
"contextlib.suppress",
"copy.deepcopy",
"pymongo.MongoClient"
] |
[((512, 542), 'pymongo.MongoClient', 'MongoClient', (['connection_string'], {}), '(connection_string)\n', (523, 542), False, 'from pymongo import MongoClient\n'), ((959, 973), 'copy.deepcopy', 'deepcopy', (['data'], {}), '(data)\n', (967, 973), False, 'from copy import deepcopy\n'), ((3086, 3114), 'sqlite3.connect', 'sqlite3.connect', (['db_filepath'], {}), '(db_filepath)\n', (3101, 3114), False, 'import sqlite3\n'), ((556, 584), 'contextlib.suppress', 'suppress', (['ConfigurationError'], {}), '(ConfigurationError)\n', (564, 584), False, 'from contextlib import suppress\n'), ((5162, 5183), 'json.loads', 'json.loads', (['json_data'], {}), '(json_data)\n', (5172, 5183), False, 'import json\n'), ((3861, 3877), 'json.dumps', 'json.dumps', (['data'], {}), '(data)\n', (3871, 3877), False, 'import json\n'), ((4175, 4191), 'json.dumps', 'json.dumps', (['data'], {}), '(data)\n', (4185, 4191), False, 'import json\n'), ((6230, 6246), 'json.dumps', 'json.dumps', (['data'], {}), '(data)\n', (6240, 6246), False, 'import json\n')]
|
from testing_config import BaseTestConfig
from application.models import User
from application.models import Chatroom
import json
from application.utils import auth
class TestMatch(BaseTestConfig):
test_group = {
"name": "test_group",
"tag": "Poker",
}
test_group2 = {
"name": "test_group2",
"tag": "Study",
}
tag_p = {"query_tag": "Poker"}
tag_s = {"query_tag": "Study"}
tag_o = {"query_tag": "Outdoor"}
tag_l = {"query_tag": "Life"}
tag_t = {"query_tag": "Test"}
testrm_1 = {"room_id": 1}
testrm_2 = {"room_id": 185}
testrm_3 = {"room_id": 4}
def test_get_suggestions(self):
res = self.app.post(
"/api/get_suggestions",
data=json.dumps(self.tag_p),
content_type='application/json'
)
res1 = self.app.post(
"/api/get_suggestions",
data=json.dumps(self.tag_s),
content_type='application/json'
)
res2 = self.app.post(
"/api/get_suggestions",
data=json.dumps(self.tag_o),
content_type='application/json'
)
self.assertEqual(res.status_code,200)
self.assertEqual(res1.status_code,200)
self.assertEqual(res2.status_code,200)
res3 = self.app.post(
"/api/get_suggestions",
data=json.dumps(self.tag_t),
content_type='application/json'
)
self.assertEqual(res2.status_code,200)
def test_create_group(self):
res = self.app.post(
"/api/create_group",
data=json.dumps(self.test_group),
content_type='application/json'
)
self.assertEqual(json.loads(res.data.decode("utf-8"))["results"], 2)
self.assertEqual(res.status_code, 200)
res = self.app.post(
"/api/create_group",
data=json.dumps(self.test_group2),
content_type='application/json'
)
self.assertEqual(json.loads(res.data.decode("utf-8"))["results"], 3)
# def test_join_chatroom(self):
# res = self.app.post(
# "/api/join_chatroom",
# data=json.dumps(self.testrm_1),
# content_type='application/json'
# )
# res1 = self.app.post(
# "/api/join_chatroom",
# data=json.dumps(self.testrm_2),
# content_type='application/json'
# )
# res2 = self.app.post(
# "/api/join_chatroom",
# data=json.dumps(self.testrm_3),
# content_type='application/json'
# )
# self.assertEqual(res.status_code,201)
# self.assertEqual(res1.status_code,201)
# self.assertEqual(res2.status_code,201)
|
[
"json.dumps"
] |
[((748, 770), 'json.dumps', 'json.dumps', (['self.tag_p'], {}), '(self.tag_p)\n', (758, 770), False, 'import json\n'), ((909, 931), 'json.dumps', 'json.dumps', (['self.tag_s'], {}), '(self.tag_s)\n', (919, 931), False, 'import json\n'), ((1071, 1093), 'json.dumps', 'json.dumps', (['self.tag_o'], {}), '(self.tag_o)\n', (1081, 1093), False, 'import json\n'), ((1373, 1395), 'json.dumps', 'json.dumps', (['self.tag_t'], {}), '(self.tag_t)\n', (1383, 1395), False, 'import json\n'), ((1620, 1647), 'json.dumps', 'json.dumps', (['self.test_group'], {}), '(self.test_group)\n', (1630, 1647), False, 'import json\n'), ((1918, 1946), 'json.dumps', 'json.dumps', (['self.test_group2'], {}), '(self.test_group2)\n', (1928, 1946), False, 'import json\n')]
|
import sys
sys.path.insert(0, '/home/hena/caffe-ocr/buildcmake/install/python')
sys.path.insert(0, '/home/hena/tool/protobuf-3.1.0/python')
import caffe
import math
import numpy as np
def SoftMax(net_ans):
tmp_net = [math.exp(i) for i in net_ans]
sum_exp = sum(tmp_net)
return [i/sum_exp for i in tmp_net]
class AggregationCrossEntropyLayer(caffe.Layer):
"""
    Compute the Aggregation Cross Entropy loss for the OCR recognition plan
"""
def setup(self, bottom, top):
print("==============================================================Hi")
self.dict_size = 1220
if len(bottom) != 2:
raise Exception("Need two inputs to computer loss.")
def reshape(self, bottom, top):
self.diff = np.zeros_like(bottom[0].data, dtype=np.float32)
# top[0].reshape(*bottom[0].data.shape)
def forward(self, bottom, top):
print("==============================================================Hi1")
# score = bottom[0].data
# label = bottom[1].data
# print(score)
# print(type(score))
# print(score.shape)
# T_ = len(score)
self.diff[...] = bottom[0].data - bottom[1].data
top[0].data[...] = np.sum(self.diff**2) / bottom[0].num / 2.
def backward(self, top, propagate_down, bottom):
for i in range(2):
if not propagate_down[i]:
continue
if i == 0:
sign = 1
else:
sign = -1
bottom[i].diff[...] = sign * self.diff / bottom[i].num
def get_n_k(self, label):
pass
|
[
"numpy.sum",
"math.exp",
"sys.path.insert",
"numpy.zeros_like"
] |
[((11, 79), 'sys.path.insert', 'sys.path.insert', (['(0)', '"""/home/hena/caffe-ocr/buildcmake/install/python"""'], {}), "(0, '/home/hena/caffe-ocr/buildcmake/install/python')\n", (26, 79), False, 'import sys\n'), ((80, 139), 'sys.path.insert', 'sys.path.insert', (['(0)', '"""/home/hena/tool/protobuf-3.1.0/python"""'], {}), "(0, '/home/hena/tool/protobuf-3.1.0/python')\n", (95, 139), False, 'import sys\n'), ((223, 234), 'math.exp', 'math.exp', (['i'], {}), '(i)\n', (231, 234), False, 'import math\n'), ((747, 794), 'numpy.zeros_like', 'np.zeros_like', (['bottom[0].data'], {'dtype': 'np.float32'}), '(bottom[0].data, dtype=np.float32)\n', (760, 794), True, 'import numpy as np\n'), ((1234, 1256), 'numpy.sum', 'np.sum', (['(self.diff ** 2)'], {}), '(self.diff ** 2)\n', (1240, 1256), True, 'import numpy as np\n')]
|
"""
Tools for creating and working with Line (Station) Grids
"""
from typing import Union
import pyproj
import numpy as np
_atype = Union[type(None), np.ndarray]
_ptype = Union[type(None), pyproj.Proj]
class StaHGrid:
"""
Stations Grid
EXAMPLES:
--------
>>> x = arange(8)
>>> y = arange(8)*2-1
>>> grd = pyroms.grid.StaHGrid(x, y)
>>> print grd.x
    [0 1 2 3 4 5 6 7]
"""
def __init__(self, x: np.ndarray, y: np.ndarray, angle: _atype = None):
assert x.ndim == 1 and y.ndim == 1 and x.shape == y.shape, \
            'x and y must be 1D arrays of the same size.'
mask = np.isnan(x) | np.isnan(y)
if np.any(mask):
x = np.ma.masked_where(mask, x)
y = np.ma.masked_where(mask, y)
self.spherical = False
self._x, self._y = x, y
if angle is None:
self.angle = np.zeros(len(self.y))
else:
self.angle = angle
return
x = property(lambda self: self._x)
y = property(lambda self: self._y)
class StaHGridGeo(StaHGrid):
"""
Stations Grid
EXAMPLES:
--------
>>> lon = arange(8)
>>> lat = arange(8)*2-1
>>> proj = pyproj()
>>> grd = pyroms.grid.StaHGridGeo(lon, lat, proj)
>>> print grd.x
[xxx, xxx, xxx, xxx, xxx, xxx, xxx, xxx]
"""
def __init__(self, lon: np.ndarray, lat: np.ndarray,
x: _atype = None, y: _atype = None,
angle: _atype = None, proj: _ptype = None):
self.spherical = True
self._lon, self._lat = lon, lat
self.proj = proj
if x is not None and y is not None:
super(StaHGridGeo, self).__init__(x, y, angle)
self.spherical = True
else:
if proj is not None:
self._x, self._y = proj(lon, lat)
else:
raise ValueError('Projection transformer must be ' +
'provided if x/y are missing.')
return
@property
def lon(self):
return self._lon
@lon.setter
def lon(self, lon):
if self.proj is not None:
self.__init__(lon, self._lat, angle=self.angle, proj=self.proj)
else:
self._lon = lon
@property
def lat(self):
return self._lat
@lat.setter
def lat(self, lat):
if self.proj is not None:
self.__init__(self._lon, lat, angle=self.angle, proj=self.proj)
else:
self._lat = lat
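# Hedged usage sketch (illustrative, not part of the original module): the coordinates
# and UTM projection below are assumptions showing how the two grid classes are built.
if __name__ == '__main__':
    x = np.arange(8.0)
    y = np.arange(8.0) * 2 - 1
    grd = StaHGrid(x, y)                              # plain Cartesian station grid
    lon = np.linspace(14.0, 15.0, 8)                  # hypothetical longitudes
    lat = np.linspace(42.0, 43.0, 8)                  # hypothetical latitudes
    utm33 = pyproj.Proj(proj='utm', zone=33, ellps='WGS84')
    geo_grd = StaHGridGeo(lon, lat, proj=utm33)          # lon/lat projected to x/y in metres
    print(grd.x[:3], geo_grd.x[:3])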
|
[
"numpy.isnan",
"numpy.any",
"numpy.ma.masked_where"
] |
[((690, 702), 'numpy.any', 'np.any', (['mask'], {}), '(mask)\n', (696, 702), True, 'import numpy as np\n'), ((653, 664), 'numpy.isnan', 'np.isnan', (['x'], {}), '(x)\n', (661, 664), True, 'import numpy as np\n'), ((667, 678), 'numpy.isnan', 'np.isnan', (['y'], {}), '(y)\n', (675, 678), True, 'import numpy as np\n'), ((720, 747), 'numpy.ma.masked_where', 'np.ma.masked_where', (['mask', 'x'], {}), '(mask, x)\n', (738, 747), True, 'import numpy as np\n'), ((764, 791), 'numpy.ma.masked_where', 'np.ma.masked_where', (['mask', 'y'], {}), '(mask, y)\n', (782, 791), True, 'import numpy as np\n')]
|
import numpy as np
from ..reco.disp import disp_vector
import astropy.units as u
import matplotlib.pyplot as plt
from ctapipe.visualization import CameraDisplay
__all__ = [
'overlay_disp_vector',
'overlay_hillas_major_axis',
'overlay_source',
'display_dl1_event',
]
def display_dl1_event(event, camera_geometry, tel_id=1, axes=None, **kwargs):
"""
Display a DL1 event (image and pulse time map) side by side
Parameters
----------
event: ctapipe event
tel_id: int
axes: list of `matplotlib.pyplot.axes` of shape (2,) or None
kwargs: kwargs for `ctapipe.visualization.CameraDisplay`
Returns
-------
axes: `matplotlib.pyplot.axes`
"""
if axes is None:
fig, axes = plt.subplots(1, 2, figsize=(12, 5))
image = event.dl1.tel[tel_id].image
peak_time = event.dl1.tel[tel_id].peak_time
if image is None or peak_time is None:
raise Exception(f"There is no calibrated image or pulse time map for telescope {tel_id}")
d1 = CameraDisplay(camera_geometry, image, ax=axes[0], **kwargs)
d1.add_colorbar(ax=axes[0])
d2 = CameraDisplay(camera_geometry, peak_time, ax=axes[1], **kwargs)
d2.add_colorbar(ax=axes[1])
return axes
def overlay_source(display, source_pos_x, source_pos_y, **kwargs):
"""
Display the source (event) position in the camera
Parameters
----------
display: `ctapipe.visualization.CameraDisplay`
source_pos_x: `astropy.units.Quantity`
source_pos_y: `astropy.units.Quantity`
kwargs: args for `matplotlib.pyplot.scatter`
Returns
-------
`matplotlib.pyplot.axes`
"""
kwargs['marker'] = 'x' if 'marker' not in kwargs else kwargs['marker']
kwargs['color'] = 'red' if 'color' not in kwargs else kwargs['color']
display.axes.scatter(source_pos_x, source_pos_y, **kwargs)
def overlay_disp_vector(display, disp, hillas, **kwargs):
"""
Overlay disp vector on a CameraDisplay
Parameters
----------
display: `ctapipe.visualization.CameraDisplay`
disp: `DispContainer`
hillas: `ctapipe.containers.HillasParametersContainer`
kwargs: args for `matplotlib.pyplot.quiver`
"""
assert np.isfinite([hillas.x.value, hillas.y.value]).all()
if not np.isfinite([disp.dx.value, disp.dy.value]).all():
disp_vector(disp)
display.axes.quiver(hillas.x, hillas.y,
disp.dx, disp.dy,
units='xy', scale=1*u.m,
angles='xy',
**kwargs,
)
display.axes.quiver(hillas.x.value, hillas.y.value, disp.dx.value, disp.dy.value, units='xy', scale=1)
def overlay_hillas_major_axis(display, hillas, **kwargs):
"""
Overlay hillas ellipse major axis on a CameraDisplay.
Parameters
----------
display: `ctapipe.visualization.CameraDisplay`
hillas: `ctapipe.containers.HillaParametersContainer`
kwargs: args for `matplotlib.pyplot.plot`
"""
kwargs['color'] = 'black' if 'color' not in kwargs else kwargs['color']
length = hillas.length * 2
x = -length + 2 * length * np.arange(10) / 10
display.axes.plot(hillas.x + x * np.cos(hillas.psi.to(u.rad).value),
hillas.y + x * np.sin(hillas.psi.to(u.rad).value),
**kwargs,
)
|
[
"matplotlib.pyplot.subplots",
"numpy.isfinite",
"ctapipe.visualization.CameraDisplay",
"numpy.arange"
] |
[((1019, 1078), 'ctapipe.visualization.CameraDisplay', 'CameraDisplay', (['camera_geometry', 'image'], {'ax': 'axes[0]'}), '(camera_geometry, image, ax=axes[0], **kwargs)\n', (1032, 1078), False, 'from ctapipe.visualization import CameraDisplay\n'), ((1120, 1183), 'ctapipe.visualization.CameraDisplay', 'CameraDisplay', (['camera_geometry', 'peak_time'], {'ax': 'axes[1]'}), '(camera_geometry, peak_time, ax=axes[1], **kwargs)\n', (1133, 1183), False, 'from ctapipe.visualization import CameraDisplay\n'), ((742, 777), 'matplotlib.pyplot.subplots', 'plt.subplots', (['(1)', '(2)'], {'figsize': '(12, 5)'}), '(1, 2, figsize=(12, 5))\n', (754, 777), True, 'import matplotlib.pyplot as plt\n'), ((2201, 2246), 'numpy.isfinite', 'np.isfinite', (['[hillas.x.value, hillas.y.value]'], {}), '([hillas.x.value, hillas.y.value])\n', (2212, 2246), True, 'import numpy as np\n'), ((2264, 2307), 'numpy.isfinite', 'np.isfinite', (['[disp.dx.value, disp.dy.value]'], {}), '([disp.dx.value, disp.dy.value])\n', (2275, 2307), True, 'import numpy as np\n'), ((3142, 3155), 'numpy.arange', 'np.arange', (['(10)'], {}), '(10)\n', (3151, 3155), True, 'import numpy as np\n')]
|
import numpy as np
import random
from scipy.stats import skew as scipy_skew
from skimage.transform import resize as skimage_resize
from QFlow import config
## set of functions for loading and preparing a dataset for training.
def get_num_min_class(labels):
'''
Get the number of the minimum represented class in label vector.
Used for resampling data.
input:
labels: np.ndarray of labels
outputs:
num_samples: int number of samples for minimum class
'''
# use argmax as example's class
argmax_labels = np.argmax(labels, axis=-1)
# max of num_samples is all one label
num_samples = labels.shape[0]
for i in range(labels.shape[-1]):
lab_elems = np.sum(argmax_labels==i)
if lab_elems < num_samples:
num_samples = lab_elems
return num_samples
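# Hedged toy example (illustrative, not part of the original module): three one-hot
# rows of class 0 and one row of class 1 leave class 1 as the least represented,
# so the balanced sample count per class is 1.
def _example_num_min_class():
    toy_labels = np.array([[1, 0], [1, 0], [1, 0], [0, 1]])
    return get_num_min_class(toy_labels)   # -> 1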
def resample_data(features, state_labels, labels=None, seed=None):
'''
Resample data to be evenly distributed across classes in labels by cutting
number of examples for each class to be equal to the number of examples
in the least represented class. (classes assumed to be last axis of
labels). Shuffles after resampling.
inputs:
features: ndarray of features to be resampled. Resample along first axis.
state_labels: ndarray of labels to be used for resampling
        labels: optional ndarray of labels to be resampled; if None, the
            resampled state labels are returned in their place.
seed: Seed of random number generator for shuffling idxs during resample
and for shuffling resampled features and labels.
outputs:
features: list of resampled features
labels: list of resampled labels
'''
rng = np.random.default_rng(seed)
num_samples = get_num_min_class(state_labels)
features_resamp = []; state_labels_resamp = []; labels_resamp = []
for i in range(state_labels.shape[-1]):
s_idxs = state_labels.argmax(axis=-1)==i
# first get full array of single state
features_s_full = features[s_idxs]
state_labels_s_full = state_labels[s_idxs]
if labels is not None:
labels_s_full = labels[s_idxs]
# then get idxs (0-length), shuffle, and slice to num_samples
# shuffle idxs to be sure labels and features are shuffled together
idxs = list(range(features_s_full.shape[0]))
rng.shuffle(idxs)
features_resamp.append(features_s_full[idxs[:num_samples]])
state_labels_resamp.append(state_labels_s_full[idxs[:num_samples]])
if labels is not None:
labels_resamp.append(labels_s_full[idxs[:num_samples]])
features_resamp_arr = np.concatenate(features_resamp, axis=0)
state_labels_resamp_arr = np.concatenate(state_labels_resamp, axis=0)
if labels is not None:
labels_resamp_arr = np.concatenate(labels_resamp, axis=0)
idxs = list(range(features_resamp_arr.shape[0]))
rng.shuffle(idxs)
if labels is not None:
return features_resamp_arr[idxs], labels_resamp_arr[idxs]
elif labels is None:
return features_resamp_arr[idxs], state_labels_resamp_arr[idxs]
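# Hedged toy example (illustrative, not part of the original module): six toy feature
# maps with an uneven 4-vs-2 state split are cut down to 2 examples per state.
def _example_resample():
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(6, 4, 4, 1))                 # 6 toy "images"
    states = np.array([[1, 0]] * 4 + [[0, 1]] * 2)       # 4 of state 0, 2 of state 1
    feats_bal, states_bal = resample_data(feats, states, seed=0)
    return feats_bal.shape, states_bal.sum(axis=0)        # -> (4, 4, 4, 1), [2, 2]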
def noise_mag_to_class(state_labels, noise_mags,
low_thresholds=None, high_thresholds=None):
'''
Function to convert noise magnitudes to noise classes.
Noise class thresholds are defined here. Thresholds for states
order is: no dot, left dot, central dot, right dot, double dot
    Default low thresholds are the linear extrapolation to 100 % accuracy
of an average noisy-trained model vs. noise_mag. Default high
thresholds are from linear extrapolation to 0 % accuracy of an
average noisy trained model vs. noise_mag.
inputs:
state_labels: list of state labels. shape assumed to be
(num_examples, num_states).
noise_mags: list of float noise_mags for state_labels. shape assumed
to be (num_examples, ).
        low_thresholds: list of floats of shape (num_state, ) specifying
            the high/moderate signal to noise class thresholds.
        high_thresholds: list of floats of shape (num_state, ) specifying
            the moderate/low signal to noise class thresholds.
'''
# set number of noise classes and states.
# length of thresholds must be equal to num_states.
# no num_quality_classes != 3 are supported.
num_quality_classes = config.NUM_QUALITY_CLASSES
num_states = config.NUM_STATES
# set default thresholds
if high_thresholds is None:
high_thresholds = [1.22, 1.00, 1.21, 0.68, 2.00]
if low_thresholds is None:
low_thresholds = [0.31, 0.32, 0.41, 0.05, 0.47]
low_thresholds = np.array(low_thresholds)
high_thresholds = np.array(high_thresholds)
quality_classes = np.zeros(noise_mags.shape+(num_quality_classes,))
# use fractional labels by taking weighted average after
# applying thresholds
num_states = state_labels.shape[-1]
# get per state classes then sum across last axis later
per_state_classes = np.zeros(
noise_mags.shape + (num_quality_classes,) + (num_states,))
# use boolean indexing to define classes from noise mags/threshold arrays
for i in range(num_states):
per_state_classes[noise_mags <= low_thresholds[i],0, i] = 1
per_state_classes[(noise_mags > low_thresholds[i]) &\
(noise_mags <= high_thresholds[i]), 1, i] = 1
per_state_classes[noise_mags > high_thresholds[i], 2, i] = 1
# multiply each first axis element then sum across last axes
quality_classes = np.einsum('ijk,ik->ij', per_state_classes, state_labels)
return quality_classes
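# Hedged toy example (illustrative, not part of the original module): a single pure
# "no dot" example (state index 0) with noise magnitude 0.8 sits between the default
# low (0.31) and high (1.22) thresholds, so it maps to the middle quality class.
def _example_noise_class():
    states = np.array([[1.0, 0.0, 0.0, 0.0, 0.0]])    # one-hot over the 5 states
    mags = np.array([0.8])
    return noise_mag_to_class(states, mags)        # -> [[0., 1., 0.]]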
def get_data(f, train_test_split=0.9,
dat_key='sensor', label_key='state',
resample=True, seed=None,
low_thresholds=None, high_thresholds=None):
'''
    Reads in the subregion data and converts it to a format useful for training.
Note that the data is shuffled after reading in.
inputs:
f: one of:
str path to .npz file containing cropped data
dict of cropped data.
train_test_split: float fraction of data to use for training.
resample: bool specifying whether to resample data to get even state
representation.
seed: int random seed for file shuffling.
label_key: string key for data used for the label. One of:
'data_quality', 'noise_mag_factor', 'state'.
        low_thresholds: list of noise levels to use for the high/moderate signal
            to noise ratio threshold.
        high_thresholds: list of noise levels to use for the moderate/low signal
            to noise ratio threshold.
outputs:
train_data: np.ndarray of training data.
train_labels: np.ndarray of training labels.
eval_data: np.ndarray of training data.
eval_labels: np.ndarray of training labels.
'''
# treat f as path, or if TypeError treat as dict.
try:
dict_of_dicts = np.load(f, allow_pickle = True)
file_on_disk = True
except TypeError:
dict_of_dicts = f
file_on_disk = False
files = list(dict_of_dicts.keys())
random.Random(seed).shuffle(files)
inp = []
oup_state = []
    # if we want a non-state label, load it so we can resample it as well
if label_key!='state':
oup_labels = []
else:
oup_labels = None
train_labels = None
eval_labels = None
# if label is noise class, we need to get noise mag labels first
# then process to turn the mag into a class label
if label_key == 'data_quality':
data_quality = True
label_key = 'noise_mag_factor'
else:
data_quality = False
for file in files:
# for compressed data, file is the key of the dict of dicts
if file_on_disk:
data_dict = dict_of_dicts[file].item()
else:
data_dict = dict_of_dicts[file]
dat = data_dict[dat_key]
# generates a list of arrays
inp.append(dat.reshape(config.SUB_SIZE,config.SUB_SIZE,1))
oup_state.append(data_dict['state']) # generates a list of arrays
if oup_labels is not None:
oup_labels.append(data_dict[label_key])
inp = np.array(inp) # converts the list to np.array
oup_state = np.array(oup_state) # converts the list to np.array
if oup_labels is not None:
oup_labels = np.array(oup_labels)
    # split data into train and evaluation data/labels
n_samples = inp.shape[0]
print("Total number of samples :", n_samples)
n_train = int(train_test_split * n_samples)
train_data = inp[:n_train]
print("Training data info:", train_data.shape)
train_states = oup_state[:n_train]
if oup_labels is not None:
train_labels = oup_labels[:n_train]
eval_data = inp[n_train:]
print("Evaluation data info:", eval_data.shape)
eval_states = oup_state[n_train:]
if oup_labels is not None:
eval_labels = oup_labels[n_train:]
    # if needed, convert noise mag to class before resampling,
    # because resampling doesn't return the state labels
if data_quality:
train_labels = noise_mag_to_class(
train_states, train_labels,
low_thresholds=low_thresholds,
high_thresholds=high_thresholds,
)
eval_labels = noise_mag_to_class(
eval_states, eval_labels,
low_thresholds=low_thresholds,
high_thresholds=high_thresholds,
)
# resample to make state representation even
if resample:
train_data, train_labels = resample_data(
train_data, train_states, train_labels)
eval_data, eval_labels = resample_data(
eval_data, eval_states, eval_labels)
elif not resample and label_key=='state':
train_labels = train_states
eval_labels = eval_states
# expand dim of labels to make sure that they have proper shape
if oup_labels is not None and len(train_labels.shape)==1:
        train_labels = np.expand_dims(train_labels, 1)
if oup_labels is not None and len(eval_labels.shape)==1:
        eval_labels = np.expand_dims(eval_labels, 1)
return train_data, train_labels, eval_data, eval_labels
## preprocess functions
def gradient(x):
'''
    Take the gradient of an ndarray along the x direction (axis=1). Thin
    wrapper around np.gradient(). Note that x -> axis=1 and y -> axis=0.
input:
x: An numpy ndarray to take the gradient of
output:
numpy ndarray containing gradient in x direction.
'''
return np.gradient(x, axis=1)
def apply_threshold(x, threshold_val=10, threshold_to=0):
'''
    Thresholds a numpy ndarray, setting small values to threshold_to.
    Args:
        x = numpy array with data to be filtered
        threshold_val = percentile below which values are set to threshold_to
        threshold_to = value assigned to the thresholded entries
'''
x[x < np.abs(np.percentile(x.flatten(),threshold_val))] = threshold_to
return x
def apply_clipping(x, clip_val=3, clip_to='clip_val'):
'''
    Clip input symmetrically at clip_val standard deviations from the mean.
    x itself is not z-score normalized; the clip thresholds are computed on
    a normalized copy. clip_to selects whether clipped entries are set to
    the clip threshold ('clip_val') or to the mean ('mean').
'''
x_clipped = np.copy(x)
mean = np.mean(x)
std = np.std(x)
norm_x = (x - mean) / std
# set clipped values to either the mean or clip threshold
if clip_to.lower() == 'clip_val':
x_clipped[norm_x < -clip_val] = -clip_val * std + mean
x_clipped[norm_x > clip_val] = clip_val * std + mean
elif clip_to.lower() == 'mean':
x_clipped[norm_x < -clip_val] = mean
x_clipped[norm_x > clip_val] = mean
else:
raise KeyError('"clip_to" option not valid: ' +str(clip_to) +\
'Valid options: clip_val, mean')
return x_clipped
def autoflip_skew(data):
'''
Autoflip a numpy ndarray based on the skew of the values
(effective for gradient data).
'''
skew_sign = np.sign(scipy_skew(np.ravel(data)))
return data*skew_sign
def zscore_norm(x):
'''
Takes a numpy ndarray and returns a z-score normalized version
'''
return (x-x.mean())/x.std()
class Preprocessor():
def __init__(self, autoflip=False, denoising=[],
clip_val=None, thresh_val=None):
'''
Class for doing preprocessing of data.
inputs:
autoflip: bool specifying whether to autoflip data.
denoising: list of str specifying denoising to apply to data.
clip_val: value for clipping denoising. Unused if 'clip' not in
denoising.
            thresh_val: value for threshold denoising. Unused if 'threshold'
                not in denoising.
'''
self.autoflip = autoflip
valid_denoising = ['threshold', 'clip']
if not set(denoising).issubset(valid_denoising):
raise ValueError(
'invalid denoising ', denoising,
' Valid values:', valid_denoising)
self.denoising = denoising
self.clip_val = clip_val
self.thresh_val = thresh_val
def proc_subimage(self, x):
'''
Takes the gradient of the measured data, applies denoising if specified,
normalizes, autoflips if specified,
and then adjusts the size (if necessary)
Args:
x = an array with data
'''
# take gradient
x = gradient(x)
        # apply thresholding (propagate the result back into x)
        if 'threshold' in self.denoising:
            if self.thresh_val is not None:
                x = apply_threshold(x, self.thresh_val)
            else:
                x = apply_threshold(x)
        # apply clipping
        if 'clip' in self.denoising:
            if self.clip_val is not None:
                x = apply_clipping(x, self.clip_val)
            else:
                x = apply_clipping(x)
# normalize with zscore normalization
x = zscore_norm(x)
# autoflip by skew of image gradient
if self.autoflip:
x = autoflip_skew(x)
target_shape = (config.SUB_SIZE, config.SUB_SIZE, 1)
if x.shape != target_shape:
x = skimage_resize(x, target_shape)
return x
def proc_subimage_set(self, x_arr):
'''
Loop through subimages and apply preprocessing to each one.
inputs:
x: full dataset of images. First axis assumed to be example index.
returns:
Full dataset of images with same shape, processed.
'''
return np.array([self.proc_subimage(x) for x in x_arr])
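# Example end-to-end usage (a sketch, not part of the original module; the
# .npz filename and parameter values below are hypothetical):
#
#   train_x, train_y, eval_x, eval_y = get_data(
#       'cropped_subregions.npz', label_key='data_quality', seed=0)
#   preproc = Preprocessor(autoflip=True, denoising=['clip'], clip_val=3)
#   train_x = preproc.proc_subimage_set(train_x)
#   eval_x = preproc.proc_subimage_set(eval_x)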
|
[
"numpy.copy",
"numpy.mean",
"numpy.random.default_rng",
"random.Random",
"numpy.argmax",
"numpy.array",
"numpy.zeros",
"numpy.sum",
"numpy.einsum",
"numpy.concatenate",
"numpy.std",
"numpy.expand_dims",
"numpy.gradient",
"numpy.ravel",
"skimage.transform.resize",
"numpy.load"
] |
[((558, 584), 'numpy.argmax', 'np.argmax', (['labels'], {'axis': '(-1)'}), '(labels, axis=-1)\n', (567, 584), True, 'import numpy as np\n'), ((1729, 1756), 'numpy.random.default_rng', 'np.random.default_rng', (['seed'], {}), '(seed)\n', (1750, 1756), True, 'import numpy as np\n'), ((2684, 2723), 'numpy.concatenate', 'np.concatenate', (['features_resamp'], {'axis': '(0)'}), '(features_resamp, axis=0)\n', (2698, 2723), True, 'import numpy as np\n'), ((2754, 2797), 'numpy.concatenate', 'np.concatenate', (['state_labels_resamp'], {'axis': '(0)'}), '(state_labels_resamp, axis=0)\n', (2768, 2797), True, 'import numpy as np\n'), ((4690, 4714), 'numpy.array', 'np.array', (['low_thresholds'], {}), '(low_thresholds)\n', (4698, 4714), True, 'import numpy as np\n'), ((4737, 4762), 'numpy.array', 'np.array', (['high_thresholds'], {}), '(high_thresholds)\n', (4745, 4762), True, 'import numpy as np\n'), ((4786, 4837), 'numpy.zeros', 'np.zeros', (['(noise_mags.shape + (num_quality_classes,))'], {}), '(noise_mags.shape + (num_quality_classes,))\n', (4794, 4837), True, 'import numpy as np\n'), ((5049, 5116), 'numpy.zeros', 'np.zeros', (['(noise_mags.shape + (num_quality_classes,) + (num_states,))'], {}), '(noise_mags.shape + (num_quality_classes,) + (num_states,))\n', (5057, 5116), True, 'import numpy as np\n'), ((5598, 5654), 'numpy.einsum', 'np.einsum', (['"""ijk,ik->ij"""', 'per_state_classes', 'state_labels'], {}), "('ijk,ik->ij', per_state_classes, state_labels)\n", (5607, 5654), True, 'import numpy as np\n'), ((8279, 8292), 'numpy.array', 'np.array', (['inp'], {}), '(inp)\n', (8287, 8292), True, 'import numpy as np\n'), ((8341, 8360), 'numpy.array', 'np.array', (['oup_state'], {}), '(oup_state)\n', (8349, 8360), True, 'import numpy as np\n'), ((10654, 10676), 'numpy.gradient', 'np.gradient', (['x'], {'axis': '(1)'}), '(x, axis=1)\n', (10665, 10676), True, 'import numpy as np\n'), ((11221, 11231), 'numpy.copy', 'np.copy', (['x'], {}), '(x)\n', (11228, 11231), True, 'import numpy as np\n'), ((11243, 11253), 'numpy.mean', 'np.mean', (['x'], {}), '(x)\n', (11250, 11253), True, 'import numpy as np\n'), ((11264, 11273), 'numpy.std', 'np.std', (['x'], {}), '(x)\n', (11270, 11273), True, 'import numpy as np\n'), ((720, 746), 'numpy.sum', 'np.sum', (['(argmax_labels == i)'], {}), '(argmax_labels == i)\n', (726, 746), True, 'import numpy as np\n'), ((2853, 2890), 'numpy.concatenate', 'np.concatenate', (['labels_resamp'], {'axis': '(0)'}), '(labels_resamp, axis=0)\n', (2867, 2890), True, 'import numpy as np\n'), ((7025, 7054), 'numpy.load', 'np.load', (['f'], {'allow_pickle': '(True)'}), '(f, allow_pickle=True)\n', (7032, 7054), True, 'import numpy as np\n'), ((8445, 8465), 'numpy.array', 'np.array', (['oup_labels'], {}), '(oup_labels)\n', (8453, 8465), True, 'import numpy as np\n'), ((10076, 10107), 'numpy.expand_dims', 'np.expand_dims', (['train_labels', '(1)'], {}), '(train_labels, 1)\n', (10090, 10107), True, 'import numpy as np\n'), ((10177, 10207), 'numpy.expand_dims', 'np.expand_dims', (['eval_labels', '(1)'], {}), '(eval_labels, 1)\n', (10191, 10207), True, 'import numpy as np\n'), ((7206, 7225), 'random.Random', 'random.Random', (['seed'], {}), '(seed)\n', (7219, 7225), False, 'import random\n'), ((11976, 11990), 'numpy.ravel', 'np.ravel', (['data'], {}), '(data)\n', (11984, 11990), True, 'import numpy as np\n'), ((14116, 14147), 'skimage.transform.resize', 'skimage_resize', (['x', 'target_shape'], {}), '(x, target_shape)\n', (14130, 14147), True, 'from skimage.transform import resize as 
skimage_resize\n')]
|
import pyglet
print('Loading resources')
def center_image(image):
"""Sets an image's anchor point to its center"""
image.anchor_x = image.width / 2
image.anchor_y = image.height / 2
# Tell pyglet where to find the resources
pyglet.resource.path = ['./resources', './resources/backgrounds']
pyglet.resource.reindex()
images = list()
# Load the three main resources and get them to draw centered
tank_body_img = pyglet.resource.image('tank_body.png')
images.append(tank_body_img)
tank_head_img = pyglet.resource.image('tank_head.png')
images.append(tank_head_img)
boxlife_img = pyglet.resource.image('boxlife.png')
images.append(boxlife_img)
boxlife_dead_img = pyglet.resource.image('boxlife_dead.png')
images.append(boxlife_dead_img)
wheel_img = pyglet.resource.image('wheel.png')
images.append(wheel_img)
thread_img = pyglet.resource.image('thread.png')
images.append(thread_img)
motorbike_chassis_img = pyglet.resource.image('motorbike_chassis.png')
images.append(motorbike_chassis_img)
mb_wheel_img = pyglet.resource.image('mb_wheel.png')
images.append(mb_wheel_img)
mb_holder_img = pyglet.resource.image('mb_holder.png')
images.append(mb_holder_img)
vbv_chassis_img = pyglet.resource.image('vbv_chassis.png')
images.append(vbv_chassis_img)
vbv_wheels_img = pyglet.resource.image('vbv_wheels.png')
images.append(vbv_wheels_img)
vbv_platform_img = pyglet.resource.image('vbv_platform.png')
images.append(vbv_platform_img)
vb_net_img = pyglet.resource.image('vb_net.png')
images.append(vb_net_img)
vb_ball_img = pyglet.resource.image('vb_ball.png')
images.append(vb_ball_img)
game1_button_img = pyglet.resource.image('game1.png')
images.append(game1_button_img)
game1_button_hover_img = pyglet.resource.image('game1_hover.png')
images.append(game1_button_hover_img)
game2_button_img = pyglet.resource.image('game2.png')
images.append(game2_button_img)
game2_button_hover_img = pyglet.resource.image('game2_hover.png')
images.append(game2_button_hover_img)
game3_button_img = pyglet.resource.image('game3.png')
images.append(game3_button_img)
game3_button_hover_img = pyglet.resource.image('game3_hover.png')
images.append(game3_button_hover_img)
game1_hs_button_img = pyglet.resource.image('game1_hs.png')
images.append(game1_hs_button_img)
game1_hs_button_hover_img = pyglet.resource.image('game1_hs_hover.png')
images.append(game1_hs_button_hover_img)
game2_hs_button_img = pyglet.resource.image('game2_hs.png')
images.append(game2_hs_button_img)
game2_hs_button_hover_img = pyglet.resource.image('game2_hs_hover.png')
images.append(game2_hs_button_hover_img)
menu_button_img = pyglet.resource.image('menu.png')
images.append(menu_button_img)
gravity_button_img = pyglet.resource.image('gravity.png')
images.append(gravity_button_img)
fullscreen_button_img = pyglet.resource.image('fullscreen.png')
images.append(fullscreen_button_img)
restart_button_img = pyglet.resource.image('restart_button.png')
images.append(restart_button_img)
enter_button_img = pyglet.resource.image('enter_button.png')
images.append(enter_button_img)
enter_button_hover_img = pyglet.resource.image('enter_button_hover.png')
images.append(enter_button_hover_img)
circle_meter_img = pyglet.resource.image('circle_meter.png')
images.append(circle_meter_img)
pointer_img = pyglet.resource.image('pointer.png')
images.append(pointer_img)
finishflag_img = pyglet.resource.image('finishflag.png')
images.append(finishflag_img)
goal_meter_img = pyglet.resource.image('goal_meter.png')
images.append(goal_meter_img)
bg_goal_meter_img = pyglet.resource.image('bg_goal_meter.png')
images.append(bg_goal_meter_img)
background_img = pyglet.resource.image('background.png')
images.append(background_img)
for image in images:
center_image(image)
# load backgrounds
parallax_bgs = list()
layer_counts = (3, 2, 2, 2, 3, 4)
for bg_i, layer_count in enumerate(layer_counts):
bg_set = list()
for layer_i in range(layer_count):
bg_set.append(pyglet.resource.image('{}layer_{}.png'.format(bg_i, layer_i)))
parallax_bgs.append(tuple(bg_set))
parallax_bgs = tuple(parallax_bgs)
# Load sfx without streaming
engine_sfx = pyglet.media.load('./resources/engine_sfx.wav', streaming=False)
bg_music = pyglet.media.load('./resources/bg_music.wav', streaming=False)
print('Resource loading successful')
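# The centred images are typically wrapped in sprites by the game code, e.g.
# (a sketch, not part of this module; coordinates here are arbitrary):
#
#   batch = pyglet.graphics.Batch()
#   tank_body = pyglet.sprite.Sprite(img=tank_body_img, x=100, y=100, batch=batch)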
|
[
"pyglet.resource.image",
"pyglet.resource.reindex",
"pyglet.media.load"
] |
[((305, 330), 'pyglet.resource.reindex', 'pyglet.resource.reindex', ([], {}), '()\n', (328, 330), False, 'import pyglet\n'), ((427, 465), 'pyglet.resource.image', 'pyglet.resource.image', (['"""tank_body.png"""'], {}), "('tank_body.png')\n", (448, 465), False, 'import pyglet\n'), ((512, 550), 'pyglet.resource.image', 'pyglet.resource.image', (['"""tank_head.png"""'], {}), "('tank_head.png')\n", (533, 550), False, 'import pyglet\n'), ((595, 631), 'pyglet.resource.image', 'pyglet.resource.image', (['"""boxlife.png"""'], {}), "('boxlife.png')\n", (616, 631), False, 'import pyglet\n'), ((679, 720), 'pyglet.resource.image', 'pyglet.resource.image', (['"""boxlife_dead.png"""'], {}), "('boxlife_dead.png')\n", (700, 720), False, 'import pyglet\n'), ((766, 800), 'pyglet.resource.image', 'pyglet.resource.image', (['"""wheel.png"""'], {}), "('wheel.png')\n", (787, 800), False, 'import pyglet\n'), ((840, 875), 'pyglet.resource.image', 'pyglet.resource.image', (['"""thread.png"""'], {}), "('thread.png')\n", (861, 875), False, 'import pyglet\n'), ((927, 973), 'pyglet.resource.image', 'pyglet.resource.image', (['"""motorbike_chassis.png"""'], {}), "('motorbike_chassis.png')\n", (948, 973), False, 'import pyglet\n'), ((1027, 1064), 'pyglet.resource.image', 'pyglet.resource.image', (['"""mb_wheel.png"""'], {}), "('mb_wheel.png')\n", (1048, 1064), False, 'import pyglet\n'), ((1110, 1148), 'pyglet.resource.image', 'pyglet.resource.image', (['"""mb_holder.png"""'], {}), "('mb_holder.png')\n", (1131, 1148), False, 'import pyglet\n'), ((1197, 1237), 'pyglet.resource.image', 'pyglet.resource.image', (['"""vbv_chassis.png"""'], {}), "('vbv_chassis.png')\n", (1218, 1237), False, 'import pyglet\n'), ((1287, 1326), 'pyglet.resource.image', 'pyglet.resource.image', (['"""vbv_wheels.png"""'], {}), "('vbv_wheels.png')\n", (1308, 1326), False, 'import pyglet\n'), ((1377, 1418), 'pyglet.resource.image', 'pyglet.resource.image', (['"""vbv_platform.png"""'], {}), "('vbv_platform.png')\n", (1398, 1418), False, 'import pyglet\n'), ((1465, 1500), 'pyglet.resource.image', 'pyglet.resource.image', (['"""vb_net.png"""'], {}), "('vb_net.png')\n", (1486, 1500), False, 'import pyglet\n'), ((1542, 1578), 'pyglet.resource.image', 'pyglet.resource.image', (['"""vb_ball.png"""'], {}), "('vb_ball.png')\n", (1563, 1578), False, 'import pyglet\n'), ((1626, 1660), 'pyglet.resource.image', 'pyglet.resource.image', (['"""game1.png"""'], {}), "('game1.png')\n", (1647, 1660), False, 'import pyglet\n'), ((1719, 1759), 'pyglet.resource.image', 'pyglet.resource.image', (['"""game1_hover.png"""'], {}), "('game1_hover.png')\n", (1740, 1759), False, 'import pyglet\n'), ((1818, 1852), 'pyglet.resource.image', 'pyglet.resource.image', (['"""game2.png"""'], {}), "('game2.png')\n", (1839, 1852), False, 'import pyglet\n'), ((1911, 1951), 'pyglet.resource.image', 'pyglet.resource.image', (['"""game2_hover.png"""'], {}), "('game2_hover.png')\n", (1932, 1951), False, 'import pyglet\n'), ((2010, 2044), 'pyglet.resource.image', 'pyglet.resource.image', (['"""game3.png"""'], {}), "('game3.png')\n", (2031, 2044), False, 'import pyglet\n'), ((2103, 2143), 'pyglet.resource.image', 'pyglet.resource.image', (['"""game3_hover.png"""'], {}), "('game3_hover.png')\n", (2124, 2143), False, 'import pyglet\n'), ((2205, 2242), 'pyglet.resource.image', 'pyglet.resource.image', (['"""game1_hs.png"""'], {}), "('game1_hs.png')\n", (2226, 2242), False, 'import pyglet\n'), ((2307, 2350), 'pyglet.resource.image', 'pyglet.resource.image', (['"""game1_hs_hover.png"""'], {}), 
"('game1_hs_hover.png')\n", (2328, 2350), False, 'import pyglet\n'), ((2415, 2452), 'pyglet.resource.image', 'pyglet.resource.image', (['"""game2_hs.png"""'], {}), "('game2_hs.png')\n", (2436, 2452), False, 'import pyglet\n'), ((2517, 2560), 'pyglet.resource.image', 'pyglet.resource.image', (['"""game2_hs_hover.png"""'], {}), "('game2_hs_hover.png')\n", (2538, 2560), False, 'import pyglet\n'), ((2621, 2654), 'pyglet.resource.image', 'pyglet.resource.image', (['"""menu.png"""'], {}), "('menu.png')\n", (2642, 2654), False, 'import pyglet\n'), ((2708, 2744), 'pyglet.resource.image', 'pyglet.resource.image', (['"""gravity.png"""'], {}), "('gravity.png')\n", (2729, 2744), False, 'import pyglet\n'), ((2804, 2843), 'pyglet.resource.image', 'pyglet.resource.image', (['"""fullscreen.png"""'], {}), "('fullscreen.png')\n", (2825, 2843), False, 'import pyglet\n'), ((2903, 2946), 'pyglet.resource.image', 'pyglet.resource.image', (['"""restart_button.png"""'], {}), "('restart_button.png')\n", (2924, 2946), False, 'import pyglet\n'), ((3001, 3042), 'pyglet.resource.image', 'pyglet.resource.image', (['"""enter_button.png"""'], {}), "('enter_button.png')\n", (3022, 3042), False, 'import pyglet\n'), ((3101, 3148), 'pyglet.resource.image', 'pyglet.resource.image', (['"""enter_button_hover.png"""'], {}), "('enter_button_hover.png')\n", (3122, 3148), False, 'import pyglet\n'), ((3207, 3248), 'pyglet.resource.image', 'pyglet.resource.image', (['"""circle_meter.png"""'], {}), "('circle_meter.png')\n", (3228, 3248), False, 'import pyglet\n'), ((3296, 3332), 'pyglet.resource.image', 'pyglet.resource.image', (['"""pointer.png"""'], {}), "('pointer.png')\n", (3317, 3332), False, 'import pyglet\n'), ((3378, 3417), 'pyglet.resource.image', 'pyglet.resource.image', (['"""finishflag.png"""'], {}), "('finishflag.png')\n", (3399, 3417), False, 'import pyglet\n'), ((3466, 3505), 'pyglet.resource.image', 'pyglet.resource.image', (['"""goal_meter.png"""'], {}), "('goal_meter.png')\n", (3487, 3505), False, 'import pyglet\n'), ((3557, 3599), 'pyglet.resource.image', 'pyglet.resource.image', (['"""bg_goal_meter.png"""'], {}), "('bg_goal_meter.png')\n", (3578, 3599), False, 'import pyglet\n'), ((3651, 3690), 'pyglet.resource.image', 'pyglet.resource.image', (['"""background.png"""'], {}), "('background.png')\n", (3672, 3690), False, 'import pyglet\n'), ((4154, 4218), 'pyglet.media.load', 'pyglet.media.load', (['"""./resources/engine_sfx.wav"""'], {'streaming': '(False)'}), "('./resources/engine_sfx.wav', streaming=False)\n", (4171, 4218), False, 'import pyglet\n'), ((4230, 4292), 'pyglet.media.load', 'pyglet.media.load', (['"""./resources/bg_music.wav"""'], {'streaming': '(False)'}), "('./resources/bg_music.wav', streaming=False)\n", (4247, 4292), False, 'import pyglet\n')]
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
__author__ = 'ipetrash'
# SOURCE: https://github.com/gil9red/VideoStreamingWithEncryption/blob/37cf7f501460a286ec44a20db7b2403e8cb05d97/server_GUI_Qt/inner_libs/gui/SelectDirBox.py
import os
from PyQt5.QtWidgets import QWidget, QLineEdit, QLabel, QPushButton, QHBoxLayout, QFileDialog, QStyle
from PyQt5.QtCore import pyqtSignal
class SelectDirBox(QWidget):
valueChanged = pyqtSignal(str)
valueEdited = pyqtSignal(str)
def __init__(self, value='', visible_label=True):
super().__init__()
self._label = QLabel('Directory:')
self._label.setVisible(visible_label)
self._value = QLineEdit()
self._value.textChanged.connect(self.valueChanged.emit)
self._value.textEdited.connect(self.valueEdited.emit)
icon_open_dir = self.style().standardIcon(QStyle.SP_DirOpenIcon)
action_open_dir = self._value.addAction(icon_open_dir, QLineEdit.TrailingPosition)
action_open_dir.setToolTip('Open directory')
action_open_dir.triggered.connect(self._on_open_dir)
self._button_select_path = QPushButton('...')
self._button_select_path.setFixedWidth(24)
self._button_select_path.setToolTip('Select directory')
self._button_select_path.clicked.connect(self._on_select_path)
self.setValue(value)
layout = QHBoxLayout()
layout.setSpacing(5)
layout.setContentsMargins(0, 0, 0, 0)
layout.addWidget(self._label)
layout.addWidget(self._value, stretch=1)
layout.addWidget(self._button_select_path)
self.setLayout(layout)
def setValue(self, value: str):
self._value.setText(value)
self._value.setToolTip(value)
def getValue(self) -> str:
return self._value.text()
def _on_select_path(self):
path = QFileDialog.getExistingDirectory(self, None, self._value.text())
if not path:
return
self.setValue(path)
def _on_open_dir(self):
path = self._value.text()
if os.path.isdir(path):
os.startfile(path)
def resizeEvent(self, event):
super().resizeEvent(event)
self._button_select_path.setFixedHeight(self._value.height())
if __name__ == '__main__':
from PyQt5.QtWidgets import QApplication
app = QApplication([])
mw = SelectDirBox()
mw.valueChanged.connect(
lambda value: print(f'Selected directory: {value}')
)
mw.show()
app.exec()
|
[
"PyQt5.QtCore.pyqtSignal",
"PyQt5.QtWidgets.QHBoxLayout",
"os.path.isdir",
"PyQt5.QtWidgets.QLabel",
"PyQt5.QtWidgets.QApplication",
"PyQt5.QtWidgets.QPushButton",
"os.startfile",
"PyQt5.QtWidgets.QLineEdit"
] |
[((432, 447), 'PyQt5.QtCore.pyqtSignal', 'pyqtSignal', (['str'], {}), '(str)\n', (442, 447), False, 'from PyQt5.QtCore import pyqtSignal\n'), ((466, 481), 'PyQt5.QtCore.pyqtSignal', 'pyqtSignal', (['str'], {}), '(str)\n', (476, 481), False, 'from PyQt5.QtCore import pyqtSignal\n'), ((2350, 2366), 'PyQt5.QtWidgets.QApplication', 'QApplication', (['[]'], {}), '([])\n', (2362, 2366), False, 'from PyQt5.QtWidgets import QApplication\n'), ((587, 607), 'PyQt5.QtWidgets.QLabel', 'QLabel', (['"""Directory:"""'], {}), "('Directory:')\n", (593, 607), False, 'from PyQt5.QtWidgets import QWidget, QLineEdit, QLabel, QPushButton, QHBoxLayout, QFileDialog, QStyle\n'), ((677, 688), 'PyQt5.QtWidgets.QLineEdit', 'QLineEdit', ([], {}), '()\n', (686, 688), False, 'from PyQt5.QtWidgets import QWidget, QLineEdit, QLabel, QPushButton, QHBoxLayout, QFileDialog, QStyle\n'), ((1130, 1148), 'PyQt5.QtWidgets.QPushButton', 'QPushButton', (['"""..."""'], {}), "('...')\n", (1141, 1148), False, 'from PyQt5.QtWidgets import QWidget, QLineEdit, QLabel, QPushButton, QHBoxLayout, QFileDialog, QStyle\n'), ((1383, 1396), 'PyQt5.QtWidgets.QHBoxLayout', 'QHBoxLayout', ([], {}), '()\n', (1394, 1396), False, 'from PyQt5.QtWidgets import QWidget, QLineEdit, QLabel, QPushButton, QHBoxLayout, QFileDialog, QStyle\n'), ((2073, 2092), 'os.path.isdir', 'os.path.isdir', (['path'], {}), '(path)\n', (2086, 2092), False, 'import os\n'), ((2106, 2124), 'os.startfile', 'os.startfile', (['path'], {}), '(path)\n', (2118, 2124), False, 'import os\n')]
|
'''
====================================================================
Copyright (c) 2016-2017 <NAME>. All rights reserved.
This software is licensed as described in the file LICENSE.txt,
which you should have received as part of this distribution.
====================================================================
wb_git_project.py
'''
import sys
import os
import pathlib
import wb_annotate_node
import wb_platform_specific
import wb_git_callback_server
import git
import git.exc
import git.index
GitCommandError = git.exc.GitCommandError
def gitInit( app, progress_handler, wc_path ):
progress = Progress( progress_handler )
try:
git.repo.Repo.init( str(wc_path) )
return True
except GitCommandError:
for line in progress.allErrorLines():
app.log.error( line )
return False
__callback_server = None
git_extra_environ = {}
def initCallbackServer( app ):
    # pylint: disable=global-statement
global __callback_server
assert __callback_server is None, 'Cannot call initCallbackServer twice'
__callback_server = wb_git_callback_server.WbGitCallbackServer( app )
__callback_server.start()
if sys.platform == 'win32':
callback = wb_platform_specific.getAppDir() / 'scm-workbench-git-callback.exe'
if not callback.exists():
app.log.info( 'Cannot find %s' % (callback,) )
# assume in development environment
callback = wb_platform_specific.getAppDir() / 'scm_workbench_git_callback.cmd'
else:
callback = wb_platform_specific.getAppDir() / 'scm-workbench-git-callback'
if not callback.exists():
app.log.error( 'Cannot find %s' % (callback,) )
return
if 'GIT_ASKPASS' in os.environ:
app.log.info( "Using user's GIT_ASKPASS program %s" % (os.environ[ 'GIT_ASKPASS' ],) )
else:
git_extra_environ['GIT_ASKPASS'] = '"%s" askpass' % (str(callback),)
app.log.info( "Using Workbench's GIT_ASKPASS program" )
git_extra_environ['GIT_SEQUENCE_EDITOR'] = '"%s" sequence-editor' % (str(callback),)
git_extra_environ['GIT_EDITOR'] = '"%s" editor' % (str(callback),)
app.log.info( "Setup Workbench's GIT callback program" )
def setCallbackCredentialsHandler( handler ):
__callback_server.setCallbackCredentialsHandler( handler )
def setCallbackRebaseSequenceHandler( handler ):
__callback_server.setCallbackRebaseSequenceHandler( handler )
def setCallbackRebaseEditorHandler( handler ):
__callback_server.setCallbackRebaseEditorHandler( handler )
def setCallbackReply( code, value ):
__callback_server.setReply( code, value )
class GitProject:
def __init__( self, app, prefs_project, ui_components ):
self.app = app
self.ui_components = ui_components
self.debugLog = self.app.debug_options.debugLogGitProject
self.debugLogTree = self.app.debug_options.debugLogGitUpdateTree
self.prefs_project = prefs_project
        # repo will be set up on demand - this speeds up start-up, especially on macOS
self.__repo = None
self.index = None
self.tree = GitProjectTreeNode( self, prefs_project.name, pathlib.Path( '.' ) )
self.flat_tree = GitProjectTreeNode( self, prefs_project.name, pathlib.Path( '.' ) )
self.all_file_state = {}
self.__stale_index = False
self.__num_staged_files = 0
self.__num_modified_files = 0
def getMasterBranchName( self ):
if self.prefs_project.master_branch_name is None:
return 'master'
else:
return self.prefs_project.master_branch_name
def setMasterBranchName( self, master_branch_name ):
if master_branch_name == 'master':
self.prefs_project.master_branch_name = None
else:
self.prefs_project.master_branch_name = master_branch_name
def cloneFrom( self, url, progress_handler ):
assert self.__repo is None
progress = Progress( progress_handler )
try:
self.__repo = git.repo.Repo.clone_from( url, str(self.prefs_project.path), progress )
return True
except GitCommandError:
for line in progress.allErrorLines():
self.app.log.error( line )
return False
def repo( self ):
# setup repo on demand
if self.__repo is None:
self.__repo = git.Repo( str( self.prefs_project.path ) )
self.__repo.git.update_environment( **git_extra_environ )
return self.__repo
def scmType( self ):
return 'git'
# return a new GitProject that can be used in another thread
def newInstance( self ):
return GitProject( self.app, self.prefs_project, self.ui_components )
def isNotEqual( self, other ):
return self.prefs_project.name != other.prefs_project.name
def __repr__( self ):
return '<GitProject: %s (id:%d>' % (self.prefs_project.name, id(self))
def pathForGit( self, path ):
assert isinstance( path, pathlib.Path )
# return abs path
return str( self.projectPath() / path )
def pathForWb( self, str_path ):
assert type( str_path ) == str
wb_path = pathlib.Path( str_path )
if wb_path.is_absolute():
wb_path = wb_path.relative_to( self.projectPath() )
return wb_path
def hasCommits( self ):
try:
self.repo().head.ref.commit
return True
except ValueError:
return False
def projectName( self ):
return self.prefs_project.name
def projectPath( self ):
return pathlib.Path( self.prefs_project.path )
def configReader( self, level ):
return self.repo().config_reader( level )
def configWriter( self, level ):
return self.repo().config_writer( level )
def addRemote( self, name, url ):
self.repo().create_remote( name, url )
def getHeadCommit( self ):
return self.repo().head.ref.commit
def getShortCommitId( self, commit_id, size=7 ):
return self.repo().git.rev_parse( commit_id, short=size )
def switchToBranch( self, branch ):
self.cmdCheckout( branch )
def getBranchName( self ):
return self.repo().head.ref.name
def getAllBranchNames( self ):
all_branch_names = sorted( [b.name for b in self.repo().branches] )
# detect the case of a new, empty git repo
if len(all_branch_names) == 0:
all_branch_names = [self.getBranchName()]
return all_branch_names
def getTrackingBranchName( self ):
tracking_branch = self.repo().head.ref.tracking_branch()
return tracking_branch.name if tracking_branch is not None else None
def getTrackingBranchCommit( self ):
tracking_branch = self.repo().head.ref.tracking_branch()
if tracking_branch is None:
return None
return tracking_branch.commit
def getRemote( self, name ):
try:
return self.repo().remote( name )
except ValueError:
return None
def cmdAddRemote( self, name, url ):
git.Remote.create( self.repo(), name, url )
def cmdDeleteRemote( self, name ):
git.Remote.remove( self.repo(), name )
def cmdUpdateRemote( self, name, url ):
remote = self.getRemote( name )
remote.set_url( url, remote.url )
def numStagedFiles( self ):
return self.__num_staged_files
def numModifiedFiles( self ):
return self.__num_modified_files
def saveChanges( self ):
self.debugLog( 'saveChanges() __stale_index %r' % (self.__stale_index,) )
if self.__stale_index:
self.updateState( 'QQQ' )
self.__stale_index = False
def updateState( self, tree_leaf ):
self.debugLog( 'updateState( %r ) repo=%s' % (tree_leaf, self.projectPath()) )
# rebuild the tree
self.tree = GitProjectTreeNode( self, self.prefs_project.name, pathlib.Path( '.' ) )
self.flat_tree = GitProjectTreeNode( self, self.prefs_project.name, pathlib.Path( '.' ) )
if not self.projectPath().exists():
self.app.log.error( T_('Project %(name)s folder %(folder)s has been deleted') %
{'name': self.projectName()
,'folder': self.projectPath()} )
self.all_file_state = {}
else:
self.__calculateStatus()
for path in self.all_file_state:
self.__updateTree( path )
self.dumpTree()
def __calculateStatus( self ):
self.all_file_state = {}
repo_root = self.projectPath()
git_dir = repo_root / '.git'
all_folders = set( [repo_root] )
while len(all_folders) > 0:
folder = all_folders.pop()
for filename in folder.iterdir():
abs_path = folder / filename
repo_relative = abs_path.relative_to( repo_root )
if abs_path.is_dir():
if abs_path != git_dir:
all_folders.add( abs_path )
self.all_file_state[ repo_relative ] = WbGitFileState( self, repo_relative )
self.all_file_state[ repo_relative ].setIsDir()
else:
self.all_file_state[ repo_relative ] = WbGitFileState( self, repo_relative )
# ----------------------------------------
# can only get info from the index if there is at least 1 commit
self.index = git.index.IndexFile( self.repo() )
if self.hasCommits():
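            # Note: the code treats head_vs_index as a diff from the index
            # (a side) to HEAD (b side), so a file staged as new shows up
            # here as deleted_file and a staged delete as new_file;
            # WbGitFileState.__calculateState() maps the abbreviations back
            # accordingly.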
head_vs_index = self.index.diff( self.repo().head.commit )
index_vs_working = self.index.diff( None )
else:
head_vs_index = []
index_vs_working = []
# each ref to self.repo().untracked_files creates a new object
        # cache the value once per update
untracked_files = self.repo().untracked_files
for entry in self.index.entries.values():
filepath = pathlib.Path( entry.path )
if filepath not in self.all_file_state:
# filepath has been deleted
self.all_file_state[ filepath ] = WbGitFileState( self, filepath )
self.all_file_state[ filepath ].setIndexEntry( entry )
self.__num_staged_files = 0
for diff in head_vs_index:
self.__num_staged_files += 1
filepath = pathlib.Path( diff.b_path )
if filepath not in self.all_file_state:
self.all_file_state[ filepath ] = WbGitFileState( self, filepath )
if diff.renamed:
self.all_file_state[ pathlib.Path( diff.rename_from ) ]._addStaged( diff )
else:
self.all_file_state[ filepath ]._addStaged( diff )
self.__num_modified_files = 0
for diff in index_vs_working:
self.__num_modified_files += 1
filepath = pathlib.Path( diff.a_path )
if filepath not in self.all_file_state:
self.all_file_state[ filepath ] = WbGitFileState( self, filepath )
self.all_file_state[ filepath ]._addUnstaged( diff )
for path in untracked_files:
filepath = pathlib.Path( path )
if filepath not in self.all_file_state:
self.all_file_state[ filepath ] = WbGitFileState( self, filepath )
self.all_file_state[ filepath ]._setUntracked()
def __updateTree( self, path ):
assert isinstance( path, pathlib.Path ), 'path %r' % (path,)
self.debugLogTree( '__updateTree path %r' % (path,) )
node = self.tree
self.debugLogTree( '__updateTree path.parts %r' % (path.parts,) )
for index, name in enumerate( path.parts[0:-1] ):
self.debugLogTree( '__updateTree name %r at node %r' % (name,node) )
if not node.hasFolder( name ):
node.addFolder( name, GitProjectTreeNode( self, name, pathlib.Path( *path.parts[0:index+1] ) ) )
node = node.getFolder( name )
self.debugLogTree( '__updateTree addFile %r to node %r' % (path, node) )
node.addFileByName( path )
self.flat_tree.addFileByPath( path )
def dumpTree( self ):
if self.debugLogTree.isEnabled():
self.tree._dumpTree( 0 )
#------------------------------------------------------------
#
# functions to retrive interesting info from the repo
#
#------------------------------------------------------------
def hasFileState( self, filename ):
assert isinstance( filename, pathlib.Path )
return filename in self.all_file_state
def getFileState( self, filename ):
assert isinstance( filename, pathlib.Path )
        # status only has entries for non-CURRENT status files
return self.all_file_state[ filename ]
def getReportStagedFiles( self ):
all_staged_files = []
for filename, file_state in self.all_file_state.items():
if file_state.isStagedNew():
all_staged_files.append( (T_('New file'), filename, None) )
elif file_state.isStagedModified():
all_staged_files.append( (T_('Modified'), filename, None) )
elif file_state.isStagedDeleted():
all_staged_files.append( (T_('Deleted'), filename, None) )
elif file_state.isStagedRenamed():
all_staged_files.append( (T_('Renamed'), file_state.renamedFromFilename(), file_state.renamedToFilename()) )
return all_staged_files
def getReportUntrackedFiles( self ):
all_untracked_files = []
for filename, file_state in self.all_file_state.items():
if file_state.isUncontrolled():
all_untracked_files.append( (T_('New file'), filename) )
elif file_state.isUnstagedModified():
all_untracked_files.append( (T_('Modified'), filename) )
elif file_state.isUnstagedDeleted():
all_untracked_files.append( (T_('Deleted'), filename) )
return all_untracked_files
def canPush( self ):
if not self.hasCommits():
return False
remote_commit = self.getTrackingBranchCommit()
if remote_commit is None:
return False
head_commit = self.repo().head.ref.commit
return head_commit != remote_commit
def canPull( self ):
return self.repo().head.ref.tracking_branch() is not None
def getUnpushedCommits( self ):
tracking_commit = self.getTrackingBranchCommit()
if tracking_commit is None:
return []
last_pushed_commit_id = tracking_commit.hexsha
all_unpushed_commits = []
for commit in self.repo().iter_commits( None ):
commit_id = commit.hexsha
if last_pushed_commit_id == commit_id:
break
all_unpushed_commits.append( commit )
return all_unpushed_commits
#------------------------------------------------------------
#
    # all functions starting with "cmd" behave like the corresponding "git <cmd>" command
#
#------------------------------------------------------------
def cmdCheckout( self, branch_name ):
try:
branch = self.repo().branches[ branch_name ]
branch.checkout()
except GitCommandError as e:
self.app.log.error( str(e) )
def cmdStage( self, filename ):
self.debugLog( 'cmdStage( %r )' % (filename,) )
self.repo().git.add( filename )
self.__stale_index = True
def cmdUnstage( self, rev, filename ):
self.debugLog( 'cmdUnstage( %r )' % (filename,) )
self.repo().git.reset( 'HEAD', filename, mixed=True )
self.__stale_index = True
def cmdRevert( self, rev, file_state ):
self.debugLog( 'cmdRevert( %r, %s:%r )' % (rev, file_state.relativePath(), file_state) )
try:
if file_state.isStagedRenamed():
self.debugLog( 'cmdRevert renamedFromFilename %r renamedToFilename %r' %
(file_state.renamedFromFilename(), file_state.renamedToFilename()) )
self.repo().git.reset( rev, file_state.renamedFromFilename(), mixed=True )
self.repo().git.reset( rev, file_state.renamedToFilename(), mixed=True )
self.repo().git.checkout( rev, file_state.renamedFromFilename() )
self.cmdDelete( file_state.renamedToFilename() )
elif( file_state.isStagedNew()
or file_state.isStagedModified() ):
self.repo().git.reset( rev, file_state.relativePath(), mixed=True )
else:
self.repo().git.checkout( rev, file_state.relativePath() )
except GitCommandError as e:
if e.stderr is not None:
                # stderr unfortunately is prefixed with "\n  stderr: '"
self.app.log.error( e.stderr.split( "'", 1 )[1][:-1] )
else:
self.app.log.error( str(e) )
self.__stale_index = True
def cmdDelete( self, filename ):
(self.prefs_project.path / filename).unlink()
self.__stale_index = True
def cmdRename( self, filename, new_filename ):
filestate = self.getFileState( filename )
if filestate.isControlled():
self.repo().git.mv( filename, new_filename )
else:
abs_path = filestate.absolutePath()
new_abs_path = self.prefs_project.path / new_filename
try:
abs_path.rename( new_abs_path )
except IOError as e:
self.app.log.error( 'Renamed failed - %s' % (e,) )
self.__stale_index = True
def cmdRebase( self, commit_id, all_rebase_commands, new_commit_message=None ):
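        # Each entry of all_rebase_commands is joined with spaces to become
        # one line of the interactive-rebase todo file written by
        # rebaseHandler() below (e.g. a ('pick', '<commit-id>') tuple; the
        # exact tokens are whatever the caller supplies).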
all_text = []
for command in all_rebase_commands:
all_text.append( ' '.join( command ) )
all_text.append( '' )
rebase_commands = '\n'.join( all_text )
def rebaseHandler( filename ):
if self.debugLog.isEnabled():
with open( filename, 'r', encoding='utf-8' ) as f:
for line in f:
self.debugLog( 'Old Rebase: %r' % (line,) )
with open( filename, 'w', encoding='utf-8' ) as f:
if self.debugLog.isEnabled():
for line in all_text:
self.debugLog( 'New Rebase: %r' %(line,) )
f.write( rebase_commands )
return 0, ''
def newCommitMessage( filename ):
if self.debugLog.isEnabled():
with open( filename, 'r', encoding='utf-8' ) as f:
for line in f:
self.debugLog( 'Old Commit Message: %r' % (line,) )
with open( filename, 'w', encoding='utf-8' ) as f:
if self.debugLog.isEnabled():
for line in new_commit_message.split('\n'):
self.debugLog( 'New Commit Message: %r' % (line,) )
f.write( new_commit_message )
return 0, ''
def unexpectedCallback( filename ):
return 1, 'Unexpected callback with %r' % (filename,)
setCallbackRebaseSequenceHandler( rebaseHandler )
if new_commit_message is None:
setCallbackRebaseEditorHandler( unexpectedCallback )
else:
setCallbackRebaseEditorHandler( newCommitMessage )
rc, stdout, stderr = self.repo().git.execute(
[git.Git.GIT_PYTHON_GIT_EXECUTABLE, 'rebase', '--interactive', '%s^1' % (commit_id,)],
with_extended_output=True,
with_exceptions=False,
universal_newlines=False, # GitPython bug will TB if true
stdout_as_string=True )
self.debugLog( '%s rebase --interactive %s -> rc %d' %
(git.Git.GIT_PYTHON_GIT_EXECUTABLE, commit_id, rc) )
if rc != 0:
# assume need to abort rebase on failure
self.repo().git.execute(
[git.Git.GIT_PYTHON_GIT_EXECUTABLE, 'rebase', '--abort'],
with_extended_output=True,
with_exceptions=False,
universal_newlines=False, # GitPython bug will TB if true
stdout_as_string=True )
setCallbackRebaseSequenceHandler( None )
setCallbackRebaseEditorHandler( None )
return rc, stdout.replace( '\r', '' ).split('\n'), stderr.replace( '\r', '' ).split('\n')
def cmdCreateTag( self, tag_name, ref ):
self.repo().create_tag( tag_name, ref=ref )
def cmdDiffFolder( self, folder, head, staged ):
if head and staged:
return self.repo().git.diff( 'HEAD', self.pathForGit( folder ), staged=staged )
elif staged:
return self.repo().git.diff( self.pathForGit( folder ), staged=True )
elif head:
return self.repo().git.diff( 'HEAD', self.pathForGit( folder ), staged=False )
else:
return self.repo().git.diff( self.pathForGit( folder ), staged=False )
def cmdDiffWorkingVsCommit( self, filename, commit ):
return self.repo().git.diff( commit, self.pathForGit( filename ), staged=False )
def cmdDiffStagedVSCommit( self, filename, commit ):
return self.repo().git.diff( commit, self.pathForGit( filename ), staged=True )
def cmdDiffCommitVsCommit( self, filename, old_commit, new_commit ):
return self.repo().git.diff( old_commit, new_commit, '--', self.pathForGit( filename ) )
def cmdShow( self, what ):
return self.repo().git.show( what )
def getTextLinesForCommit( self, filepath, commit_id ):
assert isinstance( filepath, pathlib.Path ), 'expecting pathlib.Path got %r' % (filepath,)
        # git show wants a posix path, it does not work with '\' path separators
git_filepath = pathlib.PurePosixPath( filepath )
text = self.cmdShow( '%s:%s' % (commit_id, git_filepath) )
all_lines = text.split('\n')
if all_lines[-1] == '':
return all_lines[:-1]
else:
return all_lines
def cmdCommit( self, message ):
self.__stale_index = True
return self.index.commit( message )
def cmdCommitLogAfterCommitId( self, commit_id ):
if not self.hasCommits():
return []
all_commit_logs = []
for commit in self.repo().iter_commits( None ):
if commit.hexsha == commit_id:
break
all_commit_logs.append( GitCommitLogNode( commit ) )
return all_commit_logs
def cmdCommitLogForRepository( self, progress_callback, limit=None, since=None, until=None, rev=None, paths='' ):
if not self.hasCommits():
return []
all_commit_logs = []
kwds = {}
if limit is not None:
kwds['max_count'] = limit
if since is not None:
kwds['since'] = since
if since is not None:
kwds['until'] = until
for commit in self.repo().iter_commits( rev, paths, **kwds ):
all_commit_logs.append( GitCommitLogNode( commit ) )
total = len(all_commit_logs)
progress_callback( 0, total )
self.__addCommitChangeInformation( progress_callback, all_commit_logs )
progress_callback( total, total )
return all_commit_logs
def cmdCommitLogForFile( self, progress_callback, filename, limit=None, since=None, until=None, rev=None ):
return self.cmdCommitLogForRepository( progress_callback, paths=filename, limit=limit, since=since, until=until, rev=rev )
def cmdTagsForRepository( self ):
tag_name_by_id = {}
for tag in self.repo().tags:
try:
tag_name_by_id[ tag.commit.hexsha ] = tag.name
except ValueError:
                # cannot get the tag - may be a detached ref
pass
return tag_name_by_id
def doesTagExist( self, tag_name ):
return tag_name in self.repo().tags
def __addCommitChangeInformation( self, progress_callback, all_commit_logs ):
# now calculate what was added, deleted and modified in each commit
total = len(all_commit_logs)
for offset in range( total ):
progress_callback( offset, total )
all_files = all_commit_logs[ offset ].commitStats().files
new_tree = all_commit_logs[ offset ].commitTree()
old_tree = all_commit_logs[ offset ].commitPreviousTree()
all_new = {}
self.__treeToDict( all_files, new_tree, all_new )
new_set = set(all_new)
if old_tree is None:
all_commit_logs[ offset ]._addChanges( new_set, set(), [], set() )
else:
all_old = {}
self.__treeToDict( all_files, old_tree, all_old )
old_set = set(all_old)
all_added = new_set - old_set
all_deleted = old_set - new_set
all_renamed = []
# look for renames
if len(all_added) > 0 and len(all_deleted) > 0:
all_old_id_to_name = {}
for name in all_deleted:
all_old_id_to_name[ all_old[ name ] ] = name
for name in list(all_added):
id_ = all_new[ name ]
if id_ in all_old_id_to_name:
old_name = all_old_id_to_name[ id_ ]
# converted svn repos can have trees that cannot
# be used to figure out the rename
# for example when the checkin deletes a folder
# which cannot be expressed in git trees
if( old_name in all_added
and old_name in all_deleted ):
all_added.remove( name )
all_deleted.remove( old_name )
all_renamed.append( (name, old_name) )
all_modified = set()
for key in all_new:
if( key in all_old
and all_new[ key ] != all_old[ key ] ):
all_modified.add( key )
all_commit_logs[ offset ]._addChanges( all_added, all_deleted, all_renamed, all_modified )
def __treeToDict( self, all_files, tree, all_entries ):
for file in all_files:
all_parts = file.split('/')
node = tree
# walk down the tree (aka folders) until we have
# the tree that has the blob (aka file) in it
# tree.path is the full name of the folder
for index in range(1, len(all_parts)):
prefix = '/'.join( all_parts[:index] )
for child in node.trees:
if child.path == prefix:
node = child
break
# blob.path is the full path to the file
for blob in node:
if blob.path == file:
all_entries[ blob.path ] = blob.hexsha
break
def cmdAnnotationForFile( self, filename, rev=None ):
if rev is None:
rev = 'HEAD'
all_annotate_nodes = []
line_num = 0
for commit, all_lines in self.repo().blame( rev, self.pathForGit( filename ) ):
commit_id = commit.hexsha
for line_text in all_lines:
line_num += 1
all_annotate_nodes.append(
wb_annotate_node.AnnotateNode( line_num, line_text, commit_id ) )
return all_annotate_nodes
def cmdCommitLogForAnnotateFile( self, filename, all_commit_ids ):
all_commit_logs = {}
for commit_id in all_commit_ids:
commit = self.repo().commit( commit_id )
all_commit_logs[ commit_id ] = GitCommitLogNode( commit )
return all_commit_logs
def cmdPull( self, progress_callback, info_callback ):
tracking_branch = self.repo().head.ref.tracking_branch()
remote = self.repo().remote( tracking_branch.remote_name )
progress = Progress( progress_callback )
try:
for info in remote.pull( progress=progress ):
info_callback( info )
for line in progress.allDroppedLines():
self.app.log.info( line )
except GitCommandError:
for line in progress.allErrorLines():
self.app.log.error( line )
raise
def cmdPush( self, progress_callback, info_callback ):
progress = Progress( progress_callback )
tracking_branch = self.repo().head.ref.tracking_branch()
remote = self.repo().remote( tracking_branch.remote_name )
try:
for info in remote.push( progress=progress ):
info_callback( info )
for line in progress.allDroppedLines():
self.app.log.info( line )
except GitCommandError:
for line in progress.allErrorLines():
self.app.log.error( line )
raise
def cmdStashSave( self, message=None ):
cmd = [git.Git.GIT_PYTHON_GIT_EXECUTABLE, 'stash', 'push']
if message is not None:
cmd.append( '--message' )
cmd.append( message )
rc, stdout, stderr = self.repo().git.execute(
cmd,
with_extended_output=True,
with_exceptions=False,
universal_newlines=False, # GitPython bug will TB if true
stdout_as_string=True )
self.debugLog( '%s stash save -> rc %d' % (git.Git.GIT_PYTHON_GIT_EXECUTABLE, rc) )
if rc != 0:
for line in stderr.split( '\n' ):
line = line.strip()
self.app.log.error( line )
return rc == 0
def cmdStashPop( self, stash_id ):
cmd = [git.Git.GIT_PYTHON_GIT_EXECUTABLE, 'stash', 'pop', '--quiet', stash_id]
self.debugLog( 'cmdStashPop: %r' % (cmd,) )
rc, stdout, stderr = self.repo().git.execute(
cmd,
with_extended_output=True,
with_exceptions=False,
universal_newlines=False, # GitPython bug will TB if true
stdout_as_string=True )
self.debugLog( '%s stash apply %s -> rc %d' % (git.Git.GIT_PYTHON_GIT_EXECUTABLE, stash_id, rc) )
for line in stdout.split( '\n' ):
line = line.strip()
self.app.log.info( line )
if rc != 0:
for line in stderr.split( '\n' ):
line = line.strip()
self.app.log.error( line )
return rc == 0
def cmdStashList( self ):
rc, stdout, stderr = self.repo().git.execute(
[git.Git.GIT_PYTHON_GIT_EXECUTABLE, 'stash', 'list'],
with_extended_output=True,
with_exceptions=False,
universal_newlines=False, # GitPython bug will TB if true
stdout_as_string=True )
self.debugLog( '%s stash list -> rc %d' % (git.Git.GIT_PYTHON_GIT_EXECUTABLE, rc) )
if rc != 0:
for line in stderr.split( '\n' ):
line = line.strip()
self.app.log.error( line )
return []
all_stashes = []
for line in stdout.split( '\n' ):
line = line.strip()
if line == '':
continue
stash_id, stash_branch, stash_message = line.split( ': ', 2 )
for branch_prefix in ('WIP on ', 'On '):
if stash_branch.startswith( branch_prefix ):
stash_branch = stash_branch[len(branch_prefix):]
break
all_stashes.append( WbGitStashInfo( stash_id, stash_branch, stash_message ) )
return all_stashes
class WbGitStashInfo:
def __init__( self, stash_id, stash_branch, stash_message ):
self.stash_id = stash_id
self.stash_branch = stash_branch
self.stash_message = stash_message
def __repr__( self ):
return ('<WbGitStashInfo: id=%s branch=%s msg=%s>' %
(self.stash_id, self.stash_branch, self.stash_message))
class WbGitFileState:
def __init__( self, project, filepath ):
assert isinstance( project, GitProject ),'expecting GitProject got %r' % (project,)
assert isinstance( filepath, pathlib.Path ), 'expecting pathlib.Path got %r' % (filepath,)
self.__project = project
self.__filepath = filepath
self.__is_dir = False
self.__index_entry = None
self.__unstaged_diff = None
self.__staged_diff = None
self.__untracked = False
# from the above calculate the following
self.__state_calculated = False
self.__staged_is_modified = False
self.__unstaged_is_modified = False
self.__staged_abbrev = None
self.__unstaged_abbrev = None
self.__head_blob = None
self.__staged_blob = None
def __repr__( self ):
        return ('<WbGitFileState: calc %r, S=%r, U=%r>' %
(self.__state_calculated, self.__staged_abbrev, self.__unstaged_abbrev))
def relativePath( self ):
return self.__filepath
def absolutePath( self ):
return self.__project.projectPath() / self.__filepath
def renamedToFilename( self ):
assert self.isStagedRenamed()
return pathlib.Path( self.__staged_diff.rename_from )
def renamedFromFilename( self ):
assert self.isStagedRenamed()
return pathlib.Path( self.__staged_diff.rename_to )
def setIsDir( self ):
self.__is_dir = True
def isDir( self ):
return self.__is_dir
def setIndexEntry( self, index_entry ):
self.__index_entry = index_entry
def _addStaged( self, diff ):
self.__state_calculated = False
self.__staged_diff = diff
def _addUnstaged( self, diff ):
self.__state_calculated = False
self.__unstaged_diff = diff
def _setUntracked( self ):
self.__untracked = True
# from the provided info work out
# interesting properies
def __calculateState( self ):
if self.__state_calculated:
return
if self.__staged_diff is None:
self.__staged_abbrev = ''
else:
if self.__staged_diff.renamed:
self.__staged_abbrev = 'R'
elif self.__staged_diff.deleted_file:
self.__staged_abbrev = 'A'
elif self.__staged_diff.new_file:
self.__staged_abbrev = 'D'
else:
self.__staged_abbrev = 'M'
self.__staged_is_modified = True
self.__head_blob = self.__staged_diff.b_blob
self.__staged_blob = self.__staged_diff.a_blob
if self.__unstaged_diff is None:
self.__unstaged_abbrev = ''
else:
if self.__unstaged_diff.deleted_file:
self.__unstaged_abbrev = 'D'
elif self.__unstaged_diff.new_file:
self.__unstaged_abbrev = 'A'
else:
self.__unstaged_abbrev = 'M'
self.__unstaged_is_modified = True
if self.__head_blob is None:
self.__head_blob = self.__unstaged_diff.a_blob
self.__state_calculated = True
def getStagedAbbreviatedStatus( self ):
self.__calculateState()
return self.__staged_abbrev
def getUnstagedAbbreviatedStatus( self ):
self.__calculateState()
return self.__unstaged_abbrev
#------------------------------------------------------------
def isControlled( self ):
if self.__staged_diff is not None:
return True
return self.__index_entry is not None
def isUncontrolled( self ):
return self.__untracked
def isIgnored( self ):
if self.__staged_diff is not None:
return False
if self.__index_entry is not None:
return False
# untracked files have had ignored files striped out
if self.__untracked:
return False
return True
# ------------------------------
def isStagedNew( self ):
self.__calculateState()
return self.__staged_abbrev == 'A'
def isStagedModified( self ):
self.__calculateState()
return self.__staged_abbrev == 'M'
def isStagedDeleted( self ):
self.__calculateState()
return self.__staged_abbrev == 'D'
def isStagedRenamed( self ):
self.__calculateState()
return self.__staged_abbrev == 'R'
def isUnstagedModified( self ):
self.__calculateState()
return self.__unstaged_abbrev == 'M'
def isUnstagedDeleted( self ):
self.__calculateState()
return self.__unstaged_abbrev == 'D'
# ------------------------------------------------------------
def canCommit( self ):
return self.__staged_abbrev != ''
def canStage( self ):
return self.__unstaged_abbrev != '' or self.__untracked
def canUnstage( self ):
return self.__staged_abbrev != ''
def canRevert( self ):
return (self.isUnstagedDeleted()
or self.isUnstagedModified()
or self.isStagedNew()
or self.isStagedRenamed()
or self.isStagedDeleted()
or self.isStagedModified())
# ------------------------------------------------------------
def canDiffHeadVsStaged( self ):
self.__calculateState()
return self.__staged_is_modified
def canDiffStagedVsWorking( self ):
self.__calculateState()
return self.__unstaged_is_modified and self.__staged_is_modified
def canDiffHeadVsWorking( self ):
self.__calculateState()
return self.__unstaged_is_modified
def getTextLinesWorking( self ):
path = self.absolutePath()
with path.open( encoding='utf-8' ) as f:
all_lines = f.read().split( '\n' )
if all_lines[-1] == '':
return all_lines[:-1]
else:
return all_lines
def getTextLinesHead( self ):
return self.__getTextLinesFromBlob( self.getHeadBlob() )
def getTextLinesStaged( self ):
return self.__getTextLinesFromBlob( self.getStagedBlob() )
def __getTextLinesFromBlob( self, blob ):
data = blob.data_stream.read()
text = data.decode( 'utf-8' )
all_lines = text.split('\n')
if all_lines[-1] == '':
return all_lines[:-1]
else:
return all_lines
def getTextLinesForCommit( self, commit_id ):
git_filepath = pathlib.PurePosixPath( self.__filepath )
text = self.__project.cmdShow( '%s:%s' % (commit_id, git_filepath) )
all_lines = text.split('\n')
if all_lines[-1] == '':
return all_lines[:-1]
else:
return all_lines
def getHeadBlob( self ):
return self.__head_blob
def getStagedBlob( self ):
return self.__staged_blob
class GitCommitLogNode:
def __init__( self, commit ):
self.__commit = commit
self.__all_changes = []
def _addChanges( self, all_added, all_deleted, all_renamed, all_modified ):
for name in all_added:
self.__all_changes.append( ('A', name, '' ) )
for name in all_deleted:
self.__all_changes.append( ('D', name, '' ) )
for name, old_name in all_renamed:
self.__all_changes.append( ('R', name, old_name ) )
for name in all_modified:
self.__all_changes.append( ('M', name, '' ) )
def commitStats( self ):
return self.__commit.stats
def commitTree( self ):
return self.__commit.tree
def commitPreviousTree( self ):
if len(self.__commit.parents) == 0:
return None
previous_commit = self.__commit.parents[0]
return previous_commit.tree
def commitId( self ):
return self.__commit.hexsha
def commitIdString( self ):
return self.__commit.hexsha
def commitAuthor( self ):
return self.__commit.author.name
def commitAuthorEmail( self ):
return self.__commit.author.email
def commitDate( self ):
return self.__commit.committed_datetime
def commitMessage( self ):
return self.__commit.message
def commitMessageHeadline( self ):
return self.__commit.message.split('\n')[0]
def commitFileChanges( self ):
return self.__all_changes
class GitProjectTreeNode:
def __init__( self, project, name, path ):
self.project = project
self.name = name
self.is_by_path = False
self.__path = path
self.__all_folders = {}
self.__all_files = {}
def __repr__( self ):
return '<GitProjectTreeNode: project %r, path %s>' % (self.project, self.__path)
def updateTreeNode( self ):
pass
def isByPath( self ):
return self.is_by_path
def addFileByName( self, path ):
assert path.name != ''
self.__all_files[ path.name ] = path
def addFileByPath( self, path ):
assert path.name != ''
self.is_by_path = True
self.__all_files[ path ] = path
def getAllFileNames( self ):
return self.__all_files.keys()
def addFolder( self, name, node ):
assert type(name) == str and name != '', 'name %r, node %r' % (name, node)
assert isinstance( node, GitProjectTreeNode )
self.__all_folders[ name ] = node
def getFolder( self, name ):
assert type(name) == str
return self.__all_folders[ name ]
def getAllFolderNodes( self ):
return self.__all_folders.values()
def getAllFolderNames( self ):
return self.__all_folders.keys()
def hasFolder( self, name ):
assert type(name) == str
return name in self.__all_folders
def _dumpTree( self, indent ):
self.project.debugLog( 'dump: %*s%r' % (indent, '', self) )
for file in sorted( self.__all_files ):
self.project.debugLog( 'dump %*s file: %r' % (indent, '', file) )
for folder in sorted( self.__all_folders ):
self.__all_folders[ folder ]._dumpTree( indent+4 )
def isNotEqual( self, other ):
return (self.relativePath() != other.relativePath()
or self.project.isNotEqual( other.project ))
def __lt__( self, other ):
return self.name < other.name
def relativePath( self ):
return self.__path
def absolutePath( self ):
return self.project.projectPath() / self.__path
def getStatusEntry( self, name ):
path = self.__all_files[ name ]
if path in self.project.all_file_state:
entry = self.project.all_file_state[ path ]
else:
entry = WbGitFileState( self.project, path )
return entry
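# Progress adapts GitPython's git.RemoteProgress callbacks into a simple
# progress_call_back( is_begin, is_end, stage_name, cur_count, max_count, message ) callable supplied by the caller.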
class Progress(git.RemoteProgress):
def __init__( self, progress_call_back ):
self.progress_call_back = progress_call_back
super().__init__()
self.__all_dropped_lines = []
all_update_stages = {
git.RemoteProgress.COUNTING: 'Counting',
git.RemoteProgress.COMPRESSING: 'Compressing',
git.RemoteProgress.WRITING: 'Writing',
git.RemoteProgress.RECEIVING: 'Receiving',
git.RemoteProgress.RESOLVING: 'Resolving',
git.RemoteProgress.FINDING_SOURCES: 'Finding Sources',
git.RemoteProgress.CHECKING_OUT: 'Checking Out',
}
def update( self, op_code, cur_count, max_count=None, message='' ):
stage_name = self.all_update_stages.get( op_code&git.RemoteProgress.OP_MASK, 'Unknown' )
is_begin = op_code&git.RemoteProgress.BEGIN != 0
is_end = op_code&git.RemoteProgress.END != 0
self.progress_call_back( is_begin, is_end, stage_name, cur_count, max_count, message )
def line_dropped( self, line ):
if line.startswith( 'POST git-upload-pack' ):
return
self.__all_dropped_lines.append( line )
def allErrorLines( self ):
return self.error_lines + self.__all_dropped_lines
def allDroppedLines( self ):
return self.__all_dropped_lines
|
[
"pathlib.Path",
"wb_git_callback_server.WbGitCallbackServer",
"wb_annotate_node.AnnotateNode",
"pathlib.PurePosixPath",
"wb_platform_specific.getAppDir"
] |
[((1105, 1152), 'wb_git_callback_server.WbGitCallbackServer', 'wb_git_callback_server.WbGitCallbackServer', (['app'], {}), '(app)\n', (1147, 1152), False, 'import wb_git_callback_server\n'), ((5252, 5274), 'pathlib.Path', 'pathlib.Path', (['str_path'], {}), '(str_path)\n', (5264, 5274), False, 'import pathlib\n'), ((5672, 5709), 'pathlib.Path', 'pathlib.Path', (['self.prefs_project.path'], {}), '(self.prefs_project.path)\n', (5684, 5709), False, 'import pathlib\n'), ((22110, 22141), 'pathlib.PurePosixPath', 'pathlib.PurePosixPath', (['filepath'], {}), '(filepath)\n', (22131, 22141), False, 'import pathlib\n'), ((33901, 33945), 'pathlib.Path', 'pathlib.Path', (['self.__staged_diff.rename_from'], {}), '(self.__staged_diff.rename_from)\n', (33913, 33945), False, 'import pathlib\n'), ((34039, 34081), 'pathlib.Path', 'pathlib.Path', (['self.__staged_diff.rename_to'], {}), '(self.__staged_diff.rename_to)\n', (34051, 34081), False, 'import pathlib\n'), ((39232, 39270), 'pathlib.PurePosixPath', 'pathlib.PurePosixPath', (['self.__filepath'], {}), '(self.__filepath)\n', (39253, 39270), False, 'import pathlib\n'), ((1238, 1270), 'wb_platform_specific.getAppDir', 'wb_platform_specific.getAppDir', ([], {}), '()\n', (1268, 1270), False, 'import wb_platform_specific\n'), ((1567, 1599), 'wb_platform_specific.getAppDir', 'wb_platform_specific.getAppDir', ([], {}), '()\n', (1597, 1599), False, 'import wb_platform_specific\n'), ((3197, 3214), 'pathlib.Path', 'pathlib.Path', (['"""."""'], {}), "('.')\n", (3209, 3214), False, 'import pathlib\n'), ((3290, 3307), 'pathlib.Path', 'pathlib.Path', (['"""."""'], {}), "('.')\n", (3302, 3307), False, 'import pathlib\n'), ((8043, 8060), 'pathlib.Path', 'pathlib.Path', (['"""."""'], {}), "('.')\n", (8055, 8060), False, 'import pathlib\n'), ((8141, 8158), 'pathlib.Path', 'pathlib.Path', (['"""."""'], {}), "('.')\n", (8153, 8158), False, 'import pathlib\n'), ((10120, 10144), 'pathlib.Path', 'pathlib.Path', (['entry.path'], {}), '(entry.path)\n', (10132, 10144), False, 'import pathlib\n'), ((10530, 10555), 'pathlib.Path', 'pathlib.Path', (['diff.b_path'], {}), '(diff.b_path)\n', (10542, 10555), False, 'import pathlib\n'), ((11044, 11069), 'pathlib.Path', 'pathlib.Path', (['diff.a_path'], {}), '(diff.a_path)\n', (11056, 11069), False, 'import pathlib\n'), ((11333, 11351), 'pathlib.Path', 'pathlib.Path', (['path'], {}), '(path)\n', (11345, 11351), False, 'import pathlib\n'), ((1470, 1502), 'wb_platform_specific.getAppDir', 'wb_platform_specific.getAppDir', ([], {}), '()\n', (1500, 1502), False, 'import wb_platform_specific\n'), ((27898, 27959), 'wb_annotate_node.AnnotateNode', 'wb_annotate_node.AnnotateNode', (['line_num', 'line_text', 'commit_id'], {}), '(line_num, line_text, commit_id)\n', (27927, 27959), False, 'import wb_annotate_node\n'), ((12072, 12110), 'pathlib.Path', 'pathlib.Path', (['*path.parts[0:index + 1]'], {}), '(*path.parts[0:index + 1])\n', (12084, 12110), False, 'import pathlib\n'), ((10761, 10791), 'pathlib.Path', 'pathlib.Path', (['diff.rename_from'], {}), '(diff.rename_from)\n', (10773, 10791), False, 'import pathlib\n')]
|
import logging
import os
import random
import time
import datetime
import sys
import math
from screen import Screen
from scorer import Scorer
from trigger import Trigger
from psychopy import core, event, sound
from psychopy.hardware import keyboard
from pupil_labs import PupilCore
from datalog import Datalog
from config.configSample import CONF
#########################################################################
######################################
# Initialize screen, logger and inputs
logging.basicConfig(
level=CONF["loggingLevel"],
format='%(asctime)s-%(levelname)s-%(message)s',
) # This is a log for debugging the script, and prints messages to the terminal
# needs to be first, so that if it doesn't succeed, it doesn't freeze everything
eyetracker = PupilCore(ip=CONF["pupillometry"]["ip"],
                       port=CONF["pupillometry"]["port"],
                       shouldRecord=CONF["recordEyetracking"])
trigger = Trigger(CONF["trigger"]["serial_device"],
CONF["sendTriggers"], CONF["trigger"]["labels"])
screen = Screen(CONF)
datalog = Datalog(OUTPUT_FOLDER=os.path.join(
'output', CONF["participant"] + "_" + CONF["session"],
datetime.datetime.now().strftime("%Y-%m-%d")), CONF=CONF) # This is for saving data
kb = keyboard.Keyboard()
mainClock = core.MonotonicClock() # starts clock for timestamping events
alarm = sound.Sound(os.path.join('sounds', CONF["instructions"]["alarm"]),
stereo=True)
questionnaireReminder = sound.Sound(os.path.join(
'sounds', CONF["instructions"]["questionnaireReminder"]), stereo=True)
scorer = Scorer()
logging.info('Initialization completed')
#########################################################################
def quitExperimentIf(shouldQuit):
"Quit experiment if condition is met"
if shouldQuit:
trigger.send("Quit")
scorer.getScore()
logging.info('quit experiment')
eyetracker.stop_recording()
trigger.reset()
sys.exit(2)
def onFlip(stimName, logName):
"send trigger on flip, set keyboard clock, and save timepoint"
trigger.send(stimName)
kb.clock.reset() # this starts the keyboard clock as soon as stimulus appears
datalog[logName] = mainClock.getTime()
##############
# Introduction
##############
# Display overview of session
screen.show_overview()
core.wait(CONF["timing"]["overview"])
# Optionally, display instructions
if CONF["showInstructions"]:
screen.show_instructions()
key = event.waitKeys()
quitExperimentIf(key[0] == 'q')
eyetracker.start_recording(os.path.join(
CONF["participant"], CONF["session"], CONF["task"]["name"]))
# Blank screen for initial rest
screen.show_blank()
logging.info('Starting blank period')
trigger.send("StartBlank")
core.wait(CONF["timing"]["rest"])
trigger.send("EndBlank")
# Cue start of the experiment
screen.show_cue("START")
trigger.send("Start")
core.wait(CONF["timing"]["cue"])
#################
# Main experiment
#################
# customize: replace this block with the task-specific trial logic
datalog["trialID"] = trigger.sendTriggerId()
eyetracker.send_trigger("Stim", {"id": 1, "condition": "sample"})
datalog["pupilSize"] = eyetracker.getPupildiameter()
# save data to file
datalog.flush()
###########
# Conclusion
###########
# End main experiment
screen.show_cue("DONE!")
trigger.send("End")
core.wait(CONF["timing"]["cue"])
# Blank screen for final rest
screen.show_blank()
logging.info('Starting blank period')
trigger.send("StartBlank")
core.wait(CONF["timing"]["rest"])
trigger.send("EndBlank")
logging.info('Finished')
scorer.getScore()
trigger.reset()
eyetracker.stop_recording()
questionnaireReminder.play()
core.wait(2)
|
[
"logging.basicConfig",
"psychopy.event.waitKeys",
"screen.Screen",
"os.path.join",
"pupil_labs.PupilCore",
"logging.info",
"datetime.datetime.now",
"psychopy.hardware.keyboard.Keyboard",
"sys.exit",
"psychopy.core.MonotonicClock",
"scorer.Scorer",
"trigger.Trigger",
"psychopy.core.wait"
] |
[((503, 603), 'logging.basicConfig', 'logging.basicConfig', ([], {'level': "CONF['loggingLevel']", 'format': '"""%(asctime)s-%(levelname)s-%(message)s"""'}), "(level=CONF['loggingLevel'], format=\n '%(asctime)s-%(levelname)s-%(message)s')\n", (522, 603), False, 'import logging\n'), ((784, 903), 'pupil_labs.PupilCore', 'PupilCore', ([], {'ip': "CONF['pupillometry']['ip']", 'port': "CONF['pupillometry']['port']", 'shouldRecord': "CONF['recordEyetracking']"}), "(ip=CONF['pupillometry']['ip'], port=CONF['pupillometry']['port'],\n shouldRecord=CONF['recordEyetracking'])\n", (793, 903), False, 'from pupil_labs import PupilCore\n'), ((936, 1031), 'trigger.Trigger', 'Trigger', (["CONF['trigger']['serial_device']", "CONF['sendTriggers']", "CONF['trigger']['labels']"], {}), "(CONF['trigger']['serial_device'], CONF['sendTriggers'], CONF[\n 'trigger']['labels'])\n", (943, 1031), False, 'from trigger import Trigger\n'), ((1055, 1067), 'screen.Screen', 'Screen', (['CONF'], {}), '(CONF)\n', (1061, 1067), False, 'from screen import Screen\n'), ((1269, 1288), 'psychopy.hardware.keyboard.Keyboard', 'keyboard.Keyboard', ([], {}), '()\n', (1286, 1288), False, 'from psychopy.hardware import keyboard\n'), ((1302, 1323), 'psychopy.core.MonotonicClock', 'core.MonotonicClock', ([], {}), '()\n', (1321, 1323), False, 'from psychopy import core, event, sound\n'), ((1609, 1617), 'scorer.Scorer', 'Scorer', ([], {}), '()\n', (1615, 1617), False, 'from scorer import Scorer\n'), ((1620, 1660), 'logging.info', 'logging.info', (['"""Initialization completed"""'], {}), "('Initialization completed')\n", (1632, 1660), False, 'import logging\n'), ((2364, 2401), 'psychopy.core.wait', 'core.wait', (["CONF['timing']['overview']"], {}), "(CONF['timing']['overview'])\n", (2373, 2401), False, 'from psychopy import core, event, sound\n'), ((2723, 2760), 'logging.info', 'logging.info', (['"""Starting blank period"""'], {}), "('Starting blank period')\n", (2735, 2760), False, 'import logging\n'), ((2789, 2822), 'psychopy.core.wait', 'core.wait', (["CONF['timing']['rest']"], {}), "(CONF['timing']['rest'])\n", (2798, 2822), False, 'from psychopy import core, event, sound\n'), ((2926, 2958), 'psychopy.core.wait', 'core.wait', (["CONF['timing']['cue']"], {}), "(CONF['timing']['cue'])\n", (2935, 2958), False, 'from psychopy import core, event, sound\n'), ((3335, 3367), 'psychopy.core.wait', 'core.wait', (["CONF['timing']['cue']"], {}), "(CONF['timing']['cue'])\n", (3344, 3367), False, 'from psychopy import core, event, sound\n'), ((3419, 3456), 'logging.info', 'logging.info', (['"""Starting blank period"""'], {}), "('Starting blank period')\n", (3431, 3456), False, 'import logging\n'), ((3485, 3518), 'psychopy.core.wait', 'core.wait', (["CONF['timing']['rest']"], {}), "(CONF['timing']['rest'])\n", (3494, 3518), False, 'from psychopy import core, event, sound\n'), ((3546, 3570), 'logging.info', 'logging.info', (['"""Finished"""'], {}), "('Finished')\n", (3558, 3570), False, 'import logging\n'), ((3663, 3675), 'psychopy.core.wait', 'core.wait', (['(2)'], {}), '(2)\n', (3672, 3675), False, 'from psychopy import core, event, sound\n'), ((1385, 1438), 'os.path.join', 'os.path.join', (['"""sounds"""', "CONF['instructions']['alarm']"], {}), "('sounds', CONF['instructions']['alarm'])\n", (1397, 1438), False, 'import os\n'), ((1510, 1579), 'os.path.join', 'os.path.join', (['"""sounds"""', "CONF['instructions']['questionnaireReminder']"], {}), "('sounds', CONF['instructions']['questionnaireReminder'])\n", (1522, 1579), False, 'import os\n'), ((2508, 
2524), 'psychopy.event.waitKeys', 'event.waitKeys', ([], {}), '()\n', (2522, 2524), False, 'from psychopy import core, event, sound\n'), ((2590, 2662), 'os.path.join', 'os.path.join', (["CONF['participant']", "CONF['session']", "CONF['task']['name']"], {}), "(CONF['participant'], CONF['session'], CONF['task']['name'])\n", (2602, 2662), False, 'import os\n'), ((1897, 1928), 'logging.info', 'logging.info', (['"""quit experiment"""'], {}), "('quit experiment')\n", (1909, 1928), False, 'import logging\n'), ((1997, 2008), 'sys.exit', 'sys.exit', (['(2)'], {}), '(2)\n', (2005, 2008), False, 'import sys\n'), ((1178, 1201), 'datetime.datetime.now', 'datetime.datetime.now', ([], {}), '()\n', (1199, 1201), False, 'import datetime\n')]
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# This file is subject to the terms and conditions defined in
# file 'LICENSE.md', which is part of this source code package.
#
import time
import uuid
from kubernetes.K8sCronJob import K8sCronJob
from kubernetes.K8sPod import K8sPod
from kubernetes.models.v2alpha1.CronJob import CronJob
from kubernetes.K8sExceptions import CronJobAlreadyRunningException
from tests import _constants
from tests import _utils
from tests.BaseTest import BaseTest
class K8sCronJobTests(BaseTest):
def setUp(self):
_utils.cleanup_cronjobs()
_utils.cleanup_jobs()
_utils.cleanup_pods()
def tearDown(self):
_utils.cleanup_cronjobs()
_utils.cleanup_jobs()
_utils.cleanup_pods()
# --------------------------------------------------------------------------------- init
def test_init_no_args(self):
try:
K8sCronJob()
self.fail("Should not fail.")
except SyntaxError:
pass
except IOError:
pass
except Exception as err:
self.fail("Unhandled exception: [ {0} ]".format(err))
def test_init_with_invalid_config(self):
config = object()
with self.assertRaises(SyntaxError):
K8sCronJob(config=config)
def test_init_with_invalid_name(self):
name = object()
with self.assertRaises(SyntaxError):
_utils.create_cronjob(name=name)
def test_init_with_name(self):
name = "yomama"
rc = _utils.create_cronjob(name=name)
self.assertIsNotNone(rc)
self.assertIsInstance(rc, K8sCronJob)
self.assertEqual(rc.name, name)
# --------------------------------------------------------------------------------- containers
def test_containers(self):
c_name = "redis"
c_image = "redis:latest"
c_image_2 = "redis:3.2.3"
container = _utils.create_container(name=c_name, image=c_image)
name = "job-{}".format(uuid.uuid4())
cj = _utils.create_cronjob(name=name)
cj.add_container(container)
self.assertEqual(1, len(cj.containers))
self.assertIn(c_name, cj.container_image)
self.assertEqual(c_image, cj.container_image[c_name])
container = _utils.create_container(name=c_name, image=c_image_2)
cj.add_container(container)
self.assertEqual(1, len(cj.containers))
self.assertEqual(c_image_2, cj.container_image[c_name])
# --------------------------------------------------------------------------------- imagePullSecrets
def test_add_image_pull_secrets(self):
cfg = _utils.create_config()
cfg.pull_secret = [
{'name': 'secret-name'},
{'name': 'other-secret-name'},
{'name': 'secret-name'} # duplicate
]
cj = _utils.create_cronjob(config=cfg, name="yo")
self.assertEqual(2, len(cj.image_pull_secrets)) # duplicate not present
# --------------------------------------------------------------------------------- api - create
def test_api_create(self):
name = "job-{}".format(uuid.uuid4())
job = CronJob(_constants.scheduledjob())
k8s_cronjob = _utils.create_cronjob(name=name)
k8s_cronjob.model = job
if _utils.is_reachable(k8s_cronjob.config):
k8s_cronjob.create()
self.assertIsInstance(k8s_cronjob, K8sCronJob)
def test_api_create_long_running_with_concurrency(self):
name = "job-{}".format(uuid.uuid4())
job = CronJob(_constants.scheduledjob_90())
k8s_cronjob = _utils.create_cronjob(name=name)
k8s_cronjob.model = job
k8s_cronjob.concurrency_policy = "Allow"
if _utils.is_reachable(k8s_cronjob.config):
k8s_cronjob.create()
self.assertIsInstance(k8s_cronjob, K8sCronJob)
self.assertEqual('Allow', k8s_cronjob.concurrency_policy)
def test_api_create_long_running_no_concurrency(self):
name = "job-{}".format(uuid.uuid4())
job = CronJob(_constants.scheduledjob_90())
k8s_cronjob = _utils.create_cronjob(name=name)
k8s_cronjob.model = job
k8s_cronjob.concurrency_policy = "Forbid"
k8s_cronjob.starting_deadline_seconds = 10
if _utils.is_reachable(k8s_cronjob.config):
k8s_cronjob.create()
self.assertIsInstance(k8s_cronjob, K8sCronJob)
self.assertEqual('Forbid', k8s_cronjob.concurrency_policy)
self.assertEqual(10, k8s_cronjob.starting_deadline_seconds)
# --------------------------------------------------------------------------------- api - list
def test_list(self):
name = "job-{}".format(uuid.uuid4())
job = CronJob(_constants.scheduledjob_90())
k8s_cronjob = _utils.create_cronjob(name=name)
k8s_cronjob.model = job
k8s_cronjob.concurrency_policy = "Forbid"
k8s_cronjob.starting_deadline_seconds = 10
if _utils.is_reachable(k8s_cronjob.config):
k8s_cronjob.create()
crons = k8s_cronjob.list()
for c in crons:
self.assertIsInstance(c, K8sCronJob)
# --------------------------------------------------------------------------------- api - last scheduled time
def test_last_schedule_time(self):
name = "job-{}".format(uuid.uuid4())
job = CronJob(_constants.scheduledjob_90())
k8s_cronjob = _utils.create_cronjob(name=name)
k8s_cronjob.model = job
k8s_cronjob.concurrency_policy = "Forbid"
k8s_cronjob.starting_deadline_seconds = 10
if _utils.is_reachable(k8s_cronjob.config):
k8s_cronjob.create()
while not k8s_cronjob.last_schedule_time:
k8s_cronjob.get()
time.sleep(2)
lst = k8s_cronjob.last_schedule_time
self.assertIsNotNone(lst)
self.assertIsInstance(lst, str)
# --------------------------------------------------------------------------------- api - pod
def test_pod(self):
name = "job-{}".format(uuid.uuid4())
model = CronJob(_constants.scheduledjob_90())
cj = _utils.create_cronjob(name=name)
cj.model = model
cj.concurrency_policy = "Forbid"
cj.starting_deadline_seconds = 10
if _utils.is_reachable(cj.config):
cj.create()
while not cj.last_schedule_time:
cj.get()
time.sleep(2)
pod = cj.pod
self.assertIsInstance(pod, K8sPod)
# --------------------------------------------------------------------------------- api - run
def test_run_already_running(self):
name = "job-{}".format(uuid.uuid4())
model = CronJob(_constants.scheduledjob_90())
cj = _utils.create_cronjob(name=name)
cj.model = model
cj.concurrency_policy = "Forbid"
cj.starting_deadline_seconds = 10
if _utils.is_reachable(cj.config):
cj.create()
while not cj.last_schedule_time:
cj.get()
time.sleep(2)
with self.assertRaises(CronJobAlreadyRunningException):
cj.run()
def test_run(self):
name = "job-{}".format(uuid.uuid4())
model = CronJob(_constants.scheduledjob_90())
cj = _utils.create_cronjob(name=name)
cj.model = model
cj.concurrency_policy = "Forbid"
cj.starting_deadline_seconds = 10
if _utils.is_reachable(cj.config):
cj.create()
self.assertFalse(cj.suspend)
cj.run()
self.assertFalse(cj.suspend)
# --------------------------------------------------------------------------------- api - activeDeadlineSeconds
def test_active_deadline_seconds(self):
ads = 50
cfg = _utils.create_config()
cj = CronJob(_constants.cronjob())
k8s = K8sCronJob(config=cfg, name="yo")
k8s.model = cj
self.assertIsNone(k8s.active_deadline_seconds)
k8s.active_deadline_seconds = ads
self.assertIsNotNone(k8s.active_deadline_seconds)
self.assertEqual(k8s.active_deadline_seconds, ads)
def test_observe_active_deadline_seconds(self):
cfg = _utils.create_config()
cj = CronJob(_constants.cronjob_exit_1())
k8s = K8sCronJob(config=cfg, name="yo")
k8s.model = cj
if _utils.is_reachable(cfg):
k8s.create()
self.assertIsInstance(k8s, K8sCronJob)
|
[
"tests._utils.cleanup_cronjobs",
"kubernetes.K8sCronJob.K8sCronJob",
"tests._utils.cleanup_jobs",
"tests._utils.create_config",
"tests._utils.create_container",
"time.sleep",
"uuid.uuid4",
"tests._utils.create_cronjob",
"tests._utils.is_reachable",
"tests._constants.cronjob",
"tests._utils.cleanup_pods",
"tests._constants.scheduledjob",
"tests._constants.scheduledjob_90",
"tests._constants.cronjob_exit_1"
] |
[((563, 588), 'tests._utils.cleanup_cronjobs', '_utils.cleanup_cronjobs', ([], {}), '()\n', (586, 588), False, 'from tests import _utils\n'), ((597, 618), 'tests._utils.cleanup_jobs', '_utils.cleanup_jobs', ([], {}), '()\n', (616, 618), False, 'from tests import _utils\n'), ((627, 648), 'tests._utils.cleanup_pods', '_utils.cleanup_pods', ([], {}), '()\n', (646, 648), False, 'from tests import _utils\n'), ((682, 707), 'tests._utils.cleanup_cronjobs', '_utils.cleanup_cronjobs', ([], {}), '()\n', (705, 707), False, 'from tests import _utils\n'), ((716, 737), 'tests._utils.cleanup_jobs', '_utils.cleanup_jobs', ([], {}), '()\n', (735, 737), False, 'from tests import _utils\n'), ((746, 767), 'tests._utils.cleanup_pods', '_utils.cleanup_pods', ([], {}), '()\n', (765, 767), False, 'from tests import _utils\n'), ((1547, 1579), 'tests._utils.create_cronjob', '_utils.create_cronjob', ([], {'name': 'name'}), '(name=name)\n', (1568, 1579), False, 'from tests import _utils\n'), ((1943, 1994), 'tests._utils.create_container', '_utils.create_container', ([], {'name': 'c_name', 'image': 'c_image'}), '(name=c_name, image=c_image)\n', (1966, 1994), False, 'from tests import _utils\n'), ((2054, 2086), 'tests._utils.create_cronjob', '_utils.create_cronjob', ([], {'name': 'name'}), '(name=name)\n', (2075, 2086), False, 'from tests import _utils\n'), ((2304, 2357), 'tests._utils.create_container', '_utils.create_container', ([], {'name': 'c_name', 'image': 'c_image_2'}), '(name=c_name, image=c_image_2)\n', (2327, 2357), False, 'from tests import _utils\n'), ((2670, 2692), 'tests._utils.create_config', '_utils.create_config', ([], {}), '()\n', (2690, 2692), False, 'from tests import _utils\n'), ((2873, 2917), 'tests._utils.create_cronjob', '_utils.create_cronjob', ([], {'config': 'cfg', 'name': '"""yo"""'}), "(config=cfg, name='yo')\n", (2894, 2917), False, 'from tests import _utils\n'), ((3249, 3281), 'tests._utils.create_cronjob', '_utils.create_cronjob', ([], {'name': 'name'}), '(name=name)\n', (3270, 3281), False, 'from tests import _utils\n'), ((3325, 3364), 'tests._utils.is_reachable', '_utils.is_reachable', (['k8s_cronjob.config'], {}), '(k8s_cronjob.config)\n', (3344, 3364), False, 'from tests import _utils\n'), ((3640, 3672), 'tests._utils.create_cronjob', '_utils.create_cronjob', ([], {'name': 'name'}), '(name=name)\n', (3661, 3672), False, 'from tests import _utils\n'), ((3766, 3805), 'tests._utils.is_reachable', '_utils.is_reachable', (['k8s_cronjob.config'], {}), '(k8s_cronjob.config)\n', (3785, 3805), False, 'from tests import _utils\n'), ((4149, 4181), 'tests._utils.create_cronjob', '_utils.create_cronjob', ([], {'name': 'name'}), '(name=name)\n', (4170, 4181), False, 'from tests import _utils\n'), ((4327, 4366), 'tests._utils.is_reachable', '_utils.is_reachable', (['k8s_cronjob.config'], {}), '(k8s_cronjob.config)\n', (4346, 4366), False, 'from tests import _utils\n'), ((4849, 4881), 'tests._utils.create_cronjob', '_utils.create_cronjob', ([], {'name': 'name'}), '(name=name)\n', (4870, 4881), False, 'from tests import _utils\n'), ((5027, 5066), 'tests._utils.is_reachable', '_utils.is_reachable', (['k8s_cronjob.config'], {}), '(k8s_cronjob.config)\n', (5046, 5066), False, 'from tests import _utils\n'), ((5496, 5528), 'tests._utils.create_cronjob', '_utils.create_cronjob', ([], {'name': 'name'}), '(name=name)\n', (5517, 5528), False, 'from tests import _utils\n'), ((5674, 5713), 'tests._utils.is_reachable', '_utils.is_reachable', (['k8s_cronjob.config'], {}), '(k8s_cronjob.config)\n', (5693, 5713), 
False, 'from tests import _utils\n'), ((6234, 6266), 'tests._utils.create_cronjob', '_utils.create_cronjob', ([], {'name': 'name'}), '(name=name)\n', (6255, 6266), False, 'from tests import _utils\n'), ((6387, 6417), 'tests._utils.is_reachable', '_utils.is_reachable', (['cj.config'], {}), '(cj.config)\n', (6406, 6417), False, 'from tests import _utils\n'), ((6868, 6900), 'tests._utils.create_cronjob', '_utils.create_cronjob', ([], {'name': 'name'}), '(name=name)\n', (6889, 6900), False, 'from tests import _utils\n'), ((7021, 7051), 'tests._utils.is_reachable', '_utils.is_reachable', (['cj.config'], {}), '(cj.config)\n', (7040, 7051), False, 'from tests import _utils\n'), ((7408, 7440), 'tests._utils.create_cronjob', '_utils.create_cronjob', ([], {'name': 'name'}), '(name=name)\n', (7429, 7440), False, 'from tests import _utils\n'), ((7561, 7591), 'tests._utils.is_reachable', '_utils.is_reachable', (['cj.config'], {}), '(cj.config)\n', (7580, 7591), False, 'from tests import _utils\n'), ((7913, 7935), 'tests._utils.create_config', '_utils.create_config', ([], {}), '()\n', (7933, 7935), False, 'from tests import _utils\n'), ((7993, 8026), 'kubernetes.K8sCronJob.K8sCronJob', 'K8sCronJob', ([], {'config': 'cfg', 'name': '"""yo"""'}), "(config=cfg, name='yo')\n", (8003, 8026), False, 'from kubernetes.K8sCronJob import K8sCronJob\n'), ((8331, 8353), 'tests._utils.create_config', '_utils.create_config', ([], {}), '()\n', (8351, 8353), False, 'from tests import _utils\n'), ((8418, 8451), 'kubernetes.K8sCronJob.K8sCronJob', 'K8sCronJob', ([], {'config': 'cfg', 'name': '"""yo"""'}), "(config=cfg, name='yo')\n", (8428, 8451), False, 'from kubernetes.K8sCronJob import K8sCronJob\n'), ((8486, 8510), 'tests._utils.is_reachable', '_utils.is_reachable', (['cfg'], {}), '(cfg)\n', (8505, 8510), False, 'from tests import _utils\n'), ((921, 933), 'kubernetes.K8sCronJob.K8sCronJob', 'K8sCronJob', ([], {}), '()\n', (931, 933), False, 'from kubernetes.K8sCronJob import K8sCronJob\n'), ((1290, 1315), 'kubernetes.K8sCronJob.K8sCronJob', 'K8sCronJob', ([], {'config': 'config'}), '(config=config)\n', (1300, 1315), False, 'from kubernetes.K8sCronJob import K8sCronJob\n'), ((1441, 1473), 'tests._utils.create_cronjob', '_utils.create_cronjob', ([], {'name': 'name'}), '(name=name)\n', (1462, 1473), False, 'from tests import _utils\n'), ((2026, 2038), 'uuid.uuid4', 'uuid.uuid4', ([], {}), '()\n', (2036, 2038), False, 'import uuid\n'), ((3164, 3176), 'uuid.uuid4', 'uuid.uuid4', ([], {}), '()\n', (3174, 3176), False, 'import uuid\n'), ((3200, 3225), 'tests._constants.scheduledjob', '_constants.scheduledjob', ([], {}), '()\n', (3223, 3225), False, 'from tests import _constants\n'), ((3551, 3563), 'uuid.uuid4', 'uuid.uuid4', ([], {}), '()\n', (3561, 3563), False, 'import uuid\n'), ((3587, 3615), 'tests._constants.scheduledjob_90', '_constants.scheduledjob_90', ([], {}), '()\n', (3613, 3615), False, 'from tests import _constants\n'), ((4060, 4072), 'uuid.uuid4', 'uuid.uuid4', ([], {}), '()\n', (4070, 4072), False, 'import uuid\n'), ((4096, 4124), 'tests._constants.scheduledjob_90', '_constants.scheduledjob_90', ([], {}), '()\n', (4122, 4124), False, 'from tests import _constants\n'), ((4760, 4772), 'uuid.uuid4', 'uuid.uuid4', ([], {}), '()\n', (4770, 4772), False, 'import uuid\n'), ((4796, 4824), 'tests._constants.scheduledjob_90', '_constants.scheduledjob_90', ([], {}), '()\n', (4822, 4824), False, 'from tests import _constants\n'), ((5407, 5419), 'uuid.uuid4', 'uuid.uuid4', ([], {}), '()\n', (5417, 5419), False, 'import 
uuid\n'), ((5443, 5471), 'tests._constants.scheduledjob_90', '_constants.scheduledjob_90', ([], {}), '()\n', (5469, 5471), False, 'from tests import _constants\n'), ((6152, 6164), 'uuid.uuid4', 'uuid.uuid4', ([], {}), '()\n', (6162, 6164), False, 'import uuid\n'), ((6190, 6218), 'tests._constants.scheduledjob_90', '_constants.scheduledjob_90', ([], {}), '()\n', (6216, 6218), False, 'from tests import _constants\n'), ((6786, 6798), 'uuid.uuid4', 'uuid.uuid4', ([], {}), '()\n', (6796, 6798), False, 'import uuid\n'), ((6824, 6852), 'tests._constants.scheduledjob_90', '_constants.scheduledjob_90', ([], {}), '()\n', (6850, 6852), False, 'from tests import _constants\n'), ((7326, 7338), 'uuid.uuid4', 'uuid.uuid4', ([], {}), '()\n', (7336, 7338), False, 'import uuid\n'), ((7364, 7392), 'tests._constants.scheduledjob_90', '_constants.scheduledjob_90', ([], {}), '()\n', (7390, 7392), False, 'from tests import _constants\n'), ((7957, 7977), 'tests._constants.cronjob', '_constants.cronjob', ([], {}), '()\n', (7975, 7977), False, 'from tests import _constants\n'), ((8375, 8402), 'tests._constants.cronjob_exit_1', '_constants.cronjob_exit_1', ([], {}), '()\n', (8400, 8402), False, 'from tests import _constants\n'), ((5852, 5865), 'time.sleep', 'time.sleep', (['(2)'], {}), '(2)\n', (5862, 5865), False, 'import time\n'), ((6529, 6542), 'time.sleep', 'time.sleep', (['(2)'], {}), '(2)\n', (6539, 6542), False, 'import time\n'), ((7163, 7176), 'time.sleep', 'time.sleep', (['(2)'], {}), '(2)\n', (7173, 7176), False, 'import time\n')]
|
'''
This file implements various optimization methods, including
-- SGD with gradient norm clipping
-- AdaGrad
-- AdaDelta
-- Adam
Transparent to switch between CPU / GPU.
@author: <NAME> (<EMAIL>)
'''
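# Usage sketch (illustrative only -- `cost`, `params`, `x`, `y`, `batch_x`,
# `batch_y` and `n_epochs` are assumed to be defined by the calling code,
# not by this module):
#
#   updates, lr, g_norm, gsums, xsums, max_norm = \
#       create_optimization_updates(cost, params, method="adam", lr=0.001)
#   train_fn = theano.function(inputs=[x, y], outputs=cost, updates=updates)
#   for epoch in range(n_epochs):
#       train_fn(batch_x, batch_y)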
import random
from collections import OrderedDict
import numpy as np
import theano
import theano.tensor as T
from theano.sandbox.cuda.basic_ops import HostFromGpu
from theano.sandbox.cuda.var import CudaNdarraySharedVariable
from theano.printing import debugprint
from .initialization import default_mrng
def create_optimization_updates(
cost, params, method="sgd",
max_norm=5, updates=None, gradients=None,
lr=0.01, eps=None, rho=0.99, gamma=0.999,
beta1=0.9, beta2=0.999, momentum=0.0):
_momentum = momentum
lr = theano.shared(np.float64(lr).astype(theano.config.floatX))
rho = theano.shared(np.float64(rho).astype(theano.config.floatX))
beta1 = theano.shared(np.float64(beta1).astype(theano.config.floatX))
beta2 = theano.shared(np.float64(beta2).astype(theano.config.floatX))
momentum = theano.shared(np.float64(momentum).astype(theano.config.floatX))
gamma = theano.shared(np.float64(gamma).astype(theano.config.floatX))
if eps is None:
eps = 1e-8 if method.lower() != "esgd" else 1e-4
eps = np.float64(eps).astype(theano.config.floatX)
gparams = T.grad(cost, params) if gradients is None else gradients
g_norm = 0
for g in gparams:
g_norm = g_norm + g.norm(2)**2
g_norm = T.sqrt(g_norm)
# max_norm is useful for sgd
if method != "sgd": max_norm = None
if max_norm is not None and max_norm is not False:
max_norm = theano.shared(np.float64(max_norm).astype(theano.config.floatX))
shrink_factor = T.minimum(max_norm, g_norm + eps) / (g_norm + eps)
gparams_clipped = [ ]
for g in gparams:
g = shrink_factor * g
gparams_clipped.append(g)
gparams = gparams_clipped
if updates is None:
updates = OrderedDict()
gsums = create_accumulators(params) if method != "sgd" or _momentum > 0.0 else \
[ None for p in params ]
xsums = create_accumulators(params) if method != "sgd" and method != "adagrad" else None
if method == "sgd":
create_sgd_updates(updates, params, gparams, gsums, lr, momentum)
elif method == "adagrad":
create_adagrad_updates(updates, params, gparams, gsums, lr, eps)
elif method == "adadelta":
create_adadelta_updates(updates, params, gparams, gsums, xsums, lr, eps, rho)
elif method == "adam":
create_adam_updates(updates, params, gparams, gsums, xsums, lr, eps, beta1, beta2)
elif method == "esgd":
create_esgd_updates(updates, params, gparams, gsums, xsums, lr, eps, gamma, momentum)
else:
raise Exception("Unknown optim method: {}\n".format(method))
if method == "adadelta":
lr = rho
return updates, lr, g_norm, gsums, xsums, max_norm
def is_subtensor_op(p):
if hasattr(p, 'owner') and hasattr(p.owner, 'op'):
return isinstance(p.owner.op, T.AdvancedSubtensor1) or \
isinstance(p.owner.op, T.Subtensor)
return False
def get_subtensor_op_inputs(p):
origin, indexes = p.owner.inputs
if hasattr(origin, 'owner') and hasattr(origin.owner, 'op') and \
isinstance(origin.owner.op, HostFromGpu):
origin = origin.owner.inputs[0]
assert isinstance(origin, CudaNdarraySharedVariable)
return origin, indexes
def get_similar_subtensor(matrix, indexes, param_op):
'''
    So far, only two subtensor operations are used.
'''
if isinstance(param_op.owner.op, T.AdvancedSubtensor1):
return matrix[indexes]
else:
# indexes is start index in this case
return matrix[indexes:]
def create_accumulators(params):
accums = [ ]
for p in params:
if is_subtensor_op(p):
origin, _ = get_subtensor_op_inputs(p)
acc = theano.shared(np.zeros_like(origin.get_value(borrow=True), \
dtype=theano.config.floatX))
else:
acc = theano.shared(np.zeros_like(p.get_value(borrow=True), \
dtype=theano.config.floatX))
accums.append(acc)
return accums
def create_sgd_updates(updates, params, gparams, gsums, lr, momentum):
has_momentum = momentum.get_value() > 0.0
for p, g, acc in zip(params, gparams, gsums):
if is_subtensor_op(p):
origin, indexes = get_subtensor_op_inputs(p)
if has_momentum:
acc_slices = get_similar_subtensor(acc, indexes, p)
new_acc = acc_slices*momentum + g
updates[acc] = T.set_subtensor(acc_slices, new_acc)
else:
new_acc = g
updates[origin] = T.inc_subtensor(p, - lr * new_acc)
else:
if has_momentum:
new_acc = acc*momentum + g
updates[acc] = new_acc
else:
new_acc = g
updates[p] = p - lr * new_acc
def create_adagrad_updates(updates, params, gparams, gsums, lr, eps):
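    # AdaGrad: accumulate squared gradients in `gsums` and scale each step by
    # 1/sqrt(accumulated + eps).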
for p, g, acc in zip(params, gparams, gsums):
if is_subtensor_op(p):
origin, indexes = get_subtensor_op_inputs(p)
#acc_slices = acc[indexes]
acc_slices = get_similar_subtensor(acc, indexes, p)
new_acc = acc_slices + g**2
updates[acc] = T.set_subtensor(acc_slices, new_acc)
updates[origin] = T.inc_subtensor(p, \
- lr * (g / T.sqrt(new_acc + eps)))
else:
new_acc = acc + g**2
updates[acc] = new_acc
updates[p] = p - lr * (g / T.sqrt(new_acc + eps))
#updates[p] = p - lr * (g / (T.sqrt(new_acc) + eps))
# which one to use?
def create_adadelta_updates(updates, params, gparams, gsums, xsums,\
lr, eps, rho):
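    # AdaDelta (Zeiler 2012): step d = -sqrt((E[dx^2]+eps)/(E[g^2]+eps)) * g, with
    # both running averages decayed by `rho`; no explicit learning rate is needed.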
for p, g, gacc, xacc in zip(params, gparams, gsums, xsums):
if is_subtensor_op(p):
origin, indexes = get_subtensor_op_inputs(p)
gacc_slices = gacc[indexes]
xacc_slices = xacc[indexes]
new_gacc = rho * gacc_slices + (1.0-rho) * g**2
d = -T.sqrt((xacc_slices + eps)/(new_gacc + eps)) * g
new_xacc = rho * xacc_slices + (1.0-rho) * d**2
updates[gacc] = T.set_subtensor(gacc_slices, new_gacc)
updates[xacc] = T.set_subtensor(xacc_slices, new_xacc)
updates[origin] = T.inc_subtensor(p, d)
else:
new_gacc = rho * gacc + (1.0-rho) * g**2
d = -T.sqrt((xacc + eps)/(new_gacc + eps)) * g
new_xacc = rho * xacc + (1.0-rho) * d**2
updates[gacc] = new_gacc
updates[xacc] = new_xacc
updates[p] = p + d
def create_adam_updates(updates, params, gparams, gsums, xsums, \
lr, eps, beta1, beta2):
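    # `i` counts update steps t; omb1_t = 1 - beta1**t and omb2_t = 1 - beta2**t are
    # Adam's bias-correction terms, folded into the effective step size lr_t.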
i = theano.shared(np.float64(0.0).astype(theano.config.floatX))
i_t = i + 1.0
omb1_t = 1.0 - beta1**i_t
omb2_t = 1.0 - beta2**i_t
lr_t = lr * (T.sqrt(omb2_t) / omb1_t)
for p, g, m, v in zip(params, gparams, gsums, xsums):
if is_subtensor_op(p):
origin, indexes = get_subtensor_op_inputs(p)
m_sub = m[indexes]
v_sub = v[indexes]
m_t = beta1*m_sub + (1.0-beta1)*g
v_t = beta2*v_sub + (1.0-beta2)*T.sqr(g)
g_t = m_t / (T.sqrt(v_t) + eps)
updates[m] = T.set_subtensor(m_sub, m_t)
updates[v] = T.set_subtensor(v_sub, v_t)
updates[origin] = T.inc_subtensor(p, -lr_t*g_t)
else:
m_t = beta1*m + (1.0-beta1)*g
v_t = beta2*v + (1.0-beta2)*T.sqr(g)
g_t = m_t / (T.sqrt(v_t) + eps)
updates[m] = m_t
updates[v] = v_t
updates[p] = p - lr_t*g_t
updates[i] = i_t
def create_esgd_updates(updates, params, gparams, gsums, xsums, lr, eps, gamma, momentum):
has_momentum = momentum.get_value() > 0.0
samples = [ default_mrng.normal(size=p.shape, avg=0, std=1,
dtype=theano.config.floatX) for p in params ]
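    # Equilibrated SGD: since `gparams` is the gradient, T.Lop(gparams, params, samples)
    # yields Hessian-vector products H*v for random Gaussian v; their decayed squared
    # average D acts as a diagonal preconditioner below.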
HVs = T.Lop(gparams, params, samples)
i = theano.shared(np.float64(0.0).astype(theano.config.floatX))
i_t = i + 1.0
omg_t = 1.0 - gamma**i_t
for p, g, m, D, Hv in zip(params, gparams, gsums, xsums, HVs):
if is_subtensor_op(p):
raise Exception("ESGD subtensor update not implemented!")
else:
D_t = D * gamma + T.sqr(Hv) * (1.0-gamma)
if has_momentum:
m_t = m*momentum + g
updates[m] = m_t
else:
m_t = g
g_t = m_t / ( T.sqrt(D_t/omg_t + eps) )
#g_t = m_t / ( T.sqrt(D_t + eps) )
updates[D] = D_t
updates[p] = p - lr*g_t
updates[i] = i_t
|
[
"theano.tensor.Lop",
"collections.OrderedDict",
"theano.tensor.sqrt",
"theano.tensor.minimum",
"numpy.float64",
"theano.tensor.sqr",
"theano.tensor.set_subtensor",
"theano.tensor.inc_subtensor",
"theano.tensor.grad"
] |
[((1566, 1580), 'theano.tensor.sqrt', 'T.sqrt', (['g_norm'], {}), '(g_norm)\n', (1572, 1580), True, 'import theano.tensor as T\n'), ((8330, 8361), 'theano.tensor.Lop', 'T.Lop', (['gparams', 'params', 'samples'], {}), '(gparams, params, samples)\n', (8335, 8361), True, 'import theano.tensor as T\n'), ((1419, 1439), 'theano.tensor.grad', 'T.grad', (['cost', 'params'], {}), '(cost, params)\n', (1425, 1439), True, 'import theano.tensor as T\n'), ((2075, 2088), 'collections.OrderedDict', 'OrderedDict', ([], {}), '()\n', (2086, 2088), False, 'from collections import OrderedDict\n'), ((1359, 1374), 'numpy.float64', 'np.float64', (['eps'], {}), '(eps)\n', (1369, 1374), True, 'import numpy as np\n'), ((1819, 1852), 'theano.tensor.minimum', 'T.minimum', (['max_norm', '(g_norm + eps)'], {}), '(max_norm, g_norm + eps)\n', (1828, 1852), True, 'import theano.tensor as T\n'), ((4945, 4978), 'theano.tensor.inc_subtensor', 'T.inc_subtensor', (['p', '(-lr * new_acc)'], {}), '(p, -lr * new_acc)\n', (4960, 4978), True, 'import theano.tensor as T\n'), ((5572, 5608), 'theano.tensor.set_subtensor', 'T.set_subtensor', (['acc_slices', 'new_acc'], {}), '(acc_slices, new_acc)\n', (5587, 5608), True, 'import theano.tensor as T\n'), ((6520, 6558), 'theano.tensor.set_subtensor', 'T.set_subtensor', (['gacc_slices', 'new_gacc'], {}), '(gacc_slices, new_gacc)\n', (6535, 6558), True, 'import theano.tensor as T\n'), ((6587, 6625), 'theano.tensor.set_subtensor', 'T.set_subtensor', (['xacc_slices', 'new_xacc'], {}), '(xacc_slices, new_xacc)\n', (6602, 6625), True, 'import theano.tensor as T\n'), ((6656, 6677), 'theano.tensor.inc_subtensor', 'T.inc_subtensor', (['p', 'd'], {}), '(p, d)\n', (6671, 6677), True, 'import theano.tensor as T\n'), ((7244, 7258), 'theano.tensor.sqrt', 'T.sqrt', (['omb2_t'], {}), '(omb2_t)\n', (7250, 7258), True, 'import theano.tensor as T\n'), ((7645, 7672), 'theano.tensor.set_subtensor', 'T.set_subtensor', (['m_sub', 'm_t'], {}), '(m_sub, m_t)\n', (7660, 7672), True, 'import theano.tensor as T\n'), ((7698, 7725), 'theano.tensor.set_subtensor', 'T.set_subtensor', (['v_sub', 'v_t'], {}), '(v_sub, v_t)\n', (7713, 7725), True, 'import theano.tensor as T\n'), ((7756, 7787), 'theano.tensor.inc_subtensor', 'T.inc_subtensor', (['p', '(-lr_t * g_t)'], {}), '(p, -lr_t * g_t)\n', (7771, 7787), True, 'import theano.tensor as T\n'), ((854, 868), 'numpy.float64', 'np.float64', (['lr'], {}), '(lr)\n', (864, 868), True, 'import numpy as np\n'), ((923, 938), 'numpy.float64', 'np.float64', (['rho'], {}), '(rho)\n', (933, 938), True, 'import numpy as np\n'), ((995, 1012), 'numpy.float64', 'np.float64', (['beta1'], {}), '(beta1)\n', (1005, 1012), True, 'import numpy as np\n'), ((1069, 1086), 'numpy.float64', 'np.float64', (['beta2'], {}), '(beta2)\n', (1079, 1086), True, 'import numpy as np\n'), ((1146, 1166), 'numpy.float64', 'np.float64', (['momentum'], {}), '(momentum)\n', (1156, 1166), True, 'import numpy as np\n'), ((1223, 1240), 'numpy.float64', 'np.float64', (['gamma'], {}), '(gamma)\n', (1233, 1240), True, 'import numpy as np\n'), ((4832, 4868), 'theano.tensor.set_subtensor', 'T.set_subtensor', (['acc_slices', 'new_acc'], {}), '(acc_slices, new_acc)\n', (4847, 4868), True, 'import theano.tensor as T\n'), ((7103, 7118), 'numpy.float64', 'np.float64', (['(0.0)'], {}), '(0.0)\n', (7113, 7118), True, 'import numpy as np\n'), ((8385, 8400), 'numpy.float64', 'np.float64', (['(0.0)'], {}), '(0.0)\n', (8395, 8400), True, 'import numpy as np\n'), ((8881, 8906), 'theano.tensor.sqrt', 'T.sqrt', (['(D_t / omg_t + eps)'], 
{}), '(D_t / omg_t + eps)\n', (8887, 8906), True, 'import theano.tensor as T\n'), ((1744, 1764), 'numpy.float64', 'np.float64', (['max_norm'], {}), '(max_norm)\n', (1754, 1764), True, 'import numpy as np\n'), ((6383, 6429), 'theano.tensor.sqrt', 'T.sqrt', (['((xacc_slices + eps) / (new_gacc + eps))'], {}), '((xacc_slices + eps) / (new_gacc + eps))\n', (6389, 6429), True, 'import theano.tensor as T\n'), ((6762, 6801), 'theano.tensor.sqrt', 'T.sqrt', (['((xacc + eps) / (new_gacc + eps))'], {}), '((xacc + eps) / (new_gacc + eps))\n', (6768, 6801), True, 'import theano.tensor as T\n'), ((7567, 7575), 'theano.tensor.sqr', 'T.sqr', (['g'], {}), '(g)\n', (7572, 7575), True, 'import theano.tensor as T\n'), ((7601, 7612), 'theano.tensor.sqrt', 'T.sqrt', (['v_t'], {}), '(v_t)\n', (7607, 7612), True, 'import theano.tensor as T\n'), ((7882, 7890), 'theano.tensor.sqr', 'T.sqr', (['g'], {}), '(g)\n', (7887, 7890), True, 'import theano.tensor as T\n'), ((7916, 7927), 'theano.tensor.sqrt', 'T.sqrt', (['v_t'], {}), '(v_t)\n', (7922, 7927), True, 'import theano.tensor as T\n'), ((8690, 8699), 'theano.tensor.sqr', 'T.sqr', (['Hv'], {}), '(Hv)\n', (8695, 8699), True, 'import theano.tensor as T\n'), ((5692, 5713), 'theano.tensor.sqrt', 'T.sqrt', (['(new_acc + eps)'], {}), '(new_acc + eps)\n', (5698, 5713), True, 'import theano.tensor as T\n'), ((5837, 5858), 'theano.tensor.sqrt', 'T.sqrt', (['(new_acc + eps)'], {}), '(new_acc + eps)\n', (5843, 5858), True, 'import theano.tensor as T\n')]
|
import sys
from PyQt5.QtWidgets import QMainWindow
from PyQt5.QtGui import QPixmap, QIcon
from PyQt5 import uic
from internationalization import LANGUAGE
class Loose(QMainWindow):
def __init__(self, lang):
QMainWindow.__init__(self)
uic.loadUi("windows/Looser.ui", self)
self.lang = lang
self.reload_text()
self.loser = QPixmap("resources/loser.png")
self.lose_button.clicked.connect(self.end_game)
self.lose_image.setPixmap(self.loser)
def reload_text(self):
"""Change the language of the window according to the chosen previously"""
self.language=LANGUAGE.get(self.lang)
self.setWindowTitle(self.language["lose_title"])
self.lose_label.setText(self.language["lose_text"])
self.lose_button.setText(self.language["return_to_menu"])
def end_game(self):
self.close()
|
[
"PyQt5.uic.loadUi",
"PyQt5.QtGui.QPixmap",
"PyQt5.QtWidgets.QMainWindow.__init__",
"internationalization.LANGUAGE.get"
] |
[((219, 245), 'PyQt5.QtWidgets.QMainWindow.__init__', 'QMainWindow.__init__', (['self'], {}), '(self)\n', (239, 245), False, 'from PyQt5.QtWidgets import QMainWindow\n'), ((254, 291), 'PyQt5.uic.loadUi', 'uic.loadUi', (['"""windows/Looser.ui"""', 'self'], {}), "('windows/Looser.ui', self)\n", (264, 291), False, 'from PyQt5 import uic\n'), ((365, 395), 'PyQt5.QtGui.QPixmap', 'QPixmap', (['"""resources/loser.png"""'], {}), "('resources/loser.png')\n", (372, 395), False, 'from PyQt5.QtGui import QPixmap, QIcon\n'), ((631, 654), 'internationalization.LANGUAGE.get', 'LANGUAGE.get', (['self.lang'], {}), '(self.lang)\n', (643, 654), False, 'from internationalization import LANGUAGE\n')]
|
import os
import subprocess
import time
class Asciicast(object):
def __init__(self, env=os.environ):
self.command = None
self.title = None
self.shell = env.get('SHELL', '/bin/sh')
self.term = env.get('TERM')
self.username = env.get('USER')
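        # NOTE: `duration` is read by meta_data below but is not set here; it is
        # assumed to be assigned on the instance by the recording code afterwards.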
@property
def meta_data(self):
lines = int(get_command_output(['tput', 'lines']))
columns = int(get_command_output(['tput', 'cols']))
return {
'username' : self.username,
'duration' : self.duration,
'title' : self.title,
'command' : self.command,
'shell' : self.shell,
'term' : {
'type' : self.term,
'lines' : lines,
'columns': columns
}
}
def get_command_output(args):
process = subprocess.Popen(args, stdout=subprocess.PIPE)
return process.communicate()[0].strip()
|
[
"subprocess.Popen"
] |
[((873, 919), 'subprocess.Popen', 'subprocess.Popen', (['args'], {'stdout': 'subprocess.PIPE'}), '(args, stdout=subprocess.PIPE)\n', (889, 919), False, 'import subprocess\n')]
|
import sys
from pathlib import Path
import numpy as np
import pandas as pd
from bokeh.models import ColumnDataSource
from bokeh.io import export_png
from bokeh.plotting import figure
def plot_lifetime(df, type, path):
df = df.copy()
palette = ["#c9d9d3", "#718dbf", "#e84d60", "#648450"]
ylist = []
list0 = []
list1 = []
list2 = []
list3 = []
interv = np.sort(df["age_real"].unique())
for a in interv:
df_rel = df[df["age_real"]==a]
n = len(df_rel)
status0 = sum(df_rel["employment_status_" + type] == 0)/n
status1 = sum(df_rel["employment_status_" + type] == 1)/n
status2 = sum(df_rel["employment_status_" + type] == 2)/n
status3 = sum(df_rel["employment_status_" + type] == 3)/n
ylist.append(str(a))
list0.append(status0)
list1.append(status1)
list2.append(status2)
list3.append(status3)
dici = {"age": ylist,
"0": list0,
"1": list1,
"2": list2,
"3": list3}
#alllist = ["0", "1", "2", "3"]
#labels = ["N.E.", "Rente", "Teilzeit", "Vollzeit"]
alllist = ["3", "2", "0", "1"]
labels = ["Vollzeit", "Teilzeit", "N.E.", "Rente"]
p = figure(x_range=ylist, plot_height=250, plot_width=1500, title="Employment Status by age: West Germany / type: " + type)
p.vbar_stack(alllist, x='age', width=0.9, color=palette, source=dici,
legend_label=labels)
p.y_range.start = 0
p.x_range.range_padding = 0.1
p.xgrid.grid_line_color = None
p.axis.minor_tick_line_color = None
p.outline_line_color = None
p.legend.location = "bottom_left"
p.legend.orientation = "horizontal"
str_path = "employment_" + type + ".png"
export_png(p, filename=str(path/ str_path))
def var_by_method(dataf, variable):
dataf_out = pd.DataFrame()
dataf_out["pid"] = dataf["pid"]
dataf_out["year"] = dataf["year"]
dataf_out["hid"] = dataf["hid_real"]
dataf_out["age"] = dataf["age_real"]
for m in ["real", "ext"]:
dataf_out[m] = dataf[variable + "_" + m]
return dataf_out
def plot_mean_by_age(dataf, m_list, variable, path):
m_list = ["real", "ext"]
dataf = dataf.copy()
df = var_by_method(dataf, variable)
df_plot = df.groupby("age")[m_list].mean()
fig_title = variable
file_title = variable + ".png"
# return df
plot_age(df_plot, fig_title, file_title, path)
def make_pretty(p):
p.xgrid.grid_line_color = None
p.yaxis.minor_tick_line_width=0
p.xaxis.minor_tick_line_width=0
# p.legend.location = "bottom_right"
return p
def plot_employment_status_by_age(dataf, employment_status, path, female=None, east=None):
dataf = dataf.copy()
dataf_rest = rest_dataf(dataf, female, east)
status_list = ["N_E", "Rente", "Teilzeit", "Vollzeit"]
status = status_list[employment_status]
df_tmp = var_by_method(dataf_rest, "employment_status")
tmp = df_tmp[["real", "ext"]] == employment_status
df_plot = pd.concat([df_tmp["age"], tmp], axis=1)
df_plot = df_plot.groupby("age").mean()
# Plotting
fig_title, file_title = get_titles(female, east, status)
plot_age(df_plot, fig_title, file_title, path, interv=1)
def plot_age(dataf, fig_title, file_title, path, interv=0):
source = ColumnDataSource(dataf)
if interv==1:
p = figure(title = fig_title, y_range=(0, 1))
else:
p = figure(title = fig_title)
p.line(x="age", y="real", source=source,
line_color="black", line_dash = "solid", line_width=2,
legend_label = "Real")
p.line(x="age", y="ext", source=source,
line_color="black", line_dash = "dotted", line_width=2,
legend_label = "Ext")
p.xaxis.axis_label = "Age"
p = make_pretty(p)
export_png(p, filename=str(path/ file_title))
def plot_year(dataf, fig_title, file_title, path, interv=0):
source = ColumnDataSource(dataf)
if interv==1:
p = figure(title = fig_title, y_range=(0, 1))
else:
p = figure(title = fig_title)
p.line(x="year", y="real", source=source,
line_color="black", line_dash = "solid", line_width=2,
legend_label = "Real")
p.line(x="year", y="ext", source=source,
line_color="black", line_dash = "dotted", line_width=2,
legend_label = "Ext")
p.xaxis.axis_label = "Year"
p = make_pretty(p)
export_png(p, filename=str(path/ file_title))
def plot_year_age(ploto, by="year"):
source = ColumnDataSource(ploto.df_plot)
if ploto.y_range is None:
p = figure(title = ploto.fig_title)
else:
p = figure(title = ploto.fig_title, y_range=ploto.y_range)
p.line(x=by, y="real", source=source,
line_color="black", line_dash = "solid", line_width=2,
legend_label = "Real")
p.line(x=by, y="ext", source=source,
line_color="black", line_dash = "dotted", line_width=2,
legend_label = "Ext")
if by == "year":
p.xaxis.axis_label = "Year"
elif by == "age":
p.xaxis.axis_label = "Age"
p = make_pretty(p)
export_png(p, filename=str(ploto.path/ ploto.file_title))
def rest_dataf(dataf, female, east):
dataf = dataf.copy()
method = "real" # Gender and East do not change during the simulation
# Including either all people, or only male and female
if female == 1:
condition_female = dataf["female_" + method] == 1
elif female == 0:
condition_female = dataf["female_" + method] == 0
else:
condition_female = np.ones(len(dataf))
# Including either all people, or only east or west germans
if east == 1:
condition_east = dataf["east_" + method] == 1
elif east == 0:
condition_east = dataf["east_" + method] == 0
else:
condition_east = np.ones(len(dataf))
# Output is then sum of both conditions
final_condition = (condition_female).astype(int) \
+ (condition_east).astype(int)
df_out = dataf[final_condition == 2]
return df_out
def get_titles(female, east, status):
title = ""
shorttitle = status
if (female==None) & (east==None):
title = "Employment status: " + status + "; all people"
shorttitle += "_mfew.png"
elif (female==None) & (east==0):
title = "Employment status: " + status + "; all genders, west Germany"
shorttitle += "_mfw.png"
elif (female==None) & (east==1):
title = "Employment status: " + status + "; all genders, east Germany"
shorttitle += "_mfe.png"
elif (female==0) & (east==None):
title = "Employment status: " + status + "; male, whole Germany"
shorttitle += "_mew.png"
elif (female==1) & (east==None):
title = "Employment status: " + status + "; female, whole Germany"
shorttitle += "_few.png"
elif (female==0) & (east==0):
title = "Employment status: " + status + "; male, west Germany"
shorttitle += "_mw.png"
elif (female==0) & (east==1):
title = "Employment status: " + status + "; male, east Germany"
shorttitle += "_me.png"
elif (female==1) & (east==0):
title = "Employment status: " + status + "; female, west Germany"
shorttitle += "_fw.png"
elif (female==1) & (east==1):
title = "Employment status: " + status + "; female, east Germany"
shorttitle += "_fe.png"
return title, shorttitle
def get_titles_incomes(suffix, variable, working, female, fulltime, measure):
w_string = ""
f_string = ""
t_string = ""
if working==1:
w_string = "_working"
else:
pass
if female==1:
f_string = "_female"
elif female==0:
f_string = "_male"
else:
pass
if fulltime==1:
t_string = "_fulltime"
elif fulltime==0:
t_string = "_parttime"
else:
pass
fig_title = suffix + measure + "_" + variable + w_string + f_string + t_string
file_title = fig_title + ".png"
return fig_title, file_title
def wrap_employment_plots(dataf, path):
dataf = dataf.copy()
for emp in np.arange(4):
# All people, all employment status
plot_employment_status_by_age(dataf, emp, path)
# Males, all employment status
plot_employment_status_by_age(dataf, emp, path, female=0)
# Females, all employment status
plot_employment_status_by_age(dataf, emp, path, female=1)
# All_people, east Germany, all employment status
plot_employment_status_by_age(dataf, emp, path, east=1)
# All_people, west Germany, all employment status
plot_employment_status_by_age(dataf, emp, path, east=0)
# Males, east Germany, all employment status
plot_employment_status_by_age(dataf, emp, path, female=0, east=1)
# Males, west Germany, all employment status
plot_employment_status_by_age(dataf, emp, path, female=0, east=0)
# Females, east Germany, all employment status
plot_employment_status_by_age(dataf, emp, path, female=1, east=1)
# Females, west Germany, all employment status
plot_employment_status_by_age(dataf, emp, path, female=1, east=0)
def condition_by_type(dataf, method, working=False, female=None, fulltime=None):
dataf = dataf.copy()
# Condition to include all or only working people
if working:
condition_work = dataf["working_" + method] == 1
else:
condition_work = np.ones(len(dataf))
# Including either all people, or only male and female
if female == 1:
condition_female = dataf["female_" + method] == 1
elif female == 0:
condition_female = dataf["female_" + method] == 0
else:
condition_female = np.ones(len(dataf))
# Including either all people, or only male and female
if fulltime == 1:
condition_fulltime = dataf["fulltime_" + method] == 1
elif fulltime == 0:
condition_fulltime = dataf["parttimetime_" + method] == 1
else:
condition_fulltime = np.ones(len(dataf))
# Output is then sum of both conditions
final_condition = (condition_female).astype(int) \
+ (condition_work).astype(int) \
+ (condition_fulltime).astype(int)
df_out = dataf[final_condition == 3]
return df_out
def restrict(dataf, working=False, female=None, fulltime=None):
dataf = dataf.copy()
out_dici = {"real": condition_by_type(dataf, "real", working, female, fulltime),
"ext": condition_by_type(dataf, "ext", working, female, fulltime)}
return out_dici
def var_by_method_dici(dici, variable, group, measure):
tmp = {}
m_list = ["real", "ext"]
for m in m_list:
if measure == "mean":
tmp[m] = dici[m].groupby(group)[variable + "_" + m].mean()
elif measure == "median":
tmp[m] = dici[m].groupby(group)[variable + "_" + m].median()
elif measure == "p90p50":
p90 = dici[m].groupby(group)[variable + "_" + m].quantile(0.9)
p50 = dici[m].groupby(group)[variable + "_" + m].quantile(0.5)
tmp[m] = p90/p50
elif measure == "p90p10":
p90 = dici[m].groupby(group)[variable + "_" + m].quantile(0.9)
p10 = dici[m].groupby(group)[variable + "_" + m].quantile(0.1)
tmp[m] = p90/p10
elif measure == "p50p10":
p50 = dici[m].groupby(group)[variable + "_" + m].quantile(0.5)
p10 = dici[m].groupby(group)[variable + "_" + m].quantile(0.1)
tmp[m] = p50/p10
elif measure == "gini":
tmp[m] = dici[m].groupby(group)[variable + "_" + m].agg(gini_coefficient)
df_out = pd.DataFrame(tmp)
return df_out
def plot_income_age(dataf, variable, path, working=None, female=None, fulltime=None, measure="mean"):
dataf = dataf.copy()
dici = restrict(dataf, working, female, fulltime)
df_plot = var_by_method_dici(dici, variable, group="age_real", measure=measure)
df_plot = df_plot.fillna(0)
df_plot.reset_index(inplace=True)
df_plot.rename(columns={"age_real": "age"}, inplace=True)
fig_title, file_title = get_titles_incomes("age_", variable, working, female, fulltime, measure)
plot_age(df_plot, fig_title, file_title, path)
def wrap_income_age_plots(dataf, path):
dataf = dataf.copy()
variables = ["gross_earnings", "hours"]
for var in variables:
for m in ["mean", "median"]:
# All people
plot_income_age(dataf, var, path=path, measure=m)
plot_income_age(dataf, var, path=path, female=0, measure=m)
plot_income_age(dataf, var, path=path, female=1, measure=m)
# Conditional on working
plot_income_age(dataf, var, path=path, working=1, measure=m)
plot_income_age(dataf, var, path=path, working=1, female=0, measure=m)
plot_income_age(dataf, var, path=path, working=1, female=1, measure=m)
# Conditional on fulltime
plot_income_age(dataf, var, path=path, fulltime=1, measure=m)
plot_income_age(dataf, var, path=path, fulltime=1, female=0, measure=m)
plot_income_age(dataf, var, path=path, fulltime=1, female=1, measure=m)
def plot_income_year(dataf, variable, path, working=None, female=None, fulltime=None, measure="mean"):
dataf = dataf.copy()
dici = restrict(dataf, working, female, fulltime)
df_plot = var_by_method_dici(dici, variable, group="year", measure=measure)
df_plot = df_plot.fillna(0)
df_plot.reset_index(inplace=True)
fig_title, file_title = get_titles_incomes("year_", variable, working, female, fulltime, measure)
plot_year(df_plot, fig_title, file_title, path)
def plot_income_year2(ploto, measure="mean"):
dici = restrict(ploto.data, ploto.working, ploto.female, ploto.fulltime)
df_plot = var_by_method_dici(dici, ploto.var, group="year", measure=measure)
df_plot = df_plot.fillna(0)
df_plot.reset_index(inplace=True)
fig_title, file_title = get_titles_incomes("year_", ploto.var, ploto.working, ploto.female, ploto.fulltime, measure)
plot_year2(df_plot, fig_title, file_title, ploto)
def wrap_income_year_plots(dataf, path):
dataf = dataf.copy()
variables = ["gross_earnings", "hours"]
for var in variables:
for m in ["mean", "median"]:
# All people
plot_income_year(dataf, var, path=path, measure=m)
plot_income_year(dataf, var, path=path, female=0, measure=m)
plot_income_year(dataf, var, path=path, female=1, measure=m)
# Conditional on working
plot_income_year(dataf, var, path=path, working=1, measure=m)
plot_income_year(dataf, var, path=path, working=1, female=0, measure=m)
plot_income_year(dataf, var, path=path, working=1, female=1, measure=m)
# Conditional on fulltime
plot_income_year(dataf, var, path=path, fulltime=1, measure=m)
plot_income_year(dataf, var, path=path, fulltime=1, female=0, measure=m)
plot_income_year(dataf, var, path=path, fulltime=1, female=1, measure=m)
def plot_inequality_year(dataf, variable, path, working=None, female=None, fulltime=None, measure="mean"):
dataf = dataf.copy()
dici = restrict(dataf, working, female, fulltime)
df_plot = var_by_method_dici(dici, variable, group="year", measure=measure)
df_plot = df_plot.fillna(0)
df_plot.reset_index(inplace=True)
fig_title, file_title = get_titles_incomes("ineq_", variable, working, female, fulltime, measure)
plot_year(df_plot, fig_title, file_title, path)
def wrap_inequality_year_plots(dataf, path):
dataf = dataf.copy()
var = ["gross_earnings", "hours"]
for v in var:
for m in ["p90p50", "p90p10", "p50p10", "gini"]:
plot_inequality_year(dataf, v, path, working=1, measure=m)
plot_inequality_year(dataf, v, path, working=1, female=0, measure=m)
plot_inequality_year(dataf, v, path, working=1, female=1, measure=m)
def gini_coefficient(x):
"""Compute Gini coefficient of array of values"""
diffsum = 0
for i, xi in enumerate(x[:-1], 1):
diffsum += np.sum(np.abs(xi - x[i:]))
return diffsum / (len(x)**2 * np.mean(x))
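# Despite its name, make_quantile top-codes (winsorizes) each "<var>_<method>" column
# at its q-th quantile within every year.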
def make_quantile(dataf, var, m_list, q):
dataf = dataf.copy()
for m in m_list:
variable = var + "_" + m
real_q = dataf.groupby(["year"])[variable].quantile(q).to_frame()
real_q.rename(columns={variable: "var"}, inplace=True)
dataf = pd.merge(dataf, real_q, how="left", on="year")
dataf.loc[dataf[variable]>dataf["var"], variable] = dataf["var"]
dataf.drop("var", axis=1, inplace=True)
return dataf
def cap_outliers(dataf, m_list):
dataf = dataf.copy()
# # Hours
# dataf = make_quantile(dataf, "hours", m_list, 0.99)
# dataf = make_quantile(dataf, "hours_t1", m_list, 0.99)
# dataf = make_quantile(dataf, "hours_t2", m_list, 0.99)
# Gross earnings
dataf = make_quantile(dataf, "gross_earnings", m_list, 0.95)
dataf = make_quantile(dataf, "gross_earnings_t1", m_list, 0.95)
dataf = make_quantile(dataf, "gross_earnings_t2", m_list, 0.95)
return dataf
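# Hypothetical usage sketch: run dataf = cap_outliers(dataf, m_list=["real", "ext"])
# once before calling the plotting wrappers above.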
|
[
"numpy.abs",
"numpy.mean",
"bokeh.plotting.figure",
"pandas.merge",
"bokeh.models.ColumnDataSource",
"pandas.DataFrame",
"pandas.concat",
"numpy.arange"
] |
[((1241, 1365), 'bokeh.plotting.figure', 'figure', ([], {'x_range': 'ylist', 'plot_height': '(250)', 'plot_width': '(1500)', 'title': "('Employment Status by age: West Germany / type: ' + type)"}), "(x_range=ylist, plot_height=250, plot_width=1500, title=\n 'Employment Status by age: West Germany / type: ' + type)\n", (1247, 1365), False, 'from bokeh.plotting import figure\n'), ((1875, 1889), 'pandas.DataFrame', 'pd.DataFrame', ([], {}), '()\n', (1887, 1889), True, 'import pandas as pd\n'), ((3093, 3132), 'pandas.concat', 'pd.concat', (["[df_tmp['age'], tmp]"], {'axis': '(1)'}), "([df_tmp['age'], tmp], axis=1)\n", (3102, 3132), True, 'import pandas as pd\n'), ((3402, 3425), 'bokeh.models.ColumnDataSource', 'ColumnDataSource', (['dataf'], {}), '(dataf)\n', (3418, 3425), False, 'from bokeh.models import ColumnDataSource\n'), ((4037, 4060), 'bokeh.models.ColumnDataSource', 'ColumnDataSource', (['dataf'], {}), '(dataf)\n', (4053, 4060), False, 'from bokeh.models import ColumnDataSource\n'), ((4651, 4682), 'bokeh.models.ColumnDataSource', 'ColumnDataSource', (['ploto.df_plot'], {}), '(ploto.df_plot)\n', (4667, 4682), False, 'from bokeh.models import ColumnDataSource\n'), ((8418, 8430), 'numpy.arange', 'np.arange', (['(4)'], {}), '(4)\n', (8427, 8430), True, 'import numpy as np\n'), ((12147, 12164), 'pandas.DataFrame', 'pd.DataFrame', (['tmp'], {}), '(tmp)\n', (12159, 12164), True, 'import pandas as pd\n'), ((3461, 3500), 'bokeh.plotting.figure', 'figure', ([], {'title': 'fig_title', 'y_range': '(0, 1)'}), '(title=fig_title, y_range=(0, 1))\n', (3467, 3500), False, 'from bokeh.plotting import figure\n'), ((3525, 3548), 'bokeh.plotting.figure', 'figure', ([], {'title': 'fig_title'}), '(title=fig_title)\n', (3531, 3548), False, 'from bokeh.plotting import figure\n'), ((4096, 4135), 'bokeh.plotting.figure', 'figure', ([], {'title': 'fig_title', 'y_range': '(0, 1)'}), '(title=fig_title, y_range=(0, 1))\n', (4102, 4135), False, 'from bokeh.plotting import figure\n'), ((4160, 4183), 'bokeh.plotting.figure', 'figure', ([], {'title': 'fig_title'}), '(title=fig_title)\n', (4166, 4183), False, 'from bokeh.plotting import figure\n'), ((4730, 4759), 'bokeh.plotting.figure', 'figure', ([], {'title': 'ploto.fig_title'}), '(title=ploto.fig_title)\n', (4736, 4759), False, 'from bokeh.plotting import figure\n'), ((4784, 4836), 'bokeh.plotting.figure', 'figure', ([], {'title': 'ploto.fig_title', 'y_range': 'ploto.y_range'}), '(title=ploto.fig_title, y_range=ploto.y_range)\n', (4790, 4836), False, 'from bokeh.plotting import figure\n'), ((17226, 17272), 'pandas.merge', 'pd.merge', (['dataf', 'real_q'], {'how': '"""left"""', 'on': '"""year"""'}), "(dataf, real_q, how='left', on='year')\n", (17234, 17272), True, 'import pandas as pd\n'), ((16871, 16889), 'numpy.abs', 'np.abs', (['(xi - x[i:])'], {}), '(xi - x[i:])\n', (16877, 16889), True, 'import numpy as np\n'), ((16925, 16935), 'numpy.mean', 'np.mean', (['x'], {}), '(x)\n', (16932, 16935), True, 'import numpy as np\n')]
|
import unittest
import numpy as np
import openjij as oj
import cxxjij as cj
def calculate_ising_energy(h, J, spins):
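    # Ising energy: E(s) = sum_{(i,j)} J_ij*s_i*s_j + sum_i h_i*s_i, with spins s_i in {-1, +1}.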
energy = 0.0
for (i, j), Jij in J.items():
energy += Jij*spins[i]*spins[j]
for i, hi in h.items():
energy += hi * spins[i]
return energy
def calculate_qubo_energy(Q, binary):
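    # QUBO energy: E(x) = sum_{(i,j)} Q_ij*x_i*x_j, with binaries x_i in {0, 1};
    # diagonal entries (i, i) act as the linear terms.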
energy = 0.0
for (i, j), Qij in Q.items():
energy += Qij*binary[i]*binary[j]
return energy
class VariableTypeTest(unittest.TestCase):
def test_variable_type(self):
spin = oj.cast_var_type('SPIN')
self.assertEqual(spin, oj.SPIN)
binary = oj.cast_var_type('BINARY')
self.assertEqual(binary, oj.BINARY)
class ModelTest(unittest.TestCase):
def setUp(self):
self.h = {0: 1, 1: -2}
self.J = {(0, 1): -1, (1, 2): -3, (2, 3): 0.5}
self.spins = {0: 1, 1: -1, 2: 1, 3: 1}
self.Q = {(0, 0): 1, (1, 2): -1, (2, 0): -0.2, (1, 3): 3}
self.binaries = {0: 0, 1: 1, 2: 1, 3: 0}
def test_bqm_constructor(self):
# Test BinaryQuadraticModel constructor
bqm = oj.BinaryQuadraticModel(self.h, self.J)
self.assertEqual(type(bqm.interaction_matrix()), np.ndarray)
self.assertEqual(bqm.vartype, oj.SPIN)
dense_graph = bqm.get_cxxjij_ising_graph(sparse=False)
self.assertTrue(isinstance(dense_graph, cj.graph.Dense))
bqm_qubo = oj.BinaryQuadraticModel.from_qubo(Q=self.Q)
self.assertEqual(bqm_qubo.vartype, oj.BINARY)
def test_interaction_matrix(self):
bqm = oj.BinaryQuadraticModel(self.h, self.J)
ising_matrix = np.array([
[1, -1, 0, 0],
[-1, -2, -3, 0],
[0, -3, 0, 0.5],
[0, 0, 0.5, 0]
])
np.testing.assert_array_equal(
bqm.interaction_matrix(), ising_matrix
)
# check Hij = Jij + Jji
J = self.J.copy()
J[0, 1] /= 3
J[1, 0] = J[0, 1] * 2
bqm = oj.BinaryQuadraticModel(self.h, J)
np.testing.assert_array_equal(bqm.interaction_matrix(), ising_matrix)
def test_transfer_to_cxxjij(self):
bqm = oj.BinaryQuadraticModel(self.h, self.J)
# to Dense
ising_graph = bqm.get_cxxjij_ising_graph(sparse=False)
self.assertEqual(ising_graph.size(), len(bqm.indices))
for i in range(len(bqm.indices)):
for j in range(len(bqm.indices)):
if i != j:
self.assertAlmostEqual(bqm.interaction_matrix()[i,j], ising_graph.get_interactions()[i, j])
else:
# i == j
self.assertAlmostEqual(bqm.interaction_matrix()[i,j], ising_graph.get_interactions()[i, len(bqm.indices)])
self.assertAlmostEqual(bqm.interaction_matrix()[i,j], ising_graph.get_interactions()[len(bqm.indices), i])
self.assertEqual(ising_graph.get_interactions()[i,i], 0)
self.assertEqual(ising_graph.get_interactions()[len(bqm.indices),len(bqm.indices)], 1)
# to Sparse
ising_graph = bqm.get_cxxjij_ising_graph(sparse=True)
self.assertEqual(ising_graph.size(), len(bqm.indices))
for i in range(ising_graph.size()):
for j in ising_graph.adj_nodes(i):
self.assertEqual(bqm.interaction_matrix()[i,j], ising_graph[i,j])
def test_bqm_calc_energy(self):
# Test to calculate energy
# Test Ising energy
bqm = oj.BinaryQuadraticModel(self.h, self.J)
ising_energy_bqm = bqm.energy(self.spins)
true_ising_e = calculate_ising_energy(self.h, self.J, self.spins)
self.assertEqual(ising_energy_bqm, true_ising_e)
# Test QUBO energy
bqm = oj.BinaryQuadraticModel.from_qubo(Q=self.Q)
qubo_energy_bqm = bqm.energy(self.binaries)
true_qubo_e = calculate_qubo_energy(self.Q, self.binaries)
self.assertEqual(qubo_energy_bqm, true_qubo_e)
# QUBO == Ising
spins = {0: 1, 1: 1, 2: -1, 3: 1}
binary = {0: 1, 1: 1, 2: 0, 3: 1}
qubo_bqm = oj.BinaryQuadraticModel.from_qubo(Q=self.Q)
# ising_mat = qubo_bqm.ising_interactions()
# h, J = {}, {}
# for i in range(len(ising_mat)-1):
# for j in range(i, len(ising_mat)):
# if i == j:
# h[i] = ising_mat[i][i]
# else:
# J[(i, j)] = ising_mat[i][j]
qubo_energy = qubo_bqm.energy(binary)
self.assertEqual(qubo_energy, qubo_bqm.energy(spins, convert_sample=True))
def test_energy_consistency(self):
bqm = oj.BinaryQuadraticModel(self.h, self.J, var_type='SPIN')
dense_ising_graph = bqm.get_cxxjij_ising_graph(sparse=False)
sparse_ising_graph = bqm.get_cxxjij_ising_graph(sparse=True)
spins = {0: -1, 1: -1, 2: -1, 3: -1}
self.assertAlmostEqual(dense_ising_graph.calc_energy([spins[i] for i in range(len(spins))]), bqm.energy(spins))
self.assertAlmostEqual(sparse_ising_graph.calc_energy([spins[i] for i in range(len(spins))]), bqm.energy(spins))
def test_bqm(self):
h = {}
J = {(0, 1): -1.0, (1, 2): -3.0}
bqm = oj.BinaryQuadraticModel(h, J)
self.assertEqual(J, bqm.get_quadratic())
self.assertEqual(type(bqm.interaction_matrix()), np.ndarray)
correct_mat = np.array([[0, -1, 0, ], [-1, 0, -3], [0, -3, 0]])
np.testing.assert_array_equal(
            bqm.interaction_matrix(), correct_mat.astype(float))
def test_chimera_converter(self):
h = {}
J = {(0, 4): -1.0, (6, 2): -3.0, (16, 0): 4}
chimera = oj.ChimeraModel(h, J, offset=0, unit_num_L=2)
self.assertEqual(chimera.chimera_coordinate(
4, unit_num_L=2), (0, 0, 4))
self.assertEqual(chimera.chimera_coordinate(
12, unit_num_L=2), (0, 1, 4))
self.assertEqual(chimera.chimera_coordinate(
16, unit_num_L=2), (1, 0, 0))
def test_chimera(self):
h = {}
J = {(0, 4): -1.0, (6, 2): -3.0}
bqm = oj.ChimeraModel(h, J, offset=0, unit_num_L=3)
self.assertTrue(bqm.validate_chimera())
J = {(0, 1): -1}
bqm = oj.ChimeraModel(h, J, unit_num_L=3)
with self.assertRaises(ValueError):
bqm.validate_chimera()
J = {(4, 12): -1}
bqm = oj.ChimeraModel(h, J, unit_num_L=2)
self.assertTrue(bqm.validate_chimera())
J = {(0, 4): -1, (5, 13): 1, (24, 8): 2,
(18, 20): 1, (16, 0): 0.5, (19, 23): -2}
h = {13: 2}
chimera = oj.ChimeraModel(h, J, unit_num_L=2)
self.assertEqual(chimera.to_index(1, 1, 1, unit_num_L=2), 25)
self.assertTrue(chimera.validate_chimera())
def test_ising_dict(self):
Q = {(0, 4): -1.0, (6, 2): -3.0}
bqm = oj.ChimeraModel.from_qubo(Q=Q, unit_num_L=3)
def test_king_graph(self):
h = {}
J = {(0, 1): -1.0, (1, 2): -3.0}
king_interaction = [[0, 0, 1, 0, -1.0], [1, 0, 2, 0, -3.0]]
king_graph = oj.KingGraph(machine_type="ASIC", linear=h, quadratic=J)
correct_mat = np.array([[0, -1, 0, ], [-1, 0, -3], [0, -3, 0]])
np.testing.assert_array_equal(
            king_graph.interaction_matrix(), correct_mat.astype(float))
self.assertCountEqual(king_interaction, king_graph._ising_king_graph)
king_graph = oj.KingGraph(
machine_type="ASIC", king_graph=king_interaction)
np.testing.assert_array_equal(
king_interaction, king_graph._ising_king_graph)
king_graph = oj.KingGraph.from_qubo(Q={(0, 1): -1}, machine_type='ASIC')
king_interaction = [[0, 0, 0, 0, -0.25],
[0, 0, 1, 0, -0.25], [1, 0, 1, 0, -0.25]]
self.assertCountEqual(king_interaction, king_graph._ising_king_graph)
def test_get_chimera_graph(self):
c_model = oj.ChimeraModel.from_qubo(Q={(0, 4): -1, (1, 1): -1, (1, 5): 1}, unit_num_L=2)
chimera = c_model.get_cxxjij_ising_graph()
self.assertIsInstance(chimera, cj.graph.Chimera)
c_model = oj.ChimeraModel.from_qubo(Q={((0, 0, 1), (0, 0, 4)): -1, ((0, 0, 4), (0, 0, 2)): -1},
unit_num_L=2)
chimera = c_model.get_cxxjij_ising_graph()
self.assertIsInstance(chimera, cj.graph.Chimera)
if __name__ == '__main__':
unittest.main()
|
[
"openjij.cast_var_type",
"openjij.ChimeraModel",
"openjij.BinaryQuadraticModel",
"openjij.BinaryQuadraticModel.from_qubo",
"openjij.KingGraph",
"numpy.array",
"openjij.KingGraph.from_qubo",
"openjij.ChimeraModel.from_qubo",
"unittest.main",
"numpy.testing.assert_array_equal"
] |
[((8441, 8456), 'unittest.main', 'unittest.main', ([], {}), '()\n', (8454, 8456), False, 'import unittest\n'), ((534, 558), 'openjij.cast_var_type', 'oj.cast_var_type', (['"""SPIN"""'], {}), "('SPIN')\n", (550, 558), True, 'import openjij as oj\n'), ((617, 643), 'openjij.cast_var_type', 'oj.cast_var_type', (['"""BINARY"""'], {}), "('BINARY')\n", (633, 643), True, 'import openjij as oj\n'), ((1096, 1135), 'openjij.BinaryQuadraticModel', 'oj.BinaryQuadraticModel', (['self.h', 'self.J'], {}), '(self.h, self.J)\n', (1119, 1135), True, 'import openjij as oj\n'), ((1402, 1445), 'openjij.BinaryQuadraticModel.from_qubo', 'oj.BinaryQuadraticModel.from_qubo', ([], {'Q': 'self.Q'}), '(Q=self.Q)\n', (1435, 1445), True, 'import openjij as oj\n'), ((1554, 1593), 'openjij.BinaryQuadraticModel', 'oj.BinaryQuadraticModel', (['self.h', 'self.J'], {}), '(self.h, self.J)\n', (1577, 1593), True, 'import openjij as oj\n'), ((1617, 1692), 'numpy.array', 'np.array', (['[[1, -1, 0, 0], [-1, -2, -3, 0], [0, -3, 0, 0.5], [0, 0, 0.5, 0]]'], {}), '([[1, -1, 0, 0], [-1, -2, -3, 0], [0, -3, 0, 0.5], [0, 0, 0.5, 0]])\n', (1625, 1692), True, 'import numpy as np\n'), ((1977, 2011), 'openjij.BinaryQuadraticModel', 'oj.BinaryQuadraticModel', (['self.h', 'J'], {}), '(self.h, J)\n', (2000, 2011), True, 'import openjij as oj\n'), ((2144, 2183), 'openjij.BinaryQuadraticModel', 'oj.BinaryQuadraticModel', (['self.h', 'self.J'], {}), '(self.h, self.J)\n', (2167, 2183), True, 'import openjij as oj\n'), ((3470, 3509), 'openjij.BinaryQuadraticModel', 'oj.BinaryQuadraticModel', (['self.h', 'self.J'], {}), '(self.h, self.J)\n', (3493, 3509), True, 'import openjij as oj\n'), ((3733, 3776), 'openjij.BinaryQuadraticModel.from_qubo', 'oj.BinaryQuadraticModel.from_qubo', ([], {'Q': 'self.Q'}), '(Q=self.Q)\n', (3766, 3776), True, 'import openjij as oj\n'), ((4079, 4122), 'openjij.BinaryQuadraticModel.from_qubo', 'oj.BinaryQuadraticModel.from_qubo', ([], {'Q': 'self.Q'}), '(Q=self.Q)\n', (4112, 4122), True, 'import openjij as oj\n'), ((4625, 4681), 'openjij.BinaryQuadraticModel', 'oj.BinaryQuadraticModel', (['self.h', 'self.J'], {'var_type': '"""SPIN"""'}), "(self.h, self.J, var_type='SPIN')\n", (4648, 4681), True, 'import openjij as oj\n'), ((5201, 5230), 'openjij.BinaryQuadraticModel', 'oj.BinaryQuadraticModel', (['h', 'J'], {}), '(h, J)\n', (5224, 5230), True, 'import openjij as oj\n'), ((5381, 5428), 'numpy.array', 'np.array', (['[[0, -1, 0], [-1, 0, -3], [0, -3, 0]]'], {}), '([[0, -1, 0], [-1, 0, -3], [0, -3, 0]])\n', (5389, 5428), True, 'import numpy as np\n'), ((5663, 5708), 'openjij.ChimeraModel', 'oj.ChimeraModel', (['h', 'J'], {'offset': '(0)', 'unit_num_L': '(2)'}), '(h, J, offset=0, unit_num_L=2)\n', (5678, 5708), True, 'import openjij as oj\n'), ((6092, 6137), 'openjij.ChimeraModel', 'oj.ChimeraModel', (['h', 'J'], {'offset': '(0)', 'unit_num_L': '(3)'}), '(h, J, offset=0, unit_num_L=3)\n', (6107, 6137), True, 'import openjij as oj\n'), ((6226, 6261), 'openjij.ChimeraModel', 'oj.ChimeraModel', (['h', 'J'], {'unit_num_L': '(3)'}), '(h, J, unit_num_L=3)\n', (6241, 6261), True, 'import openjij as oj\n'), ((6382, 6417), 'openjij.ChimeraModel', 'oj.ChimeraModel', (['h', 'J'], {'unit_num_L': '(2)'}), '(h, J, unit_num_L=2)\n', (6397, 6417), True, 'import openjij as oj\n'), ((6608, 6643), 'openjij.ChimeraModel', 'oj.ChimeraModel', (['h', 'J'], {'unit_num_L': '(2)'}), '(h, J, unit_num_L=2)\n', (6623, 6643), True, 'import openjij as oj\n'), ((6854, 6898), 'openjij.ChimeraModel.from_qubo', 'oj.ChimeraModel.from_qubo', ([], {'Q': 'Q', 
'unit_num_L': '(3)'}), '(Q=Q, unit_num_L=3)\n', (6879, 6898), True, 'import openjij as oj\n'), ((7077, 7133), 'openjij.KingGraph', 'oj.KingGraph', ([], {'machine_type': '"""ASIC"""', 'linear': 'h', 'quadratic': 'J'}), "(machine_type='ASIC', linear=h, quadratic=J)\n", (7089, 7133), True, 'import openjij as oj\n'), ((7156, 7203), 'numpy.array', 'np.array', (['[[0, -1, 0], [-1, 0, -3], [0, -3, 0]]'], {}), '([[0, -1, 0], [-1, 0, -3], [0, -3, 0]])\n', (7164, 7203), True, 'import numpy as np\n'), ((7425, 7487), 'openjij.KingGraph', 'oj.KingGraph', ([], {'machine_type': '"""ASIC"""', 'king_graph': 'king_interaction'}), "(machine_type='ASIC', king_graph=king_interaction)\n", (7437, 7487), True, 'import openjij as oj\n'), ((7518, 7595), 'numpy.testing.assert_array_equal', 'np.testing.assert_array_equal', (['king_interaction', 'king_graph._ising_king_graph'], {}), '(king_interaction, king_graph._ising_king_graph)\n', (7547, 7595), True, 'import numpy as np\n'), ((7631, 7690), 'openjij.KingGraph.from_qubo', 'oj.KingGraph.from_qubo', ([], {'Q': '{(0, 1): -1}', 'machine_type': '"""ASIC"""'}), "(Q={(0, 1): -1}, machine_type='ASIC')\n", (7653, 7690), True, 'import openjij as oj\n'), ((7949, 8027), 'openjij.ChimeraModel.from_qubo', 'oj.ChimeraModel.from_qubo', ([], {'Q': '{(0, 4): -1, (1, 1): -1, (1, 5): 1}', 'unit_num_L': '(2)'}), '(Q={(0, 4): -1, (1, 1): -1, (1, 5): 1}, unit_num_L=2)\n', (7974, 8027), True, 'import openjij as oj\n'), ((8155, 8258), 'openjij.ChimeraModel.from_qubo', 'oj.ChimeraModel.from_qubo', ([], {'Q': '{((0, 0, 1), (0, 0, 4)): -1, ((0, 0, 4), (0, 0, 2)): -1}', 'unit_num_L': '(2)'}), '(Q={((0, 0, 1), (0, 0, 4)): -1, ((0, 0, 4), (0, 0,\n 2)): -1}, unit_num_L=2)\n', (8180, 8258), True, 'import openjij as oj\n')]
|
from __future__ import absolute_import, division, print_function, unicode_literals
import os.path
import pytest
xr = pytest.importorskip('xarray') # noqa
from cfgrib import cfgrib_
SAMPLE_DATA_FOLDER = os.path.join(os.path.dirname(__file__), 'sample-data')
TEST_DATA = os.path.join(SAMPLE_DATA_FOLDER, 'era5-levels-members.grib')
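# The tests below exercise the cfgrib xarray backend (CfGribDataStore) against the
# bundled ERA5 sample GRIB file.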
def test_CfGribDataStore():
datastore = cfgrib_.CfGribDataStore(TEST_DATA, encode_cf=())
expected = {'number': 10, 'dataDate': 2, 'dataTime': 2, 'level': 2, 'values': 7320}
assert datastore.get_dimensions() == expected
def test_xarray_open_dataset():
datastore = cfgrib_.CfGribDataStore(TEST_DATA, encode_cf=(), lock=cfgrib_.SerializableLock())
res = xr.open_dataset(datastore)
assert res.attrs['GRIB_edition'] == 1
assert res['t'].attrs['GRIB_gridType'] == 'regular_ll'
assert res['t'].attrs['GRIB_units'] == 'K'
assert res['t'].dims == ('number', 'dataDate', 'dataTime', 'level', 'values')
assert res['t'].mean() > 0.
|
[
"cfgrib.cfgrib_.SerializableLock",
"pytest.importorskip",
"cfgrib.cfgrib_.CfGribDataStore"
] |
[((120, 149), 'pytest.importorskip', 'pytest.importorskip', (['"""xarray"""'], {}), "('xarray')\n", (139, 149), False, 'import pytest\n'), ((383, 431), 'cfgrib.cfgrib_.CfGribDataStore', 'cfgrib_.CfGribDataStore', (['TEST_DATA'], {'encode_cf': '()'}), '(TEST_DATA, encode_cf=())\n', (406, 431), False, 'from cfgrib import cfgrib_\n'), ((674, 700), 'cfgrib.cfgrib_.SerializableLock', 'cfgrib_.SerializableLock', ([], {}), '()\n', (698, 700), False, 'from cfgrib import cfgrib_\n')]
|
#!/usr/bin/python2.5
#
# Copyright 2009 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the 'License')
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Test admin_rankers."""
__author__ = '<EMAIL> (<NAME>)'
import datetime
import logging
import unittest
import mock_data
import settings
from django.test.client import Client
from django.utils import simplejson
from google.appengine.api import memcache
from google.appengine.ext import db
from categories import all_test_sets
from models import result_stats
from models.result import ResultParent
from models.result import ResultTime
from models.user_agent import UserAgent
from third_party import mox
from base import admin
USER_AGENT_STRINGS = {
'Firefox 3.0.6': ('Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.6) '
'Gecko/2009011912 Firefox/3.0.6'),
'Firefox 3.5': ('Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.6) '
'Gecko/2009011912 Firefox/3.5'),
'Firefox 3.0.9': ('Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.6) '
'Gecko/2009011912 Firefox/3.0.9'),
}
class TestConfirmUa(unittest.TestCase):
def setUp(self):
self.client = Client()
ua_string = ('Mozilla/5.0 (X11 U Linux armv6l de-DE rv:1.9a6pre) '
'Gecko/20080606 '
'Firefox/3.0a1 Tablet browser 0.3.7 '
'RX-34+RX-44+RX-48_DIABLO_4.2008.23-14')
self.ua = UserAgent.factory(ua_string)
def testConfirmBasic(self):
params = {
'submit': 1,
'ac_%s' % self.ua.key(): 'confirm',
'cht_%s' % self.ua.key(): '',
'csrf_token': self.client.get('/get_csrf').content,
}
response = self.client.get('/admin/confirm-ua', params)
self.assertEqual(302, response.status_code)
self.assertTrue(self.ua.get(self.ua.key()).confirmed)
class TestDataDump(unittest.TestCase):
def setUp(self):
self.test_set = mock_data.MockTestSet()
all_test_sets.AddTestSet(self.test_set)
self.client = Client()
def tearDown(self):
all_test_sets.RemoveTestSet(self.test_set)
def testNoParamsGivesError(self):
params = {}
response = self.client.get('/admin/data_dump', params)
self.assertEqual(400, response.status_code)
def testNoModelGivesError(self):
params = {'keys': 'car,home,heart'}
response = self.client.get('/admin/data_dump', params)
self.assertEqual(400, response.status_code)
def testNonExistentKeyIsMarkedLost(self):
for model in ('ResultParent', 'UserAgent'):
params = {
'keys': '<KEY>',
'model': model}
response = self.client.get('/admin/data_dump', params)
self.assertEqual(200, response.status_code)
response_params = simplejson.loads(response.content)
expected_data = [{
'model_class': model,
'lost_key': '<KEY>',
}]
self.assertEqual(expected_data, response_params['data'])
def testDumpAll(self):
keys = []
for scores in ((1, 4, 50), (1, 1, 20), (0, 2, 30), (1, 0, 10), (1, 3, 10)):
result = ResultParent.AddResult(
self.test_set, '1.2.2.5', mock_data.GetUserAgentString('Firefox 3.5'),
'apple=%s,banana=%s,coconut=%s' % scores)
keys.append(str(result.key()))
params = {
'model': 'ResultParent',
'keys': ','.join(keys),
}
response = self.client.get('/admin/data_dump', params)
self.assertEqual(200, response.status_code)
response_params = simplejson.loads(response.content)
self.assertEqual(20, len(response_params['data'])) # 5 parents + 15 times
class TestDataDumpKeys(unittest.TestCase):
def setUp(self):
self.test_set = mock_data.MockTestSet()
all_test_sets.AddTestSet(self.test_set)
self.client = Client()
def tearDown(self):
all_test_sets.RemoveTestSet(self.test_set)
def testCreated(self):
created_base = datetime.datetime(2009, 9, 9, 9, 9, 0)
keys = []
for scores in ((0, 10, 100), (1, 20, 200)):
ip = '1.2.2.%s' % scores[1]
result = ResultParent.AddResult(
self.test_set, ip, mock_data.GetUserAgentString('Firefox 3.5'),
'apple=%s,banana=%s,coconut=%s' % scores,
created=created_base + datetime.timedelta(seconds=scores[1]))
keys.append(str(result.key()))
params = {
'model': 'ResultParent',
'created': created_base + datetime.timedelta(seconds=15),
}
response = self.client.get('/admin/data_dump_keys', params)
self.assertEqual(200, response.status_code)
response_params = simplejson.loads(response.content)
self.assertEqual(None, response_params['bookmark'])
self.assertEqual(keys[1:], response_params['keys'])
def testBookmarkRestart(self):
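    # A 'bookmark' in the response means more keys remain; feeding the response
    # params back into the handler should resume the dump where it stopped.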
expected_keys = []
for scores in ((1, 4, 50), (1, 1, 20), (0, 2, 30), (1, 0, 10), (1, 3, 10)):
result = ResultParent.AddResult(
self.test_set, '1.2.2.5', mock_data.GetUserAgentString('Firefox 3.5'),
'apple=%s,banana=%s,coconut=%s' % scores)
expected_keys.append(str(result.key()))
params = {
'model': 'ResultParent',
'fetch_limit': '3'
}
response = self.client.get('/admin/data_dump_keys', params)
keys = []
self.assertEqual(200, response.status_code)
response_params = simplejson.loads(response.content)
self.assertNotEqual(None, response_params['bookmark'])
keys.extend(response_params['keys'])
self.assertEqual(3, len(keys))
del response_params['keys']
response = self.client.get('/admin/data_dump_keys', response_params)
self.assertEqual(200, response.status_code)
response_params = simplejson.loads(response.content)
self.assertEqual(None, response_params['bookmark'])
keys.extend(response_params['keys'])
self.assertEqual(sorted(expected_keys), sorted(keys))
class TestUploadCategoryBrowsers(unittest.TestCase):
def setUp(self):
self.client = Client()
self.manager = result_stats.CategoryBrowserManager
self.mox = mox.Mox()
def tearDown(self):
self.mox.UnsetStubs()
  def testNoParamsGivesError(self):
params = {}
response = self.client.get('/admin/upload_category_browsers', params)
self.assertTrue('Must set "browsers"' in response.content)
self.assertEqual(500, response.status_code)
def testNoCategoryGivesError(self):
params = {
'version_level': 4,
'browsers': 'Firefox,IE',
}
response = self.client.get('/admin/upload_category_browsers', params)
self.assertEqual('Must set "category".', response.content)
self.assertEqual(500, response.status_code)
def testBadVersionLevelGivesError(self):
params = {
'category': 'network',
'version_level': 4,
'browsers': 'Firefox,IE',
}
response = self.client.get('/admin/upload_category_browsers', params)
self.assertTrue('Version level' in response.content)
self.assertEqual(500, response.status_code)
def testNoBrowsersGivesError(self):
params = {
'category': 'network',
'version_level': 0,
}
response = self.client.get('/admin/upload_category_browsers', params)
self.assertTrue('Must set "browsers"' in response.content)
self.assertEqual(500, response.status_code)
def testBasic(self):
self.mox.StubOutWithMock(
result_stats.CategoryBrowserManager, 'UpdateSummaryBrowsers')
categories = [ts.category for ts in all_test_sets.GetVisibleTestSets()]
# Use mox to make sure UpdateSummaryBrowsers gets called.
result_stats.CategoryBrowserManager.UpdateSummaryBrowsers(categories)
self.mox.ReplayAll()
params = {
'category': 'network',
'version_level': 0,
'browsers': 'IE,Firefox',
}
response = self.client.get('/admin/upload_category_browsers', params)
self.assertEqual('Success.', response.content)
self.assertEqual(200, response.status_code)
self.assertEqual(['Firefox', 'IE'], self.manager.GetBrowsers('network', 0))
self.mox.VerifyAll()
class TestUpdateStatsCache(unittest.TestCase):
def setUp(self):
self.client = Client()
self.mox = mox.Mox()
self.manager = result_stats.CategoryStatsManager
def tearDown(self):
self.mox.UnsetStubs()
def testNoCategoryGivesError(self):
params = {
'browsers': 'Firefox,IE',
}
response = self.client.get('/admin/update_stats_cache', params)
self.assertEqual('Must set "category".', response.content)
self.assertEqual(500, response.status_code)
def testNoBrowsersGivesError(self):
params = {
'category': 'network',
}
response = self.client.get('/admin/update_stats_cache', params)
self.assertTrue('Must set "browsers"' in response.content)
self.assertEqual(500, response.status_code)
def testBasic(self):
self.mox.StubOutWithMock(self.manager, 'UpdateStatsCache')
self.manager.UpdateStatsCache('network', ['IE']).InAnyOrder()
self.manager.UpdateStatsCache('network', ['Firefox']).InAnyOrder()
params = {
'category': 'network',
'browsers': 'IE,Firefox',
}
self.mox.ReplayAll()
response = self.client.get('/admin/update_stats_cache', params)
self.mox.VerifyAll()
self.assertEqual('Success.', response.content)
self.assertEqual(200, response.status_code)
class TestUpdateAllUncachedStats(unittest.TestCase):
def setUp(self):
self.test_set_1 = mock_data.MockTestSet(category='foo')
self.test_set_2 = mock_data.MockTestSet(category='bar')
all_test_sets.AddTestSet(self.test_set_1)
all_test_sets.AddTestSet(self.test_set_2)
self.client = Client()
def tearDown(self):
all_test_sets.RemoveTestSet(self.test_set_1)
all_test_sets.RemoveTestSet(self.test_set_2)
def testUpdateAllUncachedStats(self):
category_browsers = {
self.test_set_1: ('Firefox 2.5.1', 'Firefox 3.0.7', 'Firefox 3.1.7',
'Firefox 3.1.8', 'Firefox 3.5', 'IE 7.0'),
self.test_set_2: ('Firefox 2.5.1', 'Firefox 3.5', 'IE 7.0'),
}
for test_set, browsers in category_browsers.items():
for browser in browsers:
ua = mock_data.GetUserAgentString(browser)
result = ResultParent.AddResult(
test_set, '1.2.2.5', ua, 'apple=1,banana=1,coconut=1')
params = {'categories': 'foo,bar'}
response = self.client.get('/admin/update_all_uncached_stats', params)
# Instead of checking actual stats, I tried to mock out UpdateStatsCache
# as is done in testBasic. However, it did not work for some unknown reason.
# it would not verify the calls. VerifyAll succeeded no matter what I called
# or did not call. Grrr.
expected_stats = {
'summary_display': '3',
'total_runs': 5,
'summary_score': 104,
'results': {
'apple': {'score': 100, 'raw_score': 1, 'display': 'yes'},
'banana': {'score': 2, 'raw_score': 1, 'display': 'd:2'},
'coconut': {'score': 2, 'raw_score': 1, 'display': 'd:2'},
}
}
self.assertEqual(
expected_stats,
memcache.get('Firefox',
**result_stats.CategoryStatsManager.MemcacheParams('foo')))
class TestUpdateAllStatsCache(unittest.TestCase):
def setUp(self):
self.test_set_1 = mock_data.MockTestSet(category='foo')
self.test_set_2 = mock_data.MockTestSet(category='bar')
all_test_sets.AddTestSet(self.test_set_1)
all_test_sets.AddTestSet(self.test_set_2)
self.client = Client()
def tearDown(self):
all_test_sets.RemoveTestSet(self.test_set_1)
all_test_sets.RemoveTestSet(self.test_set_2)
def testUpdateAllStatsCache(self):
category_browsers = {
self.test_set_1: ('Firefox 2.5.1', 'Firefox 3.0.7', 'Firefox 3.1.7',
'Firefox 3.1.8', 'Firefox 3.5', 'IE 7.0'),
self.test_set_2: ('Firefox 2.5.1', 'Firefox 3.5', 'IE 7.0'),
}
for test_set, browsers in category_browsers.items():
for browser in browsers:
ua = mock_data.GetUserAgentString(browser)
result = ResultParent.AddResult(
test_set, '1.2.2.5', ua, 'apple=1,banana=1,coconut=1')
params = {'categories': 'foo,bar'}
response = self.client.get('/admin/update_all_stats_cache', params)
# Instead of checking actual stats, I tried to mock out UpdateStatsCache
# as is done in testBasic. However, it did not work for some unknown reason.
# it would not verify the calls. VerifyAll succeeded no matter what I called
# or did not call. Grrr.
expected_stats = {
'summary_display': '3',
'total_runs': 5,
'summary_score': 104,
'results': {
'apple': {'score': 100, 'raw_score': 1, 'display': 'yes'},
'banana': {'score': 2, 'raw_score': 1, 'display': 'd:2'},
'coconut': {'score': 2, 'raw_score': 1, 'display': 'd:2'},
}
}
self.assertEqual(
expected_stats,
memcache.get('Firefox',
**result_stats.CategoryStatsManager.MemcacheParams('foo')))
|
[
"django.utils.simplejson.loads",
"datetime.datetime",
"categories.all_test_sets.GetVisibleTestSets",
"categories.all_test_sets.AddTestSet",
"models.user_agent.UserAgent.factory",
"django.test.client.Client",
"models.result_stats.CategoryBrowserManager.UpdateSummaryBrowsers",
"models.result.ResultParent.AddResult",
"mock_data.MockTestSet",
"datetime.timedelta",
"mock_data.GetUserAgentString",
"models.result_stats.CategoryStatsManager.MemcacheParams",
"third_party.mox.Mox",
"categories.all_test_sets.RemoveTestSet"
] |
[((1634, 1642), 'django.test.client.Client', 'Client', ([], {}), '()\n', (1640, 1642), False, 'from django.test.client import Client\n'), ((1876, 1904), 'models.user_agent.UserAgent.factory', 'UserAgent.factory', (['ua_string'], {}), '(ua_string)\n', (1893, 1904), False, 'from models.user_agent import UserAgent\n'), ((2370, 2393), 'mock_data.MockTestSet', 'mock_data.MockTestSet', ([], {}), '()\n', (2391, 2393), False, 'import mock_data\n'), ((2398, 2437), 'categories.all_test_sets.AddTestSet', 'all_test_sets.AddTestSet', (['self.test_set'], {}), '(self.test_set)\n', (2422, 2437), False, 'from categories import all_test_sets\n'), ((2456, 2464), 'django.test.client.Client', 'Client', ([], {}), '()\n', (2462, 2464), False, 'from django.test.client import Client\n'), ((2492, 2534), 'categories.all_test_sets.RemoveTestSet', 'all_test_sets.RemoveTestSet', (['self.test_set'], {}), '(self.test_set)\n', (2519, 2534), False, 'from categories import all_test_sets\n'), ((3923, 3957), 'django.utils.simplejson.loads', 'simplejson.loads', (['response.content'], {}), '(response.content)\n', (3939, 3957), False, 'from django.utils import simplejson\n'), ((4121, 4144), 'mock_data.MockTestSet', 'mock_data.MockTestSet', ([], {}), '()\n', (4142, 4144), False, 'import mock_data\n'), ((4149, 4188), 'categories.all_test_sets.AddTestSet', 'all_test_sets.AddTestSet', (['self.test_set'], {}), '(self.test_set)\n', (4173, 4188), False, 'from categories import all_test_sets\n'), ((4207, 4215), 'django.test.client.Client', 'Client', ([], {}), '()\n', (4213, 4215), False, 'from django.test.client import Client\n'), ((4243, 4285), 'categories.all_test_sets.RemoveTestSet', 'all_test_sets.RemoveTestSet', (['self.test_set'], {}), '(self.test_set)\n', (4270, 4285), False, 'from categories import all_test_sets\n'), ((4331, 4369), 'datetime.datetime', 'datetime.datetime', (['(2009)', '(9)', '(9)', '(9)', '(9)', '(0)'], {}), '(2009, 9, 9, 9, 9, 0)\n', (4348, 4369), False, 'import datetime\n'), ((5004, 5038), 'django.utils.simplejson.loads', 'simplejson.loads', (['response.content'], {}), '(response.content)\n', (5020, 5038), False, 'from django.utils import simplejson\n'), ((5739, 5773), 'django.utils.simplejson.loads', 'simplejson.loads', (['response.content'], {}), '(response.content)\n', (5755, 5773), False, 'from django.utils import simplejson\n'), ((6085, 6119), 'django.utils.simplejson.loads', 'simplejson.loads', (['response.content'], {}), '(response.content)\n', (6101, 6119), False, 'from django.utils import simplejson\n'), ((6367, 6375), 'django.test.client.Client', 'Client', ([], {}), '()\n', (6373, 6375), False, 'from django.test.client import Client\n'), ((6446, 6455), 'third_party.mox.Mox', 'mox.Mox', ([], {}), '()\n', (6453, 6455), False, 'from third_party import mox\n'), ((7971, 8040), 'models.result_stats.CategoryBrowserManager.UpdateSummaryBrowsers', 'result_stats.CategoryBrowserManager.UpdateSummaryBrowsers', (['categories'], {}), '(categories)\n', (8028, 8040), False, 'from models import result_stats\n'), ((8548, 8556), 'django.test.client.Client', 'Client', ([], {}), '()\n', (8554, 8556), False, 'from django.test.client import Client\n'), ((8572, 8581), 'third_party.mox.Mox', 'mox.Mox', ([], {}), '()\n', (8579, 8581), False, 'from third_party import mox\n'), ((9863, 9900), 'mock_data.MockTestSet', 'mock_data.MockTestSet', ([], {'category': '"""foo"""'}), "(category='foo')\n", (9884, 9900), False, 'import mock_data\n'), ((9923, 9960), 'mock_data.MockTestSet', 'mock_data.MockTestSet', ([], {'category': 
'"""bar"""'}), "(category='bar')\n", (9944, 9960), False, 'import mock_data\n'), ((9965, 10006), 'categories.all_test_sets.AddTestSet', 'all_test_sets.AddTestSet', (['self.test_set_1'], {}), '(self.test_set_1)\n', (9989, 10006), False, 'from categories import all_test_sets\n'), ((10011, 10052), 'categories.all_test_sets.AddTestSet', 'all_test_sets.AddTestSet', (['self.test_set_2'], {}), '(self.test_set_2)\n', (10035, 10052), False, 'from categories import all_test_sets\n'), ((10071, 10079), 'django.test.client.Client', 'Client', ([], {}), '()\n', (10077, 10079), False, 'from django.test.client import Client\n'), ((10107, 10151), 'categories.all_test_sets.RemoveTestSet', 'all_test_sets.RemoveTestSet', (['self.test_set_1'], {}), '(self.test_set_1)\n', (10134, 10151), False, 'from categories import all_test_sets\n'), ((10156, 10200), 'categories.all_test_sets.RemoveTestSet', 'all_test_sets.RemoveTestSet', (['self.test_set_2'], {}), '(self.test_set_2)\n', (10183, 10200), False, 'from categories import all_test_sets\n'), ((11743, 11780), 'mock_data.MockTestSet', 'mock_data.MockTestSet', ([], {'category': '"""foo"""'}), "(category='foo')\n", (11764, 11780), False, 'import mock_data\n'), ((11803, 11840), 'mock_data.MockTestSet', 'mock_data.MockTestSet', ([], {'category': '"""bar"""'}), "(category='bar')\n", (11824, 11840), False, 'import mock_data\n'), ((11845, 11886), 'categories.all_test_sets.AddTestSet', 'all_test_sets.AddTestSet', (['self.test_set_1'], {}), '(self.test_set_1)\n', (11869, 11886), False, 'from categories import all_test_sets\n'), ((11891, 11932), 'categories.all_test_sets.AddTestSet', 'all_test_sets.AddTestSet', (['self.test_set_2'], {}), '(self.test_set_2)\n', (11915, 11932), False, 'from categories import all_test_sets\n'), ((11951, 11959), 'django.test.client.Client', 'Client', ([], {}), '()\n', (11957, 11959), False, 'from django.test.client import Client\n'), ((11987, 12031), 'categories.all_test_sets.RemoveTestSet', 'all_test_sets.RemoveTestSet', (['self.test_set_1'], {}), '(self.test_set_1)\n', (12014, 12031), False, 'from categories import all_test_sets\n'), ((12036, 12080), 'categories.all_test_sets.RemoveTestSet', 'all_test_sets.RemoveTestSet', (['self.test_set_2'], {}), '(self.test_set_2)\n', (12063, 12080), False, 'from categories import all_test_sets\n'), ((3176, 3210), 'django.utils.simplejson.loads', 'simplejson.loads', (['response.content'], {}), '(response.content)\n', (3192, 3210), False, 'from django.utils import simplejson\n'), ((3570, 3613), 'mock_data.GetUserAgentString', 'mock_data.GetUserAgentString', (['"""Firefox 3.5"""'], {}), "('Firefox 3.5')\n", (3598, 3613), False, 'import mock_data\n'), ((4534, 4577), 'mock_data.GetUserAgentString', 'mock_data.GetUserAgentString', (['"""Firefox 3.5"""'], {}), "('Firefox 3.5')\n", (4562, 4577), False, 'import mock_data\n'), ((4826, 4856), 'datetime.timedelta', 'datetime.timedelta', ([], {'seconds': '(15)'}), '(seconds=15)\n', (4844, 4856), False, 'import datetime\n'), ((5363, 5406), 'mock_data.GetUserAgentString', 'mock_data.GetUserAgentString', (['"""Firefox 3.5"""'], {}), "('Firefox 3.5')\n", (5391, 5406), False, 'import mock_data\n'), ((7869, 7903), 'categories.all_test_sets.GetVisibleTestSets', 'all_test_sets.GetVisibleTestSets', ([], {}), '()\n', (7901, 7903), False, 'from categories import all_test_sets\n'), ((10594, 10631), 'mock_data.GetUserAgentString', 'mock_data.GetUserAgentString', (['browser'], {}), '(browser)\n', (10622, 10631), False, 'import mock_data\n'), ((10649, 10726), 
'models.result.ResultParent.AddResult', 'ResultParent.AddResult', (['test_set', '"""1.2.2.5"""', 'ua', '"""apple=1,banana=1,coconut=1"""'], {}), "(test_set, '1.2.2.5', ua, 'apple=1,banana=1,coconut=1')\n", (10671, 10726), False, 'from models.result import ResultParent\n'), ((12471, 12508), 'mock_data.GetUserAgentString', 'mock_data.GetUserAgentString', (['browser'], {}), '(browser)\n', (12499, 12508), False, 'import mock_data\n'), ((12526, 12603), 'models.result.ResultParent.AddResult', 'ResultParent.AddResult', (['test_set', '"""1.2.2.5"""', 'ua', '"""apple=1,banana=1,coconut=1"""'], {}), "(test_set, '1.2.2.5', ua, 'apple=1,banana=1,coconut=1')\n", (12548, 12603), False, 'from models.result import ResultParent\n'), ((11591, 11646), 'models.result_stats.CategoryStatsManager.MemcacheParams', 'result_stats.CategoryStatsManager.MemcacheParams', (['"""foo"""'], {}), "('foo')\n", (11639, 11646), False, 'from models import result_stats\n'), ((13465, 13520), 'models.result_stats.CategoryStatsManager.MemcacheParams', 'result_stats.CategoryStatsManager.MemcacheParams', (['"""foo"""'], {}), "('foo')\n", (13513, 13520), False, 'from models import result_stats\n'), ((4664, 4701), 'datetime.timedelta', 'datetime.timedelta', ([], {'seconds': 'scores[1]'}), '(seconds=scores[1])\n', (4682, 4701), False, 'import datetime\n')]
|
from __future__ import print_function
import os, sys
import pickle
import time
import glob
import numpy as np
import torch
from model import PVSE
from loss import cosine_sim, order_sim
from vocab import Vocabulary
from data import get_test_loader
from logger import AverageMeter
from option import parser, verify_input_args
ORDER_BATCH_SIZE = 100
def encode_data(model, data_loader, use_gpu=False):
"""Encode all images and sentences loadable by data_loader"""
# switch to evaluate mode
model.eval()
use_mil = model.module.mil if hasattr(model, 'module') else model.mil
# numpy array to keep all the embeddings
img_embs, txt_embs = None, None
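  # With a MIL model each item gets K candidate embeddings, so the buffers end up
  # (N, K, D); otherwise they are (N, D).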
for i, data in enumerate(data_loader):
img, txt, txt_len, ids = data
if torch.cuda.is_available():
img, txt, txt_len = img.cuda(), txt.cuda(), txt_len.cuda()
# compute the embeddings
img_emb, txt_emb, _, _, _, _ = model.forward(img, txt, txt_len)
del img, txt, txt_len
# initialize the output embeddings
if img_embs is None:
if use_gpu:
emb_sz = [len(data_loader.dataset), img_emb.size(1), img_emb.size(2)] \
if use_mil else [len(data_loader.dataset), img_emb.size(1)]
img_embs = torch.zeros(emb_sz, dtype=img_emb.dtype, requires_grad=False).cuda()
txt_embs = torch.zeros(emb_sz, dtype=txt_emb.dtype, requires_grad=False).cuda()
else:
emb_sz = (len(data_loader.dataset), img_emb.size(1), img_emb.size(2)) \
if use_mil else (len(data_loader.dataset), img_emb.size(1))
img_embs = np.zeros(emb_sz)
txt_embs = np.zeros(emb_sz)
# preserve the embeddings by copying from gpu and converting to numpy
img_embs[ids] = img_emb if use_gpu else img_emb.data.cpu().numpy().copy()
txt_embs[ids] = txt_emb if use_gpu else txt_emb.data.cpu().numpy().copy()
return img_embs, txt_embs
def i2t(images, sentences, nreps=1, npts=None, return_ranks=False, order=False, use_gpu=False):
"""
Images->Text (Image Annotation)
Images: (nreps*N, K) matrix of images
Captions: (nreps*N, K) matrix of sentences
"""
if use_gpu:
assert not order, 'Order embedding not supported in GPU mode'
if npts is None:
npts = int(images.shape[0] / nreps)
index_list = []
ranks, top1 = np.zeros(npts), np.zeros(npts)
for index in range(npts):
# Get query image
im = images[nreps * index]
im = im.reshape((1,) + im.shape)
# Compute scores
if use_gpu:
if len(sentences.shape) == 2:
sim = im.mm(sentences.t()).view(-1)
else:
_, K, D = im.shape
sim_kk = im.view(-1, D).mm(sentences.view(-1, D).t())
sim_kk = sim_kk.view(im.size(0), K, sentences.size(0), K)
sim_kk = sim_kk.permute(0,1,3,2).contiguous()
sim_kk = sim_kk.view(im.size(0), -1, sentences.size(0))
sim, _ = sim_kk.max(dim=1)
sim = sim.flatten()
else:
if order:
if index % ORDER_BATCH_SIZE == 0:
mx = min(images.shape[0], nreps * (index + ORDER_BATCH_SIZE))
im2 = images[nreps * index:mx:nreps]
sim_batch = order_sim(torch.Tensor(im2).cuda(), torch.Tensor(sentences).cuda())
sim_batch = sim_batch.cpu().numpy()
sim = sim_batch[index % ORDER_BATCH_SIZE]
else:
sim = np.tensordot(im, sentences, axes=[2, 2]).max(axis=(0,1,3)).flatten() \
if len(sentences.shape) == 3 else np.dot(im, sentences.T).flatten()
if use_gpu:
_, inds_gpu = sim.sort()
inds = inds_gpu.cpu().numpy().copy()[::-1]
else:
inds = np.argsort(sim)[::-1]
index_list.append(inds[0])
# Score
rank = 1e20
for i in range(nreps * index, nreps * (index + 1), 1):
tmp = np.where(inds == i)[0][0]
if tmp < rank:
rank = tmp
ranks[index] = rank
top1[index] = inds[0]
# Compute metrics
r1 = 100.0 * len(np.where(ranks < 1)[0]) / len(ranks)
r5 = 100.0 * len(np.where(ranks < 5)[0]) / len(ranks)
r10 = 100.0 * len(np.where(ranks < 10)[0]) / len(ranks)
medr = np.floor(np.median(ranks)) + 1
meanr = ranks.mean() + 1
if return_ranks:
return (r1, r5, r10, medr, meanr), (ranks, top1)
else:
return (r1, r5, r10, medr, meanr)
def t2i(images, sentences, nreps=1, npts=None, return_ranks=False, order=False, use_gpu=False):
"""
Text->Images (Image Search)
Images: (nreps*N, K) matrix of images
Captions: (nreps*N, K) matrix of sentences
"""
if use_gpu:
assert not order, 'Order embedding not supported in GPU mode'
if npts is None:
npts = int(images.shape[0] / nreps)
if use_gpu:
ims = torch.stack([images[i] for i in range(0, len(images), nreps)])
else:
ims = np.array([images[i] for i in range(0, len(images), nreps)])
ranks, top1 = np.zeros(nreps * npts), np.zeros(nreps * npts)
for index in range(npts):
# Get query sentences
queries = sentences[nreps * index:nreps * (index + 1)]
# Compute scores
if use_gpu:
if len(sentences.shape) == 2:
sim = queries.mm(ims.t())
else:
sim_kk = queries.view(-1, queries.size(-1)).mm(ims.view(-1, ims.size(-1)).t())
sim_kk = sim_kk.view(queries.size(0), queries.size(1), ims.size(0), ims.size(1))
sim_kk = sim_kk.permute(0,1,3,2).contiguous()
sim_kk = sim_kk.view(queries.size(0), -1, ims.size(0))
sim, _ = sim_kk.max(dim=1)
else:
if order:
if nreps * index % ORDER_BATCH_SIZE == 0:
mx = min(sentences.shape[0], nreps * index + ORDER_BATCH_SIZE)
sentences_batch = sentences[nreps * index:mx]
sim_batch = order_sim(torch.Tensor(images).cuda(),
torch.Tensor(sentences_batch).cuda())
sim_batch = sim_batch.cpu().numpy()
sim = sim_batch[:, (nreps * index) % ORDER_BATCH_SIZE:(nreps * index) % ORDER_BATCH_SIZE + nreps].T
else:
sim = np.tensordot(queries, ims, axes=[2, 2]).max(axis=(1,3)) \
if len(sentences.shape) == 3 else np.dot(queries, ims.T)
inds = np.zeros(sim.shape)
for i in range(len(inds)):
if use_gpu:
_, inds_gpu = sim[i].sort()
inds[i] = inds_gpu.cpu().numpy().copy()[::-1]
else:
inds[i] = np.argsort(sim[i])[::-1]
ranks[nreps * index + i] = np.where(inds[i] == index)[0][0]
top1[nreps * index + i] = inds[i][0]
# Compute metrics
r1 = 100.0 * len(np.where(ranks < 1)[0]) / len(ranks)
r5 = 100.0 * len(np.where(ranks < 5)[0]) / len(ranks)
r10 = 100.0 * len(np.where(ranks < 10)[0]) / len(ranks)
medr = np.floor(np.median(ranks)) + 1
meanr = ranks.mean() + 1
if return_ranks:
return (r1, r5, r10, medr, meanr), (ranks, top1)
else:
return (r1, r5, r10, medr, meanr)
def convert_old_state_dict(x, model, multi_gpu=False):
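  # Map an old two-part checkpoint (image encoder, text encoder) onto the current
  # state_dict, stripping 'module.' prefixes and renaming 'our_model' to 'pie_net'.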
params = model.state_dict()
prefix = ['module.img_enc.', 'module.txt_enc.'] \
if multi_gpu else ['img_enc.', 'txt_enc.']
for i, old_params in enumerate(x):
for key, val in old_params.items():
key = prefix[i] + key.replace('module.','').replace('our_model', 'pie_net')
assert key in params, '{} not found in model state_dict'.format(key)
params[key] = val
return params
def evalrank(model, args, split='test'):
print('Loading dataset')
data_loader = get_test_loader(args, vocab)
print('Computing results... (eval_on_gpu={})'.format(args.eval_on_gpu))
img_embs, txt_embs = encode_data(model, data_loader, args.eval_on_gpu)
n_samples = img_embs.shape[0]
nreps = 5 if args.data_name == 'coco' else 1
print('Images: %d, Sentences: %d' % (img_embs.shape[0] / nreps, txt_embs.shape[0]))
# 5fold cross-validation, only for MSCOCO
mean_metrics = None
if args.data_name == 'coco':
results = []
for i in range(5):
r, rt0 = i2t(img_embs[i*5000:(i + 1)*5000], txt_embs[i*5000:(i + 1)*5000],
nreps=nreps, return_ranks=True, order=args.order, use_gpu=args.eval_on_gpu)
r = (r[0], r[1], r[2], r[3], r[3] / n_samples, r[4], r[4] / n_samples)
print("Image to text: %.2f, %.2f, %.2f, %.2f (%.2f), %.2f (%.2f)" % r)
ri, rti0 = t2i(img_embs[i*5000:(i + 1)*5000], txt_embs[i*5000:(i + 1)*5000],
nreps=nreps, return_ranks=True, order=args.order, use_gpu=args.eval_on_gpu)
if i == 0:
rt, rti = rt0, rti0
ri = (ri[0], ri[1], ri[2], ri[3], ri[3] / n_samples, ri[4], ri[4] / n_samples)
print("Text to image: %.2f, %.2f, %.2f, %.2f (%.2f), %.2f (%.2f)" % ri)
ar = (r[0] + r[1] + r[2]) / 3
ari = (ri[0] + ri[1] + ri[2]) / 3
rsum = r[0] + r[1] + r[2] + ri[0] + ri[1] + ri[2]
print("rsum: %.2f ar: %.2f ari: %.2f" % (rsum, ar, ari))
results += [list(r) + list(ri) + [ar, ari, rsum]]
mean_metrics = tuple(np.array(results).mean(axis=0).flatten())
print("-----------------------------------")
print("Mean metrics from 5-fold evaluation: ")
print("rsum: %.2f" % (mean_metrics[-1] * 6))
print("Average i2t Recall: %.2f" % mean_metrics[-3])
print("Image to text: %.2f %.2f %.2f %.2f (%.2f) %.2f (%.2f)" % mean_metrics[:7])
print("Average t2i Recall: %.2f" % mean_metrics[-2])
print("Text to image: %.2f %.2f %.2f %.2f (%.2f) %.2f (%.2f)" % mean_metrics[7:14])
# no cross-validation, full evaluation
r, rt = i2t(img_embs, txt_embs, nreps=nreps, return_ranks=True, use_gpu=args.eval_on_gpu)
ri, rti = t2i(img_embs, txt_embs, nreps=nreps, return_ranks=True, use_gpu=args.eval_on_gpu)
ar = (r[0] + r[1] + r[2]) / 3
ari = (ri[0] + ri[1] + ri[2]) / 3
rsum = r[0] + r[1] + r[2] + ri[0] + ri[1] + ri[2]
r = (r[0], r[1], r[2], r[3], r[3] / n_samples, r[4], r[4] / n_samples)
ri = (ri[0], ri[1], ri[2], ri[3], ri[3] / n_samples, ri[4], ri[4] / n_samples)
print("rsum: %.2f" % rsum)
print("Average i2t Recall: %.2f" % ar)
print("Image to text: %.2f %.2f %.2f %.2f (%.2f) %.2f (%.2f)" % r)
print("Average t2i Recall: %.2f" % ari)
print("Text to image: %.2f %.2f %.2f %.2f (%.2f) %.2f (%.2f)" % ri)
return mean_metrics
if __name__ == '__main__':
multi_gpu = torch.cuda.device_count() > 1
args = verify_input_args(parser.parse_args())
opt = verify_input_args(parser.parse_args())
# load vocabulary used by the model
with open('./vocab/%s_vocab.pkl' % args.data_name, 'rb') as f:
vocab = pickle.load(f)
args.vocab_size = len(vocab)
# load model and options
assert os.path.isfile(args.ckpt)
model = PVSE(vocab.word2idx, args)
if torch.cuda.is_available():
model = torch.nn.DataParallel(model).cuda() if multi_gpu else model
torch.backends.cudnn.benchmark = True
model.load_state_dict(torch.load(args.ckpt))
# evaluate
metrics = evalrank(model, args, split='test')
|
[
"model.PVSE",
"numpy.median",
"numpy.where",
"torch.load",
"numpy.tensordot",
"pickle.load",
"torch.cuda.device_count",
"torch.nn.DataParallel",
"torch.Tensor",
"os.path.isfile",
"numpy.argsort",
"data.get_test_loader",
"torch.cuda.is_available",
"numpy.zeros",
"numpy.dot",
"numpy.array",
"torch.zeros",
"option.parser.parse_args"
] |
[((7270, 7298), 'data.get_test_loader', 'get_test_loader', (['args', 'vocab'], {}), '(args, vocab)\n', (7285, 7298), False, 'from data import get_test_loader\n'), ((10374, 10399), 'os.path.isfile', 'os.path.isfile', (['args.ckpt'], {}), '(args.ckpt)\n', (10388, 10399), False, 'import os, sys\n'), ((10410, 10436), 'model.PVSE', 'PVSE', (['vocab.word2idx', 'args'], {}), '(vocab.word2idx, args)\n', (10414, 10436), False, 'from model import PVSE\n'), ((10442, 10467), 'torch.cuda.is_available', 'torch.cuda.is_available', ([], {}), '()\n', (10465, 10467), False, 'import torch\n'), ((743, 768), 'torch.cuda.is_available', 'torch.cuda.is_available', ([], {}), '()\n', (766, 768), False, 'import torch\n'), ((2278, 2292), 'numpy.zeros', 'np.zeros', (['npts'], {}), '(npts)\n', (2286, 2292), True, 'import numpy as np\n'), ((2294, 2308), 'numpy.zeros', 'np.zeros', (['npts'], {}), '(npts)\n', (2302, 2308), True, 'import numpy as np\n'), ((4757, 4779), 'numpy.zeros', 'np.zeros', (['(nreps * npts)'], {}), '(nreps * npts)\n', (4765, 4779), True, 'import numpy as np\n'), ((4781, 4803), 'numpy.zeros', 'np.zeros', (['(nreps * npts)'], {}), '(nreps * npts)\n', (4789, 4803), True, 'import numpy as np\n'), ((6021, 6040), 'numpy.zeros', 'np.zeros', (['sim.shape'], {}), '(sim.shape)\n', (6029, 6040), True, 'import numpy as np\n'), ((10049, 10074), 'torch.cuda.device_count', 'torch.cuda.device_count', ([], {}), '()\n', (10072, 10074), False, 'import torch\n'), ((10107, 10126), 'option.parser.parse_args', 'parser.parse_args', ([], {}), '()\n', (10124, 10126), False, 'from option import parser, verify_input_args\n'), ((10154, 10173), 'option.parser.parse_args', 'parser.parse_args', ([], {}), '()\n', (10171, 10173), False, 'from option import parser, verify_input_args\n'), ((10291, 10305), 'pickle.load', 'pickle.load', (['f'], {}), '(f)\n', (10302, 10305), False, 'import pickle\n'), ((10607, 10628), 'torch.load', 'torch.load', (['args.ckpt'], {}), '(args.ckpt)\n', (10617, 10628), False, 'import torch\n'), ((4042, 4058), 'numpy.median', 'np.median', (['ranks'], {}), '(ranks)\n', (4051, 4058), True, 'import numpy as np\n'), ((6553, 6569), 'numpy.median', 'np.median', (['ranks'], {}), '(ranks)\n', (6562, 6569), True, 'import numpy as np\n'), ((1561, 1577), 'numpy.zeros', 'np.zeros', (['emb_sz'], {}), '(emb_sz)\n', (1569, 1577), True, 'import numpy as np\n'), ((1597, 1613), 'numpy.zeros', 'np.zeros', (['emb_sz'], {}), '(emb_sz)\n', (1605, 1613), True, 'import numpy as np\n'), ((3564, 3579), 'numpy.argsort', 'np.argsort', (['sim'], {}), '(sim)\n', (3574, 3579), True, 'import numpy as np\n'), ((3717, 3736), 'numpy.where', 'np.where', (['(inds == i)'], {}), '(inds == i)\n', (3725, 3736), True, 'import numpy as np\n'), ((3873, 3892), 'numpy.where', 'np.where', (['(ranks < 1)'], {}), '(ranks < 1)\n', (3881, 3892), True, 'import numpy as np\n'), ((3929, 3948), 'numpy.where', 'np.where', (['(ranks < 5)'], {}), '(ranks < 5)\n', (3937, 3948), True, 'import numpy as np\n'), ((3986, 4006), 'numpy.where', 'np.where', (['(ranks < 10)'], {}), '(ranks < 10)\n', (3994, 4006), True, 'import numpy as np\n'), ((5986, 6008), 'numpy.dot', 'np.dot', (['queries', 'ims.T'], {}), '(queries, ims.T)\n', (5992, 6008), True, 'import numpy as np\n'), ((6210, 6228), 'numpy.argsort', 'np.argsort', (['sim[i]'], {}), '(sim[i])\n', (6220, 6228), True, 'import numpy as np\n'), ((6268, 6294), 'numpy.where', 'np.where', (['(inds[i] == index)'], {}), '(inds[i] == index)\n', (6276, 6294), True, 'import numpy as np\n'), ((6384, 6403), 'numpy.where', 'np.where', 
(['(ranks < 1)'], {}), '(ranks < 1)\n', (6392, 6403), True, 'import numpy as np\n'), ((6440, 6459), 'numpy.where', 'np.where', (['(ranks < 5)'], {}), '(ranks < 5)\n', (6448, 6459), True, 'import numpy as np\n'), ((6497, 6517), 'numpy.where', 'np.where', (['(ranks < 10)'], {}), '(ranks < 10)\n', (6505, 6517), True, 'import numpy as np\n'), ((10481, 10509), 'torch.nn.DataParallel', 'torch.nn.DataParallel', (['model'], {}), '(model)\n', (10502, 10509), False, 'import torch\n'), ((1217, 1278), 'torch.zeros', 'torch.zeros', (['emb_sz'], {'dtype': 'img_emb.dtype', 'requires_grad': '(False)'}), '(emb_sz, dtype=img_emb.dtype, requires_grad=False)\n', (1228, 1278), False, 'import torch\n'), ((1305, 1366), 'torch.zeros', 'torch.zeros', (['emb_sz'], {'dtype': 'txt_emb.dtype', 'requires_grad': '(False)'}), '(emb_sz, dtype=txt_emb.dtype, requires_grad=False)\n', (1316, 1366), False, 'import torch\n'), ((3410, 3433), 'numpy.dot', 'np.dot', (['im', 'sentences.T'], {}), '(im, sentences.T)\n', (3416, 3433), True, 'import numpy as np\n'), ((5882, 5921), 'numpy.tensordot', 'np.tensordot', (['queries', 'ims'], {'axes': '[2, 2]'}), '(queries, ims, axes=[2, 2])\n', (5894, 5921), True, 'import numpy as np\n'), ((8750, 8767), 'numpy.array', 'np.array', (['results'], {}), '(results)\n', (8758, 8767), True, 'import numpy as np\n'), ((3113, 3130), 'torch.Tensor', 'torch.Tensor', (['im2'], {}), '(im2)\n', (3125, 3130), False, 'import torch\n'), ((3139, 3162), 'torch.Tensor', 'torch.Tensor', (['sentences'], {}), '(sentences)\n', (3151, 3162), False, 'import torch\n'), ((5602, 5622), 'torch.Tensor', 'torch.Tensor', (['images'], {}), '(images)\n', (5614, 5622), False, 'import torch\n'), ((5664, 5693), 'torch.Tensor', 'torch.Tensor', (['sentences_batch'], {}), '(sentences_batch)\n', (5676, 5693), False, 'import torch\n'), ((3293, 3333), 'numpy.tensordot', 'np.tensordot', (['im', 'sentences'], {'axes': '[2, 2]'}), '(im, sentences, axes=[2, 2])\n', (3305, 3333), True, 'import numpy as np\n')]
|
# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import paddle
import paddle.nn as nn
import paddle.nn.functional as F
from paddlex.ppdet.core.workspace import register
import pycocotools.mask as mask_util
from ..initializer import linear_init_, constant_
from ..transformers.utils import inverse_sigmoid
__all__ = ['DETRHead', 'DeformableDETRHead']
class MLP(nn.Layer):
def __init__(self, input_dim, hidden_dim, output_dim, num_layers):
super().__init__()
self.num_layers = num_layers
h = [hidden_dim] * (num_layers - 1)
self.layers = nn.LayerList(
nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]))
self._reset_parameters()
def _reset_parameters(self):
for l in self.layers:
linear_init_(l)
def forward(self, x):
for i, layer in enumerate(self.layers):
x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x)
return x
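# Note (added for clarity): MLP above is the small feed-forward network used
# further below as the box-regression head (output_dim=4); ReLU is applied
# after every linear layer except the last one.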
class MultiHeadAttentionMap(nn.Layer):
"""This is a 2D attention module, which only returns the attention softmax (no multiplication by value)"""
def __init__(self,
query_dim,
hidden_dim,
num_heads,
dropout=0.0,
bias=True):
super().__init__()
self.num_heads = num_heads
self.hidden_dim = hidden_dim
self.dropout = nn.Dropout(dropout)
weight_attr = paddle.ParamAttr(
initializer=paddle.nn.initializer.XavierUniform())
bias_attr = paddle.framework.ParamAttr(
initializer=paddle.nn.initializer.Constant()) if bias else False
self.q_proj = nn.Linear(query_dim, hidden_dim, weight_attr, bias_attr)
self.k_proj = nn.Conv2D(
query_dim,
hidden_dim,
1,
weight_attr=weight_attr,
bias_attr=bias_attr)
self.normalize_fact = float(hidden_dim / self.num_heads)**-0.5
def forward(self, q, k, mask=None):
q = self.q_proj(q)
k = self.k_proj(k)
bs, num_queries, n, c, h, w = q.shape[0], q.shape[1], self.num_heads,\
self.hidden_dim // self.num_heads, k.shape[-2], k.shape[-1]
qh = q.reshape([bs, num_queries, n, c])
kh = k.reshape([bs, n, c, h, w])
# weights = paddle.einsum("bqnc,bnchw->bqnhw", qh * self.normalize_fact, kh)
qh = qh.transpose([0, 2, 1, 3]).reshape([-1, num_queries, c])
kh = kh.reshape([-1, c, h * w])
weights = paddle.bmm(qh * self.normalize_fact, kh).reshape(
[bs, n, num_queries, h, w]).transpose([0, 2, 1, 3, 4])
if mask is not None:
weights += mask
        # fix a potential bug: https://github.com/facebookresearch/detr/issues/247
weights = F.softmax(weights.flatten(3), axis=-1).reshape(weights.shape)
weights = self.dropout(weights)
return weights
class MaskHeadFPNConv(nn.Layer):
"""
Simple convolutional head, using group norm.
Upsampling is done using a FPN approach
"""
def __init__(self, input_dim, fpn_dims, context_dim, num_groups=8):
super().__init__()
inter_dims = [input_dim,
] + [context_dim // (2**i) for i in range(1, 5)]
weight_attr = paddle.ParamAttr(
initializer=paddle.nn.initializer.KaimingUniform())
bias_attr = paddle.framework.ParamAttr(
initializer=paddle.nn.initializer.Constant())
self.conv0 = self._make_layers(input_dim, input_dim, 3, num_groups,
weight_attr, bias_attr)
self.conv_inter = nn.LayerList()
for in_dims, out_dims in zip(inter_dims[:-1], inter_dims[1:]):
self.conv_inter.append(
self._make_layers(in_dims, out_dims, 3, num_groups,
weight_attr, bias_attr))
self.conv_out = nn.Conv2D(
inter_dims[-1],
1,
3,
padding=1,
weight_attr=weight_attr,
bias_attr=bias_attr)
self.adapter = nn.LayerList()
for i in range(len(fpn_dims)):
self.adapter.append(
nn.Conv2D(
fpn_dims[i],
inter_dims[i + 1],
1,
weight_attr=weight_attr,
bias_attr=bias_attr))
def _make_layers(self,
in_dims,
out_dims,
kernel_size,
num_groups,
weight_attr=None,
bias_attr=None):
return nn.Sequential(
nn.Conv2D(
in_dims,
out_dims,
kernel_size,
padding=kernel_size // 2,
weight_attr=weight_attr,
bias_attr=bias_attr),
nn.GroupNorm(num_groups, out_dims),
nn.ReLU())
def forward(self, x, bbox_attention_map, fpns):
x = paddle.concat([
x.tile([bbox_attention_map.shape[1], 1, 1, 1]),
bbox_attention_map.flatten(0, 1)
], 1)
x = self.conv0(x)
for inter_layer, adapter_layer, feat in zip(self.conv_inter[:-1],
self.adapter, fpns):
feat = adapter_layer(feat).tile(
[bbox_attention_map.shape[1], 1, 1, 1])
x = inter_layer(x)
x = feat + F.interpolate(x, size=feat.shape[-2:])
x = self.conv_inter[-1](x)
x = self.conv_out(x)
return x
@register
class DETRHead(nn.Layer):
__shared__ = ['num_classes', 'hidden_dim', 'use_focal_loss']
__inject__ = ['loss']
def __init__(self,
num_classes=80,
hidden_dim=256,
nhead=8,
num_mlp_layers=3,
loss='DETRLoss',
fpn_dims=[1024, 512, 256],
with_mask_head=False,
use_focal_loss=False):
super(DETRHead, self).__init__()
# add background class
self.num_classes = num_classes if use_focal_loss else num_classes + 1
self.hidden_dim = hidden_dim
self.loss = loss
self.with_mask_head = with_mask_head
self.use_focal_loss = use_focal_loss
self.score_head = nn.Linear(hidden_dim, self.num_classes)
self.bbox_head = MLP(hidden_dim,
hidden_dim,
output_dim=4,
num_layers=num_mlp_layers)
if self.with_mask_head:
self.bbox_attention = MultiHeadAttentionMap(hidden_dim, hidden_dim,
nhead)
self.mask_head = MaskHeadFPNConv(hidden_dim + nhead, fpn_dims,
hidden_dim)
self._reset_parameters()
def _reset_parameters(self):
linear_init_(self.score_head)
@classmethod
def from_config(cls, cfg, hidden_dim, nhead, input_shape):
return {
'hidden_dim': hidden_dim,
'nhead': nhead,
'fpn_dims': [i.channels for i in input_shape[::-1]][1:]
}
@staticmethod
def get_gt_mask_from_polygons(gt_poly, pad_mask):
out_gt_mask = []
for polygons, padding in zip(gt_poly, pad_mask):
height, width = int(padding[:, 0].sum()), int(padding[0, :].sum())
masks = []
for obj_poly in polygons:
rles = mask_util.frPyObjects(obj_poly, height, width)
rle = mask_util.merge(rles)
masks.append(
paddle.to_tensor(mask_util.decode(rle)).astype('float32'))
masks = paddle.stack(masks)
masks_pad = paddle.zeros(
[masks.shape[0], pad_mask.shape[1], pad_mask.shape[2]])
masks_pad[:, :height, :width] = masks
out_gt_mask.append(masks_pad)
return out_gt_mask
def forward(self, out_transformer, body_feats, inputs=None):
r"""
Args:
out_transformer (Tuple): (feats: [num_levels, batch_size,
num_queries, hidden_dim],
memory: [batch_size, hidden_dim, h, w],
src_proj: [batch_size, h*w, hidden_dim],
src_mask: [batch_size, 1, 1, h, w])
body_feats (List(Tensor)): list[[B, C, H, W]]
inputs (dict): dict(inputs)
"""
feats, memory, src_proj, src_mask = out_transformer
outputs_logit = self.score_head(feats)
outputs_bbox = F.sigmoid(self.bbox_head(feats))
outputs_seg = None
if self.with_mask_head:
bbox_attention_map = self.bbox_attention(feats[-1], memory,
src_mask)
fpn_feats = [a for a in body_feats[::-1]][1:]
outputs_seg = self.mask_head(src_proj, bbox_attention_map,
fpn_feats)
outputs_seg = outputs_seg.reshape([
feats.shape[1], feats.shape[2], outputs_seg.shape[-2],
outputs_seg.shape[-1]
])
if self.training:
assert inputs is not None
assert 'gt_bbox' in inputs and 'gt_class' in inputs
gt_mask = self.get_gt_mask_from_polygons(
inputs['gt_poly'],
inputs['pad_mask']) if 'gt_poly' in inputs else None
return self.loss(
outputs_bbox,
outputs_logit,
inputs['gt_bbox'],
inputs['gt_class'],
masks=outputs_seg,
gt_mask=gt_mask)
else:
return (outputs_bbox[-1], outputs_logit[-1], outputs_seg)
@register
class DeformableDETRHead(nn.Layer):
__shared__ = ['num_classes', 'hidden_dim']
__inject__ = ['loss']
def __init__(self,
num_classes=80,
hidden_dim=512,
nhead=8,
num_mlp_layers=3,
loss='DETRLoss'):
super(DeformableDETRHead, self).__init__()
self.num_classes = num_classes
self.hidden_dim = hidden_dim
self.nhead = nhead
self.loss = loss
self.score_head = nn.Linear(hidden_dim, self.num_classes)
self.bbox_head = MLP(hidden_dim,
hidden_dim,
output_dim=4,
num_layers=num_mlp_layers)
self._reset_parameters()
def _reset_parameters(self):
linear_init_(self.score_head)
constant_(self.score_head.bias, -4.595)
constant_(self.bbox_head.layers[-1].weight)
bias = paddle.zeros_like(self.bbox_head.layers[-1].bias)
bias[2:] = -2.0
self.bbox_head.layers[-1].bias.set_value(bias)
@classmethod
def from_config(cls, cfg, hidden_dim, nhead, input_shape):
return {'hidden_dim': hidden_dim, 'nhead': nhead}
def forward(self, out_transformer, body_feats, inputs=None):
r"""
Args:
out_transformer (Tuple): (feats: [num_levels, batch_size,
num_queries, hidden_dim],
memory: [batch_size,
\sum_{l=0}^{L-1} H_l \cdot W_l, hidden_dim],
reference_points: [batch_size, num_queries, 2])
body_feats (List(Tensor)): list[[B, C, H, W]]
inputs (dict): dict(inputs)
"""
feats, memory, reference_points = out_transformer
reference_points = inverse_sigmoid(reference_points.unsqueeze(0))
outputs_bbox = self.bbox_head(feats)
# It's equivalent to "outputs_bbox[:, :, :, :2] += reference_points",
# but the gradient is wrong in paddle.
outputs_bbox = paddle.concat(
[
outputs_bbox[:, :, :, :2] + reference_points,
outputs_bbox[:, :, :, 2:]
],
axis=-1)
outputs_bbox = F.sigmoid(outputs_bbox)
outputs_logit = self.score_head(feats)
if self.training:
assert inputs is not None
assert 'gt_bbox' in inputs and 'gt_class' in inputs
return self.loss(outputs_bbox, outputs_logit, inputs['gt_bbox'],
inputs['gt_class'])
else:
return (outputs_bbox[-1], outputs_logit[-1], None)
|
[
"paddle.nn.LayerList",
"pycocotools.mask.merge",
"paddle.stack",
"paddle.nn.functional.interpolate",
"pycocotools.mask.frPyObjects",
"paddle.bmm",
"paddle.nn.functional.sigmoid",
"pycocotools.mask.decode",
"paddle.nn.initializer.XavierUniform",
"paddle.nn.Linear",
"paddle.nn.initializer.Constant",
"paddle.zeros",
"paddle.zeros_like",
"paddle.nn.Dropout",
"paddle.nn.Conv2D",
"paddle.nn.ReLU",
"paddle.nn.GroupNorm",
"paddle.concat",
"paddle.nn.initializer.KaimingUniform"
] |
[((2075, 2094), 'paddle.nn.Dropout', 'nn.Dropout', (['dropout'], {}), '(dropout)\n', (2085, 2094), True, 'import paddle.nn as nn\n'), ((2347, 2403), 'paddle.nn.Linear', 'nn.Linear', (['query_dim', 'hidden_dim', 'weight_attr', 'bias_attr'], {}), '(query_dim, hidden_dim, weight_attr, bias_attr)\n', (2356, 2403), True, 'import paddle.nn as nn\n'), ((2426, 2512), 'paddle.nn.Conv2D', 'nn.Conv2D', (['query_dim', 'hidden_dim', '(1)'], {'weight_attr': 'weight_attr', 'bias_attr': 'bias_attr'}), '(query_dim, hidden_dim, 1, weight_attr=weight_attr, bias_attr=\n bias_attr)\n', (2435, 2512), True, 'import paddle.nn as nn\n'), ((4340, 4354), 'paddle.nn.LayerList', 'nn.LayerList', ([], {}), '()\n', (4352, 4354), True, 'import paddle.nn as nn\n'), ((4614, 4706), 'paddle.nn.Conv2D', 'nn.Conv2D', (['inter_dims[-1]', '(1)', '(3)'], {'padding': '(1)', 'weight_attr': 'weight_attr', 'bias_attr': 'bias_attr'}), '(inter_dims[-1], 1, 3, padding=1, weight_attr=weight_attr,\n bias_attr=bias_attr)\n', (4623, 4706), True, 'import paddle.nn as nn\n'), ((4800, 4814), 'paddle.nn.LayerList', 'nn.LayerList', ([], {}), '()\n', (4812, 4814), True, 'import paddle.nn as nn\n'), ((7069, 7108), 'paddle.nn.Linear', 'nn.Linear', (['hidden_dim', 'self.num_classes'], {}), '(hidden_dim, self.num_classes)\n', (7078, 7108), True, 'import paddle.nn as nn\n'), ((11104, 11143), 'paddle.nn.Linear', 'nn.Linear', (['hidden_dim', 'self.num_classes'], {}), '(hidden_dim, self.num_classes)\n', (11113, 11143), True, 'import paddle.nn as nn\n'), ((11546, 11595), 'paddle.zeros_like', 'paddle.zeros_like', (['self.bbox_head.layers[-1].bias'], {}), '(self.bbox_head.layers[-1].bias)\n', (11563, 11595), False, 'import paddle\n'), ((12689, 12790), 'paddle.concat', 'paddle.concat', (['[outputs_bbox[:, :, :, :2] + reference_points, outputs_bbox[:, :, :, 2:]]'], {'axis': '(-1)'}), '([outputs_bbox[:, :, :, :2] + reference_points, outputs_bbox[:,\n :, :, 2:]], axis=-1)\n', (12702, 12790), False, 'import paddle\n'), ((12882, 12905), 'paddle.nn.functional.sigmoid', 'F.sigmoid', (['outputs_bbox'], {}), '(outputs_bbox)\n', (12891, 12905), True, 'import paddle.nn.functional as F\n'), ((5371, 5488), 'paddle.nn.Conv2D', 'nn.Conv2D', (['in_dims', 'out_dims', 'kernel_size'], {'padding': '(kernel_size // 2)', 'weight_attr': 'weight_attr', 'bias_attr': 'bias_attr'}), '(in_dims, out_dims, kernel_size, padding=kernel_size // 2,\n weight_attr=weight_attr, bias_attr=bias_attr)\n', (5380, 5488), True, 'import paddle.nn as nn\n'), ((5595, 5629), 'paddle.nn.GroupNorm', 'nn.GroupNorm', (['num_groups', 'out_dims'], {}), '(num_groups, out_dims)\n', (5607, 5629), True, 'import paddle.nn as nn\n'), ((5643, 5652), 'paddle.nn.ReLU', 'nn.ReLU', ([], {}), '()\n', (5650, 5652), True, 'import paddle.nn as nn\n'), ((8483, 8502), 'paddle.stack', 'paddle.stack', (['masks'], {}), '(masks)\n', (8495, 8502), False, 'import paddle\n'), ((8527, 8595), 'paddle.zeros', 'paddle.zeros', (['[masks.shape[0], pad_mask.shape[1], pad_mask.shape[2]]'], {}), '([masks.shape[0], pad_mask.shape[1], pad_mask.shape[2]])\n', (8539, 8595), False, 'import paddle\n'), ((1273, 1288), 'paddle.nn.Linear', 'nn.Linear', (['n', 'k'], {}), '(n, k)\n', (1282, 1288), True, 'import paddle.nn as nn\n'), ((2160, 2197), 'paddle.nn.initializer.XavierUniform', 'paddle.nn.initializer.XavierUniform', ([], {}), '()\n', (2195, 2197), False, 'import paddle\n'), ((4028, 4066), 'paddle.nn.initializer.KaimingUniform', 'paddle.nn.initializer.KaimingUniform', ([], {}), '()\n', (4064, 4066), False, 'import paddle\n'), ((4140, 4172), 
'paddle.nn.initializer.Constant', 'paddle.nn.initializer.Constant', ([], {}), '()\n', (4170, 4172), False, 'import paddle\n'), ((4903, 4997), 'paddle.nn.Conv2D', 'nn.Conv2D', (['fpn_dims[i]', 'inter_dims[i + 1]', '(1)'], {'weight_attr': 'weight_attr', 'bias_attr': 'bias_attr'}), '(fpn_dims[i], inter_dims[i + 1], 1, weight_attr=weight_attr,\n bias_attr=bias_attr)\n', (4912, 4997), True, 'import paddle.nn as nn\n'), ((6182, 6220), 'paddle.nn.functional.interpolate', 'F.interpolate', (['x'], {'size': 'feat.shape[-2:]'}), '(x, size=feat.shape[-2:])\n', (6195, 6220), True, 'import paddle.nn.functional as F\n'), ((8263, 8309), 'pycocotools.mask.frPyObjects', 'mask_util.frPyObjects', (['obj_poly', 'height', 'width'], {}), '(obj_poly, height, width)\n', (8284, 8309), True, 'import pycocotools.mask as mask_util\n'), ((8332, 8353), 'pycocotools.mask.merge', 'mask_util.merge', (['rles'], {}), '(rles)\n', (8347, 8353), True, 'import pycocotools.mask as mask_util\n'), ((2271, 2303), 'paddle.nn.initializer.Constant', 'paddle.nn.initializer.Constant', ([], {}), '()\n', (2301, 2303), False, 'import paddle\n'), ((3215, 3255), 'paddle.bmm', 'paddle.bmm', (['(qh * self.normalize_fact)', 'kh'], {}), '(qh * self.normalize_fact, kh)\n', (3225, 3255), False, 'import paddle\n'), ((8421, 8442), 'pycocotools.mask.decode', 'mask_util.decode', (['rle'], {}), '(rle)\n', (8437, 8442), True, 'import pycocotools.mask as mask_util\n')]
|
import numpy as np
import pygame
import sys
import math
import random
from board import Board
from ai import Minimax_AI
# function to draw the board in pygame
def draw_board(board):
for c in range(COLUMN_COUNT):
for r in range(ROW_COUNT):
pygame.draw.rect(screen, colors["blue"], (c*SQUARESIZE, r *
SQUARESIZE+SQUARESIZE, SQUARESIZE, SQUARESIZE))
pygame.draw.circle(screen, colors["black"], (int(
c*SQUARESIZE+SQUARESIZE/2), int(r*SQUARESIZE+SQUARESIZE+SQUARESIZE/2)), RADIUS)
for c in range(COLUMN_COUNT):
for r in range(ROW_COUNT):
if board[r][c] == 1:
pygame.draw.circle(screen, colors["red"], (int(
c*SQUARESIZE+SQUARESIZE/2), height-int(r*SQUARESIZE+SQUARESIZE/2)), RADIUS)
elif board[r][c] == 2:
pygame.draw.circle(screen, colors["yellow"], (int(
c*SQUARESIZE+SQUARESIZE/2), height-int(r*SQUARESIZE+SQUARESIZE/2)), RADIUS)
pygame.display.update()
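# Note (added for clarity): piece circles are drawn at height - (row offset), so
# row 0 of the board array appears at the bottom of the window and new pieces
# appear to stack upward, as on a physical Connect Four board.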
if __name__ == '__main__':
# colors for game
colors = {"blue": (0, 0, 255),
"black": (0, 0, 0),
"red": (255, 0, 0),
"yellow": (255, 255, 0)}
# size of board
ROW_COUNT = 6
COLUMN_COUNT = 7
# create board
board = Board(ROW_COUNT, COLUMN_COUNT)
# players
players = [1, 2]
# initialize AI
ai_depth = 6
ai_player = random.choice(players)
ai = Minimax_AI(ai_depth, ai_player, ROW_COUNT, COLUMN_COUNT)
# decide turns; if turn is 0 player moves first
if ai_player == 2:
turn = 0
else:
turn = 1
pygame.init()
SQUARESIZE = 100
width = COLUMN_COUNT * SQUARESIZE
height = (ROW_COUNT+1) * SQUARESIZE
size = (width, height)
RADIUS = int(SQUARESIZE/2 - 5)
screen = pygame.display.set_mode(size)
draw_board(board.status)
pygame.display.update()
myfont = pygame.font.SysFont("monospace", 75)
game_over = False
while not game_over:
# Ask for Player 1 Input
if turn == 0:
turn_over = False
while not turn_over:
for event in pygame.event.get():
if event.type == pygame.QUIT:
sys.exit()
if event.type == pygame.MOUSEMOTION:
pygame.draw.rect(
screen, colors["black"], (0, 0, width, SQUARESIZE))
posx = event.pos[0]
if turn == 0:
pygame.draw.circle(
screen, colors["red"], (posx, int(SQUARESIZE/2)), RADIUS)
else:
pygame.draw.circle(
screen, colors["yellow"], (posx, int(SQUARESIZE/2)), RADIUS)
pygame.display.update()
if event.type == pygame.MOUSEBUTTONDOWN:
pygame.draw.rect(
screen, colors["black"], (0, 0, width, SQUARESIZE))
# print(event.pos)
posx = event.pos[0]
col = int(math.floor(posx/SQUARESIZE))
if board.is_valid_location(col):
row = board.get_next_open_row(col)
board.insert_piece(row, col, 1)
turn_over = True
if board.is_winning_position(1):
label = myfont.render(
"You win!!", 1, colors["red"])
screen.blit(label, (40, 10))
game_over = True
draw_board(board.status)
# Ask for Player 2 Input
else:
col = ai.make_move(board.status)
if board.is_valid_location(col):
row = board.get_next_open_row(col)
board.insert_piece(row, col, 2)
if board.is_winning_position(2):
label = myfont.render(
"AI win!!", 1, colors["red"])
screen.blit(label, (40, 10))
game_over = True
draw_board(board.status)
turn += 1
turn = turn % 2
if game_over:
pygame.time.wait(3000)
|
[
"random.choice",
"pygame.init",
"pygame.event.get",
"pygame.time.wait",
"pygame.display.set_mode",
"math.floor",
"board.Board",
"pygame.draw.rect",
"sys.exit",
"pygame.display.update",
"ai.Minimax_AI",
"pygame.font.SysFont"
] |
[((1050, 1073), 'pygame.display.update', 'pygame.display.update', ([], {}), '()\n', (1071, 1073), False, 'import pygame\n'), ((1359, 1389), 'board.Board', 'Board', (['ROW_COUNT', 'COLUMN_COUNT'], {}), '(ROW_COUNT, COLUMN_COUNT)\n', (1364, 1389), False, 'from board import Board\n'), ((1480, 1502), 'random.choice', 'random.choice', (['players'], {}), '(players)\n', (1493, 1502), False, 'import random\n'), ((1512, 1568), 'ai.Minimax_AI', 'Minimax_AI', (['ai_depth', 'ai_player', 'ROW_COUNT', 'COLUMN_COUNT'], {}), '(ai_depth, ai_player, ROW_COUNT, COLUMN_COUNT)\n', (1522, 1568), False, 'from ai import Minimax_AI\n'), ((1694, 1707), 'pygame.init', 'pygame.init', ([], {}), '()\n', (1705, 1707), False, 'import pygame\n'), ((1887, 1916), 'pygame.display.set_mode', 'pygame.display.set_mode', (['size'], {}), '(size)\n', (1910, 1916), False, 'import pygame\n'), ((1950, 1973), 'pygame.display.update', 'pygame.display.update', ([], {}), '()\n', (1971, 1973), False, 'import pygame\n'), ((1988, 2024), 'pygame.font.SysFont', 'pygame.font.SysFont', (['"""monospace"""', '(75)'], {}), "('monospace', 75)\n", (2007, 2024), False, 'import pygame\n'), ((265, 380), 'pygame.draw.rect', 'pygame.draw.rect', (['screen', "colors['blue']", '(c * SQUARESIZE, r * SQUARESIZE + SQUARESIZE, SQUARESIZE, SQUARESIZE)'], {}), "(screen, colors['blue'], (c * SQUARESIZE, r * SQUARESIZE +\n SQUARESIZE, SQUARESIZE, SQUARESIZE))\n", (281, 380), False, 'import pygame\n'), ((4432, 4454), 'pygame.time.wait', 'pygame.time.wait', (['(3000)'], {}), '(3000)\n', (4448, 4454), False, 'import pygame\n'), ((2220, 2238), 'pygame.event.get', 'pygame.event.get', ([], {}), '()\n', (2236, 2238), False, 'import pygame\n'), ((2916, 2939), 'pygame.display.update', 'pygame.display.update', ([], {}), '()\n', (2937, 2939), False, 'import pygame\n'), ((2314, 2324), 'sys.exit', 'sys.exit', ([], {}), '()\n', (2322, 2324), False, 'import sys\n'), ((2407, 2475), 'pygame.draw.rect', 'pygame.draw.rect', (['screen', "colors['black']", '(0, 0, width, SQUARESIZE)'], {}), "(screen, colors['black'], (0, 0, width, SQUARESIZE))\n", (2423, 2475), False, 'import pygame\n'), ((3026, 3094), 'pygame.draw.rect', 'pygame.draw.rect', (['screen', "colors['black']", '(0, 0, width, SQUARESIZE)'], {}), "(screen, colors['black'], (0, 0, width, SQUARESIZE))\n", (3042, 3094), False, 'import pygame\n'), ((3246, 3275), 'math.floor', 'math.floor', (['(posx / SQUARESIZE)'], {}), '(posx / SQUARESIZE)\n', (3256, 3275), False, 'import math\n')]
|
#!/usr/bin/env python
#
# pyFlow - a lightweight parallel task engine
#
# Copyright (c) 2012-2017 Illumina, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in
# the documentation and/or other materials provided with the
# distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY
# WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
#
#
# demonstrate/test addTask() cwd option
#
import os.path
import sys
# add module path by hand
#
scriptDir=os.path.abspath(os.path.dirname(__file__))
sys.path.append(scriptDir+"/../../src")
from pyflow import WorkflowRunner
# all pyflow workflows are written into classes derived from
# pyflow.WorkflowRunner:
#
class CwdWorkflow(WorkflowRunner) :
# a workflow is defined by overloading the
# WorkflowRunner.workflow() method:
#
def workflow(self) :
# get cwd and its parent for the addTask cwd test
#
cwd=os.getcwd()
parentdir=os.path.abspath(os.path.join(cwd,".."))
self.flowLog("testing pyflow cwd: '%s' parentdir: '%s'" % (cwd,parentdir))
# task will fail unless pwd == parentdir:
#
# test both absolute and relative cwd arguments:
#
self.addTask("testAbsCwd","[ $(pwd) == '%s' ]; exit $?" % (parentdir),cwd=parentdir)
self.addTask("testRelCwd","[ $(pwd) == '%s' ]; exit $?" % (parentdir),cwd="..")
# Instantiate the workflow
#
wflow = CwdWorkflow()
# Run the worklow:
#
retval=wflow.run(mode="local")
sys.exit(retval)
|
[
"sys.path.append",
"sys.exit"
] |
[((1589, 1630), 'sys.path.append', 'sys.path.append', (["(scriptDir + '/../../src')"], {}), "(scriptDir + '/../../src')\n", (1604, 1630), False, 'import sys\n'), ((2561, 2577), 'sys.exit', 'sys.exit', (['retval'], {}), '(retval)\n', (2569, 2577), False, 'import sys\n')]
|
import pandas as pd
from datetime import timedelta, date
import matplotlib.pyplot as plt
def daterange(start_date, end_date):
for n in range(int((end_date - start_date).days)):
yield start_date + timedelta(n)
def getFileByDate(date = 'latest'):
url = 'https://raw.githubusercontent.com/pcm-dpc/COVID-19/master/dati-regioni/dpc-covid19-ita-regioni-' + date + '.csv' #20200927.csv'
df = pd.read_csv(url, error_bad_lines=False)
return df
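# Note (added for clarity): the date string selects one daily snapshot file in
# the repository, e.g. getFileByDate('20200927') for 27 September 2020; the
# default 'latest' assumes the repository also publishes a *-latest.csv file.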
#default region is Sicily
def getValue(daily, column = 'nuovi_positivi', region='Sicilia'):
regRaw = daily.loc[daily['denominazione_regione'] == region]
return regRaw[column].to_numpy()[0] #regRaw.at[16, column] #return daily.iloc[2, 17]
def getAll(column, region):
start_date = date(2020, 2, 24)
end_date = date(2020, 4, 10)
end_date = date.today()
result = []
for single_date in daterange(start_date, end_date):
day = single_date.strftime("%Y%m%d")
result.append(getValue(getFileByDate(day), column, region))
return result
nuovi_positivi = getAll('nuovi_positivi', 'Sicilia')
#deceduti = getAll('deceduti', 'Sicilia')
#dimessi_guariti = getAll('dimessi_guariti', 'Sicilia')
nuovi_positivi = pd.Series(nuovi_positivi, index=pd.date_range('2/24/2020', periods=len(nuovi_positivi)))
#deceduti = pd.Series(deceduti, index=pd.date_range('2/24/2020', periods=len(deceduti)))
#dimessi_guariti = pd.Series(dimessi_guariti, index=pd.date_range('2/24/2020', periods=len(dimessi_guariti)))
plt.figure();
ax = nuovi_positivi.plot()
#deceduti.plot(ax=ax)
#dimessi_guariti.plot(ax=ax)
plt.show()
|
[
"pandas.read_csv",
"datetime.timedelta",
"matplotlib.pyplot.figure",
"datetime.date",
"datetime.date.today",
"matplotlib.pyplot.show"
] |
[((1521, 1533), 'matplotlib.pyplot.figure', 'plt.figure', ([], {}), '()\n', (1531, 1533), True, 'import matplotlib.pyplot as plt\n'), ((1613, 1623), 'matplotlib.pyplot.show', 'plt.show', ([], {}), '()\n', (1621, 1623), True, 'import matplotlib.pyplot as plt\n'), ((403, 442), 'pandas.read_csv', 'pd.read_csv', (['url'], {'error_bad_lines': '(False)'}), '(url, error_bad_lines=False)\n', (414, 442), True, 'import pandas as pd\n'), ((799, 816), 'datetime.date', 'date', (['(2020)', '(2)', '(24)'], {}), '(2020, 2, 24)\n', (803, 816), False, 'from datetime import timedelta, date\n'), ((829, 846), 'datetime.date', 'date', (['(2020)', '(4)', '(10)'], {}), '(2020, 4, 10)\n', (833, 846), False, 'from datetime import timedelta, date\n'), ((859, 871), 'datetime.date.today', 'date.today', ([], {}), '()\n', (869, 871), False, 'from datetime import timedelta, date\n'), ((209, 221), 'datetime.timedelta', 'timedelta', (['n'], {}), '(n)\n', (218, 221), False, 'from datetime import timedelta, date\n')]
|
"""
Model select class1 single allele models.
"""
import argparse
import os
import signal
import sys
import time
import traceback
import random
from functools import partial
from pprint import pprint
import numpy
import pandas
from scipy.stats import kendalltau, percentileofscore, pearsonr
from sklearn.metrics import roc_auc_score
import tqdm # progress bar
tqdm.monitor_interval = 0 # see https://github.com/tqdm/tqdm/issues/481
from .class1_affinity_predictor import Class1AffinityPredictor
from .common import normalize_allele_name
from .encodable_sequences import EncodableSequences
from .common import configure_logging, random_peptides
from .local_parallelism import worker_pool_with_gpu_assignments_from_args, add_local_parallelism_args
from .regression_target import from_ic50
# To avoid pickling large matrices to send to child processes when running in
# parallel, we use this global variable as a place to store data. Data that is
# stored here before creating the worker pool will be inherited by the child
# processes upon fork(), allowing us to share large data with the workers
# via shared memory.
GLOBAL_DATA = {}
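# Hedged illustration of the pattern described above: the parent process fills
# GLOBAL_DATA before the worker pool is created, and forked workers read it
# through module globals instead of receiving it as a pickled argument. This is
# how model_select is dispatched further below, e.g.
#   worker_pool.imap_unordered(
#       partial(model_select, constant_data=GLOBAL_DATA), alleles, chunksize=1)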
parser = argparse.ArgumentParser(usage=__doc__)
parser.add_argument(
"--data",
metavar="FILE.csv",
required=False,
help=(
"Model selection data CSV. Expected columns: "
"allele, peptide, measurement_value"))
parser.add_argument(
"--exclude-data",
metavar="FILE.csv",
required=False,
help=(
"Data to EXCLUDE from model selection. Useful to specify the original "
"training data used"))
parser.add_argument(
"--models-dir",
metavar="DIR",
required=True,
help="Directory to read models")
parser.add_argument(
"--out-models-dir",
metavar="DIR",
required=True,
help="Directory to write selected models")
parser.add_argument(
"--out-unselected-predictions",
metavar="FILE.csv",
help="Write predictions for validation data using unselected predictor to "
"FILE.csv")
parser.add_argument(
"--unselected-accuracy-scorer",
metavar="SCORER",
default="combined:mass-spec,mse")
parser.add_argument(
"--unselected-accuracy-scorer-num-samples",
type=int,
default=1000)
parser.add_argument(
"--unselected-accuracy-percentile-threshold",
type=float,
metavar="X",
default=95)
parser.add_argument(
"--allele",
default=None,
nargs="+",
help="Alleles to select models for. If not specified, all alleles with "
"enough measurements will be used.")
parser.add_argument(
"--combined-min-models",
type=int,
default=8,
metavar="N",
help="Min number of models to select per allele when using combined selector")
parser.add_argument(
"--combined-max-models",
type=int,
default=1000,
metavar="N",
help="Max number of models to select per allele when using combined selector")
parser.add_argument(
"--combined-min-contribution-percent",
type=float,
default=1.0,
metavar="X",
help="Use only model selectors that can contribute at least X %% to the "
"total score. Default: %(default)s")
parser.add_argument(
"--mass-spec-min-measurements",
type=int,
metavar="N",
default=1,
help="Min number of measurements required for an allele to use mass-spec model "
"selection")
parser.add_argument(
"--mass-spec-min-models",
type=int,
default=8,
metavar="N",
help="Min number of models to select per allele when using mass-spec selector")
parser.add_argument(
"--mass-spec-max-models",
type=int,
default=1000,
metavar="N",
help="Max number of models to select per allele when using mass-spec selector")
parser.add_argument(
"--mse-min-measurements",
type=int,
metavar="N",
default=1,
help="Min number of measurements required for an allele to use MSE model "
"selection")
parser.add_argument(
"--mse-min-models",
type=int,
default=8,
metavar="N",
help="Min number of models to select per allele when using MSE selector")
parser.add_argument(
"--mse-max-models",
type=int,
default=1000,
metavar="N",
help="Max number of models to select per allele when using MSE selector")
parser.add_argument(
"--scoring",
nargs="+",
default=["mse", "consensus"],
help="Scoring procedures to use in order")
parser.add_argument(
"--consensus-min-models",
type=int,
default=8,
metavar="N",
help="Min number of models to select per allele when using consensus selector")
parser.add_argument(
"--consensus-max-models",
type=int,
default=1000,
metavar="N",
help="Max number of models to select per allele when using consensus selector")
parser.add_argument(
"--consensus-num-peptides-per-length",
type=int,
default=10000,
help="Num peptides per length to use for consensus scoring")
parser.add_argument(
"--mass-spec-regex",
metavar="REGEX",
default="mass[- ]spec",
help="Regular expression for mass-spec data. Runs on measurement_source col."
"Default: %(default)s.")
parser.add_argument(
"--verbosity",
type=int,
help="Keras verbosity. Default: %(default)s",
default=0)
add_local_parallelism_args(parser)
def run(argv=sys.argv[1:]):
global GLOBAL_DATA
# On sigusr1 print stack trace
print("To show stack trace, run:\nkill -s USR1 %d" % os.getpid())
signal.signal(signal.SIGUSR1, lambda sig, frame: traceback.print_stack())
args = parser.parse_args(argv)
args.out_models_dir = os.path.abspath(args.out_models_dir)
configure_logging(verbose=args.verbosity > 1)
input_predictor = Class1AffinityPredictor.load(args.models_dir)
print("Loaded: %s" % input_predictor)
if args.allele:
alleles = [normalize_allele_name(a) for a in args.allele]
else:
alleles = input_predictor.supported_alleles
metadata_dfs = {}
if args.data:
df = pandas.read_csv(args.data)
print("Loaded data: %s" % (str(df.shape)))
df = df.loc[
(df.peptide.str.len() >= 8) & (df.peptide.str.len() <= 15)
]
print("Subselected to 8-15mers: %s" % (str(df.shape)))
# Allele names in data are assumed to be already normalized.
df = df.loc[df.allele.isin(alleles)].dropna()
print("Selected %d alleles: %s" % (len(alleles), ' '.join(alleles)))
if args.exclude_data:
exclude_df = pandas.read_csv(args.exclude_data)
metadata_dfs["model_selection_exclude"] = exclude_df
print("Loaded exclude data: %s" % (str(df.shape)))
df["_key"] = df.allele + "__" + df.peptide
exclude_df["_key"] = exclude_df.allele + "__" + exclude_df.peptide
df["_excluded"] = df._key.isin(exclude_df._key.unique())
print("Excluding measurements per allele (counts): ")
print(df.groupby("allele")._excluded.sum())
print("Excluding measurements per allele (fractions): ")
print(df.groupby("allele")._excluded.mean())
df = df.loc[~df._excluded]
del df["_excluded"]
del df["_key"]
print("Reduced data to: %s" % (str(df.shape)))
metadata_dfs["model_selection_data"] = df
df["mass_spec"] = df.measurement_source.str.contains(
args.mass_spec_regex)
else:
df = None
if args.out_unselected_predictions:
df["unselected_prediction"] = input_predictor.predict(
alleles=df.allele.values,
peptides=df.peptide.values)
df.to_csv(args.out_unselected_predictions)
print("Wrote: %s" % args.out_unselected_predictions)
selectors = {}
selector_to_model_selection_kwargs = {}
def make_selector(
scoring,
combined_min_contribution_percent=args.combined_min_contribution_percent):
if scoring in selectors:
return (
selectors[scoring], selector_to_model_selection_kwargs[scoring])
start = time.time()
if scoring.startswith("combined:"):
model_selection_kwargs = {
'min_models': args.combined_min_models,
'max_models': args.combined_max_models,
}
component_selectors = []
for component_selector in scoring.split(":", 1)[1].split(","):
component_selectors.append(
make_selector(
component_selector)[0])
selector = CombinedModelSelector(
component_selectors,
min_contribution_percent=combined_min_contribution_percent)
elif scoring == "mse":
model_selection_kwargs = {
'min_models': args.mse_min_models,
'max_models': args.mse_max_models,
}
min_measurements = args.mse_min_measurements
selector = MSEModelSelector(
df=df.loc[~df.mass_spec],
predictor=input_predictor,
min_measurements=min_measurements)
elif scoring == "mass-spec":
mass_spec_df = df.loc[df.mass_spec]
model_selection_kwargs = {
'min_models': args.mass_spec_min_models,
'max_models': args.mass_spec_max_models,
}
min_measurements = args.mass_spec_min_measurements
selector = MassSpecModelSelector(
df=mass_spec_df,
predictor=input_predictor,
min_measurements=min_measurements)
elif scoring == "consensus":
model_selection_kwargs = {
'min_models': args.consensus_min_models,
'max_models': args.consensus_max_models,
}
selector = ConsensusModelSelector(
predictor=input_predictor,
num_peptides_per_length=args.consensus_num_peptides_per_length)
else:
raise ValueError("Unsupported scoring method: %s" % scoring)
print("Instantiated model selector %s in %0.2f sec." % (
scoring, time.time() - start))
return (selector, model_selection_kwargs)
for scoring in args.scoring:
(selector, model_selection_kwargs) = make_selector(scoring)
selectors[scoring] = selector
selector_to_model_selection_kwargs[scoring] = model_selection_kwargs
unselected_accuracy_scorer = None
if args.unselected_accuracy_scorer:
# Force running all selectors by setting combined_min_contribution_percent=0.
unselected_accuracy_scorer = make_selector(
args.unselected_accuracy_scorer,
combined_min_contribution_percent=0.0)[0]
print("Using unselected accuracy scorer: %s" % unselected_accuracy_scorer)
GLOBAL_DATA["unselected_accuracy_scorer"] = unselected_accuracy_scorer
print("Selectors for alleles:")
allele_to_selector = {}
allele_to_model_selection_kwargs = {}
for allele in alleles:
selector = None
for possible_selector in args.scoring:
if selectors[possible_selector].usable_for_allele(allele=allele):
selector = selectors[possible_selector]
print("%20s %s" % (allele, selector.plan_summary(allele)))
break
if selector is None:
raise ValueError("No selectors usable for allele: %s" % allele)
allele_to_selector[allele] = selector
allele_to_model_selection_kwargs[allele] = (
selector_to_model_selection_kwargs[possible_selector])
GLOBAL_DATA["args"] = args
GLOBAL_DATA["input_predictor"] = input_predictor
GLOBAL_DATA["unselected_accuracy_scorer"] = unselected_accuracy_scorer
GLOBAL_DATA["allele_to_selector"] = allele_to_selector
GLOBAL_DATA["allele_to_model_selection_kwargs"] = allele_to_model_selection_kwargs
if not os.path.exists(args.out_models_dir):
print("Attempting to create directory: %s" % args.out_models_dir)
os.mkdir(args.out_models_dir)
print("Done.")
result_predictor = Class1AffinityPredictor(metadata_dataframes=metadata_dfs)
worker_pool = worker_pool_with_gpu_assignments_from_args(args)
start = time.time()
if worker_pool is None:
# Serial run
print("Running in serial.")
results = (
model_select(allele) for allele in alleles)
else:
# Parallel run
random.shuffle(alleles)
results = worker_pool.imap_unordered(
partial(model_select, constant_data=GLOBAL_DATA),
alleles,
chunksize=1)
unselected_summary = []
model_selection_dfs = []
for result in tqdm.tqdm(results, total=len(alleles)):
pprint(result)
summary_dict = dict(result)
summary_dict["retained"] = result["selected"] is not None
del summary_dict["selected"]
unselected_summary.append(summary_dict)
if result['selected'] is not None:
model_selection_dfs.append(
result['selected'].metadata_dataframes['model_selection'])
result_predictor.merge_in_place([result['selected']])
if model_selection_dfs:
model_selection_df = pandas.concat(
model_selection_dfs, ignore_index=True)
model_selection_df["selector"] = model_selection_df.allele.map(
allele_to_selector)
result_predictor.metadata_dataframes["model_selection"] = (
model_selection_df)
result_predictor.metadata_dataframes["unselected_summary"] = (
pandas.DataFrame(unselected_summary))
print("Done model selecting for %d alleles." % len(alleles))
result_predictor.save(args.out_models_dir)
model_selection_time = time.time() - start
if worker_pool:
worker_pool.close()
worker_pool.join()
print("Model selection time %0.2f min." % (model_selection_time / 60.0))
print("Predictor written to: %s" % args.out_models_dir)
class ScrambledPredictor(object):
def __init__(self, predictor):
self.predictor = predictor
self._predictions = {}
self._allele = None
def predict(self, peptides, allele):
if peptides not in self._predictions:
self._predictions[peptides] = pandas.Series(
self.predictor.predict(peptides=peptides, allele=allele))
self._allele = allele
assert allele == self._allele
return self._predictions[peptides].sample(frac=1.0).values
def model_select(allele, constant_data=GLOBAL_DATA):
unselected_accuracy_scorer = constant_data["unselected_accuracy_scorer"]
selector = constant_data["allele_to_selector"][allele]
model_selection_kwargs = constant_data[
"allele_to_model_selection_kwargs"
][allele]
predictor = constant_data["input_predictor"]
args = constant_data["args"]
unselected_accuracy_scorer_samples = constant_data["args"].unselected_accuracy_scorer_num_samples
result_dict = {
"allele": allele
}
unselected_score = None
unselected_score_percentile = None
unselected_score_scrambled_mean = None
if unselected_accuracy_scorer:
unselected_score_function = (
unselected_accuracy_scorer.score_function(allele))
additional_metadata = {}
unselected_score = unselected_score_function(
predictor, additional_metadata_out=additional_metadata)
scrambled_predictor = ScrambledPredictor(predictor)
scrambled_scores = numpy.array([
unselected_score_function(
scrambled_predictor)
for _ in range(unselected_accuracy_scorer_samples)
])
unselected_score_scrambled_mean = scrambled_scores.mean()
unselected_score_percentile = percentileofscore(
scrambled_scores, unselected_score)
print(
"Unselected score and percentile",
allele,
unselected_score,
unselected_score_percentile,
additional_metadata)
result_dict.update(
dict(("unselected_%s" % key, value)
for (key, value)
in additional_metadata.items()))
selected = None
threshold = args.unselected_accuracy_percentile_threshold
if unselected_score_percentile is None or unselected_score_percentile >= threshold:
selected = predictor.model_select(
score_function=selector.score_function(allele=allele),
alleles=[allele],
**model_selection_kwargs)
result_dict["unselected_score_plan"] = (
unselected_accuracy_scorer.plan_summary(allele)
if unselected_accuracy_scorer else None)
result_dict["selector_score_plan"] = selector.plan_summary(allele)
result_dict["unselected_accuracy_score_percentile"] = unselected_score_percentile
result_dict["unselected_score"] = unselected_score
result_dict["unselected_score_scrambled_mean"] = unselected_score_scrambled_mean
result_dict["selected"] = selected
result_dict["num_models"] = len(selected.neural_networks) if selected else None
return result_dict
def cache_encoding(predictor, peptides):
# Encode the peptides for each neural network, so the encoding
# becomes cached.
for network in predictor.neural_networks:
network.peptides_to_network_input(peptides)
class ScoreFunction(object):
"""
Thin wrapper over a score function (Class1AffinityPredictor -> float).
Used to keep a summary string associated with the function.
"""
def __init__(self, function, summary=None):
self.function = function
self.summary = summary if summary else "(n/a)"
def __call__(self, *args, **kwargs):
return self.function(*args, **kwargs)
class CombinedModelSelector(object):
"""
Model selector that computes a weighted average over other model selectors.
"""
def __init__(self, model_selectors, weights=None, min_contribution_percent=1.0):
if weights is None:
weights = numpy.ones(shape=(len(model_selectors),))
self.model_selectors = model_selectors
self.selector_to_weight = dict(zip(self.model_selectors, weights))
self.min_contribution_percent = min_contribution_percent
def usable_for_allele(self, allele):
return any(
selector.usable_for_allele(allele)
for selector in self.model_selectors)
def plan_summary(self, allele):
return self.score_function(allele, dry_run=True).summary
def score_function(self, allele, dry_run=False):
selector_to_max_weighted_score = {}
for selector in self.model_selectors:
weight = self.selector_to_weight[selector]
if selector.usable_for_allele(allele):
max_weighted_score = selector.max_absolute_value(allele) * weight
else:
max_weighted_score = 0
selector_to_max_weighted_score[selector] = max_weighted_score
max_total_score = sum(selector_to_max_weighted_score.values())
# Use only selectors that can contribute >1% to the total score
selectors_to_use = [
selector
for selector in self.model_selectors
if (
selector_to_max_weighted_score[selector] >
max_total_score * self.min_contribution_percent / 100.0)
]
summary = ", ".join([
"%s(|%.3f|)" % (
selector.plan_summary(allele),
selector_to_max_weighted_score[selector])
for selector in selectors_to_use
])
if dry_run:
score = None
else:
score_functions_and_weights = [
(selector.score_function(allele=allele),
self.selector_to_weight[selector])
for selector in selectors_to_use
]
def score(predictor, additional_metadata_out=None):
scores = numpy.array([
score_function(
predictor,
additional_metadata_out=additional_metadata_out) * weight
for (score_function, weight) in score_functions_and_weights
])
if additional_metadata_out is not None:
additional_metadata_out["combined_score_terms"] = str(
list(scores))
return scores.sum()
return ScoreFunction(score, summary=summary)
class ConsensusModelSelector(object):
"""
Model selector that scores sub-ensembles based on their Kendall tau
consistency with the full ensemble over a set of random peptides.
"""
def __init__(
self,
predictor,
num_peptides_per_length=10000,
multiply_score_by_value=10.0):
(min_length, max_length) = predictor.supported_peptide_lengths
peptides = []
for length in range(min_length, max_length + 1):
peptides.extend(
random_peptides(num_peptides_per_length, length=length))
self.peptides = EncodableSequences.create(peptides)
self.predictor = predictor
self.multiply_score_by_value = multiply_score_by_value
cache_encoding(self.predictor, self.peptides)
def usable_for_allele(self, allele):
return True
def max_absolute_value(self, allele):
return self.multiply_score_by_value
def plan_summary(self, allele):
return "consensus (%d points)" % len(self.peptides)
def score_function(self, allele):
full_ensemble_predictions = self.predictor.predict(
allele=allele,
peptides=self.peptides)
def score(predictor, additional_metadata_out=None):
predictions = predictor.predict(
allele=allele,
peptides=self.peptides,
)
tau = kendalltau(predictions, full_ensemble_predictions).correlation
if additional_metadata_out is not None:
additional_metadata_out["score_consensus_tau"] = tau
return tau * self.multiply_score_by_value
return ScoreFunction(
score, summary=self.plan_summary(allele))
class MSEModelSelector(object):
"""
Model selector that uses mean-squared error to score models. Inequalities
are supported.
"""
def __init__(
self,
df,
predictor,
min_measurements=1,
multiply_score_by_data_size=True):
self.df = df
self.predictor = predictor
self.min_measurements = min_measurements
self.multiply_score_by_data_size = multiply_score_by_data_size
def usable_for_allele(self, allele):
return (self.df.allele == allele).sum() >= self.min_measurements
def max_absolute_value(self, allele):
if self.multiply_score_by_data_size:
return (self.df.allele == allele).sum()
else:
return 1.0
def plan_summary(self, allele):
return self.score_function(allele).summary
def score_function(self, allele):
sub_df = self.df.loc[self.df.allele == allele].reset_index(drop=True)
peptides = EncodableSequences.create(sub_df.peptide.values)
def score(predictor, additional_metadata_out=None):
predictions = predictor.predict(
allele=allele,
peptides=peptides,
)
deviations = from_ic50(predictions) - from_ic50(
sub_df.measurement_value)
if 'measurement_inequality' in sub_df.columns:
# Must reverse meaning of inequality since we are working with
# transformed 0-1 values, which are anti-correlated with the ic50s.
# The measurement_inequality column is given in terms of ic50s.
deviations.loc[
(
(sub_df.measurement_inequality == "<") & (deviations > 0)) |
((sub_df.measurement_inequality == ">") & (deviations < 0))
] = 0.0
score_mse = (1 - (deviations ** 2).mean())
if additional_metadata_out is not None:
additional_metadata_out["score_MSE"] = 1 - score_mse
# We additionally include other scores on (=) measurements as
# a convenience
eq_df = sub_df
if 'measurement_inequality' in sub_df.columns:
eq_df = sub_df.loc[
sub_df.measurement_inequality == "="
]
additional_metadata_out["score_pearsonr"] = (
pearsonr(
numpy.log(eq_df.measurement_value.values),
numpy.log(predictions[eq_df.index.values]))[0])
for threshold in [500, 5000, 15000]:
if (eq_df.measurement_value < threshold).nunique() == 2:
additional_metadata_out["score_AUC@%d" % threshold] = (
roc_auc_score(
(eq_df.measurement_value < threshold).values,
-1 * predictions[eq_df.index.values]))
return score_mse * (
len(sub_df) if self.multiply_score_by_data_size else 1)
summary = "mse (%d points)" % (len(sub_df))
return ScoreFunction(score, summary=summary)
class MassSpecModelSelector(object):
"""
Model selector that uses PPV of differentiating decoys from hits from
mass-spec experiments.
"""
def __init__(
self,
df,
predictor,
decoys_per_length=0,
min_measurements=100,
multiply_score_by_data_size=True):
# Index is peptide, columns are alleles
hit_matrix = df.groupby(
["peptide", "allele"]).measurement_value.count().unstack().fillna(
0).astype(bool)
if decoys_per_length:
(min_length, max_length) = predictor.supported_peptide_lengths
decoys = []
for length in range(min_length, max_length + 1):
decoys.extend(
random_peptides(decoys_per_length, length=length))
decoy_matrix = pandas.DataFrame(
index=decoys, columns=hit_matrix.columns, dtype=bool)
decoy_matrix[:] = False
full_matrix = pandas.concat([hit_matrix, decoy_matrix])
else:
full_matrix = hit_matrix
if len(full_matrix) > 0:
full_matrix = full_matrix.sample(frac=1.0).astype(float)
self.df = full_matrix
self.predictor = predictor
self.min_measurements = min_measurements
self.multiply_score_by_data_size = multiply_score_by_data_size
self.peptides = EncodableSequences.create(full_matrix.index.values)
cache_encoding(self.predictor, self.peptides)
@staticmethod
def ppv(y_true, predictions):
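        # Added comment: rank peptides by predicted affinity (ascending, so the
        # strongest predicted binders come first) and report the fraction of
        # true hits among the top int(y_true.sum()) peptides, i.e. the positive
        # predictive value at a cutoff equal to the number of hits.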
df = pandas.DataFrame({"prediction": predictions, "y_true": y_true})
return df.sort_values("prediction", ascending=True)[
: int(y_true.sum())
].y_true.mean()
def usable_for_allele(self, allele):
return allele in self.df.columns and (
self.df[allele].sum() >= self.min_measurements)
def max_absolute_value(self, allele):
if self.multiply_score_by_data_size:
return self.df[allele].sum()
else:
return 1.0
def plan_summary(self, allele):
return self.score_function(allele).summary
def score_function(self, allele):
total_hits = self.df[allele].sum()
total_decoys = (self.df[allele] == 0).sum()
multiplier = total_hits if self.multiply_score_by_data_size else 1
def score(predictor, additional_metadata_out=None):
predictions = predictor.predict(
allele=allele,
peptides=self.peptides,
)
ppv = self.ppv(self.df[allele], predictions)
if additional_metadata_out is not None:
additional_metadata_out["score_mass_spec_PPV"] = ppv
# We additionally compute AUC score.
additional_metadata_out["score_mass_spec_AUC"] = roc_auc_score(
self.df[allele].values, -1 * predictions)
return ppv * multiplier
summary = "mass-spec (%d hits / %d decoys)" % (total_hits, total_decoys)
return ScoreFunction(score, summary=summary)
if __name__ == '__main__':
run()
|
[
"os.path.exists",
"scipy.stats.percentileofscore",
"random.shuffle",
"pandas.DataFrame",
"argparse.ArgumentParser",
"pandas.read_csv",
"traceback.print_stack",
"numpy.log",
"sklearn.metrics.roc_auc_score",
"pandas.concat",
"os.mkdir",
"os.getpid",
"functools.partial",
"os.path.abspath",
"time.time",
"pprint.pprint",
"scipy.stats.kendalltau"
] |
[((1157, 1195), 'argparse.ArgumentParser', 'argparse.ArgumentParser', ([], {'usage': '__doc__'}), '(usage=__doc__)\n', (1180, 1195), False, 'import argparse\n'), ((5568, 5604), 'os.path.abspath', 'os.path.abspath', (['args.out_models_dir'], {}), '(args.out_models_dir)\n', (5583, 5604), False, 'import os\n'), ((12260, 12271), 'time.time', 'time.time', ([], {}), '()\n', (12269, 12271), False, 'import time\n'), ((13610, 13646), 'pandas.DataFrame', 'pandas.DataFrame', (['unselected_summary'], {}), '(unselected_summary)\n', (13626, 13646), False, 'import pandas\n'), ((5970, 5996), 'pandas.read_csv', 'pandas.read_csv', (['args.data'], {}), '(args.data)\n', (5985, 5996), False, 'import pandas\n'), ((8063, 8074), 'time.time', 'time.time', ([], {}), '()\n', (8072, 8074), False, 'import time\n'), ((11925, 11960), 'os.path.exists', 'os.path.exists', (['args.out_models_dir'], {}), '(args.out_models_dir)\n', (11939, 11960), False, 'import os\n'), ((12044, 12073), 'os.mkdir', 'os.mkdir', (['args.out_models_dir'], {}), '(args.out_models_dir)\n', (12052, 12073), False, 'import os\n'), ((12475, 12498), 'random.shuffle', 'random.shuffle', (['alleles'], {}), '(alleles)\n', (12489, 12498), False, 'import random\n'), ((12777, 12791), 'pprint.pprint', 'pprint', (['result'], {}), '(result)\n', (12783, 12791), False, 'from pprint import pprint\n'), ((13263, 13316), 'pandas.concat', 'pandas.concat', (['model_selection_dfs'], {'ignore_index': '(True)'}), '(model_selection_dfs, ignore_index=True)\n', (13276, 13316), False, 'import pandas\n'), ((13789, 13800), 'time.time', 'time.time', ([], {}), '()\n', (13798, 13800), False, 'import time\n'), ((15832, 15885), 'scipy.stats.percentileofscore', 'percentileofscore', (['scrambled_scores', 'unselected_score'], {}), '(scrambled_scores, unselected_score)\n', (15849, 15885), False, 'from scipy.stats import kendalltau, percentileofscore, pearsonr\n'), ((27131, 27194), 'pandas.DataFrame', 'pandas.DataFrame', (["{'prediction': predictions, 'y_true': y_true}"], {}), "({'prediction': predictions, 'y_true': y_true})\n", (27147, 27194), False, 'import pandas\n'), ((5414, 5425), 'os.getpid', 'os.getpid', ([], {}), '()\n', (5423, 5425), False, 'import os\n'), ((5480, 5503), 'traceback.print_stack', 'traceback.print_stack', ([], {}), '()\n', (5501, 5503), False, 'import traceback\n'), ((6471, 6505), 'pandas.read_csv', 'pandas.read_csv', (['args.exclude_data'], {}), '(args.exclude_data)\n', (6486, 6505), False, 'import pandas\n'), ((12557, 12605), 'functools.partial', 'partial', (['model_select'], {'constant_data': 'GLOBAL_DATA'}), '(model_select, constant_data=GLOBAL_DATA)\n', (12564, 12605), False, 'from functools import partial\n'), ((26402, 26472), 'pandas.DataFrame', 'pandas.DataFrame', ([], {'index': 'decoys', 'columns': 'hit_matrix.columns', 'dtype': 'bool'}), '(index=decoys, columns=hit_matrix.columns, dtype=bool)\n', (26418, 26472), False, 'import pandas\n'), ((26552, 26593), 'pandas.concat', 'pandas.concat', (['[hit_matrix, decoy_matrix]'], {}), '([hit_matrix, decoy_matrix])\n', (26565, 26593), False, 'import pandas\n'), ((21991, 22041), 'scipy.stats.kendalltau', 'kendalltau', (['predictions', 'full_ensemble_predictions'], {}), '(predictions, full_ensemble_predictions)\n', (22001, 22041), False, 'from scipy.stats import kendalltau, percentileofscore, pearsonr\n'), ((28411, 28466), 'sklearn.metrics.roc_auc_score', 'roc_auc_score', (['self.df[allele].values', '(-1 * predictions)'], {}), '(self.df[allele].values, -1 * predictions)\n', (28424, 28466), False, 'from sklearn.metrics 
import roc_auc_score\n'), ((10137, 10148), 'time.time', 'time.time', ([], {}), '()\n', (10146, 10148), False, 'import time\n'), ((24817, 24858), 'numpy.log', 'numpy.log', (['eq_df.measurement_value.values'], {}), '(eq_df.measurement_value.values)\n', (24826, 24858), False, 'import numpy\n'), ((24884, 24926), 'numpy.log', 'numpy.log', (['predictions[eq_df.index.values]'], {}), '(predictions[eq_df.index.values])\n', (24893, 24926), False, 'import numpy\n'), ((25171, 25272), 'sklearn.metrics.roc_auc_score', 'roc_auc_score', (['(eq_df.measurement_value < threshold).values', '(-1 * predictions[eq_df.index.values])'], {}), '((eq_df.measurement_value < threshold).values, -1 *\n predictions[eq_df.index.values])\n', (25184, 25272), False, 'from sklearn.metrics import roc_auc_score\n')]
|
import time
from adafruit_circuitplayground.express import cpx
import simpleio
cpx.pixels.auto_write = False
cpx.pixels.brightness = 0.3
# Set these based on your ambient temperature for best results!
minimum_temp = 24
maximum_temp = 30
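# Note (added for clarity): map_range rescales the temperature reading linearly
# from [minimum_temp, maximum_temp] onto [0, 10] and clamps it, so for example a
# reading of 27 C with the bounds above maps to pixel position 5.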
while True:
# temperature value remapped to pixel position
peak = simpleio.map_range(cpx.temperature, minimum_temp, maximum_temp, 0, 10)
print(cpx.temperature)
print(int(peak))
for i in range(0, 10, 1):
if i <= peak:
cpx.pixels[i] = (0, 255, 255)
else:
cpx.pixels[i] = (0, 0, 0)
cpx.pixels.show()
time.sleep(0.05)
|
[
"adafruit_circuitplayground.express.cpx.pixels.show",
"simpleio.map_range",
"time.sleep"
] |
[((314, 384), 'simpleio.map_range', 'simpleio.map_range', (['cpx.temperature', 'minimum_temp', 'maximum_temp', '(0)', '(10)'], {}), '(cpx.temperature, minimum_temp, maximum_temp, 0, 10)\n', (332, 384), False, 'import simpleio\n'), ((584, 601), 'adafruit_circuitplayground.express.cpx.pixels.show', 'cpx.pixels.show', ([], {}), '()\n', (599, 601), False, 'from adafruit_circuitplayground.express import cpx\n'), ((606, 622), 'time.sleep', 'time.sleep', (['(0.05)'], {}), '(0.05)\n', (616, 622), False, 'import time\n')]
|
############################################################
# Copyright 2019 <NAME>
# Licensed under the new BSD (3-clause) license:
#
# https://opensource.org/licenses/BSD-3-Clause
############################################################
############################################################
#
# Initial setup
#
############################################################
import matplotlib.pyplot as plot
import scipy.stats as stats
import numpy
import math
light = "#DCBCBC"
light_highlight = "#C79999"
mid = "#B97C7C"
mid_highlight = "#A25050"
dark = "#8F2727"
dark_highlight = "#7C0000"
green = "#00FF00"
# To facilitate the computation of Markov chain Monte Carlo estimators
# let's define a _Welford accumulator_ that computes empirical summaries
# of a sample in a single pass
def welford_summary(x, L = 100):
summary = [0] * (L + 1)
for n in range(len(x)):
delta = x[n] - summary[0]
summary[0] += delta / (n + 1)
for l in range(L):
if n > l:
summary[l + 1] += delta * (x[n - l] - summary[0])
norm = 1.0 / (len(x) - 1)
for l in range(L): summary[l + 1] *= norm
return summary
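# Added illustrative check (not part of the original script): at lag zero the
# Welford summary reproduces the usual sample mean and the sample variance with
# the 1 / (len(x) - 1) normalization, up to floating point error.  The
# underscore-prefixed names below are placeholders used only for this example.
_check_sample = stats.norm.rvs(0, 1, size=1000)
_check_summary = welford_summary(_check_sample, L=1)
assert numpy.isclose(_check_summary[0], numpy.mean(_check_sample))
assert numpy.isclose(_check_summary[1], numpy.var(_check_sample, ddof=1))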
# We can then use the Welford accumulator output to compute the
# Markov chain Monte Carlo estimators and their properties
def compute_mcmc_stats(x, L = 20):
summary = welford_summary(x, L)
mean = summary[0]
var = summary[1]
acov = summary[1:(L + 1)]
# Compute the effective sample size
rho_hat_s = [0] * L
rho_hat_s[1] = acov[1] / var
# First we transform our autocovariances into Geyer's initial positive sequence
max_s = 1
  for s in [ 2 * i + 1 for i in range((L - 1) // 2) ]:
rho_hat_even = acov[s + 1] / var
    rho_hat_odd = acov[s + 2] / var
max_s = s + 2
if rho_hat_even + rho_hat_odd > 0:
rho_hat_s[s + 1] = rho_hat_even
rho_hat_s[s + 2] = rho_hat_odd
else:
break
# Then we transform this output into Geyer's initial monotone sequence
  for s in [ 2 * i + 3 for i in range((max_s - 2) // 2) ]:
if rho_hat_s[s + 1] + rho_hat_s[s + 2] > rho_hat_s[s - 1] + rho_hat_s[s]:
rho_hat_s[s + 1] = 0.5 * (rho_hat_s[s - 1] + rho_hat_s[s])
rho_hat_s[s + 2] = rho_hat_s[s + 1]
ess = len(x) / (1.0 + 2 * sum(rho_hat_s))
return [mean, math.sqrt(var / ess), math.sqrt(var), ess]
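# Added illustration (not in the original script): for independent draws the
# autocovariances vanish, so the effective sample size reported by
# compute_mcmc_stats should be close to the raw sample size, while for a
# correlated Markov chain it will be noticeably smaller. This helper is a
# hedged sketch that can be called manually to sanity-check the estimator.
def check_estimator_on_iid_draws(num_draws=1000):
  iid_draws = [ stats.norm.rvs(0, 1) for _ in range(num_draws) ]
  # Returns [mean, Monte Carlo standard error, standard deviation, ESS]
  return compute_mcmc_stats(iid_draws)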
# To generate our samples we'll use numpy's pseudo random number
# generator which needs to be seeded to achieve reproducible
# results
numpy.random.seed(seed=8675309)
# To ensure accurate results let's generate pretty large samples
N = 10000
# To see how results scale with dimension we'll consider
# behavior one through ten dimensions
Ds = [ n + 1 for n in range(10) ]
idxs = [ idx for idx in range(Ds[-1]) for r in range(2) ]
plot_Ds = [ D + delta for D in Ds for delta in [-0.5, 0.5]]
############################################################
#
# How does the Random Walk Metropolis algorithm perform
# on a target distribution with a two-dimensional Gaussian
# density function?
#
############################################################
# Target density
def target_lpdf(x):
return - 0.5 * ( (x[0] - 1)**2 + (x[1] + 1)**2 ) \
- 0.5 * 2 * math.log(6.283185307179586)
# Tune proposal density
sigma = 1.4
# A place to store our Markov chain
# D columns for the parameters and one extra column
# for the Metropolis acceptance probability
D = 2
mcmc_samples = [[0] * (D + 1) for _ in range(N)]
# Randomly seed the initial state
mcmc_samples[0][0] = stats.norm.rvs(0, 3)
mcmc_samples[0][1] = stats.norm.rvs(0, 3)
mcmc_samples[0][2] = 1
for n in range(1, N):
x0 = [ mcmc_samples[n - 1][0], mcmc_samples[n - 1][1]]
xp = [ stats.norm.rvs(x0[0], sigma), stats.norm.rvs(x0[1], sigma) ]
# Compute acceptance probability
accept_prob = 1
if target_lpdf(xp) < target_lpdf(x0):
accept_prob = math.exp(target_lpdf(xp) - target_lpdf(x0))
mcmc_samples[n][D] = accept_prob
# Apply Metropolis correction
u = stats.uniform.rvs(0, 1)
if accept_prob > u:
mcmc_samples[n][0] = xp[0]
mcmc_samples[n][1] = xp[1]
else:
mcmc_samples[n][0] = x0[0]
mcmc_samples[n][1] = x0[1]
# Compute MCMC estimator statistics, leaving
# out the first 100 samples as warmup
compute_mcmc_stats([ s[0] for s in mcmc_samples[100:] ])
compute_mcmc_stats([ s[1] for s in mcmc_samples[100:] ])
# Plot convergence of MCMC estimators for each parameter
stride = 250
M = N // stride
iters = [ stride * (i + 1) for i in range(N // stride) ]
x1_mean = [0] * M
x1_se = [0] * M
x2_mean = [0] * M
x2_se = [0] * M
for m in range(M):
running_samples = [ s[0] for s in mcmc_samples[100:iters[m]] ]
mcmc_stats = compute_mcmc_stats(running_samples)
x1_mean[m] = mcmc_stats[0]
x1_se[m] = mcmc_stats[1]
running_samples = [ s[1] for s in mcmc_samples[100:iters[m]] ]
mcmc_stats = compute_mcmc_stats(running_samples)
x2_mean[m] = mcmc_stats[0]
x2_se[m] = mcmc_stats[1]
plot.fill_between(iters,
[ x1_mean[m] - 2 * x1_se[m] for m in range(M) ],
[ x1_mean[m] + 2 * x1_se[m] for m in range(M) ],
facecolor=light, color=light)
plot.plot(iters, x1_mean, color=dark)
plot.plot([iters[0], iters[-1]], [1, 1], color='grey', linestyle='--')
plot.gca().set_xlim([0, N])
plot.gca().set_xlabel("Iteration")
plot.gca().set_ylim([-2, 2])
plot.gca().set_ylabel("Monte Carlo Estimator")
plot.show()
plot.fill_between(iters,
[ x2_mean[m] - 2 * x2_se[m] for m in range(M) ],
[ x2_mean[m] + 2 * x2_se[m] for m in range(M) ],
facecolor=light, color=light)
plot.plot(iters, x2_mean, color=dark)
plot.plot([iters[0], iters[-1]], [-1, -1], color='grey', linestyle='--')
plot.gca().set_xlim([0, N])
plot.gca().set_xlabel("Iteration")
plot.gca().set_ylim([-2, 2])
plot.gca().set_ylabel("Monte Carlo Estimator")
plot.show()
############################################################
#
# How does the Random Walk Metropolis algorithm perform
# on a target distribution with a funnel density function?
#
############################################################
# Target density
def target_lpdf(x):
return - 0.5 * ( x[0]**2 + x[1]**2 + ( (x[2] - x[0]) / math.exp(x[1]) )**2 ) \
- 0.5 * 3 * math.log(6.283185307179586) - 0.5 * x[2]
# Tune proposal density
sigma = 1.4
# A place to store our Markov chain
# D columns for the parameters and one extra column
# for the Metropolis acceptance probability
D = 3
mcmc_samples = [[0] * (D + 1) for _ in range(N)]
# Randomly seed the initial state
mcmc_samples[0][0] = stats.norm.rvs(0, 3)
mcmc_samples[0][1] = stats.norm.rvs(0, 3)
mcmc_samples[0][2] = stats.norm.rvs(0, 3)
mcmc_samples[0][3] = 1
for n in range(1, N):
x0 = [ mcmc_samples[n - 1][0],
mcmc_samples[n - 1][1],
mcmc_samples[n - 1][2]]
xp = [ stats.norm.rvs(x0[0], sigma),
stats.norm.rvs(x0[1], sigma),
stats.norm.rvs(x0[2], sigma) ]
# Compute acceptance probability
accept_prob = 1
if target_lpdf(xp) < target_lpdf(x0):
accept_prob = math.exp(target_lpdf(xp) - target_lpdf(x0))
mcmc_samples[n][D] = accept_prob
# Apply Metropolis correction
u = stats.uniform.rvs(0, 1)
if accept_prob > u:
mcmc_samples[n][0] = xp[0]
mcmc_samples[n][1] = xp[1]
mcmc_samples[n][2] = xp[2]
else:
mcmc_samples[n][0] = x0[0]
mcmc_samples[n][1] = x0[1]
mcmc_samples[n][2] = x0[2]
# Compute MCMC estimator statistics, leaving
# out the first 100 samples as warmup
compute_mcmc_stats([ s[0] for s in mcmc_samples[100:] ])
compute_mcmc_stats([ s[1] for s in mcmc_samples[100:] ])
compute_mcmc_stats([ s[2] for s in mcmc_samples[100:] ])
# Plot convergence of MCMC estimators for each parameter
stride = 250
M = N // stride
iters = [ stride * (i + 1) for i in range(N // stride) ]
mu_mean = [0] * M
mu_se = [0] * M
log_tau_mean = [0] * M
log_tau_se = [0] * M
for m in range(M):
running_samples = [ s[0] for s in mcmc_samples[100:iters[m]] ]
mcmc_stats = compute_mcmc_stats(running_samples)
mu_mean[m] = mcmc_stats[0]
mu_se[m] = mcmc_stats[1]
running_samples = [ s[1] for s in mcmc_samples[100:iters[m]] ]
mcmc_stats = compute_mcmc_stats(running_samples)
log_tau_mean[m] = mcmc_stats[0]
log_tau_se[m] = mcmc_stats[1]
plot.fill_between(iters,
[ mu_mean[m] - 2 * mu_se[m] for m in range(M) ],
[ mu_mean[m] + 2 * mu_se[m] for m in range(M) ],
facecolor=light, color=light)
plot.plot(iters, mu_mean, color=dark)
plot.plot([iters[0], iters[-1]], [0, 0], color='grey', linestyle='--')
plot.gca().set_xlim([0, N])
plot.gca().set_xlabel("Iteration")
plot.gca().set_ylim([-1, 1])
plot.gca().set_ylabel("Monte Carlo Estimator")
plot.show()
plot.fill_between(iters,
[ log_tau_mean[m] - 2 * log_tau_se[m] for m in range(M) ],
[ log_tau_mean[m] + 2 * log_tau_se[m] for m in range(M) ],
facecolor=light, color=light)
plot.plot(iters, log_tau_mean, color=dark)
plot.plot([iters[0], iters[-1]], [0, 0], color='grey', linestyle='--')
plot.gca().set_xlim([0, N])
plot.gca().set_xlabel("Iteration")
plot.gca().set_ylim([-1, 8])
plot.gca().set_ylabel("Monte Carlo Estimator")
plot.show()
############################################################
#
# How does the effective sample size of a Random Walk
# Metropolis Markov chain vary with the dimension of
# the target distribution?
#
############################################################
def target_lpdf(x):
return - 0.5 * sum([ x_n**2 for x_n in x ]) \
- 0.5 * len(x) * math.log(6.283185307179586)
############################################################
# First let's use a constant Markov transition
############################################################
accept_prob_means = [0] * len(Ds)
accept_prob_ses = [0] * len(Ds)
ave_eff_sample_sizes = [0] * len(Ds)
# Tune proposal density
sigma = 1.4
for D in Ds:
# A place to store our Markov chain
# D columns for the parameters and one extra column
# for the Metropolis acceptance probability
mcmc_samples = [[0] * (D + 1) for _ in range(N)]
# Seeding the initial state with an exact sample
# from the target distribution ensures that we
# start in the typical set and avoid having to
# worry about warmup.
for d in range(D):
mcmc_samples[0][d] = stats.norm.rvs(0, 3)
mcmc_samples[0][D] = 1
for n in range(1, N):
x0 = [ mcmc_samples[n - 1][d] for d in range(D) ]
xp = [ stats.norm.rvs(x0[d], sigma) for d in range(D) ]
# Compute acceptance probability
accept_prob = 1
if target_lpdf(xp) < target_lpdf(x0):
accept_prob = math.exp(target_lpdf(xp) - target_lpdf(x0))
mcmc_samples[n][D] = accept_prob
# Apply Metropolis correction
u = stats.uniform.rvs(0, 1)
if accept_prob > u:
mcmc_samples[n][0:D] = xp
else:
mcmc_samples[n][0:D] = x0
# Estimate average acceptance probability
# Compute MCMC estimator statistics
mcmc_stats = compute_mcmc_stats([ s[D] for s in mcmc_samples])
accept_prob_means[D - 1] = mcmc_stats[0]
accept_prob_ses[D - 1] = mcmc_stats[1]
# Estimate effective sample size
eff_sample_sizes = [ compute_mcmc_stats([ s[d] for s in mcmc_samples])[3] \
for d in range(D) ]
ave_eff_sample_sizes[D - 1] = sum(eff_sample_sizes) / D
f, axarr = plot.subplots(1, 2)
axarr[0].set_title("")
axarr[0].fill_between(plot_Ds,
[ accept_prob_means[idx] - 2 * accept_prob_ses[idx] for idx in idxs ],
[ accept_prob_means[idx] + 2 * accept_prob_ses[idx] for idx in idxs ],
facecolor=dark, color=dark)
axarr[0].plot(plot_Ds, [ accept_prob_means[idx] for idx in idxs], color=dark_highlight)
axarr[0].set_xlim([Ds[0], Ds[-1]])
axarr[0].set_xlabel("Dimension")
axarr[0].set_ylim([0, 1])
axarr[0].set_ylabel("Average Acceptance Probability")
axarr[1].set_title("")
axarr[1].plot(plot_Ds, [ ave_eff_sample_sizes[idx] / N for idx in idxs],
color=dark_highlight)
axarr[1].set_xlim([Ds[0], Ds[-1]])
axarr[1].set_xlabel("Dimension")
axarr[1].set_ylim([0, 0.3])
axarr[1].set_ylabel("Average Effective Sample Size Per Iteration")
plot.show()
############################################################
# Now let's use an (approximately) optimally tuned Markov
# transition for each dimension
############################################################
accept_prob_means = [0] * len(Ds)
accept_prob_ses = [0] * len(Ds)
ave_eff_sample_sizes = [0] * len(Ds)
# Approximately optimal proposal tuning
opt_sigmas = [2.5, 1.75, 1.5, 1.2, 1.15, 1.0, 0.95, 0.85, 0.8, 0.75]
# Tune proposal density
sigma = 1.4
for D in Ds:
# A place to store our Markov chain
# D columns for the parameters and one extra column
# for the Metropolis acceptance probability
mcmc_samples = [[0] * (D + 1) for _ in range(N)]
# Seeding the initial state with an exact sample
# from the target distribution ensures that we
# start in the typical set and avoid having to
# worry about warmup.
for d in range(D):
mcmc_samples[0][d] = stats.norm.rvs(0, 3)
mcmc_samples[0][D] = 1
for n in range(1, N):
x0 = [ mcmc_samples[n - 1][d] for d in range(D) ]
xp = [ stats.norm.rvs(x0[d], opt_sigmas[D - 1]) for d in range(D) ]
# Compute acceptance probability
accept_prob = 1
if target_lpdf(xp) < target_lpdf(x0):
accept_prob = math.exp(target_lpdf(xp) - target_lpdf(x0))
mcmc_samples[n][D] = accept_prob
# Apply Metropolis correction
u = stats.uniform.rvs(0, 1)
if accept_prob > u:
mcmc_samples[n][0:D] = xp
else:
mcmc_samples[n][0:D] = x0
# Estimate average acceptance probability
# Compute MCMC estimator statistics
mcmc_stats = compute_mcmc_stats([ s[D] for s in mcmc_samples])
accept_prob_means[D - 1] = mcmc_stats[0]
accept_prob_ses[D - 1] = mcmc_stats[1]
# Estimate effective sample size
eff_sample_sizes = [ compute_mcmc_stats([ s[d] for s in mcmc_samples])[3] \
for d in range(D) ]
ave_eff_sample_sizes[D - 1] = sum(eff_sample_sizes) / D
f, axarr = plot.subplots(1, 2)
axarr[0].set_title("")
axarr[0].fill_between(plot_Ds,
[ accept_prob_means[idx] - 2 * accept_prob_ses[idx] for idx in idxs ],
[ accept_prob_means[idx] + 2 * accept_prob_ses[idx] for idx in idxs ],
facecolor=dark, color=dark)
axarr[0].plot(plot_Ds, [ accept_prob_means[idx] for idx in idxs], color=dark_highlight)
axarr[0].set_xlim([Ds[0], Ds[-1]])
axarr[0].set_xlabel("Dimension")
axarr[0].set_ylim([0, 1])
axarr[0].set_ylabel("Average Acceptance Probability")
axarr[1].set_title("")
axarr[1].plot(plot_Ds, [ ave_eff_sample_sizes[idx] / N for idx in idxs],
color=dark_highlight)
axarr[1].set_xlim([Ds[0], Ds[-1]])
axarr[1].set_xlabel("Dimension")
axarr[1].set_ylim([0, 0.3])
axarr[1].set_ylabel("Average Effective Sample Size Per Iteration")
plot.show()
|
[
"matplotlib.pyplot.gca",
"matplotlib.pyplot.plot",
"math.sqrt",
"scipy.stats.norm.rvs",
"math.log",
"numpy.random.seed",
"math.exp",
"scipy.stats.uniform.rvs",
"matplotlib.pyplot.subplots",
"matplotlib.pyplot.show"
] |
[((2453, 2484), 'numpy.random.seed', 'numpy.random.seed', ([], {'seed': '(8675309)'}), '(seed=8675309)\n', (2470, 2484), False, 'import numpy\n'), ((3493, 3513), 'scipy.stats.norm.rvs', 'stats.norm.rvs', (['(0)', '(3)'], {}), '(0, 3)\n', (3507, 3513), True, 'import scipy.stats as stats\n'), ((3535, 3555), 'scipy.stats.norm.rvs', 'stats.norm.rvs', (['(0)', '(3)'], {}), '(0, 3)\n', (3549, 3555), True, 'import scipy.stats as stats\n'), ((5127, 5164), 'matplotlib.pyplot.plot', 'plot.plot', (['iters', 'x1_mean'], {'color': 'dark'}), '(iters, x1_mean, color=dark)\n', (5136, 5164), True, 'import matplotlib.pyplot as plot\n'), ((5165, 5235), 'matplotlib.pyplot.plot', 'plot.plot', (['[iters[0], iters[-1]]', '[1, 1]'], {'color': '"""grey"""', 'linestyle': '"""--"""'}), "([iters[0], iters[-1]], [1, 1], color='grey', linestyle='--')\n", (5174, 5235), True, 'import matplotlib.pyplot as plot\n'), ((5377, 5388), 'matplotlib.pyplot.show', 'plot.show', ([], {}), '()\n', (5386, 5388), True, 'import matplotlib.pyplot as plot\n'), ((5598, 5635), 'matplotlib.pyplot.plot', 'plot.plot', (['iters', 'x2_mean'], {'color': 'dark'}), '(iters, x2_mean, color=dark)\n', (5607, 5635), True, 'import matplotlib.pyplot as plot\n'), ((5636, 5708), 'matplotlib.pyplot.plot', 'plot.plot', (['[iters[0], iters[-1]]', '[-1, -1]'], {'color': '"""grey"""', 'linestyle': '"""--"""'}), "([iters[0], iters[-1]], [-1, -1], color='grey', linestyle='--')\n", (5645, 5708), True, 'import matplotlib.pyplot as plot\n'), ((5850, 5861), 'matplotlib.pyplot.show', 'plot.show', ([], {}), '()\n', (5859, 5861), True, 'import matplotlib.pyplot as plot\n'), ((6566, 6586), 'scipy.stats.norm.rvs', 'stats.norm.rvs', (['(0)', '(3)'], {}), '(0, 3)\n', (6580, 6586), True, 'import scipy.stats as stats\n'), ((6608, 6628), 'scipy.stats.norm.rvs', 'stats.norm.rvs', (['(0)', '(3)'], {}), '(0, 3)\n', (6622, 6628), True, 'import scipy.stats as stats\n'), ((6650, 6670), 'scipy.stats.norm.rvs', 'stats.norm.rvs', (['(0)', '(3)'], {}), '(0, 3)\n', (6664, 6670), True, 'import scipy.stats as stats\n'), ((8474, 8511), 'matplotlib.pyplot.plot', 'plot.plot', (['iters', 'mu_mean'], {'color': 'dark'}), '(iters, mu_mean, color=dark)\n', (8483, 8511), True, 'import matplotlib.pyplot as plot\n'), ((8512, 8582), 'matplotlib.pyplot.plot', 'plot.plot', (['[iters[0], iters[-1]]', '[0, 0]'], {'color': '"""grey"""', 'linestyle': '"""--"""'}), "([iters[0], iters[-1]], [0, 0], color='grey', linestyle='--')\n", (8521, 8582), True, 'import matplotlib.pyplot as plot\n'), ((8724, 8735), 'matplotlib.pyplot.show', 'plot.show', ([], {}), '()\n', (8733, 8735), True, 'import matplotlib.pyplot as plot\n'), ((8965, 9007), 'matplotlib.pyplot.plot', 'plot.plot', (['iters', 'log_tau_mean'], {'color': 'dark'}), '(iters, log_tau_mean, color=dark)\n', (8974, 9007), True, 'import matplotlib.pyplot as plot\n'), ((9008, 9078), 'matplotlib.pyplot.plot', 'plot.plot', (['[iters[0], iters[-1]]', '[0, 0]'], {'color': '"""grey"""', 'linestyle': '"""--"""'}), "([iters[0], iters[-1]], [0, 0], color='grey', linestyle='--')\n", (9017, 9078), True, 'import matplotlib.pyplot as plot\n'), ((9220, 9231), 'matplotlib.pyplot.show', 'plot.show', ([], {}), '()\n', (9229, 9231), True, 'import matplotlib.pyplot as plot\n'), ((11377, 11396), 'matplotlib.pyplot.subplots', 'plot.subplots', (['(1)', '(2)'], {}), '(1, 2)\n', (11390, 11396), True, 'import matplotlib.pyplot as plot\n'), ((12226, 12237), 'matplotlib.pyplot.show', 'plot.show', ([], {}), '()\n', (12235, 12237), True, 'import matplotlib.pyplot as plot\n'), ((14164, 
14183), 'matplotlib.pyplot.subplots', 'plot.subplots', (['(1)', '(2)'], {}), '(1, 2)\n', (14177, 14183), True, 'import matplotlib.pyplot as plot\n'), ((15013, 15024), 'matplotlib.pyplot.show', 'plot.show', ([], {}), '()\n', (15022, 15024), True, 'import matplotlib.pyplot as plot\n'), ((3961, 3984), 'scipy.stats.uniform.rvs', 'stats.uniform.rvs', (['(0)', '(1)'], {}), '(0, 1)\n', (3978, 3984), True, 'import scipy.stats as stats\n'), ((7169, 7192), 'scipy.stats.uniform.rvs', 'stats.uniform.rvs', (['(0)', '(1)'], {}), '(0, 1)\n', (7186, 7192), True, 'import scipy.stats as stats\n'), ((2273, 2293), 'math.sqrt', 'math.sqrt', (['(var / ess)'], {}), '(var / ess)\n', (2282, 2293), False, 'import math\n'), ((2295, 2309), 'math.sqrt', 'math.sqrt', (['var'], {}), '(var)\n', (2304, 2309), False, 'import math\n'), ((3668, 3696), 'scipy.stats.norm.rvs', 'stats.norm.rvs', (['x0[0]', 'sigma'], {}), '(x0[0], sigma)\n', (3682, 3696), True, 'import scipy.stats as stats\n'), ((3698, 3726), 'scipy.stats.norm.rvs', 'stats.norm.rvs', (['x0[1]', 'sigma'], {}), '(x0[1], sigma)\n', (3712, 3726), True, 'import scipy.stats as stats\n'), ((5237, 5247), 'matplotlib.pyplot.gca', 'plot.gca', ([], {}), '()\n', (5245, 5247), True, 'import matplotlib.pyplot as plot\n'), ((5265, 5275), 'matplotlib.pyplot.gca', 'plot.gca', ([], {}), '()\n', (5273, 5275), True, 'import matplotlib.pyplot as plot\n'), ((5300, 5310), 'matplotlib.pyplot.gca', 'plot.gca', ([], {}), '()\n', (5308, 5310), True, 'import matplotlib.pyplot as plot\n'), ((5329, 5339), 'matplotlib.pyplot.gca', 'plot.gca', ([], {}), '()\n', (5337, 5339), True, 'import matplotlib.pyplot as plot\n'), ((5710, 5720), 'matplotlib.pyplot.gca', 'plot.gca', ([], {}), '()\n', (5718, 5720), True, 'import matplotlib.pyplot as plot\n'), ((5738, 5748), 'matplotlib.pyplot.gca', 'plot.gca', ([], {}), '()\n', (5746, 5748), True, 'import matplotlib.pyplot as plot\n'), ((5773, 5783), 'matplotlib.pyplot.gca', 'plot.gca', ([], {}), '()\n', (5781, 5783), True, 'import matplotlib.pyplot as plot\n'), ((5802, 5812), 'matplotlib.pyplot.gca', 'plot.gca', ([], {}), '()\n', (5810, 5812), True, 'import matplotlib.pyplot as plot\n'), ((6827, 6855), 'scipy.stats.norm.rvs', 'stats.norm.rvs', (['x0[0]', 'sigma'], {}), '(x0[0], sigma)\n', (6841, 6855), True, 'import scipy.stats as stats\n'), ((6867, 6895), 'scipy.stats.norm.rvs', 'stats.norm.rvs', (['x0[1]', 'sigma'], {}), '(x0[1], sigma)\n', (6881, 6895), True, 'import scipy.stats as stats\n'), ((6906, 6934), 'scipy.stats.norm.rvs', 'stats.norm.rvs', (['x0[2]', 'sigma'], {}), '(x0[2], sigma)\n', (6920, 6934), True, 'import scipy.stats as stats\n'), ((8584, 8594), 'matplotlib.pyplot.gca', 'plot.gca', ([], {}), '()\n', (8592, 8594), True, 'import matplotlib.pyplot as plot\n'), ((8612, 8622), 'matplotlib.pyplot.gca', 'plot.gca', ([], {}), '()\n', (8620, 8622), True, 'import matplotlib.pyplot as plot\n'), ((8647, 8657), 'matplotlib.pyplot.gca', 'plot.gca', ([], {}), '()\n', (8655, 8657), True, 'import matplotlib.pyplot as plot\n'), ((8676, 8686), 'matplotlib.pyplot.gca', 'plot.gca', ([], {}), '()\n', (8684, 8686), True, 'import matplotlib.pyplot as plot\n'), ((9080, 9090), 'matplotlib.pyplot.gca', 'plot.gca', ([], {}), '()\n', (9088, 9090), True, 'import matplotlib.pyplot as plot\n'), ((9108, 9118), 'matplotlib.pyplot.gca', 'plot.gca', ([], {}), '()\n', (9116, 9118), True, 'import matplotlib.pyplot as plot\n'), ((9143, 9153), 'matplotlib.pyplot.gca', 'plot.gca', ([], {}), '()\n', (9151, 9153), True, 'import matplotlib.pyplot as plot\n'), ((9172, 9182), 
'matplotlib.pyplot.gca', 'plot.gca', ([], {}), '()\n', (9180, 9182), True, 'import matplotlib.pyplot as plot\n'), ((10352, 10372), 'scipy.stats.norm.rvs', 'stats.norm.rvs', (['(0)', '(3)'], {}), '(0, 3)\n', (10366, 10372), True, 'import scipy.stats as stats\n'), ((10786, 10809), 'scipy.stats.uniform.rvs', 'stats.uniform.rvs', (['(0)', '(1)'], {}), '(0, 1)\n', (10803, 10809), True, 'import scipy.stats as stats\n'), ((13127, 13147), 'scipy.stats.norm.rvs', 'stats.norm.rvs', (['(0)', '(3)'], {}), '(0, 3)\n', (13141, 13147), True, 'import scipy.stats as stats\n'), ((13573, 13596), 'scipy.stats.uniform.rvs', 'stats.uniform.rvs', (['(0)', '(1)'], {}), '(0, 1)\n', (13590, 13596), True, 'import scipy.stats as stats\n'), ((3184, 3211), 'math.log', 'math.log', (['(6.283185307179586)'], {}), '(6.283185307179586)\n', (3192, 3211), False, 'import math\n'), ((9588, 9615), 'math.log', 'math.log', (['(6.283185307179586)'], {}), '(6.283185307179586)\n', (9596, 9615), False, 'import math\n'), ((10491, 10519), 'scipy.stats.norm.rvs', 'stats.norm.rvs', (['x0[d]', 'sigma'], {}), '(x0[d], sigma)\n', (10505, 10519), True, 'import scipy.stats as stats\n'), ((13266, 13306), 'scipy.stats.norm.rvs', 'stats.norm.rvs', (['x0[d]', 'opt_sigmas[D - 1]'], {}), '(x0[d], opt_sigmas[D - 1])\n', (13280, 13306), True, 'import scipy.stats as stats\n'), ((6244, 6271), 'math.log', 'math.log', (['(6.283185307179586)'], {}), '(6.283185307179586)\n', (6252, 6271), False, 'import math\n'), ((6199, 6213), 'math.exp', 'math.exp', (['x[1]'], {}), '(x[1])\n', (6207, 6213), False, 'import math\n')]
|
import vigorish.database as db
from vigorish.enums import DataSet, ScrapeCondition
from vigorish.scrape.brooks_pitchfx.parse_html import parse_pitchfx_log
from vigorish.scrape.scrape_task import ScrapeTaskABC
from vigorish.status.update_status_brooks_pitchfx import update_status_brooks_pitchfx_log
from vigorish.util.dt_format_strings import DATE_ONLY_2
from vigorish.util.result import Result
class ScrapeBrooksPitchFx(ScrapeTaskABC):
def __init__(self, app, db_job):
self.data_set = DataSet.BROOKS_PITCHFX
self.req_data_set = DataSet.BROOKS_PITCH_LOGS
super().__init__(app, db_job)
def check_prerequisites(self, game_date):
brooks_pitch_logs = db.DateScrapeStatus.verify_all_brooks_pitch_logs_scraped_for_date(
self.db_session, game_date
)
if brooks_pitch_logs:
return Result.Ok()
date_str = game_date.strftime(DATE_ONLY_2)
error = (
f"Brooks pitch logs for date {date_str} have not been scraped, unable to scrape "
"Brooks pitchfx data until this has been done."
)
return Result.Fail(error)
def check_current_status(self, game_date):
if self.scrape_condition == ScrapeCondition.ALWAYS:
return Result.Ok()
scraped_brooks_pitchfx = db.DateScrapeStatus.verify_all_brooks_pitchfx_scraped_for_date(
self.db_session, game_date
)
return Result.Ok() if not scraped_brooks_pitchfx else Result.Fail("skip")
def parse_scraped_html(self):
parsed = 0
for game_date in self.date_range:
pitch_logs_for_date = self.scraped_data.get_all_brooks_pitch_logs_for_date(game_date)
if not pitch_logs_for_date:
date_str = game_date.strftime(DATE_ONLY_2)
error = f"Failed to retrieve {self.req_data_set} for date: {date_str}"
return Result.Fail(error)
for pitch_logs_for_game in pitch_logs_for_date:
game_id = pitch_logs_for_game.bbref_game_id
self.spinner.text = self.url_tracker.parse_html_report(parsed, game_id)
for pitch_log in pitch_logs_for_game.pitch_logs:
if not pitch_log.parsed_all_info:
continue
if pitch_log.pitch_app_id not in self.url_tracker.parse_url_ids:
continue
html = self.url_tracker.get_html(pitch_log.pitch_app_id)
result = parse_pitchfx_log(html, pitch_log)
if result.failure:
return result
pitchfx_log = result.value
result = self.scraped_data.save_json(self.data_set, pitchfx_log)
if result.failure:
return Result.Fail(f"Error! {result.error} (ID: {pitch_log.pitch_app_id})")
result = self.update_status(pitchfx_log)
if result.failure:
return Result.Fail(f"Error! {result.error} (ID: {pitch_log.pitch_app_id})")
parsed += 1
self.spinner.text = self.url_tracker.parse_html_report(parsed, game_id)
self.db_session.commit()
return Result.Ok()
def parse_html(self, url_details):
pass
def update_status(self, parsed_data):
result = update_status_brooks_pitchfx_log(self.db_session, parsed_data)
if result.failure:
return result
self.db_session.commit()
return Result.Ok()
|
[
"vigorish.util.result.Result.Fail",
"vigorish.status.update_status_brooks_pitchfx.update_status_brooks_pitchfx_log",
"vigorish.database.DateScrapeStatus.verify_all_brooks_pitchfx_scraped_for_date",
"vigorish.scrape.brooks_pitchfx.parse_html.parse_pitchfx_log",
"vigorish.util.result.Result.Ok",
"vigorish.database.DateScrapeStatus.verify_all_brooks_pitch_logs_scraped_for_date"
] |
[((690, 788), 'vigorish.database.DateScrapeStatus.verify_all_brooks_pitch_logs_scraped_for_date', 'db.DateScrapeStatus.verify_all_brooks_pitch_logs_scraped_for_date', (['self.db_session', 'game_date'], {}), '(self.\n db_session, game_date)\n', (755, 788), True, 'import vigorish.database as db\n'), ((1115, 1133), 'vigorish.util.result.Result.Fail', 'Result.Fail', (['error'], {}), '(error)\n', (1126, 1133), False, 'from vigorish.util.result import Result\n'), ((1306, 1401), 'vigorish.database.DateScrapeStatus.verify_all_brooks_pitchfx_scraped_for_date', 'db.DateScrapeStatus.verify_all_brooks_pitchfx_scraped_for_date', (['self.db_session', 'game_date'], {}), '(self.\n db_session, game_date)\n', (1368, 1401), True, 'import vigorish.database as db\n'), ((3274, 3285), 'vigorish.util.result.Result.Ok', 'Result.Ok', ([], {}), '()\n', (3283, 3285), False, 'from vigorish.util.result import Result\n'), ((3399, 3461), 'vigorish.status.update_status_brooks_pitchfx.update_status_brooks_pitchfx_log', 'update_status_brooks_pitchfx_log', (['self.db_session', 'parsed_data'], {}), '(self.db_session, parsed_data)\n', (3431, 3461), False, 'from vigorish.status.update_status_brooks_pitchfx import update_status_brooks_pitchfx_log\n'), ((3563, 3574), 'vigorish.util.result.Result.Ok', 'Result.Ok', ([], {}), '()\n', (3572, 3574), False, 'from vigorish.util.result import Result\n'), ((855, 866), 'vigorish.util.result.Result.Ok', 'Result.Ok', ([], {}), '()\n', (864, 866), False, 'from vigorish.util.result import Result\n'), ((1261, 1272), 'vigorish.util.result.Result.Ok', 'Result.Ok', ([], {}), '()\n', (1270, 1272), False, 'from vigorish.util.result import Result\n'), ((1434, 1445), 'vigorish.util.result.Result.Ok', 'Result.Ok', ([], {}), '()\n', (1443, 1445), False, 'from vigorish.util.result import Result\n'), ((1481, 1500), 'vigorish.util.result.Result.Fail', 'Result.Fail', (['"""skip"""'], {}), "('skip')\n", (1492, 1500), False, 'from vigorish.util.result import Result\n'), ((1904, 1922), 'vigorish.util.result.Result.Fail', 'Result.Fail', (['error'], {}), '(error)\n', (1915, 1922), False, 'from vigorish.util.result import Result\n'), ((2507, 2541), 'vigorish.scrape.brooks_pitchfx.parse_html.parse_pitchfx_log', 'parse_pitchfx_log', (['html', 'pitch_log'], {}), '(html, pitch_log)\n', (2524, 2541), False, 'from vigorish.scrape.brooks_pitchfx.parse_html import parse_pitchfx_log\n'), ((2821, 2889), 'vigorish.util.result.Result.Fail', 'Result.Fail', (['f"""Error! {result.error} (ID: {pitch_log.pitch_app_id})"""'], {}), "(f'Error! {result.error} (ID: {pitch_log.pitch_app_id})')\n", (2832, 2889), False, 'from vigorish.util.result import Result\n'), ((3021, 3089), 'vigorish.util.result.Result.Fail', 'Result.Fail', (['f"""Error! {result.error} (ID: {pitch_log.pitch_app_id})"""'], {}), "(f'Error! {result.error} (ID: {pitch_log.pitch_app_id})')\n", (3032, 3089), False, 'from vigorish.util.result import Result\n')]
|
import numpy as np
from torch import nn
def layer_init(layer, std=np.sqrt(2), bias_const=0.0):
"""
Simple function to init layers
"""
nn.init.orthogonal_(layer.weight, std)
nn.init.constant_(layer.bias, bias_const)
return layer
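# Added usage sketch (not part of the original module; the layer sizes and std
# value are illustrative assumptions): layer_init is meant to wrap freshly
# constructed layers, e.g. when building a small policy/value network.
if __name__ == "__main__":
    example_layer = layer_init(nn.Linear(4, 2), std=0.01)
    print(example_layer.weight.shape, example_layer.bias)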
|
[
"torch.nn.init.orthogonal_",
"numpy.sqrt",
"torch.nn.init.constant_"
] |
[((68, 78), 'numpy.sqrt', 'np.sqrt', (['(2)'], {}), '(2)\n', (75, 78), True, 'import numpy as np\n'), ((152, 190), 'torch.nn.init.orthogonal_', 'nn.init.orthogonal_', (['layer.weight', 'std'], {}), '(layer.weight, std)\n', (171, 190), False, 'from torch import nn\n'), ((195, 236), 'torch.nn.init.constant_', 'nn.init.constant_', (['layer.bias', 'bias_const'], {}), '(layer.bias, bias_const)\n', (212, 236), False, 'from torch import nn\n')]
|
"""
Test script for src=9 provisioning
Below are some odd examples and notes:
Adding a class
{
'src': '9',
'uln': 'Githens',
'ufn': 'Steven',
'aid': '56021',
'utp': '2',
'said': '56021',
'fid': '2',
'username': 'swgithen',
'ctl': 'CourseTitleb018b622-b425-4af7-bb3d-d0d2b4deb35c',
'diagnostic': '0',
'encrypt': '0',
'uem': '<EMAIL>',
'cid': 'CourseTitleb018b622-b425-4af7-bb3d-d0d2b4deb35c',
'fcmd': '2'
}
{rmessage=Successful!, userid=17463901, classid=2836785, rcode=21}
Adding an assignment
{
'fid': '4',
'diagnostic': '0',
'ufn': 'Steven',
'uln': 'Githens',
'username': 'swgithen',
'assignid': 'AssignmentTitlec717957d-254f-4d6d-a64c-952e630db872',
'aid': '56021',
'src': '9',
'cid': 'CourseTitleb018b622-b425-4af7-bb3d-d0d2b4deb35c', 'said': '56021', 'dtstart': '20091225', 'encrypt': '0', 'assign': 'AssignmentTitlec717957d-254f-4d6d-a64c-952e630db872', 'uem': '<EMAIL>', 'utp': '2', 'fcmd': '2', 'ctl': 'CourseTitleb018b622-b425-4af7-bb3d-d0d2b4deb35c', 'dtdue': '20100101'}
{rmessage=Successful!, userid=17463901, classid=2836785, assignmentid=7902977, rcode=41}
Adding an assignment with another inst
{'fid': '4', 'diagnostic': '0', 'ufn': 'StevenIU', 'uln': 'GithensIU', 'username': 'sgithens', 'assignid': 'AssignmentTitle5ae51e10-fd60-4720-931b-ed4f58057d00', 'aid': '56021', 'src': '9', 'cid': '2836785', 'said': '56021', 'dtstart': '20091225', 'encrypt': '0', 'assign': 'AssignmentTitle5ae51e10-fd60-4720-931b-ed4f58057d00', 'uem': '<EMAIL>', 'utp': '2', 'fcmd': '2', 'ctl': 'CourseTitleb018b622-b425-4af7-bb3d-d0d2b4deb35c', 'dtdue': '20100101'}
{rmessage=Successful!, userid=17463902, classid=2836786, assignmentid=7902978, rcode=41}
Adding a class
{'src': '9', 'uln': 'Githens', 'ufn': 'Steven', 'aid': '56021', 'utp': '2', 'said': '56021', 'fid': '2', 'username': 'swgithen', 'ctl': 'CourseTitle46abd163-7464-4d21-a2c0-90c5af3312ab', 'diagnostic': '0', 'encrypt': '0', 'uem': '<EMAIL>', 'fcmd': '2'}
{rmessage=Successful!, userid=17259618, classid=2836733, rcode=21}
Adding an assignment
{'fid': '4', 'diagnostic': '0', 'ufn': 'Steven', 'uln': 'Githens', 'username': 'swgithen', 'assignid': 'AssignmentTitlec4f211c1-2c38-4daf-86dc-3c57c6ef5b7b', 'aid': '56021', 'src': '9', 'cid': '2836733', 'said': '56021', 'dtstart': '20091225', 'encrypt': '0', 'assign': 'AssignmentTitlec4f211c1-2c38-4daf-86dc-3c57c6ef5b7b', 'uem': '<EMAIL>', 'utp': '2', 'fcmd': '2', 'ctl': 'CourseTitle46abd163-7464-4d21-a2c0-90c5af3312ab', 'dtdue': '20100101'}
{rmessage=Successful!, userid=17463581, classid=2836734, assignmentid=7902887, rcode=41}
Adding an assignment with another inst
{'fid': '4', 'diagnostic': '0', 'ufn': 'StevenIU', 'uln': 'GithensIU', 'username': 'sgithens', 'assignid': 'AssignmentTitle2650fcca-b96e-42bd-926e-63660076d2ad', 'aid': '56021', 'src': '9', 'cid': '2836733', 'said': '56021', 'dtstart': '20091225', 'encrypt': '0', 'assign': 'AssignmentTitle2650fcca-b96e-42bd-926e-63660076d2ad', 'uem': '<EMAIL>', 'utp': '2', 'fcmd': '2', 'ctl': 'CourseTitle46abd163-7464-4d21-a2c0-90c5af3312ab', 'dtdue': '20100101'}
{rmessage=Successful!, userid=17463581, classid=2836734, assignmentid=7902888, rcode=41}
"""
import unittest
import random
import sys
from org.sakaiproject.component.cover import ComponentManager
from java.net import InetSocketAddress, Proxy, InetAddress
from java.util import HashMap
debug_proxy = Proxy(Proxy.Type.HTTP, InetSocketAddress(InetAddress.getByName("127.0.0.1"),8008))
tiireview_serv = ComponentManager.get("org.sakaiproject.contentreview.service.ContentReviewService")
class SakaiUuid(object):
"""My Current Jython impl doens't seem to have UUID, so re-implementing it
for now"""
def __init__(self):
self.idmanager = ComponentManager.get("org.sakaiproject.id.api.IdManager")
def uuid1(self):
return self.idmanager.createUuid()
uuid = SakaiUuid()
def getJavaMap(d=None,**kwargs):
m = HashMap()
if d is not None:
for key,val in d.iteritems():
m.put(key,val)
for key,val in kwargs.iteritems():
m.put(key,val)
return m
defaults = {
"aid": "56021",
"said": "56021",
"diagnostic": "0",
"encrypt": "0",
"src": "9"
}
userdummy = {
"uem": "<EMAIL>",
"ufn": "Stevenaabb1234",
"uln": "Githensaaabb234",
"utp": "2",
"uid": "1979092312341234124aabb",
"username": "swgithenaabb1234124"
}
user = {
"uem": "<EMAIL>",
"ufn": "Steven",
"uln": "Githens",
"utp": "2",
#"uid": "19790923",
"username": "swgithen"
}
user2 = {
"uem": "<EMAIL>",
"ufn": "StevenIU",
"uln": "GithensIU",
"utp": "2",
"username": "sgithens"
}
adduser = {
"fcmd" : "2",
"fid" : "1"
}
def callTIIReviewServ(params):
"""Use the Sakai Turnitin Service to make a raw call to TII with the
dictionary of parameters. Returns the API results in map/dict form."""
return tiireview_serv.callTurnitinWDefaultsReturnMap(getJavaMap(params))
def makeNewCourseTitle():
"Make and return a new random title to use for integration test courses"
return "CourseTitle"+str(uuid.uuid1())
def makeNewAsnnTitle():
"Make and return a new random title to use for integration test assignments"
return "AssignmentTitle"+str(uuid.uuid1())
def addSampleInst():
"""This will add/update a user to Turnitin. A successful return looks as
follows:
{rmessage=Successful!, userid=17259618, rcode=11}
    It is important to note that the userid returned is the userid of whoever made
this API call, and not necessarily the user that was just added.
"""
adduser_cmd = {}
adduser_cmd.update(adduser)
adduser_cmd.update(user)
adduser_cmd.update(defaults)
return callTIIReviewServ(adduser_cmd)
def addSampleClass():
"""Add a simple class using Sakai Source 9 parameters.
Successful results should look as follows:
{rmessage=Successful!, userid=17259618, classid=2833470, rcode=21}
"""
addclass_cmd = {}
addclass_cmd.update(user)
addclass_cmd.update(defaults)
addclass_cmd.update({
"ctl": makeNewCourseTitle(),
"utp":"2",
"fid":"2",
"fcmd":"2"
})
return callTIIReviewServ(addclass_cmd)
def addSampleAssignment():
"""Add a simple assignment."""
course_title = makeNewCourseTitle()
addclass_cmd = {}
addclass_cmd.update(user)
addclass_cmd.update(defaults)
addclass_cmd.update({
"ctl": course_title,
"cid": course_title,
"utp":"2",
"fid":"2",
"fcmd":"2"
})
print("Adding a class\n"+str(addclass_cmd))
addclass_results = callTIIReviewServ(addclass_cmd)
print(addclass_results)
cid = addclass_results["classid"]
asnn_title = makeNewAsnnTitle()
addasnn_cmd = {}
addasnn_cmd.update(user)
addasnn_cmd.update(defaults)
addasnn_cmd.update({
"fid":"4",
"fcmd":"2",
"ctl":course_title,
"assign":asnn_title,
"assignid":asnn_title,
"utp":"2",
"dtstart":"20091225",
"dtdue":"20100101",
"cid":course_title
#"ced":"20110101"
})
print("Adding an assignment\n"+str(addasnn_cmd))
print(callTIIReviewServ(addasnn_cmd))
# Trying with a second instructor now
asnn_title = makeNewAsnnTitle()
addasnn_cmd = {}
addasnn_cmd.update(user2)
addasnn_cmd.update(defaults)
addasnn_cmd.update({
"fid":"4",
"fcmd":"2",
"ctl":course_title,
"assign":asnn_title,
"assignid":asnn_title,
"utp":"2",
"dtstart":"20091225",
"dtdue":"20100101",
"cid":cid
#"ced":"20110101"
})
print("Adding an assignment with another inst\n"+str(addasnn_cmd))
print(callTIIReviewServ(addasnn_cmd))
# Temporarily change to straight HTTP so I can intercept with WebScarab to get a parameter dump
#tiiresult = tiireview_serv.callTurnitinReturnMap("http://www.turnitin.com/api.asp?",
# getJavaMap(adduser_cmd), "sakai123", debug_proxy
# );
class TestRawTurnitinSource9(unittest.TestCase):
"""
This set of test cases is going to flex using the raw Turnitin API by
sending the hand crafted maps to the server and examing the return results.
Additionally all these tests will use the source 9 setup.
"""
def setUp(self):
self.tiireview_serv = ComponentManager.get("org.sakaiproject.contentreview.service.ContentReviewService")
def testAdduser(self):
results = addSampleInst()
self.assertEquals(results["rmessage"],"Successful!")
self.assertEquals(results["rcode"],"11")
def testAddclass(self):
results = addSampleClass()
self.assertEquals(results["rmessage"],"Successful!")
self.assertEquals(results["rcode"],"21")
def main(args):
if len(args) > 0 and args[0] == "runtests":
print("Running the tests")
tii_suites = []
tii_suites.append(unittest.TestLoader().loadTestsFromTestCase(TestRawTurnitinSource9))
alltests = unittest.TestSuite(tii_suites)
unittest.TextTestRunner(verbosity=2).run(alltests)
else:
addSampleAssignment()
if __name__ == "__main__":
main(sys.argv[1:])
|
[
"unittest.TestSuite",
"java.net.InetAddress.getByName",
"java.util.HashMap",
"org.sakaiproject.component.cover.ComponentManager.get",
"unittest.TextTestRunner",
"unittest.TestLoader"
] |
[((3507, 3595), 'org.sakaiproject.component.cover.ComponentManager.get', 'ComponentManager.get', (['"""org.sakaiproject.contentreview.service.ContentReviewService"""'], {}), "(\n 'org.sakaiproject.contentreview.service.ContentReviewService')\n", (3527, 3595), False, 'from org.sakaiproject.component.cover import ComponentManager\n'), ((3947, 3956), 'java.util.HashMap', 'HashMap', ([], {}), '()\n', (3954, 3956), False, 'from java.util import HashMap\n'), ((3447, 3481), 'java.net.InetAddress.getByName', 'InetAddress.getByName', (['"""127.0.0.1"""'], {}), "('127.0.0.1')\n", (3468, 3481), False, 'from java.net import InetSocketAddress, Proxy, InetAddress\n'), ((3762, 3819), 'org.sakaiproject.component.cover.ComponentManager.get', 'ComponentManager.get', (['"""org.sakaiproject.id.api.IdManager"""'], {}), "('org.sakaiproject.id.api.IdManager')\n", (3782, 3819), False, 'from org.sakaiproject.component.cover import ComponentManager\n'), ((8457, 8545), 'org.sakaiproject.component.cover.ComponentManager.get', 'ComponentManager.get', (['"""org.sakaiproject.contentreview.service.ContentReviewService"""'], {}), "(\n 'org.sakaiproject.contentreview.service.ContentReviewService')\n", (8477, 8545), False, 'from org.sakaiproject.component.cover import ComponentManager\n'), ((9134, 9164), 'unittest.TestSuite', 'unittest.TestSuite', (['tii_suites'], {}), '(tii_suites)\n', (9152, 9164), False, 'import unittest\n'), ((9173, 9209), 'unittest.TextTestRunner', 'unittest.TextTestRunner', ([], {'verbosity': '(2)'}), '(verbosity=2)\n', (9196, 9209), False, 'import unittest\n'), ((9046, 9067), 'unittest.TestLoader', 'unittest.TestLoader', ([], {}), '()\n', (9065, 9067), False, 'import unittest\n')]
|
import logging
from collections.abc import Generator
from typing import Dict
from spanner import ems_spanner_client
from tenacity import retry, stop_after_attempt, wait_fixed
class SpannerChecker:
STOP_AFTER_ATTEMPT_SECS = 15
WAIT_FIXED = 3
def __init__(self, project_id: str, instance_id: str, db_name: str) -> None:
self._client = ems_spanner_client.EmsSpannerClient(project_id, instance_id, db_name)
def execute_sql(self, query: str) -> Generator:
logging.info(f"Executing query: {query}")
return self._client.execute_sql(query)
def execute_update(self, query: str):
logging.info(f"Executing update: {query}")
self._client.execute_update(query)
def has_row_for(self, table_name: str, conditions: Dict):
@retry(stop=stop_after_attempt(self.STOP_AFTER_ATTEMPT_SECS), wait=wait_fixed(self.WAIT_FIXED))
def is_found(query: str):
if list(self.execute_sql(query))[0][0] == 0:
raise ValueError("Spanner table row not found.")
return True
query = self._compose_query(table_name, conditions)
return is_found(query)
@staticmethod
def _compose_query(table_name, conditions) -> str:
normalized_conditions = []
for key, value in conditions.items():
quoted_value = f"'{value}'" if isinstance(value, str) else value
normalized_conditions.append(f'{key} = {quoted_value}')
where = ' AND '.join(normalized_conditions)
return f'SELECT COUNT(*) FROM {table_name} WHERE {where}'
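# Added usage sketch (not part of the original module): because _compose_query
# is a staticmethod, the generated SQL can be inspected without a Spanner
# connection. The table and column names below are illustrative assumptions.
if __name__ == "__main__":
    print(SpannerChecker._compose_query("events", {"event_id": 42, "status": "DONE"}))
    # -> SELECT COUNT(*) FROM events WHERE event_id = 42 AND status = 'DONE'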
|
[
"tenacity.wait_fixed",
"logging.info",
"spanner.ems_spanner_client.EmsSpannerClient",
"tenacity.stop_after_attempt"
] |
[((353, 422), 'spanner.ems_spanner_client.EmsSpannerClient', 'ems_spanner_client.EmsSpannerClient', (['project_id', 'instance_id', 'db_name'], {}), '(project_id, instance_id, db_name)\n', (388, 422), False, 'from spanner import ems_spanner_client\n'), ((484, 525), 'logging.info', 'logging.info', (['f"""Executing query: {query}"""'], {}), "(f'Executing query: {query}')\n", (496, 525), False, 'import logging\n'), ((624, 666), 'logging.info', 'logging.info', (['f"""Executing update: {query}"""'], {}), "(f'Executing update: {query}')\n", (636, 666), False, 'import logging\n'), ((793, 841), 'tenacity.stop_after_attempt', 'stop_after_attempt', (['self.STOP_AFTER_ATTEMPT_SECS'], {}), '(self.STOP_AFTER_ATTEMPT_SECS)\n', (811, 841), False, 'from tenacity import retry, stop_after_attempt, wait_fixed\n'), ((848, 875), 'tenacity.wait_fixed', 'wait_fixed', (['self.WAIT_FIXED'], {}), '(self.WAIT_FIXED)\n', (858, 875), False, 'from tenacity import retry, stop_after_attempt, wait_fixed\n')]
|
from typing import Any
from click import echo, style
def out(message: str, new_line: bool = True, **styles: Any) -> None:
if "bold" not in styles:
styles["bold"] = True
message = style(message, **styles)
echo(message, nl=new_line)
def err(message: str, new_line: bool = True, **styles: Any) -> None:
if "fg" not in styles:
styles["fg"] = "red"
message = style(message, **styles)
echo(message, nl=new_line)
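# Added usage sketch (not part of the original module; the messages are
# illustrative assumptions): out() defaults to bold text, err() to red text,
# and any extra click.style keyword arguments are passed through.
if __name__ == "__main__":
    out("build finished")
    err("build failed", bold=True)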
|
[
"click.echo",
"click.style"
] |
[((230, 256), 'click.echo', 'echo', (['message'], {'nl': 'new_line'}), '(message, nl=new_line)\n', (234, 256), False, 'from click import echo, style\n'), ((431, 457), 'click.echo', 'echo', (['message'], {'nl': 'new_line'}), '(message, nl=new_line)\n', (435, 457), False, 'from click import echo, style\n'), ((201, 225), 'click.style', 'style', (['message'], {}), '(message, **styles)\n', (206, 225), False, 'from click import echo, style\n'), ((402, 426), 'click.style', 'style', (['message'], {}), '(message, **styles)\n', (407, 426), False, 'from click import echo, style\n')]
|
from django.db.models.signals import m2m_changed
from django.dispatch import receiver
from .models import Image
@receiver(m2m_changed, sender=Image.users_likes.through)
def users_like_changed(sender, instance, **kwargs):
instance.total_likes = instance.users_likes.count()
instance.save()
|
[
"django.dispatch.receiver"
] |
[((114, 169), 'django.dispatch.receiver', 'receiver', (['m2m_changed'], {'sender': 'Image.users_likes.through'}), '(m2m_changed, sender=Image.users_likes.through)\n', (122, 169), False, 'from django.dispatch import receiver\n')]
|
import random
from typing import List, Union
import torch
import torchvision.transforms as T
import torchvision.transforms.functional as F
from PIL import Image
class RandomDiscreteRotation():
def __init__(self, angles, resample=0, expand=False):
self.angles = angles
self.resample = resample
self.expand = expand
def __call__(self, image, target=None):
if target is not None:
raise NotImplementedError("target transformation not implemented")
angle = random.choice(self.angles)
image = F.rotate(image, angle, self.resample, self.expand)
return image
def __repr__(self):
return f"{self.__class__.__name__}(angles={self.angles})"
class Compose():
def __init__(self, transforms):
self.transforms = transforms
def __call__(self, image, target):
for t in self.transforms:
image, target = t(image, target)
return image, target
class RandomHorizontalFlip():
def __init__(self, p=0.5):
self.p = p
def __call__(self, image, target):
if torch.rand(1) < self.p:
image = F.hflip(image)
width, _ = _get_image_size(image)
boxes = target["boxes"]
boxes[:, [0, 2]] = width - boxes[:, [2, 0]]
target["boxes"] = boxes
if "masks" in target:
target["masks"] = F.hflip(target["masks"])
return image, target
class RandomVerticalFlip():
def __init__(self, p=0.5):
self.p = p
def __call__(self, image, target):
if torch.rand(1) < self.p:
image = F.vflip(image)
_, height = _get_image_size(image)
boxes = target["boxes"]
boxes[:, [1, 3]] = height - boxes[:, [3, 1]]
target["boxes"] = boxes
if "masks" in target:
target["masks"] = F.vflip(target["masks"])
return image, target
class RandomFlip():
def __init__(self, p=0.5):
self.p = p
self.transforms = [
RandomHorizontalFlip(p),
RandomVerticalFlip(p),
]
def __call__(self, image, target):
t = random.choice(self.transforms)
return t(image, target)
class GammaJitter():
def __init__(self, gamma=0):
self.gamma = self._check_input(gamma, "gamma")
def _check_input(self, value, name, center=1, bound=(0, float('inf')), clip_first_on_zero=True):
if isinstance(value, (int, float)):
if value < 0:
raise ValueError("If {} is a single number, it must be non negative.".format(name))
value = [center - float(value), center + float(value)]
if clip_first_on_zero:
value[0] = max(value[0], 0.0)
elif isinstance(value, (tuple, list)) and len(value) == 2:
if not bound[0] <= value[0] <= value[1] <= bound[1]:
raise ValueError("{} values should be between {}".format(name, bound))
else:
raise TypeError("{} should be a single number or a list/tuple with lenght 2.".format(name))
if value[0] == value[1] == center:
value = None
return value
def __call__(self, image, target):
gamma = torch.tensor(1.0).uniform_(self.gamma[0], self.gamma[1]).item()
image = F.adjust_gamma(image, gamma)
return image, target
class ColorJitter():
def __init__(self, brightness=0, contrast=0, saturation=0, hue=0):
self.color_jitter = T.ColorJitter(brightness, contrast, saturation, hue)
def __call__(self, image, target):
image = self.color_jitter(image)
return image, target
class RandomChoice():
def __init__(self, transforms):
self.transforms = transforms
def __call__(self, image, target):
t = random.choice(self.transforms)
return t(image, target)
class ToTensor():
def __call__(self, image, target):
image = F.to_tensor(image)
return image, target
def _get_image_size(img: Union[Image.Image, torch.Tensor]):
if isinstance(img, torch.Tensor):
return _get_tensor_image_size(img)
elif isinstance(img, Image.Image):
return img.size
raise TypeError("Unexpected input type")
def _is_tensor_a_torch_image(x: torch.Tensor) -> bool:
return x.ndim >= 2
def _get_tensor_image_size(img: torch.Tensor) -> List[int]:
"""Returns (w, h) of tensor image"""
if _is_tensor_a_torch_image(img):
return [img.shape[-1], img.shape[-2]]
raise TypeError("Unexpected input type")
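# Added usage sketch (not part of the original module; the image size and box
# coordinates are illustrative assumptions): the transforms above operate on
# (image, target) pairs, where target carries the detection "boxes".
if __name__ == "__main__":
    dummy_image = Image.new("RGB", (64, 48))
    dummy_target = {"boxes": torch.tensor([[10.0, 10.0, 30.0, 20.0]])}
    pipeline = Compose([ToTensor(), RandomFlip(p=1.0)])
    out_image, out_target = pipeline(dummy_image, dummy_target)
    print(out_image.shape, out_target["boxes"])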
|
[
"torchvision.transforms.functional.adjust_gamma",
"torchvision.transforms.functional.to_tensor",
"random.choice",
"torchvision.transforms.functional.hflip",
"torchvision.transforms.ColorJitter",
"torchvision.transforms.functional.rotate",
"torch.tensor",
"torchvision.transforms.functional.vflip",
"torch.rand"
] |
[((518, 544), 'random.choice', 'random.choice', (['self.angles'], {}), '(self.angles)\n', (531, 544), False, 'import random\n'), ((561, 611), 'torchvision.transforms.functional.rotate', 'F.rotate', (['image', 'angle', 'self.resample', 'self.expand'], {}), '(image, angle, self.resample, self.expand)\n', (569, 611), True, 'import torchvision.transforms.functional as F\n'), ((2180, 2210), 'random.choice', 'random.choice', (['self.transforms'], {}), '(self.transforms)\n', (2193, 2210), False, 'import random\n'), ((3337, 3365), 'torchvision.transforms.functional.adjust_gamma', 'F.adjust_gamma', (['image', 'gamma'], {}), '(image, gamma)\n', (3351, 3365), True, 'import torchvision.transforms.functional as F\n'), ((3518, 3570), 'torchvision.transforms.ColorJitter', 'T.ColorJitter', (['brightness', 'contrast', 'saturation', 'hue'], {}), '(brightness, contrast, saturation, hue)\n', (3531, 3570), True, 'import torchvision.transforms as T\n'), ((3830, 3860), 'random.choice', 'random.choice', (['self.transforms'], {}), '(self.transforms)\n', (3843, 3860), False, 'import random\n'), ((3968, 3986), 'torchvision.transforms.functional.to_tensor', 'F.to_tensor', (['image'], {}), '(image)\n', (3979, 3986), True, 'import torchvision.transforms.functional as F\n'), ((1097, 1110), 'torch.rand', 'torch.rand', (['(1)'], {}), '(1)\n', (1107, 1110), False, 'import torch\n'), ((1141, 1155), 'torchvision.transforms.functional.hflip', 'F.hflip', (['image'], {}), '(image)\n', (1148, 1155), True, 'import torchvision.transforms.functional as F\n'), ((1586, 1599), 'torch.rand', 'torch.rand', (['(1)'], {}), '(1)\n', (1596, 1599), False, 'import torch\n'), ((1630, 1644), 'torchvision.transforms.functional.vflip', 'F.vflip', (['image'], {}), '(image)\n', (1637, 1644), True, 'import torchvision.transforms.functional as F\n'), ((1400, 1424), 'torchvision.transforms.functional.hflip', 'F.hflip', (["target['masks']"], {}), "(target['masks'])\n", (1407, 1424), True, 'import torchvision.transforms.functional as F\n'), ((1891, 1915), 'torchvision.transforms.functional.vflip', 'F.vflip', (["target['masks']"], {}), "(target['masks'])\n", (1898, 1915), True, 'import torchvision.transforms.functional as F\n'), ((3257, 3274), 'torch.tensor', 'torch.tensor', (['(1.0)'], {}), '(1.0)\n', (3269, 3274), False, 'import torch\n')]
|
import pandas as pd
# Load into a DataFrame the spreadsheet with child births
# in Poland, available at the address
df = pd.read_csv('Imiona_dzieci_2000-2019.csv')
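# Added illustration (not in the original snippet; no column names are assumed):
# peek at the shape and first few rows of the loaded data.
print(df.shape)
print(df.head())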
|
[
"pandas.read_csv"
] |
[((111, 153), 'pandas.read_csv', 'pd.read_csv', (['"""Imiona_dzieci_2000-2019.csv"""'], {}), "('Imiona_dzieci_2000-2019.csv')\n", (122, 153), True, 'import pandas as pd\n')]
|
# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
# vi: set ft=python sts=4 ts=4 sw=4 et:
### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ##
#
# See COPYING file distributed along with the PyMVPA package for the
# copyright and license terms.
#
### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ### ##
"""Unit tests for PyMVPA Procrustean mapper"""
import unittest
import numpy as np
import itertools
from numpy.linalg import norm
from mvpa2.base import externals
from mvpa2.datasets.base import dataset_wizard
from mvpa2.testing import *
from mvpa2.testing.datasets import *
from mvpa2.mappers.procrustean import ProcrusteanMapper
svds = ["numpy"]
if externals.exists("liblapack.so"):
svds += ["dgesvd"]
if externals.exists("scipy"):
svds += ["scipy"]
class ProcrusteanMapperTests(unittest.TestCase):
@sweepargs(oblique=(False, True))
@sweepargs(svd=svds)
@reseed_rng()
def test_simple(self, svd, oblique):
d_orig = datasets["uni2large"].samples
d_orig2 = datasets["uni4large"].samples
for sdim, nf_s, nf_t, full_test in (
("Same 2D", 2, 2, True),
("Same 10D", 10, 10, True),
("2D -> 3D", 2, 3, True),
("3D -> 2D", 3, 2, False),
):
# figure out some "random" rotation
d = max(nf_s, nf_t)
R = get_random_rotation(nf_s, nf_t, d_orig)
if nf_s == nf_t:
adR = np.abs(1.0 - np.linalg.det(R))
self.assertTrue(
adR < 1e-10,
"Determinant of rotation matrix should " "be 1. Got it 1+%g" % adR,
)
self.assertTrue(norm(np.dot(R, R.T) - np.eye(R.shape[0])) < 1e-10)
for (s, scaling), demean in itertools.product(
((0.3, True), (1.0, False)), (False, True)
):
pm = ProcrusteanMapper(
scaling=scaling, oblique=oblique, svd=svd, demean=demean
)
# pm2 = ProcrusteanMapper(scaling=scaling, oblique=oblique)
if demean:
t1, t2 = d_orig[23, 1], d_orig[22, 1]
else:
t1, t2 = 0, 0
full_test = False # although runs, not intended to perform properly
# Create source/target data
d = d_orig[:, :nf_s]
d_s = d + t1
d_t = np.dot(s * d, R) + t2
# train bloody mapper(s)
ds = dataset_wizard(samples=d_s, targets=d_t)
pm.train(ds)
## not possible with new interface
# pm2.train(d_s, d_t)
## verify that both created the same transformation
# npm2proj = norm(pm.proj - pm2.proj)
# self.assertTrue(npm2proj <= 1e-10,
# msg="Got transformation different by norm %g."
# " Had to be less than 1e-10" % npm2proj)
# self.assertTrue(norm(pm._offset_in - pm2._offset_in) <= 1e-10)
# self.assertTrue(norm(pm._offset_out - pm2._offset_out) <= 1e-10)
# do forward transformation on the same source data
d_s_f = pm.forward(d_s)
self.assertEqual(
d_s_f.shape,
d_t.shape,
msg="Mapped shape should be identical to the d_t",
)
dsf = d_s_f - d_t
ndsf = norm(dsf) / norm(d_t)
if full_test:
dsR = norm(s * R - pm.proj)
if not oblique:
self.assertTrue(
dsR <= 1e-12,
msg="We should have got reconstructed rotation+scaling "
"perfectly. Now got d scale*R=%g" % dsR,
)
self.assertTrue(
np.abs(s - pm._scale) < 1e-12,
msg="We should have got reconstructed scale "
"perfectly. Now got %g for %g" % (pm._scale, s),
)
self.assertTrue(
ndsf <= 1e-12,
msg="%s: Failed to get to the target space correctly."
" normed error=%g" % (sdim, ndsf),
)
# Test if we get back
d_s_f_r = pm.reverse(d_s_f)
# Test if recon proj is true inverse except for high->low projection
if nf_s <= nf_t:
assert_almost_equal(
np.dot(pm._proj, pm._recon),
np.eye(pm._proj.shape[0]),
err_msg="Deviation from identity matrix is too large",
)
dsfr = d_s_f_r - d_s
ndsfr = norm(dsfr) / norm(d_s)
if full_test:
self.assertTrue(
ndsfr <= 1e-12,
msg="%s: Failed to reconstruct into source space correctly."
" normed error=%g" % (sdim, ndsfr),
)
@reseed_rng()
def test_reflection(self, rep=10):
for i in range(rep):
from mvpa2.testing.datasets import get_random_rotation
d = np.random.random((100, 2))
T = get_random_rotation(d.shape[1])
d2 = np.dot(d, T)
# scale it up a bit
d2 *= 1.2
# add a reflection by flipping the first dimension
d2[:, 0] *= -1
ds = dataset_wizard(samples=d, targets=d2)
norm0 = np.linalg.norm(d - d2)
mapper = ProcrusteanMapper(scaling=False, reflection=False)
mapper.train(ds)
norm1 = np.linalg.norm(d2 - mapper.forward(ds).samples)
eps = 1e-7
self.assertLess(
norm1,
norm0 + eps,
msg="Procrustes should reduce difference, "
"but %f > %f" % (norm1, norm0),
)
mapper = ProcrusteanMapper(scaling=True, reflection=False)
mapper.train(ds)
norm2 = np.linalg.norm(d2 - mapper.forward(ds).samples)
self.assertLess(
norm2,
norm1 + eps,
msg="Procrustes with scaling should work better, "
"but %f > %f" % (norm2, norm1),
)
mapper = ProcrusteanMapper(scaling=False, reflection=True)
mapper.train(ds)
norm3 = np.linalg.norm(d2 - mapper.forward(ds).samples)
self.assertLess(
norm3,
norm1 + eps,
msg="Procrustes with reflection should work better, "
"but %f > %f" % (norm3, norm1),
)
mapper = ProcrusteanMapper(scaling=True, reflection=True)
mapper.train(ds)
norm4 = np.linalg.norm(d2 - mapper.forward(ds).samples)
self.assertLess(
norm4,
norm3 + eps,
msg="Procrustes with scaling should work better, "
"but %f > %f" % (norm4, norm3),
)
self.assertLess(
norm4,
norm2 + eps,
msg="Procrustes with reflection should work better, "
"but %f > %f" % (norm4, norm2),
)
def suite(): # pragma: no cover
return unittest.makeSuite(ProcrusteanMapperTests)
if __name__ == "__main__": # pragma: no cover
from . import runner
runner.run()
|
[
"numpy.abs",
"numpy.eye",
"numpy.random.random",
"mvpa2.base.externals.exists",
"unittest.makeSuite",
"itertools.product",
"mvpa2.datasets.base.dataset_wizard",
"numpy.linalg.det",
"numpy.dot",
"numpy.linalg.norm",
"mvpa2.testing.datasets.get_random_rotation",
"mvpa2.mappers.procrustean.ProcrusteanMapper"
] |
[((733, 765), 'mvpa2.base.externals.exists', 'externals.exists', (['"""liblapack.so"""'], {}), "('liblapack.so')\n", (749, 765), False, 'from mvpa2.base import externals\n'), ((793, 818), 'mvpa2.base.externals.exists', 'externals.exists', (['"""scipy"""'], {}), "('scipy')\n", (809, 818), False, 'from mvpa2.base import externals\n'), ((7626, 7668), 'unittest.makeSuite', 'unittest.makeSuite', (['ProcrusteanMapperTests'], {}), '(ProcrusteanMapperTests)\n', (7644, 7668), False, 'import unittest\n'), ((1416, 1455), 'mvpa2.testing.datasets.get_random_rotation', 'get_random_rotation', (['nf_s', 'nf_t', 'd_orig'], {}), '(nf_s, nf_t, d_orig)\n', (1435, 1455), False, 'from mvpa2.testing.datasets import get_random_rotation\n'), ((1833, 1894), 'itertools.product', 'itertools.product', (['((0.3, True), (1.0, False))', '(False, True)'], {}), '(((0.3, True), (1.0, False)), (False, True))\n', (1850, 1894), False, 'import itertools\n'), ((5484, 5510), 'numpy.random.random', 'np.random.random', (['(100, 2)'], {}), '((100, 2))\n', (5500, 5510), True, 'import numpy as np\n'), ((5527, 5558), 'mvpa2.testing.datasets.get_random_rotation', 'get_random_rotation', (['d.shape[1]'], {}), '(d.shape[1])\n', (5546, 5558), False, 'from mvpa2.testing.datasets import get_random_rotation\n'), ((5576, 5588), 'numpy.dot', 'np.dot', (['d', 'T'], {}), '(d, T)\n', (5582, 5588), True, 'import numpy as np\n'), ((5750, 5787), 'mvpa2.datasets.base.dataset_wizard', 'dataset_wizard', ([], {'samples': 'd', 'targets': 'd2'}), '(samples=d, targets=d2)\n', (5764, 5787), False, 'from mvpa2.datasets.base import dataset_wizard\n'), ((5809, 5831), 'numpy.linalg.norm', 'np.linalg.norm', (['(d - d2)'], {}), '(d - d2)\n', (5823, 5831), True, 'import numpy as np\n'), ((5854, 5904), 'mvpa2.mappers.procrustean.ProcrusteanMapper', 'ProcrusteanMapper', ([], {'scaling': '(False)', 'reflection': '(False)'}), '(scaling=False, reflection=False)\n', (5871, 5904), False, 'from mvpa2.mappers.procrustean import ProcrusteanMapper\n'), ((6250, 6299), 'mvpa2.mappers.procrustean.ProcrusteanMapper', 'ProcrusteanMapper', ([], {'scaling': '(True)', 'reflection': '(False)'}), '(scaling=True, reflection=False)\n', (6267, 6299), False, 'from mvpa2.mappers.procrustean import ProcrusteanMapper\n'), ((6629, 6678), 'mvpa2.mappers.procrustean.ProcrusteanMapper', 'ProcrusteanMapper', ([], {'scaling': '(False)', 'reflection': '(True)'}), '(scaling=False, reflection=True)\n', (6646, 6678), False, 'from mvpa2.mappers.procrustean import ProcrusteanMapper\n'), ((7011, 7059), 'mvpa2.mappers.procrustean.ProcrusteanMapper', 'ProcrusteanMapper', ([], {'scaling': '(True)', 'reflection': '(True)'}), '(scaling=True, reflection=True)\n', (7028, 7059), False, 'from mvpa2.mappers.procrustean import ProcrusteanMapper\n'), ((1947, 2022), 'mvpa2.mappers.procrustean.ProcrusteanMapper', 'ProcrusteanMapper', ([], {'scaling': 'scaling', 'oblique': 'oblique', 'svd': 'svd', 'demean': 'demean'}), '(scaling=scaling, oblique=oblique, svd=svd, demean=demean)\n', (1964, 2022), False, 'from mvpa2.mappers.procrustean import ProcrusteanMapper\n'), ((2585, 2625), 'mvpa2.datasets.base.dataset_wizard', 'dataset_wizard', ([], {'samples': 'd_s', 'targets': 'd_t'}), '(samples=d_s, targets=d_t)\n', (2599, 2625), False, 'from mvpa2.datasets.base import dataset_wizard\n'), ((2500, 2516), 'numpy.dot', 'np.dot', (['(s * d)', 'R'], {}), '(s * d, R)\n', (2506, 2516), True, 'import numpy as np\n'), ((3593, 3602), 'numpy.linalg.norm', 'norm', (['dsf'], {}), '(dsf)\n', (3597, 3602), False, 'from numpy.linalg import 
norm\n'), ((3605, 3614), 'numpy.linalg.norm', 'norm', (['d_t'], {}), '(d_t)\n', (3609, 3614), False, 'from numpy.linalg import norm\n'), ((3671, 3692), 'numpy.linalg.norm', 'norm', (['(s * R - pm.proj)'], {}), '(s * R - pm.proj)\n', (3675, 3692), False, 'from numpy.linalg import norm\n'), ((5016, 5026), 'numpy.linalg.norm', 'norm', (['dsfr'], {}), '(dsfr)\n', (5020, 5026), False, 'from numpy.linalg import norm\n'), ((5029, 5038), 'numpy.linalg.norm', 'norm', (['d_s'], {}), '(d_s)\n', (5033, 5038), False, 'from numpy.linalg import norm\n'), ((1520, 1536), 'numpy.linalg.det', 'np.linalg.det', (['R'], {}), '(R)\n', (1533, 1536), True, 'import numpy as np\n'), ((4774, 4801), 'numpy.dot', 'np.dot', (['pm._proj', 'pm._recon'], {}), '(pm._proj, pm._recon)\n', (4780, 4801), True, 'import numpy as np\n'), ((4827, 4852), 'numpy.eye', 'np.eye', (['pm._proj.shape[0]'], {}), '(pm._proj.shape[0])\n', (4833, 4852), True, 'import numpy as np\n'), ((1747, 1761), 'numpy.dot', 'np.dot', (['R', 'R.T'], {}), '(R, R.T)\n', (1753, 1761), True, 'import numpy as np\n'), ((1764, 1782), 'numpy.eye', 'np.eye', (['R.shape[0]'], {}), '(R.shape[0])\n', (1770, 1782), True, 'import numpy as np\n'), ((4063, 4084), 'numpy.abs', 'np.abs', (['(s - pm._scale)'], {}), '(s - pm._scale)\n', (4069, 4084), True, 'import numpy as np\n')]
|
# Generated by Django 2.2.7 on 2019-12-15 12:15
from django.db import migrations, models
class Migration(migrations.Migration):
initial = True
dependencies = [
]
operations = [
migrations.CreateModel(
name='Warframe',
fields=[
('id', models.AutoField(primary_key=True, serialize=False)),
('name', models.CharField(max_length=100, unique=True)),
('health', models.IntegerField()),
('shield', models.IntegerField()),
('armor', models.IntegerField()),
('power', models.IntegerField()),
('sprint_speed', models.FloatField()),
('power_strength', models.FloatField(default=1)),
('power_duration', models.FloatField(default=1)),
('power_range', models.FloatField(default=1)),
('power_efficiency', models.FloatField(default=1)),
('img', models.URLField(max_length=255, null=True)),
],
),
]
|
[
"django.db.models.FloatField",
"django.db.models.IntegerField",
"django.db.models.AutoField",
"django.db.models.URLField",
"django.db.models.CharField"
] |
[((304, 355), 'django.db.models.AutoField', 'models.AutoField', ([], {'primary_key': '(True)', 'serialize': '(False)'}), '(primary_key=True, serialize=False)\n', (320, 355), False, 'from django.db import migrations, models\n'), ((383, 428), 'django.db.models.CharField', 'models.CharField', ([], {'max_length': '(100)', 'unique': '(True)'}), '(max_length=100, unique=True)\n', (399, 428), False, 'from django.db import migrations, models\n'), ((458, 479), 'django.db.models.IntegerField', 'models.IntegerField', ([], {}), '()\n', (477, 479), False, 'from django.db import migrations, models\n'), ((509, 530), 'django.db.models.IntegerField', 'models.IntegerField', ([], {}), '()\n', (528, 530), False, 'from django.db import migrations, models\n'), ((559, 580), 'django.db.models.IntegerField', 'models.IntegerField', ([], {}), '()\n', (578, 580), False, 'from django.db import migrations, models\n'), ((609, 630), 'django.db.models.IntegerField', 'models.IntegerField', ([], {}), '()\n', (628, 630), False, 'from django.db import migrations, models\n'), ((666, 685), 'django.db.models.FloatField', 'models.FloatField', ([], {}), '()\n', (683, 685), False, 'from django.db import migrations, models\n'), ((723, 751), 'django.db.models.FloatField', 'models.FloatField', ([], {'default': '(1)'}), '(default=1)\n', (740, 751), False, 'from django.db import migrations, models\n'), ((789, 817), 'django.db.models.FloatField', 'models.FloatField', ([], {'default': '(1)'}), '(default=1)\n', (806, 817), False, 'from django.db import migrations, models\n'), ((852, 880), 'django.db.models.FloatField', 'models.FloatField', ([], {'default': '(1)'}), '(default=1)\n', (869, 880), False, 'from django.db import migrations, models\n'), ((920, 948), 'django.db.models.FloatField', 'models.FloatField', ([], {'default': '(1)'}), '(default=1)\n', (937, 948), False, 'from django.db import migrations, models\n'), ((975, 1017), 'django.db.models.URLField', 'models.URLField', ([], {'max_length': '(255)', 'null': '(True)'}), '(max_length=255, null=True)\n', (990, 1017), False, 'from django.db import migrations, models\n')]
|
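For reference, a minimal sketch (an assumption, not part of the record) of the models.py definition this auto-generated migration would correspond to; only the field names, types, and options come from the migration above, while the module placement and comments are illustrative.

# Hypothetical models.py counterpart of the migration above.
from django.db import models


class Warframe(models.Model):
    # The explicit AutoField mirrors the migration; Django would add an
    # equivalent implicit "id" field if this line were omitted.
    id = models.AutoField(primary_key=True)
    name = models.CharField(max_length=100, unique=True)
    health = models.IntegerField()
    shield = models.IntegerField()
    armor = models.IntegerField()
    power = models.IntegerField()
    sprint_speed = models.FloatField()
    power_strength = models.FloatField(default=1)
    power_duration = models.FloatField(default=1)
    power_range = models.FloatField(default=1)
    power_efficiency = models.FloatField(default=1)
    img = models.URLField(max_length=255, null=True)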
#!/pxrpythonsubst
#
# Copyright 2016 Pixar
#
# Licensed under the Apache License, Version 2.0 (the "Apache License")
# with the following modification; you may not use this file except in
# compliance with the Apache License and the following modification to it:
# Section 6. Trademarks. is deleted and replaced with:
#
# 6. Trademarks. This License does not grant permission to use the trade
# names, trademarks, service marks, or product names of the Licensor
# and its affiliates, except as required to comply with Section 4(c) of
# the License and to reproduce the content of the NOTICE file.
#
# You may obtain a copy of the Apache License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the Apache License with the above modification is
# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the Apache License for the specific
# language governing permissions and limitations under the Apache License.
#
from pxr import Tf
import logging
import unittest
class TestStringUtils(unittest.TestCase):
"""
Test Tf String Utils (The python wrapped porting of the utility functions).
"""
def setUp(self):
self.log = logging.getLogger()
def test_StringSplit(self):
"""Testing StringSplit() function. This function is supposed to behave
like the split method on python string objects."""
self.log.info("Testing string split cases")
self.assertEqual([], Tf.StringSplit("",""))
self.assertEqual([], Tf.StringSplit("abcd",""))
self.assertEqual([], Tf.StringSplit("","ccc"))
s = "abcd"
self.assertEqual(s.split("a"), Tf.StringSplit(s, "a"))
self.assertEqual(s.split("b"), Tf.StringSplit(s, "b"))
self.assertEqual(s.split("c"), Tf.StringSplit(s, "c"))
self.assertEqual(s.split("d"), Tf.StringSplit(s, "d"))
self.assertEqual(s.split("abcd"), Tf.StringSplit(s, "abcd"))
self.assertEqual(s.split("ab"), Tf.StringSplit(s, "ab"))
s = "a:+b:+c:+d"
self.assertEqual(s.split(":+"), Tf.StringSplit(s, ":+"))
s = "a:+b:+c:d"
self.assertEqual(s.split(":+"), Tf.StringSplit(s, ":+"))
def test_Unicode(self):
"""Testing that we can pass python unicode objects to wrapped
functions expecting std::string"""
self.log.info("Testing unicode calls")
self.assertEqual(Tf.StringSplit('123', '2'), ['1', '3'])
self.assertEqual(Tf.StringSplit('123', u'2'), ['1', '3'])
self.assertEqual(Tf.StringSplit(u'123', '2'), ['1', '3'])
self.assertEqual(Tf.StringSplit(u'123', u'2'), ['1', '3'])
self.assertEqual(Tf.DictionaryStrcmp('apple', 'banana'), -1)
self.assertEqual(Tf.DictionaryStrcmp('apple', u'banana'), -1)
self.assertEqual(Tf.DictionaryStrcmp(u'apple', 'banana'), -1)
self.assertEqual(Tf.DictionaryStrcmp(u'apple', u'banana'), -1)
def test_StringToLong(self):
def checks(val):
self.assertEqual(Tf.StringToLong(repr(val)), val)
def checku(val):
self.assertEqual(Tf.StringToULong(repr(val)), val)
# A range of valid values.
for i in range(1000000):
checku(i)
for i in range(-500000, 500000):
checks(i)
# A wider range of valid values.
for i in range(0, 1000000000, 9337):
checks(i)
for i in range(-500000000, 500000000, 9337):
checks(i)
# Get the max/min values.
ulmax, lmax, lmin = (
Tf._GetULongMax(), Tf._GetLongMax(), Tf._GetLongMin())
# Check the extrema and one before to ensure they work.
for n in [ulmax-1, ulmax]:
checku(n)
for n in [lmin, lmin+1, lmax-1, lmax]:
checks(n)
# Check that some beyond the extrema over/underflow.
#
# Unsigned overflow.
for i in range(1, 1000):
with self.assertRaises(ValueError):
checku(ulmax + i)
with self.assertRaises(ValueError):
checks(lmax + i)
with self.assertRaises(ValueError):
checks(lmin - i)
def test_Identifiers(self):
self.assertFalse(Tf.IsValidIdentifier(''))
self.assertTrue(Tf.IsValidIdentifier('hello9'))
self.assertFalse(Tf.IsValidIdentifier('9hello'))
self.assertTrue(Tf.IsValidIdentifier('hello_world'))
self.assertTrue(Tf.IsValidIdentifier('HELLO_WORLD'))
self.assertTrue(Tf.IsValidIdentifier('hello_world_1234'))
self.assertFalse(Tf.IsValidIdentifier('hello_#world#_1234'))
self.assertFalse(Tf.IsValidIdentifier('h e l l o'))
self.assertEqual(Tf.MakeValidIdentifier(''), '_')
self.assertEqual(Tf.MakeValidIdentifier('hello9'), 'hello9')
self.assertEqual(Tf.MakeValidIdentifier('9hello'), '_hello')
self.assertEqual(
Tf.MakeValidIdentifier('hello_#world#_1234'), 'hello__world__1234')
self.assertFalse(Tf.IsValidIdentifier('h e l l o'), 'h_e_l_l_o')
self.assertFalse(Tf.IsValidIdentifier('!@#$%'), '_____')
if __name__ == '__main__':
unittest.main()
|
[
"logging.getLogger",
"pxr.Tf.MakeValidIdentifier",
"pxr.Tf.IsValidIdentifier",
"pxr.Tf.DictionaryStrcmp",
"pxr.Tf.StringSplit",
"pxr.Tf._GetLongMax",
"pxr.Tf._GetULongMax",
"pxr.Tf._GetLongMin",
"unittest.main"
] |
[((5274, 5289), 'unittest.main', 'unittest.main', ([], {}), '()\n', (5287, 5289), False, 'import unittest\n'), ((1307, 1326), 'logging.getLogger', 'logging.getLogger', ([], {}), '()\n', (1324, 1326), False, 'import logging\n'), ((1581, 1603), 'pxr.Tf.StringSplit', 'Tf.StringSplit', (['""""""', '""""""'], {}), "('', '')\n", (1595, 1603), False, 'from pxr import Tf\n'), ((1633, 1659), 'pxr.Tf.StringSplit', 'Tf.StringSplit', (['"""abcd"""', '""""""'], {}), "('abcd', '')\n", (1647, 1659), False, 'from pxr import Tf\n'), ((1689, 1714), 'pxr.Tf.StringSplit', 'Tf.StringSplit', (['""""""', '"""ccc"""'], {}), "('', 'ccc')\n", (1703, 1714), False, 'from pxr import Tf\n'), ((1774, 1796), 'pxr.Tf.StringSplit', 'Tf.StringSplit', (['s', '"""a"""'], {}), "(s, 'a')\n", (1788, 1796), False, 'from pxr import Tf\n'), ((1837, 1859), 'pxr.Tf.StringSplit', 'Tf.StringSplit', (['s', '"""b"""'], {}), "(s, 'b')\n", (1851, 1859), False, 'from pxr import Tf\n'), ((1900, 1922), 'pxr.Tf.StringSplit', 'Tf.StringSplit', (['s', '"""c"""'], {}), "(s, 'c')\n", (1914, 1922), False, 'from pxr import Tf\n'), ((1963, 1985), 'pxr.Tf.StringSplit', 'Tf.StringSplit', (['s', '"""d"""'], {}), "(s, 'd')\n", (1977, 1985), False, 'from pxr import Tf\n'), ((2029, 2054), 'pxr.Tf.StringSplit', 'Tf.StringSplit', (['s', '"""abcd"""'], {}), "(s, 'abcd')\n", (2043, 2054), False, 'from pxr import Tf\n'), ((2096, 2119), 'pxr.Tf.StringSplit', 'Tf.StringSplit', (['s', '"""ab"""'], {}), "(s, 'ab')\n", (2110, 2119), False, 'from pxr import Tf\n'), ((2187, 2210), 'pxr.Tf.StringSplit', 'Tf.StringSplit', (['s', '""":+"""'], {}), "(s, ':+')\n", (2201, 2210), False, 'from pxr import Tf\n'), ((2277, 2300), 'pxr.Tf.StringSplit', 'Tf.StringSplit', (['s', '""":+"""'], {}), "(s, ':+')\n", (2291, 2300), False, 'from pxr import Tf\n'), ((2516, 2542), 'pxr.Tf.StringSplit', 'Tf.StringSplit', (['"""123"""', '"""2"""'], {}), "('123', '2')\n", (2530, 2542), False, 'from pxr import Tf\n'), ((2581, 2608), 'pxr.Tf.StringSplit', 'Tf.StringSplit', (['"""123"""', 'u"""2"""'], {}), "('123', u'2')\n", (2595, 2608), False, 'from pxr import Tf\n'), ((2647, 2674), 'pxr.Tf.StringSplit', 'Tf.StringSplit', (['u"""123"""', '"""2"""'], {}), "(u'123', '2')\n", (2661, 2674), False, 'from pxr import Tf\n'), ((2713, 2741), 'pxr.Tf.StringSplit', 'Tf.StringSplit', (['u"""123"""', 'u"""2"""'], {}), "(u'123', u'2')\n", (2727, 2741), False, 'from pxr import Tf\n'), ((2781, 2819), 'pxr.Tf.DictionaryStrcmp', 'Tf.DictionaryStrcmp', (['"""apple"""', '"""banana"""'], {}), "('apple', 'banana')\n", (2800, 2819), False, 'from pxr import Tf\n'), ((2850, 2889), 'pxr.Tf.DictionaryStrcmp', 'Tf.DictionaryStrcmp', (['"""apple"""', 'u"""banana"""'], {}), "('apple', u'banana')\n", (2869, 2889), False, 'from pxr import Tf\n'), ((2920, 2959), 'pxr.Tf.DictionaryStrcmp', 'Tf.DictionaryStrcmp', (['u"""apple"""', '"""banana"""'], {}), "(u'apple', 'banana')\n", (2939, 2959), False, 'from pxr import Tf\n'), ((2990, 3030), 'pxr.Tf.DictionaryStrcmp', 'Tf.DictionaryStrcmp', (['u"""apple"""', 'u"""banana"""'], {}), "(u'apple', u'banana')\n", (3009, 3030), False, 'from pxr import Tf\n'), ((3661, 3678), 'pxr.Tf._GetULongMax', 'Tf._GetULongMax', ([], {}), '()\n', (3676, 3678), False, 'from pxr import Tf\n'), ((3680, 3696), 'pxr.Tf._GetLongMax', 'Tf._GetLongMax', ([], {}), '()\n', (3694, 3696), False, 'from pxr import Tf\n'), ((3698, 3714), 'pxr.Tf._GetLongMin', 'Tf._GetLongMin', ([], {}), '()\n', (3712, 3714), False, 'from pxr import Tf\n'), ((4344, 4368), 'pxr.Tf.IsValidIdentifier', 'Tf.IsValidIdentifier', (['""""""'], 
{}), "('')\n", (4364, 4368), False, 'from pxr import Tf\n'), ((4394, 4424), 'pxr.Tf.IsValidIdentifier', 'Tf.IsValidIdentifier', (['"""hello9"""'], {}), "('hello9')\n", (4414, 4424), False, 'from pxr import Tf\n'), ((4451, 4481), 'pxr.Tf.IsValidIdentifier', 'Tf.IsValidIdentifier', (['"""9hello"""'], {}), "('9hello')\n", (4471, 4481), False, 'from pxr import Tf\n'), ((4507, 4542), 'pxr.Tf.IsValidIdentifier', 'Tf.IsValidIdentifier', (['"""hello_world"""'], {}), "('hello_world')\n", (4527, 4542), False, 'from pxr import Tf\n'), ((4568, 4603), 'pxr.Tf.IsValidIdentifier', 'Tf.IsValidIdentifier', (['"""HELLO_WORLD"""'], {}), "('HELLO_WORLD')\n", (4588, 4603), False, 'from pxr import Tf\n'), ((4629, 4669), 'pxr.Tf.IsValidIdentifier', 'Tf.IsValidIdentifier', (['"""hello_world_1234"""'], {}), "('hello_world_1234')\n", (4649, 4669), False, 'from pxr import Tf\n'), ((4696, 4738), 'pxr.Tf.IsValidIdentifier', 'Tf.IsValidIdentifier', (['"""hello_#world#_1234"""'], {}), "('hello_#world#_1234')\n", (4716, 4738), False, 'from pxr import Tf\n'), ((4765, 4798), 'pxr.Tf.IsValidIdentifier', 'Tf.IsValidIdentifier', (['"""h e l l o"""'], {}), "('h e l l o')\n", (4785, 4798), False, 'from pxr import Tf\n'), ((4826, 4852), 'pxr.Tf.MakeValidIdentifier', 'Tf.MakeValidIdentifier', (['""""""'], {}), "('')\n", (4848, 4852), False, 'from pxr import Tf\n'), ((4884, 4916), 'pxr.Tf.MakeValidIdentifier', 'Tf.MakeValidIdentifier', (['"""hello9"""'], {}), "('hello9')\n", (4906, 4916), False, 'from pxr import Tf\n'), ((4953, 4985), 'pxr.Tf.MakeValidIdentifier', 'Tf.MakeValidIdentifier', (['"""9hello"""'], {}), "('9hello')\n", (4975, 4985), False, 'from pxr import Tf\n'), ((5035, 5079), 'pxr.Tf.MakeValidIdentifier', 'Tf.MakeValidIdentifier', (['"""hello_#world#_1234"""'], {}), "('hello_#world#_1234')\n", (5057, 5079), False, 'from pxr import Tf\n'), ((5128, 5161), 'pxr.Tf.IsValidIdentifier', 'Tf.IsValidIdentifier', (['"""h e l l o"""'], {}), "('h e l l o')\n", (5148, 5161), False, 'from pxr import Tf\n'), ((5201, 5230), 'pxr.Tf.IsValidIdentifier', 'Tf.IsValidIdentifier', (['"""!@#$%"""'], {}), "('!@#$%')\n", (5221, 5230), False, 'from pxr import Tf\n')]
|
# coding: utf-8
# Copyright (c) 2016, 2020, Oracle and/or its affiliates. All rights reserved.
# This software is dual-licensed to you under the Universal Permissive License (UPL) 1.0 as shown at https://oss.oracle.com/licenses/upl or Apache License 2.0 as shown at http://www.apache.org/licenses/LICENSE-2.0. You may choose either license.
from oci.util import formatted_flat_dict, NONE_SENTINEL, value_allowed_none_or_none_sentinel # noqa: F401
from oci.decorators import init_model_state_from_kwargs
@init_model_state_from_kwargs
class FailedMetricRecord(object):
"""
The record of a single metric object that failed input validation and the reason for the failure.
"""
def __init__(self, **kwargs):
"""
Initializes a new FailedMetricRecord object with values from keyword arguments.
The following keyword arguments are supported (corresponding to the getters/setters of this class):
:param message:
The value to assign to the message property of this FailedMetricRecord.
:type message: str
:param metric_data:
The value to assign to the metric_data property of this FailedMetricRecord.
:type metric_data: MetricDataDetails
"""
self.swagger_types = {
'message': 'str',
'metric_data': 'MetricDataDetails'
}
self.attribute_map = {
'message': 'message',
'metric_data': 'metricData'
}
self._message = None
self._metric_data = None
@property
def message(self):
"""
**[Required]** Gets the message of this FailedMetricRecord.
An error message indicating the reason that the indicated metric object failed input validation.
:return: The message of this FailedMetricRecord.
:rtype: str
"""
return self._message
@message.setter
def message(self, message):
"""
Sets the message of this FailedMetricRecord.
An error message indicating the reason that the indicated metric object failed input validation.
:param message: The message of this FailedMetricRecord.
:type: str
"""
self._message = message
@property
def metric_data(self):
"""
**[Required]** Gets the metric_data of this FailedMetricRecord.
Identifier of a metric object that failed input validation.
:return: The metric_data of this FailedMetricRecord.
:rtype: MetricDataDetails
"""
return self._metric_data
@metric_data.setter
def metric_data(self, metric_data):
"""
Sets the metric_data of this FailedMetricRecord.
Identifier of a metric object that failed input validation.
:param metric_data: The metric_data of this FailedMetricRecord.
:type: MetricDataDetails
"""
self._metric_data = metric_data
def __repr__(self):
return formatted_flat_dict(self)
def __eq__(self, other):
if other is None:
return False
return self.__dict__ == other.__dict__
def __ne__(self, other):
return not self == other
|
[
"oci.util.formatted_flat_dict"
] |
[((2974, 2999), 'oci.util.formatted_flat_dict', 'formatted_flat_dict', (['self'], {}), '(self)\n', (2993, 2999), False, 'from oci.util import formatted_flat_dict, NONE_SENTINEL, value_allowed_none_or_none_sentinel\n')]
|
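A brief usage sketch for the FailedMetricRecord model above; in practice these records are returned by the OCI Monitoring service (for example when posted metric data is rejected) rather than built by hand, and the field values below are placeholders.

# Constructing the model directly; values are illustrative placeholders.
record = FailedMetricRecord(message='metric rejected: missing dimensions',
                           metric_data=None)
print(record)            # __repr__ delegates to oci.util.formatted_flat_dict
record.message = 'metric rejected: invalid namespace'
print(record.message)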
import bpy
import numpy as np
import math
import mathutils
import time
import os
class Prism:
""" ^"""
""" / \\"""
""" / ^ \\"""
""" / | \\"""
""" /'alpha'\\ <-- lenght of this side is calculated based on 'width' and 'alpha'"""
"""/ \\"""
"""----------- """
""" ^"""
""" |"""
"""This side is defined via 'width',"""
"""parallel to z-axis of Sigray defined"""
"""The angle opposite to this side is 'alpha'"""
"""'height' defines the distance between the two triangular sides of the prism"""
def __init__(self, width, height, alpha):
self.width = width
self.height = height
self.alpha = math.radians(alpha)
def clear_scene(self):
"""This function clears the whole scene and all objects contained in it"""
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete(use_global=False)
def define_prism(self, loc = (0, 0, 0), angle = None, base_width = None):
"""The default location assigned is (0, 0, 0). Using the 'update_coordinates'-function allows for reassignment of coordinates"""
x, y, z = loc
name = "prism"
meshes = bpy.data.meshes
if angle == None:
angle = self.alpha
else:
angle = math.radians(angle)
if base_width == None:
base_width = self.width
else:
base_width = base_width
points = [ [x, y, z], [x + base_width, y, z], [x + (base_width / 2), y + (base_width / (2 * np.tan(angle / 2))), z],
[x, y, z + self.height], [x + base_width, y, z + self.height], [x + (base_width / 2), y + (base_width / (2 * np.tan(angle / 2))), z + self.height] ]
faces = [ [4,5,2],[1,0,3],[2,5,3],[4,3,5],[1,2,0],[1,4,2],[4,1,3],[0,2,3] ]
shape_vertices = []
for p in points:
print(p)
shape_vertices.append ( mathutils.Vector((p[0],p[1],p[2])) )
new_mesh = bpy.data.meshes.new ( name + "_mesh" )
new_mesh.from_pydata ( shape_vertices, [], faces )
new_mesh.update()
new_obj = bpy.data.objects.new ( name, new_mesh )
return new_obj
def link_prism(self, object):
"""Any created object in Blender needs to be linked to the scene, in order to be displayed"""
bpy.context.collection.objects.link(object)
def update_coordinates(self, new_location):
"""This function allows for reassignment of coordinates"""
return self.define_prism(loc = new_location)
def update_alpha(self, new_alpha):
"""This function allows for reassignment of the angle alpha"""
return self.define_prism(angle = new_alpha)
def update_width(self, new_width):
"""This function allows for reassignment of the width of the prism"""
return self.define_prism(base_width = new_width)
def make_array(self, x, y, no_of_prisms, separation):
for p in range(no_of_prisms):
if p == 0:
self.link_prism(self.update_coordinates((x, y, 0)))
else:
self.link_prism(self.update_coordinates( (p * (self.width + separation) + x, y, 0)))
|
[
"bpy.ops.object.delete",
"mathutils.Vector",
"numpy.tan",
"bpy.ops.object.select_all",
"bpy.data.objects.new",
"bpy.data.meshes.new",
"math.radians",
"bpy.context.collection.objects.link"
] |
[((732, 751), 'math.radians', 'math.radians', (['alpha'], {}), '(alpha)\n', (744, 751), False, 'import math\n'), ((877, 919), 'bpy.ops.object.select_all', 'bpy.ops.object.select_all', ([], {'action': '"""SELECT"""'}), "(action='SELECT')\n", (902, 919), False, 'import bpy\n'), ((929, 968), 'bpy.ops.object.delete', 'bpy.ops.object.delete', ([], {'use_global': '(False)'}), '(use_global=False)\n', (950, 968), False, 'import bpy\n'), ((2093, 2128), 'bpy.data.meshes.new', 'bpy.data.meshes.new', (["(name + '_mesh')"], {}), "(name + '_mesh')\n", (2112, 2128), False, 'import bpy\n'), ((2240, 2276), 'bpy.data.objects.new', 'bpy.data.objects.new', (['name', 'new_mesh'], {}), '(name, new_mesh)\n', (2260, 2276), False, 'import bpy\n'), ((2453, 2496), 'bpy.context.collection.objects.link', 'bpy.context.collection.objects.link', (['object'], {}), '(object)\n', (2488, 2496), False, 'import bpy\n'), ((1370, 1389), 'math.radians', 'math.radians', (['angle'], {}), '(angle)\n', (1382, 1389), False, 'import math\n'), ((2034, 2070), 'mathutils.Vector', 'mathutils.Vector', (['(p[0], p[1], p[2])'], {}), '((p[0], p[1], p[2]))\n', (2050, 2070), False, 'import mathutils\n'), ((1635, 1652), 'numpy.tan', 'np.tan', (['(angle / 2)'], {}), '(angle / 2)\n', (1641, 1652), True, 'import numpy as np\n'), ((1791, 1808), 'numpy.tan', 'np.tan', (['(angle / 2)'], {}), '(angle / 2)\n', (1797, 1808), True, 'import numpy as np\n')]
|
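A short usage sketch for the Prism class above; it assumes it is run from inside Blender (bpy only exists in Blender's bundled Python), and the dimensions below are arbitrary illustrative values.

# Run from Blender's scripting console or text editor.
prism = Prism(width=2.0, height=10.0, alpha=60)         # alpha is given in degrees
prism.clear_scene()                                      # wipe existing objects
prism.link_prism(prism.define_prism())                   # one prism at the origin
prism.make_array(x=5.0, y=0.0, no_of_prisms=4, separation=0.5)   # row of four prisms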
# -*- coding: utf-8 -*-
from setuptools import setup, find_packages
with open('requirements.txt') as f:
install_requires = f.read().strip().split('\n')
# get version from __version__ variable in proceso/__init__.py
from proceso import __version__ as version
setup(
name='proceso',
version=version,
description='A customization app for Proceso',
author='<NAME>',
author_email='<EMAIL>',
packages=find_packages(),
zip_safe=False,
include_package_data=True,
install_requires=install_requires
)
|
[
"setuptools.find_packages"
] |
[((405, 420), 'setuptools.find_packages', 'find_packages', ([], {}), '()\n', (418, 420), False, 'from setuptools import setup, find_packages\n')]
|
# -*- coding: utf-8 -*-
# Copyright (c) 2013 <NAME> <<EMAIL>>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
#
"""
envelopes.envelope
==================
This module contains the Envelope class.
"""
import sys
if sys.version_info[0] == 2:
from email import Encoders as email_encoders
elif sys.version_info[0] == 3:
from email import encoders as email_encoders
basestring = str
def unicode(_str, _charset):
return str(_str.encode(_charset), _charset)
else:
raise RuntimeError('Unsupported Python version: %d.%d.%d' % (
sys.version_info[0], sys.version_info[1], sys.version_info[2]
))
from email.header import Header
from email.mime.base import MIMEBase
from email.mime.multipart import MIMEMultipart
from email.mime.application import MIMEApplication
from email.mime.audio import MIMEAudio
from email.mime.image import MIMEImage
from email.mime.text import MIMEText
import mimetypes
import os
import re
from .conn import SMTP
from .compat import encoded
class MessageEncodeError(Exception):
pass
class Envelope(object):
"""
The Envelope class.
**Address formats**
The following formats are supported for e-mail addresses:
* ``"<EMAIL>"`` - just the e-mail address part as a string,
* ``"Some User <<EMAIL>>"`` - name and e-mail address parts as a string,
* ``("<EMAIL>", "Some User")`` - e-mail address and name parts as a tuple.
Whenever you come to manipulate addresses feel free to use any (or all) of
the formats above.
:param to_addr: ``To`` address or list of ``To`` addresses
:param from_addr: ``From`` address
:param subject: message subject
:param html_body: optional HTML part of the message
:param text_body: optional plain text part of the message
:param cc_addr: optional single CC address or list of CC addresses
:param bcc_addr: optional single BCC address or list of BCC addresses
:param headers: optional dictionary of headers
:param charset: message charset
"""
ADDR_FORMAT = '%s <%s>'
ADDR_REGEXP = re.compile(r'^(.*) <([^@]+@[^@]+)>$')
def __init__(self, to_addr=None, from_addr=None, subject=None,
html_body=None, text_body=None, cc_addr=None, bcc_addr=None,
headers=None, charset='utf-8'):
if to_addr:
if isinstance(to_addr, list):
self._to = to_addr
else:
self._to = [to_addr]
else:
self._to = []
self._from = from_addr
self._subject = subject
self._parts = []
if text_body:
self._parts.append(('text/plain', text_body, charset))
if html_body:
self._parts.append(('text/html', html_body, charset))
if cc_addr:
if isinstance(cc_addr, list):
self._cc = cc_addr
else:
self._cc = [cc_addr]
else:
self._cc = []
if bcc_addr:
if isinstance(bcc_addr, list):
self._bcc = bcc_addr
else:
self._bcc = [bcc_addr]
else:
self._bcc = []
if headers:
self._headers = headers
else:
self._headers = {}
self._charset = charset
self._addr_format = unicode(self.ADDR_FORMAT, charset)
def __repr__(self):
return u'<Envelope from="%s" to="%s" subject="%s">' % (
self._addrs_to_header([self._from]),
self._addrs_to_header(self._to),
self._subject
)
@property
def to_addr(self):
"""List of ``To`` addresses."""
return self._to
def add_to_addr(self, to_addr):
"""Adds a ``To`` address."""
self._to.append(to_addr)
def clear_to_addr(self):
"""Clears list of ``To`` addresses."""
self._to = []
@property
def from_addr(self):
return self._from
@from_addr.setter
def from_addr(self, from_addr):
self._from = from_addr
@property
def cc_addr(self):
"""List of CC addresses."""
return self._cc
def add_cc_addr(self, cc_addr):
"""Adds a CC address."""
self._cc.append(cc_addr)
def clear_cc_addr(self):
"""Clears list of CC addresses."""
self._cc = []
@property
def bcc_addr(self):
"""List of BCC addresses."""
return self._bcc
def add_bcc_addr(self, bcc_addr):
"""Adds a BCC address."""
self._bcc.append(bcc_addr)
def clear_bcc_addr(self):
"""Clears list of BCC addresses."""
self._bcc = []
@property
def charset(self):
"""Message charset."""
return self._charset
@charset.setter
def charset(self, charset):
self._charset = charset
self._addr_format = unicode(self.ADDR_FORMAT, charset)
def _addr_tuple_to_addr(self, addr_tuple):
addr = ''
if len(addr_tuple) == 2 and addr_tuple[1]:
addr = self._addr_format % (
self._header(addr_tuple[1] or ''),
addr_tuple[0] or ''
)
elif addr_tuple[0]:
addr = addr_tuple[0]
return addr
@property
def headers(self):
"""Dictionary of custom headers."""
return self._headers
def add_header(self, key, value):
"""Adds a custom header."""
self._headers[key] = value
def clear_headers(self):
"""Clears custom headers."""
self._headers = {}
def _addrs_to_header(self, addrs):
_addrs = []
for addr in addrs:
if not addr:
continue
if isinstance(addr, basestring):
if self._is_ascii(addr):
_addrs.append(self._encoded(addr))
else:
# these headers need special care when encoding, see:
# http://tools.ietf.org/html/rfc2047#section-8
# Need to break apart the name from the address if there are
# non-ascii chars
m = self.ADDR_REGEXP.match(addr)
if m:
t = (m.group(2), m.group(1))
_addrs.append(self._addr_tuple_to_addr(t))
else:
# What can we do? Just pass along what the user gave us and hope they did it right
_addrs.append(self._encoded(addr))
elif isinstance(addr, tuple):
_addrs.append(self._addr_tuple_to_addr(addr))
else:
self._raise(MessageEncodeError,
'%s is not a valid address' % str(addr))
_header = ','.join(_addrs)
return _header
def _raise(self, exc_class, message):
raise exc_class(self._encoded(message))
def _header(self, _str):
if self._is_ascii(_str):
return _str
return Header(_str, self._charset).encode()
def _is_ascii(self, _str):
return all(ord(c) < 128 for c in _str)
def _encoded(self, _str):
return encoded(_str, self._charset)
def to_mime_message(self):
"""Returns the envelope as
:py:class:`email.mime.multipart.MIMEMultipart`."""
msg = MIMEMultipart('alternative')
msg['Subject'] = self._header(self._subject or '')
msg['From'] = self._encoded(self._addrs_to_header([self._from]))
msg['To'] = self._encoded(self._addrs_to_header(self._to))
if self._cc:
msg['CC'] = self._addrs_to_header(self._cc)
if self._headers:
for key, value in self._headers.items():
msg[key] = self._header(value)
for part in self._parts:
type_maj, type_min = part[0].split('/')
if type_maj == 'text' and type_min in ('html', 'plain'):
msg.attach(MIMEText(part[1], type_min, self._charset))
else:
msg.attach(part[1])
return msg
def add_attachment(self, file_path, mimetype=None):
"""Attaches a file located at *file_path* to the envelope. If
*mimetype* is not specified an attempt to guess it is made. If nothing
is guessed then `application/octet-stream` is used."""
if not mimetype:
mimetype, _ = mimetypes.guess_type(file_path)
if mimetype is None:
mimetype = 'application/octet-stream'
type_maj, type_min = mimetype.split('/')
with open(file_path, 'rb') as fh:
part_data = fh.read()
part = MIMEBase(type_maj, type_min)
part.set_payload(part_data)
email_encoders.encode_base64(part)
part_filename = os.path.basename(self._encoded(file_path))
part.add_header('Content-Disposition', 'attachment; filename="%s"'
% part_filename)
self._parts.append((mimetype, part))
def send(self, *args, **kwargs):
"""Sends the envelope using a freshly created SMTP connection. *args*
and *kwargs* are passed directly to :py:class:`envelopes.conn.SMTP`
constructor.
Returns a tuple of SMTP object and whatever its send method returns."""
conn = SMTP(*args, **kwargs)
send_result = conn.send(self)
return conn, send_result
|
[
"email.mime.base.MIMEBase",
"re.compile",
"email.encoders.encode_base64",
"email.mime.multipart.MIMEMultipart",
"mimetypes.guess_type",
"email.header.Header",
"email.mime.text.MIMEText"
] |
[((3070, 3106), 're.compile', 're.compile', (['"""^(.*) <([^@]+@[^@]+)>$"""'], {}), "('^(.*) <([^@]+@[^@]+)>$')\n", (3080, 3106), False, 'import re\n'), ((8321, 8349), 'email.mime.multipart.MIMEMultipart', 'MIMEMultipart', (['"""alternative"""'], {}), "('alternative')\n", (8334, 8349), False, 'from email.mime.multipart import MIMEMultipart\n'), ((9375, 9406), 'mimetypes.guess_type', 'mimetypes.guess_type', (['file_path'], {}), '(file_path)\n', (9395, 9406), False, 'import mimetypes\n'), ((9633, 9661), 'email.mime.base.MIMEBase', 'MIMEBase', (['type_maj', 'type_min'], {}), '(type_maj, type_min)\n', (9641, 9661), False, 'from email.mime.base import MIMEBase\n'), ((9714, 9748), 'email.encoders.encode_base64', 'email_encoders.encode_base64', (['part'], {}), '(part)\n', (9742, 9748), True, 'from email import encoders as email_encoders\n'), ((7990, 8017), 'email.header.Header', 'Header', (['_str', 'self._charset'], {}), '(_str, self._charset)\n', (7996, 8017), False, 'from email.header import Header\n'), ((8937, 8979), 'email.mime.text.MIMEText', 'MIMEText', (['part[1]', 'type_min', 'self._charset'], {}), '(part[1], type_min, self._charset)\n', (8945, 8979), False, 'from email.mime.text import MIMEText\n')]
|
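A minimal usage sketch for the Envelope class above, restricted to the methods shown in the record (attachment handling and building the MIME message); the addresses and file path are placeholders, and sending over SMTP is omitted since the connection parameters live in the separate conn module.

envelope = Envelope(
    to_addr=('alice@example.com', 'Alice'),      # placeholder recipient
    from_addr='bob@example.com',                 # placeholder sender
    subject='Monthly report',
    text_body='Please find the report attached.',
)
envelope.add_attachment('report.pdf')            # mimetype is guessed when omitted
message = envelope.to_mime_message()             # email.mime.multipart.MIMEMultipart
print(message['From'], message['To'], message['Subject'])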
import types
import tkinter
import Pmw
import sys
import collections
class OptionMenu(Pmw.MegaWidget):
def __init__(self, parent = None, **kw):
# Define the megawidget options.
INITOPT = Pmw.INITOPT
optiondefs = (
('command', None, None),
('items', (), INITOPT),
('initialitem', None, INITOPT),
('labelmargin', 0, INITOPT),
('labelpos', None, INITOPT),
('sticky', 'ew', INITOPT),
)
self.defineoptions(kw, optiondefs)
# Initialise the base class (after defining the options).
Pmw.MegaWidget.__init__(self, parent)
# Create the components.
interior = self.interior()
self._menubutton = self.createcomponent('menubutton',
(), None,
tkinter.Menubutton, (interior,),
borderwidth = 2,
indicatoron = 1,
relief = 'raised',
anchor = 'c',
highlightthickness = 2,
direction = 'flush',
takefocus = 1,
)
self._menubutton.grid(column = 2, row = 2, sticky = self['sticky'])
self._menu = self.createcomponent('menu',
(), None,
tkinter.Menu, (self._menubutton,),
tearoff=0
)
self._menubutton.configure(menu = self._menu)
interior.grid_columnconfigure(2, weight = 1)
interior.grid_rowconfigure(2, weight = 1)
# Create the label.
self.createlabel(interior)
# Add the items specified by the initialisation option.
self._itemList = []
self.setitems(self['items'], self['initialitem'])
# Check keywords and initialise options.
self.initialiseoptions()
def setitems(self, items, index = None):
#cleaning up old items only required for Python < 2.5.4
if sys.version_info < (2, 5, 4):
# Clean up old items and callback commands.
for oldIndex in range(len(self._itemList)):
tclCommandName = str(self._menu.entrycget(oldIndex, 'command'))
if tclCommandName != '':
self._menu.deletecommand(tclCommandName)
self._menu.delete(0, 'end')
self._itemList = list(items)
# Set the items in the menu component.
for item in items:
self._menu.add_command(label = item,
command = lambda self = self, item = item: self._invoke(item))
# Set the currently selected value.
if index is None:
var = str(self._menubutton.cget('textvariable'))
if var != '':
# None means do not change text variable.
return
if len(items) == 0:
text = ''
elif str(self._menubutton.cget('text')) in items:
# Do not change selection if it is still valid
return
else:
text = items[0]
else:
index = self.index(index)
text = self._itemList[index]
self.setvalue(text)
def getcurselection(self):
var = str(self._menubutton.cget('textvariable'))
if var == '':
return str(self._menubutton.cget('text'))
else:
return self._menu.tk.globalgetvar(var)
def getvalue(self):
return self.getcurselection()
def setvalue(self, text):
var = str(self._menubutton.cget('textvariable'))
if var == '':
self._menubutton.configure(text = text)
else:
self._menu.tk.globalsetvar(var, text)
def index(self, index):
listLength = len(self._itemList)
if type(index) == int:
if index < listLength:
return index
else:
raise ValueError('index "%s" is out of range' % index)
elif index is Pmw.END:
if listLength > 0:
return listLength - 1
else:
raise ValueError('OptionMenu has no items')
else:
if index is Pmw.SELECT:
if listLength > 0:
index = self.getcurselection()
else:
raise ValueError('OptionMenu has no items')
if index in self._itemList:
return self._itemList.index(index)
raise ValueError('bad index "%s": must be a ' \
'name, a number, Pmw.END or Pmw.SELECT' % (index,))
def invoke(self, index = Pmw.SELECT):
index = self.index(index)
text = self._itemList[index]
return self._invoke(text)
def _invoke(self, text):
self.setvalue(text)
command = self['command']
if isinstance(command, collections.Callable):
return command(text)
|
[
"Pmw.MegaWidget.__init__"
] |
[((688, 725), 'Pmw.MegaWidget.__init__', 'Pmw.MegaWidget.__init__', (['self', 'parent'], {}), '(self, parent)\n', (711, 725), False, 'import Pmw\n')]
|
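A small usage sketch for the OptionMenu megawidget above, assuming a working Tkinter/Pmw installation; the label text, items, and callback are illustrative.

import tkinter
import Pmw

root = tkinter.Tk()
Pmw.initialise(root)
menu = OptionMenu(root,
                  labelpos='w',
                  label_text='Colour:',          # forwarded to the label component
                  items=('red', 'green', 'blue'),
                  initialitem='green',
                  command=print)                 # called with the selected item
menu.pack(padx=10, pady=10)
root.mainloop()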
import os
from pypdflite.pdflite import PDFLite
from pypdflite.pdfobjects.pdfcolor import PDFColor
def TableTest(test_dir):
""" Functional test for text, paragraph, and page
splitting.
"""
data = [["Heading1", "Heading2", "Heading3"],
["Cell a2", "Cell b2", "Cell c2"],
["Cell a3", "Cell b3", "Cell c3"]]
#Create PDFLITE object, initialize with path & filename.
writer = PDFLite(os.path.join(test_dir, "tests/TableTest.pdf"))
# If desired (in production code), set compression
# writer.setCompression(True)
# Set general information metadata
writer.set_information(title="Testing Table") # set optional information
# Use get_document method to get the generated document object.
document = writer.get_document()
document.set_cursor(100, 100)
document.set_font(family='arial', style='UB', size=12)
underline = document.get_font()
document.set_font(family='arial', size=12)
default_font = document.get_font()
# Example for adding short and long text and whitespaces
mytable = document.add_table(3, 3)
green = PDFColor(name='green')
default = document.add_cell_format({'font': default_font, 'align': 'left', 'border': (0, 1)})
justleft = document.add_cell_format({'left': (0, 1)})
header_format = document.add_cell_format({'font': underline, 'align': 'right', 'border': (0, 1)})
green_format = document.add_cell_format({'font': default_font, 'border': (0, 1), 'fill_color': green})
#mytable.set_column_width(1, 200)
#mytable.set_row_height(2, 200)
mytable.write_row(0, 0, data[0], header_format)
mytable.write_row(1, 0, data[1], justleft)
mytable.write_row(2, 0, data[2], green_format)
document.draw_table(mytable)
document.add_newline(4)
document.add_text("Testing followup text")
# Close writer
writer.close()
if __name__ == "__main__":
    TableTest(".")  # test_dir is required; "." resolves paths relative to the current directory
|
[
"pypdflite.pdfobjects.pdfcolor.PDFColor",
"os.path.join"
] |
[((1161, 1183), 'pypdflite.pdfobjects.pdfcolor.PDFColor', 'PDFColor', ([], {'name': '"""green"""'}), "(name='green')\n", (1169, 1183), False, 'from pypdflite.pdfobjects.pdfcolor import PDFColor\n'), ((450, 495), 'os.path.join', 'os.path.join', (['test_dir', '"""tests/TableTest.pdf"""'], {}), "(test_dir, 'tests/TableTest.pdf')\n", (462, 495), False, 'import os\n')]
|
from typing import List, Optional
from pydantic import BaseModel
from pydantic import validator
class CrossReferenceSchemaRelated(BaseModel):
curie: str
pages: Optional[List[str]] = None
is_obsolete: Optional[bool] = None
@validator('curie')
def name_must_contain_space(cls, v):
if v.count(":") != 1 and not v.startswith("DOI:"):
raise ValueError('must contain a single colon')
return v
class Config():
orm_mode = True
extra = "forbid"
schema_extra = {
"example": {
"curie": "MOD:curie",
"pages": [
"reference"
]
}
}
class CrossReferenceSchemaPost(CrossReferenceSchemaRelated):
resource_curie: Optional[str] = None
reference_curie: Optional[str] = None
class Config():
        orm_mode = True
extra = "forbid"
schema_extra = {
"example": {
"curie": "MOD:curie",
"pages": [
"reference"
],
"reference_curie": "AGR:AGRReference<number>"
}
}
class CrossReferencePageSchemaShow(BaseModel):
name: Optional[str] = None
url: Optional[str] = None
class Config():
orm_mode = True
extra = "forbid"
class CrossReferenceSchemaShow(BaseModel):
curie: str
url: Optional[str] = None
pages: Optional[List[CrossReferencePageSchemaShow]] = None
is_obsolete: bool
class CrossReferenceSchema(BaseModel):
curie: str
pages: Optional[List[CrossReferencePageSchemaShow]] = None
url: Optional[str] = None
is_obsolete: Optional[bool] = False
resource_curie: Optional[str] = None
reference_curie: Optional[str] = None
author_ids: Optional[List[int]] = None
editor_ids: Optional[List[int]] = None
class Config():
orm_mode = True
extra = "forbid"
class CrossReferenceSchemaUpdate(BaseModel):
pages: Optional[List[str]] = None
resource_curie: Optional[str] = None
reference_curie: Optional[str] = None
is_obsolete: Optional[bool] = None
class Config():
orm_mode = True
extra = "forbid"
|
[
"pydantic.validator"
] |
[((243, 261), 'pydantic.validator', 'validator', (['"""curie"""'], {}), "('curie')\n", (252, 261), False, 'from pydantic import validator\n')]
|
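A short usage sketch for the schemas above under pydantic v1 (implied by the @validator decorator); the curie values are taken from the schema_extra examples, and the invalid value is illustrative.

from pydantic import ValidationError

xref = CrossReferenceSchemaPost(curie='MOD:curie', pages=['reference'],
                                reference_curie='AGR:AGRReference<number>')
print(xref.curie)                        # 'MOD:curie' passes the colon check

try:
    CrossReferenceSchemaRelated(curie='no-colon-here')
except ValidationError as exc:
    print(exc.errors()[0]['msg'])        # 'must contain a single colon'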
# -*- coding: utf-8 -*-
# Generated by the protocol buffer compiler. DO NOT EDIT!
# source: syft_proto/frameworks/crypten/onnx_model.proto
from google.protobuf import descriptor as _descriptor
from google.protobuf import message as _message
from google.protobuf import reflection as _reflection
from google.protobuf import symbol_database as _symbol_database
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from syft_proto.types.syft.v1 import id_pb2 as syft__proto_dot_types_dot_syft_dot_v1_dot_id__pb2
DESCRIPTOR = _descriptor.FileDescriptor(
name='syft_proto/frameworks/crypten/onnx_model.proto',
package='syft_proto.frameworks.torch.tensors.interpreters.v1',
syntax='proto3',
serialized_options=b'\n@org.openmined.syftproto.frameworks.torch.tensors.interpreters.v1',
create_key=_descriptor._internal_create_key,
serialized_pb=b'\n.syft_proto/frameworks/crypten/onnx_model.proto\x12\x33syft_proto.frameworks.torch.tensors.interpreters.v1\x1a!syft_proto/types/syft/v1/id.proto\"\x9a\x01\n\tOnnxModel\x12,\n\x02id\x18\x01 \x01(\x0b\x32\x1c.syft_proto.types.syft.v1.IdR\x02id\x12)\n\x10serialized_model\x18\x02 \x01(\x0cR\x0fserializedModel\x12\x12\n\x04tags\x18\x03 \x03(\tR\x04tags\x12 \n\x0b\x64\x65scription\x18\x04 \x01(\tR\x0b\x64\x65scriptionBB\n@org.<EMAIL>mined.syftproto.frameworks.torch.tensors.interpreters.v1b\x06proto3'
,
dependencies=[syft__proto_dot_types_dot_syft_dot_v1_dot_id__pb2.DESCRIPTOR,])
_ONNXMODEL = _descriptor.Descriptor(
name='OnnxModel',
full_name='syft_proto.frameworks.torch.tensors.interpreters.v1.OnnxModel',
filename=None,
file=DESCRIPTOR,
containing_type=None,
create_key=_descriptor._internal_create_key,
fields=[
_descriptor.FieldDescriptor(
name='id', full_name='syft_proto.frameworks.torch.tensors.interpreters.v1.OnnxModel.id', index=0,
number=1, type=11, cpp_type=10, label=1,
has_default_value=False, default_value=None,
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, json_name='id', file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='serialized_model', full_name='syft_proto.frameworks.torch.tensors.interpreters.v1.OnnxModel.serialized_model', index=1,
number=2, type=12, cpp_type=9, label=1,
has_default_value=False, default_value=b"",
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, json_name='serializedModel', file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='tags', full_name='syft_proto.frameworks.torch.tensors.interpreters.v1.OnnxModel.tags', index=2,
number=3, type=9, cpp_type=9, label=3,
has_default_value=False, default_value=[],
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, json_name='tags', file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
_descriptor.FieldDescriptor(
name='description', full_name='syft_proto.frameworks.torch.tensors.interpreters.v1.OnnxModel.description', index=3,
number=4, type=9, cpp_type=9, label=1,
has_default_value=False, default_value=b"".decode('utf-8'),
message_type=None, enum_type=None, containing_type=None,
is_extension=False, extension_scope=None,
serialized_options=None, json_name='description', file=DESCRIPTOR, create_key=_descriptor._internal_create_key),
],
extensions=[
],
nested_types=[],
enum_types=[
],
serialized_options=None,
is_extendable=False,
syntax='proto3',
extension_ranges=[],
oneofs=[
],
serialized_start=139,
serialized_end=293,
)
_ONNXMODEL.fields_by_name['id'].message_type = syft__proto_dot_types_dot_syft_dot_v1_dot_id__pb2._ID
DESCRIPTOR.message_types_by_name['OnnxModel'] = _ONNXMODEL
_sym_db.RegisterFileDescriptor(DESCRIPTOR)
OnnxModel = _reflection.GeneratedProtocolMessageType('OnnxModel', (_message.Message,), {
'DESCRIPTOR' : _ONNXMODEL,
'__module__' : 'syft_proto.frameworks.crypten.onnx_model_pb2'
# @@protoc_insertion_point(class_scope:syft_proto.frameworks.torch.tensors.interpreters.v1.OnnxModel)
})
_sym_db.RegisterMessage(OnnxModel)
DESCRIPTOR._options = None
# @@protoc_insertion_point(module_scope)
|
[
"google.protobuf.reflection.GeneratedProtocolMessageType",
"google.protobuf.symbol_database.Default",
"google.protobuf.descriptor.FieldDescriptor",
"google.protobuf.descriptor.FileDescriptor"
] |
[((408, 434), 'google.protobuf.symbol_database.Default', '_symbol_database.Default', ([], {}), '()\n', (432, 434), True, 'from google.protobuf import symbol_database as _symbol_database\n'), ((549, 1461), 'google.protobuf.descriptor.FileDescriptor', '_descriptor.FileDescriptor', ([], {'name': '"""syft_proto/frameworks/crypten/onnx_model.proto"""', 'package': '"""syft_proto.frameworks.torch.tensors.interpreters.v1"""', 'syntax': '"""proto3"""', 'serialized_options': "b'\\n@org.openmined.syftproto.frameworks.torch.tensors.interpreters.v1'", 'create_key': '_descriptor._internal_create_key', 'serialized_pb': 'b\'\\n.syft_proto/frameworks/crypten/onnx_model.proto\\x123syft_proto.frameworks.torch.tensors.interpreters.v1\\x1a!syft_proto/types/syft/v1/id.proto"\\x9a\\x01\\n\\tOnnxModel\\x12,\\n\\x02id\\x18\\x01 \\x01(\\x0b2\\x1c.syft_proto.types.syft.v1.IdR\\x02id\\x12)\\n\\x10serialized_model\\x18\\x02 \\x01(\\x0cR\\x0fserializedModel\\x12\\x12\\n\\x04tags\\x18\\x03 \\x03(\\tR\\x04tags\\x12 \\n\\x0bdescription\\x18\\x04 \\x01(\\tR\\x0bdescriptionBB\\n@org.<EMAIL>mined.syftproto.frameworks.torch.tensors.interpreters.v1b\\x06proto3\'', 'dependencies': '[syft__proto_dot_types_dot_syft_dot_v1_dot_id__pb2.DESCRIPTOR]'}), '(name=\n \'syft_proto/frameworks/crypten/onnx_model.proto\', package=\n \'syft_proto.frameworks.torch.tensors.interpreters.v1\', syntax=\'proto3\',\n serialized_options=\n b\'\\n@org.openmined.syftproto.frameworks.torch.tensors.interpreters.v1\',\n create_key=_descriptor._internal_create_key, serialized_pb=\n b\'\\n.syft_proto/frameworks/crypten/onnx_model.proto\\x123syft_proto.frameworks.torch.tensors.interpreters.v1\\x1a!syft_proto/types/syft/v1/id.proto"\\x9a\\x01\\n\\tOnnxModel\\x12,\\n\\x02id\\x18\\x01 \\x01(\\x0b2\\x1c.syft_proto.types.syft.v1.IdR\\x02id\\x12)\\n\\x10serialized_model\\x18\\x02 \\x01(\\x0cR\\x0fserializedModel\\x12\\x12\\n\\x04tags\\x18\\x03 \\x03(\\tR\\x04tags\\x12 \\n\\x0bdescription\\x18\\x04 \\x01(\\tR\\x0bdescriptionBB\\n@org.<EMAIL>mined.syftproto.frameworks.torch.tensors.interpreters.v1b\\x06proto3\'\n , dependencies=[syft__proto_dot_types_dot_syft_dot_v1_dot_id__pb2.\n DESCRIPTOR])\n', (575, 1461), True, 'from google.protobuf import descriptor as _descriptor\n'), ((4064, 4236), 'google.protobuf.reflection.GeneratedProtocolMessageType', '_reflection.GeneratedProtocolMessageType', (['"""OnnxModel"""', '(_message.Message,)', "{'DESCRIPTOR': _ONNXMODEL, '__module__':\n 'syft_proto.frameworks.crypten.onnx_model_pb2'}"], {}), "('OnnxModel', (_message.Message,),\n {'DESCRIPTOR': _ONNXMODEL, '__module__':\n 'syft_proto.frameworks.crypten.onnx_model_pb2'})\n", (4104, 4236), True, 'from google.protobuf import reflection as _reflection\n'), ((1722, 2162), 'google.protobuf.descriptor.FieldDescriptor', '_descriptor.FieldDescriptor', ([], {'name': '"""id"""', 'full_name': '"""syft_proto.frameworks.torch.tensors.interpreters.v1.OnnxModel.id"""', 'index': '(0)', 'number': '(1)', 'type': '(11)', 'cpp_type': '(10)', 'label': '(1)', 'has_default_value': '(False)', 'default_value': 'None', 'message_type': 'None', 'enum_type': 'None', 'containing_type': 'None', 'is_extension': '(False)', 'extension_scope': 'None', 'serialized_options': 'None', 'json_name': '"""id"""', 'file': 'DESCRIPTOR', 'create_key': '_descriptor._internal_create_key'}), "(name='id', full_name=\n 'syft_proto.frameworks.torch.tensors.interpreters.v1.OnnxModel.id',\n index=0, number=1, type=11, cpp_type=10, label=1, has_default_value=\n False, default_value=None, message_type=None, enum_type=None,\n 
containing_type=None, is_extension=False, extension_scope=None,\n serialized_options=None, json_name='id', file=DESCRIPTOR, create_key=\n _descriptor._internal_create_key)\n", (1749, 2162), True, 'from google.protobuf import descriptor as _descriptor\n'), ((2179, 2658), 'google.protobuf.descriptor.FieldDescriptor', '_descriptor.FieldDescriptor', ([], {'name': '"""serialized_model"""', 'full_name': '"""syft_proto.frameworks.torch.tensors.interpreters.v1.OnnxModel.serialized_model"""', 'index': '(1)', 'number': '(2)', 'type': '(12)', 'cpp_type': '(9)', 'label': '(1)', 'has_default_value': '(False)', 'default_value': "b''", 'message_type': 'None', 'enum_type': 'None', 'containing_type': 'None', 'is_extension': '(False)', 'extension_scope': 'None', 'serialized_options': 'None', 'json_name': '"""serializedModel"""', 'file': 'DESCRIPTOR', 'create_key': '_descriptor._internal_create_key'}), "(name='serialized_model', full_name=\n 'syft_proto.frameworks.torch.tensors.interpreters.v1.OnnxModel.serialized_model'\n , index=1, number=2, type=12, cpp_type=9, label=1, has_default_value=\n False, default_value=b'', message_type=None, enum_type=None,\n containing_type=None, is_extension=False, extension_scope=None,\n serialized_options=None, json_name='serializedModel', file=DESCRIPTOR,\n create_key=_descriptor._internal_create_key)\n", (2206, 2658), True, 'from google.protobuf import descriptor as _descriptor\n'), ((2675, 3117), 'google.protobuf.descriptor.FieldDescriptor', '_descriptor.FieldDescriptor', ([], {'name': '"""tags"""', 'full_name': '"""syft_proto.frameworks.torch.tensors.interpreters.v1.OnnxModel.tags"""', 'index': '(2)', 'number': '(3)', 'type': '(9)', 'cpp_type': '(9)', 'label': '(3)', 'has_default_value': '(False)', 'default_value': '[]', 'message_type': 'None', 'enum_type': 'None', 'containing_type': 'None', 'is_extension': '(False)', 'extension_scope': 'None', 'serialized_options': 'None', 'json_name': '"""tags"""', 'file': 'DESCRIPTOR', 'create_key': '_descriptor._internal_create_key'}), "(name='tags', full_name=\n 'syft_proto.frameworks.torch.tensors.interpreters.v1.OnnxModel.tags',\n index=2, number=3, type=9, cpp_type=9, label=3, has_default_value=False,\n default_value=[], message_type=None, enum_type=None, containing_type=\n None, is_extension=False, extension_scope=None, serialized_options=None,\n json_name='tags', file=DESCRIPTOR, create_key=_descriptor.\n _internal_create_key)\n", (2702, 3117), True, 'from google.protobuf import descriptor as _descriptor\n')]
|
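A round-trip sketch for the generated OnnxModel message above; the byte payload, tags, and description are placeholder values, and the import path follows the __module__ recorded in the generated code.

from syft_proto.frameworks.crypten.onnx_model_pb2 import OnnxModel

model = OnnxModel()
model.serialized_model = b'placeholder-onnx-bytes'
model.tags.extend(['crypten', 'demo'])
model.description = 'example model'

payload = model.SerializeToString()        # wire-format bytes

restored = OnnxModel()
restored.ParseFromString(payload)
assert restored.description == 'example model'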
# -*-coding:Utf-8 -*
# Copyright (c) 2010-2017 <NAME>
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice, this
# list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions and the following disclaimer in the documentation
# and/or other materials provided with the distribution.
# * Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software
# without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT
# OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
"""Fichier contenant le contexte 'communication:immersion'"""
from primaires.format.constantes import ponctuations_finales
from primaires.interpreteur.contexte import Contexte
from primaires.communication.contextes.invitation import Invitation
class Immersion(Contexte):
"""Contexte d'immersion dans un canal de communication.
"""
def __init__(self, pere):
"""Constructeur du contexte"""
Contexte.__init__(self, pere)
self.opts.prompt_prf = ""
self.opts.prompt_clr = ""
self.canal = None
self.options = {
            # User options
"q" : self.opt_quit,
"w" : self.opt_who,
"h" : self.opt_help,
"i" : self.opt_invite,
"me" : self.opt_emote,
            # Moderator options
"e" : self.opt_eject,
"b" : self.opt_ban,
"a" : self.opt_announce,
            # Admin options
"p" : self.opt_promote,
"ed" : self.opt_edit,
"d" : self.opt_dissolve,
}
def __getstate__(self):
"""Nettoyage des options"""
dico_attr = Contexte.__getstate__(self)
dico_attr["options"] = dico_attr["options"].copy()
for rac, fonction in dico_attr["options"].items():
dico_attr["options"][rac] = fonction.__name__
return dico_attr
def __setstate__(self, dico_attr):
"""Récupération du contexte"""
Contexte.__setstate__(self, dico_attr)
for rac, nom in self.options.items():
fonction = getattr(self, nom)
self.options[rac] = fonction
@property
def u_nom(self):
return "immersion:" + self.canal.nom
def accueil(self):
"""Message d'accueil du contexte"""
canal = self.canal
res = canal.clr + ">|ff| Immersion dans le canal " + canal.nom
res += "\n Entrez |ent|/h|ff| pour afficher l'aide."
return res
def opt_quit(self, arguments):
"""Option quitter : /q"""
canal = self.canal
personnage = self.pere.joueur
canal.immerger_ou_sortir(personnage)
personnage << canal.clr + ">|ff| Retour au jeu."
def opt_who(self, arguments):
"""Option qui : /w"""
personnage = self.pere.joueur
res = self.canal.clr + ">|ff| Joueurs connectés :"
for connecte in self.canal.connectes:
if connecte in type(self).importeur.connex.joueurs_connectes:
if connecte is self.canal.auteur:
statut = "|rgc|@"
elif connecte in self.canal.moderateurs:
statut = "|jn|*"
else:
statut = "|bc|"
res += "\n " + statut + connecte.nom + "|ff|"
if connecte in self.canal.immerges:
res += " (immergé)"
personnage << res
def opt_help(self, arguments):
"""Options d'affichage de l'aide : /h"""
personnage = self.pere.joueur
canal = self.canal
res = canal.clr + ">|ff| Aide du canal |ent|{}|ff| ({}) :\n".format(
canal.nom, canal.resume)
res += str(canal.description)
res += "\n Administrateur : |rgc|"
res += (canal.auteur and canal.auteur.nom or "aucun") + "|ff|"
modos = ""
if len(canal.moderateurs) == 1:
modos = "\n Modérateur : |jn|" + canal.moderateurs[0].nom + "|ff|"
elif len(canal.moderateurs) > 1:
modos = "\n Modérateurs : |jn|" + "|ff|, |jn|".join(
sorted([modo.nom for modo in canal.moderateurs])) + "|ff|"
res += modos
res += "\n Commandes disponibles :"
res += "\n - |cmd|/h|ff| : affiche ce message d'aide"
res += "\n - |cmd|/w|ff| : liste les joueurs connectés au canal"
res += "\n - |cmd|/i <joueur>|ff| : invite un joueur à rejoindre "
res += "le canal"
res += "\n - |cmd|/me <message>|ff| : joue une emote dans le canal"
res += "\n - |cmd|/q|ff| : permet de sortir du mode immersif"
if personnage in canal.moderateurs or personnage is canal.auteur \
or personnage.est_immortel():
res += "\n Commandes de modération :"
res += "\n - |cmd|/e <joueur>|ff| : éjecte un joueur"
res += "\n - |cmd|/b <joueur>|ff| : bannit ou rappelle un joueur"
res += "\n - |cmd|/a <message>|ff| : permet d'envoyer une "
res += "annonce impersonnelle"
if personnage is canal.auteur or personnage.est_immortel():
res += "\n Commandes d'administration :"
res += "\n - |cmd|/p <joueur>|ff| : promeut ou déchoit un joueur "
res += "modérateur"
res += "\n - |cmd|/ed|ff| : ouvre l'éditeur du canal"
res += "\n - |cmd|/d|ff| : dissout le canal"
personnage << res
def opt_invite(self, arguments):
"""Option pour inviter un ami à rejoindre le cana : /i <joueur>"""
canal = self.canal
if not arguments or arguments.isspace():
self.pere.joueur << "|err|Vous devez spécifier un joueur.|ff|"
return
nom_joueur = arguments.split(" ")[0]
joueur = None
for t_joueur in type(self).importeur.connex.joueurs_connectes:
if nom_joueur == t_joueur.nom.lower():
joueur = t_joueur
break
if joueur is None:
self.pere.joueur << "|err|Le joueur passé en paramètre n'a pu " \
"être trouvé.|ff|"
return
if joueur in canal.connectes:
self.pere.joueur << "|err|Ce joueur est déjà connecté au canal.|ff|"
return
contexte = Invitation(joueur.instance_connexion)
contexte.emetteur = self.pere.joueur
contexte.canal = canal
contexte.actualiser()
self.pere.joueur << "|att|Vous venez d'inviter {} à rejoindre le " \
"canal {}.|ff|".format(joueur.nom, canal.nom)
def opt_emote(self, arguments):
"""Option d'emote dans le contexte immersif"""
canal = self.canal
joueur = self.pere.joueur
if not arguments or arguments.isspace():
joueur << "|err|Vous devez préciser une action.|ff|"
return
message = arguments.rstrip(" \n")
if not message[-1] in ponctuations_finales:
message += "."
im = canal.clr + "<" + joueur.nom + " " + message + ">|ff|"
ex = canal.clr + "[" + canal.nom + "] " + joueur.nom + " "
ex += message + "|ff|"
for connecte in canal.connectes:
if connecte in type(self).importeur.connex.joueurs_connectes:
if connecte in canal.immerges:
connecte << im
else:
connecte << ex
def opt_eject(self, arguments):
"""Option permettant d'éjecter un joueur connecté : /e <joueur>"""
canal = self.canal
if not self.pere.joueur in canal.moderateurs and \
self.pere.joueur is not canal.auteur and not \
self.pere.joueur.est_immortel():
self.pere.joueur << "|err|Vous n'avez pas accès à cette option.|ff|"
return
if not arguments or arguments.isspace():
self.pere.joueur << "|err|Vous devez spécifier un joueur.|ff|"
return
nom_joueur = arguments.split(" ")[0]
joueur = None
for connecte in canal.connectes:
if nom_joueur == connecte.nom.lower():
joueur = connecte
break
if joueur is None:
self.pere.joueur << "|err|Ce joueur n'est pas connecté au " \
"canal.|ff|"
return
if joueur is self.pere.joueur:
self.pere.joueur << "|err|Vous ne pouvez vous éjecter " \
"vous-même.|ff|"
return
if joueur in canal.moderateurs or joueur is canal.auteur:
self.pere.joueur << "|err|Vous ne pouvez éjecter ce joueur.|ff|"
return
canal.ejecter(joueur)
def opt_ban(self, arguments):
"""Option permettant de bannir un joueur connecté : /b <joueur>"""
canal = self.canal
if not self.pere.joueur in canal.moderateurs and \
self.pere.joueur is not canal.auteur and not \
self.pere.joueur.est_immortel():
self.pere.joueur << "|err|Vous n'avez pas accès à cette option.|ff|"
return
nom_joueur = arguments.split(" ")[0]
joueur = None
for t_joueur in type(self).importeur.connex.joueurs:
if nom_joueur == t_joueur.nom.lower():
joueur = t_joueur
break
if joueur is None:
self.pere.joueur << "|err|Le joueur passé en paramètre n'a pu " \
"être trouvé.|ff|"
return
if joueur is self.pere.joueur:
self.pere.joueur << "|err|Vous ne pouvez vous bannir vous-même.|ff|"
return
if joueur in canal.moderateurs or joueur is canal.auteur:
self.pere.joueur << "|err|Vous ne pouvez éjecter ce joueur.|ff|"
return
canal.bannir(joueur)
def opt_announce(self, arguments):
"""Option permettant d'envoyer une annonce : /a <message>"""
canal = self.canal
if not self.pere.joueur in canal.moderateurs and \
self.pere.joueur is not canal.auteur and not \
self.pere.joueur.est_immortel():
self.pere.joueur << "|err|Vous n'avez pas accès à cette option.|ff|"
return
message = arguments.rstrip(" \n")
canal.envoyer_imp(message)
def opt_promote(self, arguments):
"""Option permettant de promouvoir un joueur connecté : /p <joueur>"""
canal = self.canal
if self.pere.joueur is not canal.auteur and not \
self.pere.joueur.est_immortel():
self.pere.joueur << "|err|Vous n'avez pas accès à cette option.|ff|"
return
nom_joueur = arguments.split(" ")[0]
joueur = None
for connecte in canal.connectes:
if nom_joueur == connecte.nom.lower():
joueur = connecte
break
if joueur is None:
self.pere.joueur << "|err|Ce joueur n'est pas connecté au " \
"canal.|ff|"
return
if joueur is self.pere.joueur:
self.pere.joueur << "|err|Vous ne pouvez vous promouvoir " \
"vous-même.|ff|"
return
if joueur is canal.auteur:
self.pere.joueur << "|err|Ce joueur est déjà administrateur.|ff|"
return
canal.promouvoir_ou_dechoir(joueur)
def opt_edit(self, arguments):
"""Option ouvrant un éditeur du canal"""
canal = self.canal
if self.pere.joueur is not canal.auteur and not \
self.pere.joueur.est_immortel():
self.pere.joueur << "|err|Vous n'avez pas accès à cette option.|ff|"
return
editeur = type(self).importeur.interpreteur.construire_editeur(
"chedit", self.pere.joueur, canal)
self.pere.joueur.contextes.ajouter(editeur)
editeur.actualiser()
def opt_dissolve(self, arguments):
"""Option permettant de dissoudre le canal"""
canal = self.canal
if self.pere.joueur is not canal.auteur and not \
self.pere.joueur.est_immortel():
self.pere.joueur << "|err|Vous n'avez pas accès à cette option.|ff|"
return
joueur = self.pere.joueur
canal.immerger_ou_sortir(joueur, False)
canal.rejoindre_ou_quitter(joueur, False)
joueur << "|err|Le canal {} a été dissous.|ff|".format(canal.nom)
canal.dissoudre()
def interpreter(self, msg):
"""Méthode d'interprétation du contexte"""
if msg.startswith("/"):
            # This is an option
            # Extract the option name
mots = msg.split(" ")
option = mots[0][1:]
arguments = " ".join(mots[1:])
if option not in self.options.keys():
self.pere << "|err|Option invalide ({}).|ff|".format(option)
            else: # Call the function matching the option
fonction = self.options[option]
fonction(arguments)
else:
self.canal.envoyer(self.pere.joueur, msg)
|
[
"primaires.communication.contextes.invitation.Invitation",
"primaires.interpreteur.contexte.Contexte.__init__",
"primaires.interpreteur.contexte.Contexte.__getstate__",
"primaires.interpreteur.contexte.Contexte.__setstate__"
] |
[((1995, 2024), 'primaires.interpreteur.contexte.Contexte.__init__', 'Contexte.__init__', (['self', 'pere'], {}), '(self, pere)\n', (2012, 2024), False, 'from primaires.interpreteur.contexte import Contexte\n'), ((2714, 2741), 'primaires.interpreteur.contexte.Contexte.__getstate__', 'Contexte.__getstate__', (['self'], {}), '(self)\n', (2735, 2741), False, 'from primaires.interpreteur.contexte import Contexte\n'), ((3034, 3072), 'primaires.interpreteur.contexte.Contexte.__setstate__', 'Contexte.__setstate__', (['self', 'dico_attr'], {}), '(self, dico_attr)\n', (3055, 3072), False, 'from primaires.interpreteur.contexte import Contexte\n'), ((7409, 7446), 'primaires.communication.contextes.invitation.Invitation', 'Invitation', (['joueur.instance_connexion'], {}), '(joueur.instance_connexion)\n', (7419, 7446), False, 'from primaires.communication.contextes.invitation import Invitation\n')]
|
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Mon Jan 13 12:07:22 2020
@author: medrclaa
Stand alone script for testing arc in ARC. simply run
pytest arc_test.py
To ensure the working environment is suitable for running experiments.
If you only wish to run a single experiment then you an easily hash the
other 2 for quicker testing time.
"""
import unittest
import os
"""
run file in ukf_experiments. putting test at top level allows the
large number of
"""
"if running this file on its own. this will move cwd up to ukf_experiments."
if os.path.split(os.getcwd())[1] != "ukf_experiments":
os.chdir("..")
import arc.arc as arc
from modules.ukf_fx import HiddenPrints
class Test_arc(unittest.TestCase):
"""test the ukf runs for all 3 experiments in arc
    this is a fairly long test, but it checks that virtually everything runs,
    bar the plotting.
"""
@classmethod
def setUpClass(cls):
pass
def test_ex0(self):
"""run the arc test for the experiment 0 module
pass the test if the whole arc test completes.
Note that arc_test.py does similar but is actually runnable in
arc to check the environment is suitable there.
"""
with HiddenPrints():
arc.main(arc.ex0_input, arc.ex0_save, test=True)
def test_ex1(self):
"""another arc module for experiment 1
We choose n =5 and proportion observed prop = 0.5
"""
with HiddenPrints():
arc.main(arc.ex1_input, test=True)
def test_ex2(self):
"""another arc module test for experiment 2
We choose n = 5 and aggregate square size bin_size = 50
"""
with HiddenPrints():
arc.main(arc.ex2_input, test=True)
if __name__ == '__main__':
"test the three experiments arc functions are working"
" each test uses 5 agents and some arbitrary parameters for the sake of speed"
    arc_tests = Test_arc.setUpClass()
unittest.main()
|
[
"modules.ukf_fx.HiddenPrints",
"os.getcwd",
"os.chdir",
"arc.arc.main",
"unittest.main"
] |
[((623, 637), 'os.chdir', 'os.chdir', (['""".."""'], {}), "('..')\n", (631, 637), False, 'import os\n'), ((2089, 2104), 'unittest.main', 'unittest.main', ([], {}), '()\n', (2102, 2104), False, 'import unittest\n'), ((581, 592), 'os.getcwd', 'os.getcwd', ([], {}), '()\n', (590, 592), False, 'import os\n'), ((1286, 1300), 'modules.ukf_fx.HiddenPrints', 'HiddenPrints', ([], {}), '()\n', (1298, 1300), False, 'from modules.ukf_fx import HiddenPrints\n'), ((1314, 1362), 'arc.arc.main', 'arc.main', (['arc.ex0_input', 'arc.ex0_save'], {'test': '(True)'}), '(arc.ex0_input, arc.ex0_save, test=True)\n', (1322, 1362), True, 'import arc.arc as arc\n'), ((1561, 1575), 'modules.ukf_fx.HiddenPrints', 'HiddenPrints', ([], {}), '()\n', (1573, 1575), False, 'from modules.ukf_fx import HiddenPrints\n'), ((1589, 1623), 'arc.arc.main', 'arc.main', (['arc.ex1_input'], {'test': '(True)'}), '(arc.ex1_input, test=True)\n', (1597, 1623), True, 'import arc.arc as arc\n'), ((1808, 1822), 'modules.ukf_fx.HiddenPrints', 'HiddenPrints', ([], {}), '()\n', (1820, 1822), False, 'from modules.ukf_fx import HiddenPrints\n'), ((1836, 1870), 'arc.arc.main', 'arc.main', (['arc.ex2_input'], {'test': '(True)'}), '(arc.ex2_input, test=True)\n', (1844, 1870), True, 'import arc.arc as arc\n')]
|
import os
from pathlib import Path
from typing import List
from challenges.day3 import frequency_character
def _read_input() -> List[str]:
"""Read the input file."""
travel_map = []
current_path = Path(os.path.dirname(os.path.realpath(__file__)))
image_path = current_path / "resources" / "day3_puzzle_input.txt"
with image_path.open("r", encoding="utf-8") as input_file:
for line in input_file:
travel_map.append(str(line.strip()))
return travel_map
def test_sample_input_part1():
sample_input = ["..##.......", "#...#...#..", ".#....#..#.", "..#.#...#.#", ".#...##..#.",
"..#.##.....", ".#.#.#....#", ".#........#", "#.##...#...", "#...##....#",
".#..#...#.#"]
expected_trees_hit = 7
hit_trees = frequency_character(
sample_input, right=3, down=1, char="#")
assert hit_trees == expected_trees_hit
def test_puzzle_input_part1():
input_map = _read_input()
result = frequency_character(
input_map, right=3, down=1, char="#")
print(f"Result: {result}")
assert result == 276
def test_sample_input_part2():
sample_input = ["..##.......", "#...#...#..", ".#....#..#.", "..#.#...#.#", ".#...##..#.",
"..#.##.....", ".#.#.#....#", ".#........#", "#.##...#...", "#...##....#",
".#..#...#.#"]
expected_trees_multiplier = 336
# right, down, expected
test_paths = [(1, 1, 2), (3, 1, 7), (5, 1, 3), (7, 1, 4), (1, 2, 2)]
result = 1
for test_path in test_paths:
hit_trees = frequency_character(
sample_input, right=test_path[0], down=test_path[1], char="#")
assert hit_trees == test_path[2]
result *= hit_trees
assert result == expected_trees_multiplier
def test_puzzle_input_part2():
input_map = _read_input()
test_paths = [(1, 1, 2), (3, 1, 7), (5, 1, 3), (7, 1, 4), (1, 2, 2)]
result = 1
for test_path in test_paths:
hit_trees = frequency_character(
input_map, right=test_path[0], down=test_path[1], char="#")
result *= hit_trees
print(f"Result: {result}")
assert result == 7812180000
|
[
"os.path.realpath",
"challenges.day3.frequency_character"
] |
[((798, 858), 'challenges.day3.frequency_character', 'frequency_character', (['sample_input'], {'right': '(3)', 'down': '(1)', 'char': '"""#"""'}), "(sample_input, right=3, down=1, char='#')\n", (817, 858), False, 'from challenges.day3 import frequency_character\n'), ((987, 1044), 'challenges.day3.frequency_character', 'frequency_character', (['input_map'], {'right': '(3)', 'down': '(1)', 'char': '"""#"""'}), "(input_map, right=3, down=1, char='#')\n", (1006, 1044), False, 'from challenges.day3 import frequency_character\n'), ((1574, 1660), 'challenges.day3.frequency_character', 'frequency_character', (['sample_input'], {'right': 'test_path[0]', 'down': 'test_path[1]', 'char': '"""#"""'}), "(sample_input, right=test_path[0], down=test_path[1],\n char='#')\n", (1593, 1660), False, 'from challenges.day3 import frequency_character\n'), ((1992, 2071), 'challenges.day3.frequency_character', 'frequency_character', (['input_map'], {'right': 'test_path[0]', 'down': 'test_path[1]', 'char': '"""#"""'}), "(input_map, right=test_path[0], down=test_path[1], char='#')\n", (2011, 2071), False, 'from challenges.day3 import frequency_character\n'), ((232, 258), 'os.path.realpath', 'os.path.realpath', (['__file__'], {}), '(__file__)\n', (248, 258), False, 'import os\n')]
|
# -*- coding: utf-8 -*-
import pytest
from raiden.messages import Ping, Ack, decode, Lock, MediatedTransfer
from raiden.utils import make_privkey_address, sha3
PRIVKEY, ADDRESS = make_privkey_address()
def test_signature():
ping = Ping(nonce=0)
ping.sign(PRIVKEY, ADDRESS)
assert ping.sender == ADDRESS
def test_encoding():
ping = Ping(nonce=0)
ping.sign(PRIVKEY, ADDRESS)
decoded_ping = decode(ping.encode())
assert isinstance(decoded_ping, Ping)
assert decoded_ping.sender == ADDRESS == ping.sender
assert ping.nonce == decoded_ping.nonce
assert ping.signature == decoded_ping.signature
assert ping.cmdid == decoded_ping.cmdid
assert ping.hash == decoded_ping.hash
def test_hash():
ping = Ping(nonce=0)
ping.sign(PRIVKEY, ADDRESS)
data = ping.encode()
msghash = sha3(data)
decoded_ping = decode(data)
assert sha3(decoded_ping.encode()) == msghash
def test_ack():
echo = sha3(PRIVKEY)
ack = Ack(ADDRESS, echo)
assert ack.echo == echo
data = ack.encode()
msghash = sha3(data)
decoded_ack = decode(data)
assert decoded_ack.echo == ack.echo
assert decoded_ack.sender == ack.sender
assert sha3(decoded_ack.encode()) == msghash
def test_mediated_transfer():
nonce = balance = 1
token = recipient = target = initiator = ADDRESS
hashlock = locksroot = sha3(ADDRESS)
amount = expiration = 1
fee = 0
lock = Lock(amount, expiration, hashlock)
mediated_transfer = MediatedTransfer(
1, # TODO: fill in identifier
nonce,
token,
balance,
recipient,
locksroot,
lock,
target,
initiator,
fee,
)
assert roundtrip_serialize_mediated_transfer(mediated_transfer)
def make_lock_with_amount(amount):
return Lock(amount, 1, "a" * 32)
def make_mediated_transfer_with_amount(amount):
return MediatedTransfer(
0,
1,
ADDRESS,
amount,
ADDRESS,
"",
make_lock_with_amount(amount),
ADDRESS,
ADDRESS,
0
)
def make_mediated_transfer_with_nonce(nonce):
return MediatedTransfer(
0,
nonce,
ADDRESS,
1,
ADDRESS,
"",
make_lock_with_amount(1),
ADDRESS,
ADDRESS,
0
)
def make_mediated_transfer_with_fee(fee):
return MediatedTransfer(
0,
1,
ADDRESS,
1,
ADDRESS,
"",
make_lock_with_amount(1),
ADDRESS,
ADDRESS,
fee
)
def roundtrip_serialize_mediated_transfer(mediated_transfer):
mediated_transfer.sign(PRIVKEY, ADDRESS)
decoded_mediated_transfer = decode(mediated_transfer.encode())
assert decoded_mediated_transfer == mediated_transfer
return True
@pytest.mark.parametrize("amount", [-1, 2 ** 256])
@pytest.mark.parametrize("make", [make_lock_with_amount,
make_mediated_transfer_with_amount])
def test_amount_out_of_bounds(amount, make):
with pytest.raises(ValueError):
make(amount)
@pytest.mark.parametrize("amount", [0, 2 ** 256 - 1])
def test_mediated_transfer_amount_min_max(amount):
mediated_transfer = make_mediated_transfer_with_amount(amount)
assert roundtrip_serialize_mediated_transfer(mediated_transfer)
@pytest.mark.parametrize("nonce", [2 ** 64])
def test_mediated_transfer_nonce_out_of_bounds(nonce):
with pytest.raises(ValueError):
make_mediated_transfer_with_nonce(nonce)
@pytest.mark.parametrize("nonce", [2 ** 64 - 1])
def test_mediated_transfer_nonce_max(nonce):
mediated_transfer = make_mediated_transfer_with_nonce(nonce)
assert roundtrip_serialize_mediated_transfer(mediated_transfer)
@pytest.mark.parametrize("fee", [2 ** 256])
def test_mediated_transfer_fee_out_of_bounds(fee):
with pytest.raises(ValueError):
make_mediated_transfer_with_fee(fee)
@pytest.mark.parametrize("fee", [0, 2 ** 256 - 1])
def test_mediated_transfer_fee_min_max(fee):
mediated_transfer = make_mediated_transfer_with_fee(fee)
assert roundtrip_serialize_mediated_transfer(mediated_transfer)
|
[
"raiden.messages.MediatedTransfer",
"raiden.messages.Ping",
"raiden.messages.Ack",
"raiden.utils.sha3",
"raiden.utils.make_privkey_address",
"pytest.mark.parametrize",
"raiden.messages.Lock",
"raiden.messages.decode",
"pytest.raises"
] |
[((180, 202), 'raiden.utils.make_privkey_address', 'make_privkey_address', ([], {}), '()\n', (200, 202), False, 'from raiden.utils import make_privkey_address, sha3\n'), ((2841, 2890), 'pytest.mark.parametrize', 'pytest.mark.parametrize', (['"""amount"""', '[-1, 2 ** 256]'], {}), "('amount', [-1, 2 ** 256])\n", (2864, 2890), False, 'import pytest\n'), ((2892, 2988), 'pytest.mark.parametrize', 'pytest.mark.parametrize', (['"""make"""', '[make_lock_with_amount, make_mediated_transfer_with_amount]'], {}), "('make', [make_lock_with_amount,\n make_mediated_transfer_with_amount])\n", (2915, 2988), False, 'import pytest\n'), ((3124, 3176), 'pytest.mark.parametrize', 'pytest.mark.parametrize', (['"""amount"""', '[0, 2 ** 256 - 1]'], {}), "('amount', [0, 2 ** 256 - 1])\n", (3147, 3176), False, 'import pytest\n'), ((3366, 3409), 'pytest.mark.parametrize', 'pytest.mark.parametrize', (['"""nonce"""', '[2 ** 64]'], {}), "('nonce', [2 ** 64])\n", (3389, 3409), False, 'import pytest\n'), ((3553, 3600), 'pytest.mark.parametrize', 'pytest.mark.parametrize', (['"""nonce"""', '[2 ** 64 - 1]'], {}), "('nonce', [2 ** 64 - 1])\n", (3576, 3600), False, 'import pytest\n'), ((3782, 3824), 'pytest.mark.parametrize', 'pytest.mark.parametrize', (['"""fee"""', '[2 ** 256]'], {}), "('fee', [2 ** 256])\n", (3805, 3824), False, 'import pytest\n'), ((3960, 4009), 'pytest.mark.parametrize', 'pytest.mark.parametrize', (['"""fee"""', '[0, 2 ** 256 - 1]'], {}), "('fee', [0, 2 ** 256 - 1])\n", (3983, 4009), False, 'import pytest\n'), ((238, 251), 'raiden.messages.Ping', 'Ping', ([], {'nonce': '(0)'}), '(nonce=0)\n', (242, 251), False, 'from raiden.messages import Ping, Ack, decode, Lock, MediatedTransfer\n'), ((352, 365), 'raiden.messages.Ping', 'Ping', ([], {'nonce': '(0)'}), '(nonce=0)\n', (356, 365), False, 'from raiden.messages import Ping, Ack, decode, Lock, MediatedTransfer\n'), ((750, 763), 'raiden.messages.Ping', 'Ping', ([], {'nonce': '(0)'}), '(nonce=0)\n', (754, 763), False, 'from raiden.messages import Ping, Ack, decode, Lock, MediatedTransfer\n'), ((835, 845), 'raiden.utils.sha3', 'sha3', (['data'], {}), '(data)\n', (839, 845), False, 'from raiden.utils import make_privkey_address, sha3\n'), ((865, 877), 'raiden.messages.decode', 'decode', (['data'], {}), '(data)\n', (871, 877), False, 'from raiden.messages import Ping, Ack, decode, Lock, MediatedTransfer\n'), ((957, 970), 'raiden.utils.sha3', 'sha3', (['PRIVKEY'], {}), '(PRIVKEY)\n', (961, 970), False, 'from raiden.utils import make_privkey_address, sha3\n'), ((981, 999), 'raiden.messages.Ack', 'Ack', (['ADDRESS', 'echo'], {}), '(ADDRESS, echo)\n', (984, 999), False, 'from raiden.messages import Ping, Ack, decode, Lock, MediatedTransfer\n'), ((1066, 1076), 'raiden.utils.sha3', 'sha3', (['data'], {}), '(data)\n', (1070, 1076), False, 'from raiden.utils import make_privkey_address, sha3\n'), ((1095, 1107), 'raiden.messages.decode', 'decode', (['data'], {}), '(data)\n', (1101, 1107), False, 'from raiden.messages import Ping, Ack, decode, Lock, MediatedTransfer\n'), ((1377, 1390), 'raiden.utils.sha3', 'sha3', (['ADDRESS'], {}), '(ADDRESS)\n', (1381, 1390), False, 'from raiden.utils import make_privkey_address, sha3\n'), ((1443, 1477), 'raiden.messages.Lock', 'Lock', (['amount', 'expiration', 'hashlock'], {}), '(amount, expiration, hashlock)\n', (1447, 1477), False, 'from raiden.messages import Ping, Ack, decode, Lock, MediatedTransfer\n'), ((1502, 1600), 'raiden.messages.MediatedTransfer', 'MediatedTransfer', (['(1)', 'nonce', 'token', 'balance', 'recipient', 
'locksroot', 'lock', 'target', 'initiator', 'fee'], {}), '(1, nonce, token, balance, recipient, locksroot, lock,\n target, initiator, fee)\n', (1518, 1600), False, 'from raiden.messages import Ping, Ack, decode, Lock, MediatedTransfer\n'), ((1828, 1853), 'raiden.messages.Lock', 'Lock', (['amount', '(1)', "('a' * 32)"], {}), "(amount, 1, 'a' * 32)\n", (1832, 1853), False, 'from raiden.messages import Ping, Ack, decode, Lock, MediatedTransfer\n'), ((3073, 3098), 'pytest.raises', 'pytest.raises', (['ValueError'], {}), '(ValueError)\n', (3086, 3098), False, 'import pytest\n'), ((3474, 3499), 'pytest.raises', 'pytest.raises', (['ValueError'], {}), '(ValueError)\n', (3487, 3499), False, 'import pytest\n'), ((3885, 3910), 'pytest.raises', 'pytest.raises', (['ValueError'], {}), '(ValueError)\n', (3898, 3910), False, 'import pytest\n')]
|
import pytest
from billy.utils.search import google_book_search
class TestGoogleBookSearch(object):
def test_search_returns_200(self, mock):
"""Ensure a basic search returns a 200 request"""
assert google_book_search("<NAME>")["status"] == 200
def test_search_body_returns_dict(self, mock):
"""Ensure we're getting a JSON dict back from google_book_search()"""
assert type(google_book_search("<NAME>")["body"]) is dict
|
[
"billy.utils.search.google_book_search"
] |
[((220, 248), 'billy.utils.search.google_book_search', 'google_book_search', (['"""<NAME>"""'], {}), "('<NAME>')\n", (238, 248), False, 'from billy.utils.search import google_book_search\n'), ((416, 444), 'billy.utils.search.google_book_search', 'google_book_search', (['"""<NAME>"""'], {}), "('<NAME>')\n", (434, 444), False, 'from billy.utils.search import google_book_search\n')]
|
import math
class Config_1:
DATASET_ROOT_DIR = '../data/test1/Data' # The data set root directory
    DATASET_SCALE = 0  # How many users' trajectory data are chosen
    TRAJACTORY_SCALE = 20  # How many trajectories are chosen per user
RANGE = { # To pick trajectory points within the range
'status': False
}
GROUP_SIZE_THRESHOLD = 3 # group size threshold φ
COHERENCE_THRESHOLD = 0.4 # coherence threshold τ
SCALING_FACTOR = 1.5 # scaling factor δ
TURNING_ALPHA = 5 # tuning parameter α
TURNING_BETA = 2 # tuning parameter β
RADIUS = SCALING_FACTOR * \
((-math.log(COHERENCE_THRESHOLD)) ** (1 / TURNING_ALPHA))
class Config_2:
DATASET_ROOT_DIR = '../data/test2/Data' # The data set root directory
    DATASET_SCALE = 3  # How many users' trajectory data are chosen
    TRAJACTORY_SCALE = 4  # How many trajectories are chosen per user
RANGE = { # To pick trajectory points within the range
'status': True,
'longitude_upper_bound': 116.32,
'longitude_lower_bound': 116.304,
'latitude_upper_bound': 40.018,
'latitude_lower_bound': 40.004,
}
GROUP_SIZE_THRESHOLD = 3 # group size threshold φ
COHERENCE_THRESHOLD = 0.99 # coherence threshold τ
SCALING_FACTOR = 15e-4 # scaling factor δ
TURNING_ALPHA = 5 # tuning parameter α
TURNING_BETA = 2 # tuning parameter β
RADIUS = SCALING_FACTOR * \
((-math.log(COHERENCE_THRESHOLD)) ** (1 / TURNING_ALPHA))
class Config_3:
DATASET_ROOT_DIR = '../data/test3/Data' # The data set root directory
    DATASET_SCALE = 0  # How many users' trajectory data are chosen
    TRAJACTORY_SCALE = 20  # How many trajectories are chosen per user
RANGE = { # To pick trajectory points within the range
'status': False
}
GROUP_SIZE_THRESHOLD = 3 # group size threshold φ
COHERENCE_THRESHOLD = 0.49 # coherence threshold τ
SCALING_FACTOR = 1.1 # scaling factor δ
TURNING_ALPHA = 5 # tuning parameter α
TURNING_BETA = 2 # tuning parameter β
RADIUS = SCALING_FACTOR * \
((-math.log(COHERENCE_THRESHOLD)) ** (1 / TURNING_ALPHA))
class Config(Config_3):
__attr__ = ['DATASET_ROOT_DIR', 'DATASET_SCALE', 'TRAJACTORY_SCALE', 'RANGE',
'GROUP_SIZE_THRESHOLD', 'COHERENCE_THRESHOLD', 'SCALING_FACTOR',
'TURNING_ALPHA', 'TURNING_BETA', 'RADIUS']
def __str__(self):
s = ""
for attr in self.__attr__:
s += attr + ' ' + str(getattr(self, attr)) + '\n'
return s
def __repr__(self):
return self.__str__()
|
[
"math.log"
] |
[((746, 775), 'math.log', 'math.log', (['COHERENCE_THRESHOLD'], {}), '(COHERENCE_THRESHOLD)\n', (754, 775), False, 'import math\n'), ((1698, 1727), 'math.log', 'math.log', (['COHERENCE_THRESHOLD'], {}), '(COHERENCE_THRESHOLD)\n', (1706, 1727), False, 'import math\n'), ((2486, 2515), 'math.log', 'math.log', (['COHERENCE_THRESHOLD'], {}), '(COHERENCE_THRESHOLD)\n', (2494, 2515), False, 'import math\n')]
|
import os
import psutil
import time
def process_time():
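    """Return the elapsed wall-clock time, in seconds, since the current process was created."""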
p = psutil.Process(os.getpid())
return time.time() - p.create_time()
|
[
"time.time",
"os.getpid"
] |
[((81, 92), 'os.getpid', 'os.getpid', ([], {}), '()\n', (90, 92), False, 'import os\n'), ((105, 116), 'time.time', 'time.time', ([], {}), '()\n', (114, 116), False, 'import time\n')]
|
import discord
from discord.ext import commands
from discord.utils import get
class c269(commands.Cog, name="c269"):
def __init__(self, bot: commands.Bot):
self.bot = bot
@commands.command(name='Vir_the_True_Elementalist', aliases=['c269', 'Elementalist_1', 'Distasta_Master_1'])
async def example_embed(self, ctx):
embed = discord.Embed(title='Vir the True Elementalist',
color=0xFDE68A)
embed.set_thumbnail(url='https://www.duelingbook.com/images/custom-pics/2300000/2360695.jpg')
embed.add_field(name='Status (Archetype)', value='Casual:3/Tournament:3 (Elementalist/Distasta Master)', inline=True)
embed.add_field(name='Type (Attribute)', value='Spellcaster/Normal (LIGHT)', inline=False)
embed.add_field(name='Level (ATK/DEF)', value='3 (1200/950)', inline=False)
embed.add_field(name='Lore Text', value='Some say that whenever a disaster occurs, Vir is near by practicing his magic. However, if Vir ever learns the secrets of the Book of Natural Disasters, with knowledge of the ancient scriptures, he will be able to tame the Distasta Masters and bring the world into a new age of doom.', inline=False)
embed.set_footer(text='Set Code: GMMP')
await ctx.send(embed=embed)
def setup(bot: commands.Bot):
bot.add_cog(c269(bot))
|
[
"discord.Embed",
"discord.ext.commands.command"
] |
[((190, 301), 'discord.ext.commands.command', 'commands.command', ([], {'name': '"""Vir_the_True_Elementalist"""', 'aliases': "['c269', 'Elementalist_1', 'Distasta_Master_1']"}), "(name='Vir_the_True_Elementalist', aliases=['c269',\n 'Elementalist_1', 'Distasta_Master_1'])\n", (206, 301), False, 'from discord.ext import commands\n'), ((358, 422), 'discord.Embed', 'discord.Embed', ([], {'title': '"""Vir the True Elementalist"""', 'color': '(16639626)'}), "(title='Vir the True Elementalist', color=16639626)\n", (371, 422), False, 'import discord\n')]
|
"""
This example shows some more complex querying
Key points are filtering by related names and using Q objects
"""
import asyncio
from tortoise import Tortoise, fields
from tortoise.models import Model
from tortoise.query_utils import Q
class Tournament(Model):
id = fields.IntField(pk=True)
name = fields.TextField()
def __str__(self):
return self.name
class Event(Model):
id = fields.IntField(pk=True)
name = fields.TextField()
tournament = fields.ForeignKeyField("models.Tournament", related_name="events")
participants = fields.ManyToManyField(
"models.Team", related_name="events", through="event_team"
)
def __str__(self):
return self.name
class Team(Model):
id = fields.IntField(pk=True)
name = fields.TextField()
def __str__(self):
return self.name
async def run():
await Tortoise.init(config_file="config.json")
await Tortoise.generate_schemas()
tournament = Tournament(name="Tournament")
await tournament.save()
second_tournament = Tournament(name="Tournament 2")
await second_tournament.save()
event_first = Event(name="1", tournament=tournament)
await event_first.save()
event_second = await Event.create(name="2", tournament=second_tournament)
await Event.create(name="3", tournament=tournament)
await Event.create(name="4", tournament=second_tournament)
await Event.filter(tournament=tournament)
team_first = Team(name="First")
await team_first.save()
team_second = Team(name="Second")
await team_second.save()
await team_first.events.add(event_first)
await event_second.participants.add(team_second)
print(
await Event.filter(Q(id__in=[event_first.id, event_second.id]) | Q(name="3"))
.filter(participants__not=team_second.id)
.order_by("tournament__id")
.distinct()
)
print(await Team.filter(events__tournament_id=tournament.id).order_by("-events__name"))
print(
await Tournament.filter(events__name__in=["1", "3"])
.order_by("-events__participants__name")
.distinct()
)
print(await Team.filter(name__icontains="CON"))
print(await Tournament.filter(events__participants__name__startswith="Fir"))
print(await Tournament.filter(id__icontains=1).count())
if __name__ == "__main__":
loop = asyncio.get_event_loop()
loop.run_until_complete(run())
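
# For reference, a plausible configuration for the Tortoise.init(config_file="config.json")
# call in run() above (an assumption -- the actual config.json is not included with this
# example). The same structure could be passed directly via Tortoise.init(config=...):
TORTOISE_CONFIG_EXAMPLE = {
    "connections": {"default": "sqlite://:memory:"},
    "apps": {
        "models": {"models": ["__main__"], "default_connection": "default"},
    },
}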
|
[
"tortoise.Tortoise.generate_schemas",
"tortoise.fields.ManyToManyField",
"tortoise.fields.IntField",
"tortoise.query_utils.Q",
"tortoise.fields.ForeignKeyField",
"asyncio.get_event_loop",
"tortoise.fields.TextField",
"tortoise.Tortoise.init"
] |
[((276, 300), 'tortoise.fields.IntField', 'fields.IntField', ([], {'pk': '(True)'}), '(pk=True)\n', (291, 300), False, 'from tortoise import Tortoise, fields\n'), ((312, 330), 'tortoise.fields.TextField', 'fields.TextField', ([], {}), '()\n', (328, 330), False, 'from tortoise import Tortoise, fields\n'), ((411, 435), 'tortoise.fields.IntField', 'fields.IntField', ([], {'pk': '(True)'}), '(pk=True)\n', (426, 435), False, 'from tortoise import Tortoise, fields\n'), ((447, 465), 'tortoise.fields.TextField', 'fields.TextField', ([], {}), '()\n', (463, 465), False, 'from tortoise import Tortoise, fields\n'), ((483, 549), 'tortoise.fields.ForeignKeyField', 'fields.ForeignKeyField', (['"""models.Tournament"""'], {'related_name': '"""events"""'}), "('models.Tournament', related_name='events')\n", (505, 549), False, 'from tortoise import Tortoise, fields\n'), ((569, 656), 'tortoise.fields.ManyToManyField', 'fields.ManyToManyField', (['"""models.Team"""'], {'related_name': '"""events"""', 'through': '"""event_team"""'}), "('models.Team', related_name='events', through=\n 'event_team')\n", (591, 656), False, 'from tortoise import Tortoise, fields\n'), ((745, 769), 'tortoise.fields.IntField', 'fields.IntField', ([], {'pk': '(True)'}), '(pk=True)\n', (760, 769), False, 'from tortoise import Tortoise, fields\n'), ((781, 799), 'tortoise.fields.TextField', 'fields.TextField', ([], {}), '()\n', (797, 799), False, 'from tortoise import Tortoise, fields\n'), ((2372, 2396), 'asyncio.get_event_loop', 'asyncio.get_event_loop', ([], {}), '()\n', (2394, 2396), False, 'import asyncio\n'), ((878, 918), 'tortoise.Tortoise.init', 'Tortoise.init', ([], {'config_file': '"""config.json"""'}), "(config_file='config.json')\n", (891, 918), False, 'from tortoise import Tortoise, fields\n'), ((929, 956), 'tortoise.Tortoise.generate_schemas', 'Tortoise.generate_schemas', ([], {}), '()\n', (954, 956), False, 'from tortoise import Tortoise, fields\n'), ((1726, 1769), 'tortoise.query_utils.Q', 'Q', ([], {'id__in': '[event_first.id, event_second.id]'}), '(id__in=[event_first.id, event_second.id])\n', (1727, 1769), False, 'from tortoise.query_utils import Q\n'), ((1772, 1783), 'tortoise.query_utils.Q', 'Q', ([], {'name': '"""3"""'}), "(name='3')\n", (1773, 1783), False, 'from tortoise.query_utils import Q\n')]
|
import os
from remotepixel import cbers_ndvi
CBERS_SCENE = "CBERS_4_MUX_20171121_057_094_L2"
CBERS_BUCKET = os.path.join(os.path.dirname(__file__), "fixtures", "cbers-pds")
CBERS_PATH = os.path.join(
CBERS_BUCKET, "CBERS4/MUX/057/094/CBERS_4_MUX_20171121_057_094_L2/"
)
def test_point_valid(monkeypatch):
"""Should work as expected (read data, calculate NDVI and return json info)."""
monkeypatch.setattr(cbers_ndvi, "CBERS_BUCKET", CBERS_BUCKET)
expression = "(b8 - b7) / (b8 + b7)"
coords = [53.9097, 5.3674]
expectedContent = {
"date": "2017-11-21",
"scene": CBERS_SCENE,
"ndvi": -0.1320754716981132,
}
assert cbers_ndvi.point(CBERS_SCENE, coords, expression) == expectedContent
def test_point_invalid(monkeypatch):
"""Should work as expected and retour 0 for outside point."""
monkeypatch.setattr(cbers_ndvi, "CBERS_BUCKET", CBERS_BUCKET)
expression = "(b8 - b7) / (b8 + b7)"
coords = [53.9097, 2.3674]
expectedContent = {"date": "2017-11-21", "scene": CBERS_SCENE, "ndvi": 0.}
assert cbers_ndvi.point(CBERS_SCENE, coords, expression) == expectedContent
def test_area_valid(monkeypatch):
"""Should work as expected (read data, calculate NDVI and return img)."""
monkeypatch.setattr(cbers_ndvi, "CBERS_BUCKET", CBERS_BUCKET)
expression = "(b8 - b7) / (b8 + b7)"
bbox = [53.0859375, 5.266007882805496, 53.4375, 5.615985819155334]
res = cbers_ndvi.area(CBERS_SCENE, bbox, expression)
assert res["date"] == "2017-11-21"
|
[
"remotepixel.cbers_ndvi.point",
"os.path.dirname",
"os.path.join",
"remotepixel.cbers_ndvi.area"
] |
[((189, 274), 'os.path.join', 'os.path.join', (['CBERS_BUCKET', '"""CBERS4/MUX/057/094/CBERS_4_MUX_20171121_057_094_L2/"""'], {}), "(CBERS_BUCKET,\n 'CBERS4/MUX/057/094/CBERS_4_MUX_20171121_057_094_L2/')\n", (201, 274), False, 'import os\n'), ((124, 149), 'os.path.dirname', 'os.path.dirname', (['__file__'], {}), '(__file__)\n', (139, 149), False, 'import os\n'), ((1447, 1493), 'remotepixel.cbers_ndvi.area', 'cbers_ndvi.area', (['CBERS_SCENE', 'bbox', 'expression'], {}), '(CBERS_SCENE, bbox, expression)\n', (1462, 1493), False, 'from remotepixel import cbers_ndvi\n'), ((674, 723), 'remotepixel.cbers_ndvi.point', 'cbers_ndvi.point', (['CBERS_SCENE', 'coords', 'expression'], {}), '(CBERS_SCENE, coords, expression)\n', (690, 723), False, 'from remotepixel import cbers_ndvi\n'), ((1076, 1125), 'remotepixel.cbers_ndvi.point', 'cbers_ndvi.point', (['CBERS_SCENE', 'coords', 'expression'], {}), '(CBERS_SCENE, coords, expression)\n', (1092, 1125), False, 'from remotepixel import cbers_ndvi\n')]
|
#!/usr/bin/env python
#
# __COPYRIGHT__
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and associated documentation files (the
# "Software"), to deal in the Software without restriction, including
# without limitation the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the Software, and to
# permit persons to whom the Software is furnished to do so, subject to
# the following conditions:
#
# The above copyright notice and this permission notice shall be included
# in all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
# LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
# OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
# WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
__revision__ = "__FILE__ __REVISION__ __DATE__ __DEVELOPER__"
"""
Verify that the time subcommand's --which option doesn't fail, and prints
an appropriate error message, if a log file doesn't have its specific
requested results.
"""
import TestSCons_time
test = TestSCons_time.TestSCons_time()
header = """\
set key bottom left
plot '-' title "Startup" with lines lt 1
# Startup
"""
footer = """\
e
"""
line_fmt = "%s 11.123456\n"
lines = []
for i in range(9):
logfile_name = 'foo-%s-0.log' % i
if i == 5:
test.write(test.workpath(logfile_name), "NO RESULTS HERE!\n")
else:
test.fake_logfile(logfile_name)
lines.append(line_fmt % i)
expect = [header] + lines + [footer]
stderr = "file 'foo-5-0.log' has no results!\n"
test.run(arguments = 'time --fmt gnuplot --which total foo*.log',
stdout = ''.join(expect),
stderr = stderr)
expect = [header] + [footer]
test.run(arguments = 'time --fmt gnuplot foo-5-0.log',
stdout = ''.join(expect),
stderr = stderr)
test.pass_test()
# Local Variables:
# tab-width:4
# indent-tabs-mode:nil
# End:
# vim: set expandtab tabstop=4 shiftwidth=4:
|
[
"TestSCons_time.TestSCons_time"
] |
[((1367, 1398), 'TestSCons_time.TestSCons_time', 'TestSCons_time.TestSCons_time', ([], {}), '()\n', (1396, 1398), False, 'import TestSCons_time\n')]
|
"""
The container to store indexes in active learning.
Serve as the basic type of 'set' operation.
"""
# Authors: <NAME>
# License: BSD 3 clause
from __future__ import division
import collections
import copy
import numpy as np
from .multi_label_tools import check_index_multilabel, infer_label_size_multilabel, flattern_multilabel_index, \
integrate_multilabel_index
from ..utils.ace_warnings import *
from ..utils.interface import BaseCollection
from ..utils.misc import randperm
class IndexCollection(BaseCollection):
"""Index Collection.
    The IndexCollection class is a basic data type supporting set operations.
    Multiple different types of elements are supported for active learning.
    It also checks the validity of the given operations.
Note that:
1. The types of elements should be same
1. If multiple elements to update, it should be a list, numpy.ndarray or IndexCollection
object, otherwise, it will be cheated as one single element. (If single element
contains multiple values, take tuple as the type of element.)
Parameters
----------
data : list or np.ndarray or object, optional (default=None)
shape [n_element]. Element should be int or tuple.
The meaning of elements can be defined by users.
Some examples of elements:
(example_index, label_index) for instance-label pair query.
(example_index, feature_index) for feature query,
(example_index, example_index) for active clustering;
If int, it may be the index of an instance, for example.
Attributes
----------
index: list, shape (1, n_elements)
A list contains all elements in this container.
Examples
--------
>>> a = IndexCollection([1, 2, 3])
>>> a.update([4,5])
[1, 2, 3, 4, 5]
>>> a.difference_update([1,2])
[3, 4, 5]
"""
def __init__(self, data=None):
if data is None or len(data) == 0:
self._innercontainer = []
else:
if isinstance(data, IndexCollection):
self._innercontainer = copy.deepcopy(data.index)
self._element_type = data.get_elementType()
return
if not isinstance(data, (list, np.ndarray)):
data = [data]
self._innercontainer = list(np.unique([i for i in data], axis=0))
if len(self._innercontainer) != len(data):
warnings.warn("There are %d same elements in the given data" % (len(data) - len(self._innercontainer)),
category=RepeatElementWarning,
stacklevel=3)
datatype = collections.Counter([type(i) for i in self._innercontainer])
if len(datatype) != 1:
raise TypeError("Different types found in the given _indexes.")
tmp_data = self._innercontainer[0]
if isinstance(tmp_data, np.generic):
# self._element_type = type(np.asscalar(tmp_data)) # deprecated in numpy v1.16
self._element_type = type(tmp_data.item())
else:
self._element_type = type(tmp_data)
@property
def index(self):
"""
Get the index of data.
"""
return copy.deepcopy(self._innercontainer)
def __getitem__(self, item):
return self._innercontainer.__getitem__(item)
def get_elementType(self):
"""
Return the type of data.
"""
return self._element_type
def pop(self):
"""
Return the popped value. Raise KeyError if empty.
"""
return self._innercontainer.pop()
def add(self, value):
"""
Add element.
It will warn if the value to add is existent.
Parameters
----------
value: object
same type of the element already in the set.
Raise if unknown type is given.
Returns
-------
self: object
return self.
"""
if self._element_type is None:
self._element_type = type(value)
# check validation
if isinstance(value, np.generic):
# value = np.asscalar(value) # deprecated in numpy v1.16
value = value.item()
if not isinstance(value, self._element_type):
raise TypeError(
"A %s parameter is expected, but received: %s" % (str(self._element_type), str(type(value))))
if value in self._innercontainer:
warnings.warn("Adding element %s has already in the collection, skip." % (value.__str__()),
category=RepeatElementWarning,
stacklevel=3)
else:
self._innercontainer.append(value)
return self
def discard(self, value):
"""Remove an element.
It will warn if the value to discard is inexistent.
Parameters
----------
value: object
Value to discard.
Returns
-------
self: object
Return self.
"""
if value not in self._innercontainer:
warnings.warn("Element %s to discard is not in the collection, skip." % (value.__str__()),
category=InexistentElementWarning,
stacklevel=3)
else:
self._innercontainer.remove(value)
return self
def difference_update(self, other):
"""Remove all elements of another array from this container.
Parameters
----------
other: object
Elements to discard. Note that, if multiple indexes are contained,
a list, numpy.ndarray or IndexCollection should be given. Otherwise,
            it will be treated as a single element.
Returns
-------
self: object
Return self.
"""
if not isinstance(other, (list, np.ndarray, IndexCollection)):
other = [other]
for item in other:
self.discard(item)
return self
def update(self, other):
"""Update self with the union of itself and others.
Parameters
----------
other: object
Elements to add. Note that, if multiple indexes are contained,
a list, numpy.ndarray or IndexCollection should be given. Otherwise,
            it will be treated as a single element.
Returns
-------
self: object
Return self.
"""
if not isinstance(other, (list, np.ndarray, IndexCollection)):
other = [other]
for item in other:
self.add(item)
return self
def random_sampling(self, rate=0.3):
"""Return a random sampled subset of this collection.
Parameters
----------
rate: float, optional (default=None)
The rate of sampling. Must be a number in [0,1].
Returns
-------
array: IndexCollection
The sampled index collection.
"""
assert (0 < rate < 1)
perm = randperm(len(self) - 1, round(rate * len(self)))
return IndexCollection([self.index[i] for i in perm])
class MultiLabelIndexCollection(IndexCollection):
"""Class for managing multi-label indexes.
This class stores indexes in multi-label. Each element should be a tuple.
A single index should only have 1 element (example_index, ) to query all labels or
2 elements (example_index, [label_indexes]) to query specific labels.
Some examples of valid multi-label indexes include:
queried_index = (1, [3,4])
queried_index = (1, [3])
queried_index = (1, 3)
queried_index = (1, (3))
queried_index = (1, (3,4))
queried_index = (1, ) # query all labels
    Several validity checks are implemented in this class,
    such as repeated elements and indexes out of bound.
Parameters
----------
data : list or np.ndarray of a single tuple, optional (default=None)
shape [n_element]. All elements should be tuples.
label_size: int, optional (default=None)
The number of classes. If not provided, an infer is attempted, raise if fail.
Attributes
----------
index: list, shape (1, n_elements)
A list contains all elements in this container.
Examples
--------
>>> multi_lab_ind1 = MultiLabelIndexCollection([(0, 1), (0, 2), (0, (3, 4)), (1, (0, 1))], label_size=5)
{(0, 1), (1, 1), (0, 4), (1, 0), (0, 2), (0, 3)}
>>> multi_lab_ind1.update((0, 0))
{(0, 1), (0, 0), (1, 1), (0, 4), (1, 0), (0, 2), (0, 3)}
>>> multi_lab_ind1.update([(1, 2), (1, (3, 4))])
{(0, 1), (1, 2), (0, 0), (1, 3), (1, 4), (1, 1), (0, 4), (1, 0), (0, 2), (0, 3)}
>>> multi_lab_ind1.update([(2,)])
{(0, 1), (1, 2), (0, 0), (1, 3), (2, 2), (1, 4), (2, 1), (2, 0), (1, 1), (2, 3), (2, 4), (0, 4), (1, 0), (0, 2), (0, 3)}
>>> multi_lab_ind1.difference_update([(0,)])
{(1, 2), (1, 3), (2, 2), (1, 4), (2, 1), (2, 0), (1, 1), (2, 3), (2, 4), (1, 0)}
"""
def __init__(self, data=None, label_size=None):
if data is None or len(data) == 0:
self._innercontainer = set()
if label_size is None:
warnings.warn("This collection does not have a label_size value, set it manually or "
"it will raise when decomposing indexes.",
category=ValidityWarning)
self._label_size = label_size
else:
if isinstance(data, MultiLabelIndexCollection):
self._innercontainer = copy.deepcopy(data.index)
self._label_size = data._label_size
return
# check given indexes
data = check_index_multilabel(data)
if label_size is None:
self._label_size = infer_label_size_multilabel(data, check_arr=False)
else:
self._label_size = label_size
# decompose all label queries.
decomposed_data = flattern_multilabel_index(data, self._label_size, check_arr=False)
self._innercontainer = set(decomposed_data)
if len(self._innercontainer) != len(decomposed_data):
warnings.warn(
"There are %d same elements in the given data" % (len(data) - len(self._innercontainer)),
category=RepeatElementWarning,
stacklevel=3)
@property
def index(self):
"""
Get the index of data.
"""
return list(self._innercontainer)
def add(self, value):
"""Add element.
It will warn if the value to add is existent. Raise if
invalid type of value is given.
Parameters
----------
value: tuple
Index for adding. Raise if index is out of bound.
Returns
-------
self: object
return self.
"""
# check validation
assert (isinstance(value, tuple))
if len(value) == 1:
value = [(value[0], i) for i in range(self._label_size)]
return self.update(value)
elif len(value) == 2:
if isinstance(value[1], collections.Iterable):
for item in value[1]:
if item >= self._label_size:
raise ValueError("Index %s is out of bound %s" % (str(item), str(self._label_size)))
else:
if value[1] >= self._label_size:
raise ValueError("Index %s is out of bound %s" % (str(value[1]), str(self._label_size)))
else:
raise ValueError("A tuple with 1 or 2 elements is expected, but received: %s" % str(value))
if value in self._innercontainer:
warnings.warn("Adding element %s has already in the collection, skip." % (value.__str__()),
category=RepeatElementWarning,
stacklevel=3)
else:
self._innercontainer.add(value)
return self
def discard(self, value):
"""Remove an element.
It will warn if the value to discard is inexistent. Raise if
invalid type of value is given.
Parameters
----------
value: tuple
            Index to discard. Raise if the index is out of bound.
Returns
-------
self: object
return self.
"""
assert (isinstance(value, tuple))
if len(value) == 1:
value = [(value[0], i) for i in range(self._label_size)]
return self.difference_update(value)
if value not in self._innercontainer:
warnings.warn("Element %s to discard is not in the collection, skip." % (value.__str__()),
category=InexistentElementWarning,
stacklevel=3)
else:
self._innercontainer.discard(value)
return self
def difference_update(self, other):
"""Remove all elements of another array from this container.
Parameters
----------
other: object
Elements to discard. Note that, if multiple indexes are contained,
a list, numpy.ndarray or MultiLabelIndexCollection should be given. Otherwise,
a tuple should be given.
Returns
-------
self: object
Return self.
"""
if isinstance(other, (list, np.ndarray, MultiLabelIndexCollection)):
label_ind = flattern_multilabel_index(other, self._label_size)
for j in label_ind:
self.discard(j)
elif isinstance(other, tuple):
self.discard(other)
else:
raise TypeError(
"A list or np.ndarray is expected if multiple indexes are "
"contained. Otherwise, a tuple should be provided")
return self
def update(self, other):
"""Update self with the union of itself and others.
Parameters
----------
other: object
Elements to add. Note that, if multiple indexes are contained,
a list, numpy.ndarray or MultiLabelIndexCollection should be given. Otherwise,
a tuple should be given.
Returns
-------
self: object
Return self.
"""
if isinstance(other, (list, np.ndarray, MultiLabelIndexCollection)):
label_ind = flattern_multilabel_index(other, self._label_size)
for j in label_ind:
self.add(j)
elif isinstance(other, tuple):
self.add(other)
else:
raise TypeError(
"A list or np.ndarray is expected if multiple indexes are "
"contained. Otherwise, a tuple should be provided")
return self
def get_onedim_index(self, order='C', ins_num=None):
"""Get the 1d index.
Parameters
----------
order : {'C', 'F'}, optional (default='C')
Determines whether the indices should be viewed as indexing in
row-major (C-style) or column-major (Matlab-style) order.
ins_num: int, optional
The total number of instance. Must be provided if the order is 'F'.
Examples
--------
>>> b = [1, 4, 11]
>>> mi = MultiLabelIndexCollection.construct_by_1d_array(array=b, label_mat_shape=(3, 4))
>>> print(mi)
{(1, 0), (2, 3), (1, 1)}
>>> print('col major:', mi.get_onedim_index(order='F', ins_num=3))
col major: [1, 11, 4]
>>> print('row major:', mi.get_onedim_index(order='C'))
row major: [4, 11, 5]
"""
if order == 'F':
if ins_num is None:
raise ValueError("The ins_num must be provided if the order is 'F'.")
return [tup[0] + tup[1] * ins_num for tup in self._innercontainer]
elif order == 'C':
return [tup[0] * self._label_size + tup[1] for tup in self._innercontainer]
else:
raise ValueError("The value of order must be one of {'C', 'F'}")
def get_instance_index(self):
"""Get the index of instances contained in this object.
If it is a labeled set, it is equivalent to the indexes of fully and partially labeled instances.
Returns
-------
        partlab: list
            The indexes of the instances contained in this object (fully or partially labeled).
"""
return np.unique([tp[0] for tp in self._innercontainer])
def _get_cond_instance(self, cond):
"""Return the indexes of instances according to the cond.
cond = 0: return the instances which are unbroken.
cond = 1: return the instances which have missing entries.
"""
tmp = integrate_multilabel_index(self.index, label_size=self._label_size, check_arr=False)
if cond == 0:
return [tp[0] for tp in tmp if len(tp) == 1]
else:
return [tp[0] for tp in tmp if len(tp) > 1]
def get_unbroken_instances(self):
"""Return the indexes of unbroken instances whose entries are all known."""
return self._get_cond_instance(cond=0)
def get_break_instances(self):
"""Return the indexes of break instances which have missing entries."""
return self._get_cond_instance(cond=1)
def get_matrix_mask(self, mat_shape, fill_value=1, sparse=True, sparse_format='lil_matrix'):
"""Return an array which has the same shape with the label matrix.
If an entry is known, then, the corresponding value in the mask is 1, otherwise, 0.
Parameters
----------
mat_shape: tuple
The shape of label matrix. [n_samples, n_classes]
fill_value: int
The value filled in the mask when the entry is in the container.
sparse: bool
Whether to return a sparse matrix or a dense matrix (numpy.ndarray).
sparse_format: str
            The format of the returned sparse matrix. Only available if sparse==True.
            It should be one of [bsr_matrix, coo_matrix, csc_matrix, csr_matrix, dia_matrix, dok_matrix, lil_matrix].
Please refer to https://docs.scipy.org/doc/scipy-0.18.1/reference/sparse.html
for the definition of each sparse format.
Returns
-------
mask: {scipy.sparse.csr_matrix, scipy.sparse.csc_matrix}
The mask of the label matrix.
"""
assert isinstance(mat_shape, tuple)
if sparse:
try:
exec("from scipy.sparse import " + sparse_format)
except:
raise ValueError(
"sparse format " + sparse_format + "is not defined. Valid format should be one of "
"[bsr_matrix, coo_matrix, csc_matrix, csr_matrix, "
"dia_matrix, dok_matrix, lil_matrix].")
mask = eval(sparse_format + '(mat_shape)')
else:
if fill_value == 1:
mask = np.zeros(mat_shape, dtype=bool)
for item in self._innercontainer:
mask[item] = True
else:
mask = np.zeros(mat_shape)
for item in self._innercontainer:
mask[item] = fill_value
return mask
@classmethod
def construct_by_1d_array(cls, array, label_mat_shape, order='F'):
"""Construct a MultiLabelIndexCollection object by providing a
1d array, and the number of classes.
Parameters
----------
array: {list, np.ndarray}
An 1d array of indexes.
label_mat_shape: tuple of ints
The shape of label matrix. The 1st element is the number of instances,
and the 2nd element is the total classes.
order : {'C', 'F'}, optional
Determines whether the indices should be viewed as indexing in
row-major (C-style) or column-major (Matlab-style) order.
Returns
-------
multi_ind: MultiLabelIndexCollection
The MultiLabelIndexCollection object.
Examples
--------
>>> b = [1, 4, 11]
>>> mi = MultiLabelIndexCollection.construct_by_1d_array(array=b, label_mat_shape=(3, 4))
>>> print(mi)
{(1, 0), (2, 3), (1, 1)}
>>> print('col major:', mi.get_onedim_index(order='F', ins_num=3))
col major: [1, 11, 4]
>>> print('row major:', mi.get_onedim_index(order='C'))
row major: [4, 11, 5]
"""
assert len(label_mat_shape) == 2
row, col = np.unravel_index(array, dims=label_mat_shape, order=order)
return cls(data=[(row[i], col[i]) for i in range(len(row))], label_size=label_mat_shape[1])
@classmethod
def construct_by_element_mask(cls, mask):
"""Construct a MultiLabelIndexCollection object by providing a
2d array whose shape should be the same as the matrix shape.
Parameters
----------
mask: {list, np.ndarray}
The 2d mask matrix of elements.
There must be only 1 and 0 in the matrix, in which,
1 means the corresponding element is known, and will be
added to the MultiLabelIndexCollection container.
            Otherwise, it will be treated as an unknown element.
Examples
--------
>>> import numpy as np
>>> mask = np.asarray([
[0, 1],
[1, 0],
[1, 0]
]) # 3 rows, 2 lines
>>> mi = MultiLabelIndexCollection.construct_by_element_mask(mask=mask)
>>> print(mi)
{(0, 1), (2, 0), (1, 0)}
"""
mask = np.asarray(mask)
ue = np.unique(mask)
if not (len(mask.shape) == 2 and len(ue) == 2 and 0 in ue and 1 in ue):
raise ValueError("The mask matrix should be a 2d array, and there must be only "
"1 and 0 in the matrix, in which, 1 means the corresponding "
"element is known, and will be added to the MultiLabelIndexCollection container.")
nz_row, nz_col = np.nonzero(mask)
return cls(data=[(nz_row[i], nz_col[i]) for i in range(len(nz_row))], label_size=mask.shape[1])
class FeatureIndexCollection(MultiLabelIndexCollection):
"""Container to store the indexes in feature querying scenario.
This class stores indexes in incomplete feature matrix setting. Each element should be a tuple.
A single index should only have 1 element (example_index, ) to query all features or
2 elements (example_index, [feature_indexes]) to query specific features.
Some examples of valid indexes include:
queried_index = (1, [3,4])
queried_index = (1, [3])
queried_index = (1, 3)
queried_index = (1, (3))
queried_index = (1, (3,4))
queried_index = (1, ) # query all _labels
    Several validity checks are implemented in this class,
    such as repeated elements and indexes out of bound.
Parameters
----------
data : list or np.ndarray of a single tuple, optional (default=None)
shape [n_element]. All elements should be tuples.
feature_size: int, optional (default=None)
The number of features. If not provided, an infer is attempted, raise if fail.
Attributes
----------
index: list, shape (1, n_elements)
A list contains all elements in this container.
Examples
--------
>>> fea_ind1 = FeatureIndexCollection([(0, 1), (0, 2), (0, (3, 4)), (1, (0, 1))], feature_size=5)
{(0, 1), (1, 1), (0, 4), (1, 0), (0, 2), (0, 3)}
>>> fea_ind1.update((0, 0))
{(0, 1), (0, 0), (1, 1), (0, 4), (1, 0), (0, 2), (0, 3)}
>>> fea_ind1.update([(1, 2), (1, (3, 4))])
{(0, 1), (1, 2), (0, 0), (1, 3), (1, 4), (1, 1), (0, 4), (1, 0), (0, 2), (0, 3)}
>>> fea_ind1.update([(2,)])
{(0, 1), (1, 2), (0, 0), (1, 3), (2, 2), (1, 4), (2, 1), (2, 0), (1, 1), (2, 3), (2, 4), (0, 4), (1, 0), (0, 2), (0, 3)}
>>> fea_ind1.difference_update([(0,)])
{(1, 2), (1, 3), (2, 2), (1, 4), (2, 1), (2, 0), (1, 1), (2, 3), (2, 4), (1, 0)}
"""
def __init__(self, data, feature_size=None):
try:
super(FeatureIndexCollection, self).__init__(data=data, label_size=feature_size)
except(Exception, ValueError):
raise Exception("The inference of feature_size is failed, please set a specific value.")
def map_whole_index_to_train(train_idx, index_in_whole):
"""Map the indexes from whole dataset to training set.
Parameters
----------
train_idx: {list, numpy.ndarray}
The training indexes.
index_in_whole: {IndexCollection, MultiLabelIndexCollection}
The indexes need to be mapped of the whole data.
Returns
-------
index_in_train: {IndexCollection, MultiLabelIndexCollection}
The mapped indexes.
Examples
--------
>>> train_idx = [231, 333, 423]
>>> index_in_whole = IndexCollection([333, 423])
>>> print(map_whole_index_to_train(train_idx, index_in_whole))
[1, 2]
"""
if isinstance(index_in_whole, MultiLabelIndexCollection):
ind_type = 2
elif isinstance(index_in_whole, IndexCollection):
ind_type = 1
else:
raise TypeError("index_in_whole must be one of {IndexCollection, MultiLabelIndexCollection} type.")
tr_ob = []
for entry in index_in_whole:
if ind_type == 2:
assert entry[0] in train_idx
ind_in_train = np.argwhere(train_idx == entry[0])[0][0]
tr_ob.append((ind_in_train, entry[1]))
else:
assert entry in train_idx
tr_ob.append(np.argwhere(train_idx == entry)[0][0])
if ind_type == 2:
return MultiLabelIndexCollection(tr_ob)
else:
return IndexCollection(tr_ob)
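

# Illustrative usage sketch (added for clarity; not part of the original module):
# build a dense boolean mask from a MultiLabelIndexCollection via get_matrix_mask(),
# then recover the same indexes from that mask.
if __name__ == '__main__':
    _mi = MultiLabelIndexCollection([(0, 1), (1, 0)], label_size=2)
    _mask = _mi.get_matrix_mask(mat_shape=(2, 2), sparse=False)  # 2x2 boolean array
    # Set ordering is arbitrary; this prints something like {(0, 1), (1, 0)}
    print(MultiLabelIndexCollection.construct_by_element_mask(_mask))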
|
[
"numpy.unique",
"numpy.asarray",
"numpy.zeros",
"numpy.argwhere",
"numpy.unravel_index",
"numpy.nonzero",
"copy.deepcopy"
] |
[((3255, 3290), 'copy.deepcopy', 'copy.deepcopy', (['self._innercontainer'], {}), '(self._innercontainer)\n', (3268, 3290), False, 'import copy\n'), ((16509, 16558), 'numpy.unique', 'np.unique', (['[tp[0] for tp in self._innercontainer]'], {}), '([tp[0] for tp in self._innercontainer])\n', (16518, 16558), True, 'import numpy as np\n'), ((20720, 20778), 'numpy.unravel_index', 'np.unravel_index', (['array'], {'dims': 'label_mat_shape', 'order': 'order'}), '(array, dims=label_mat_shape, order=order)\n', (20736, 20778), True, 'import numpy as np\n'), ((21807, 21823), 'numpy.asarray', 'np.asarray', (['mask'], {}), '(mask)\n', (21817, 21823), True, 'import numpy as np\n'), ((21837, 21852), 'numpy.unique', 'np.unique', (['mask'], {}), '(mask)\n', (21846, 21852), True, 'import numpy as np\n'), ((22255, 22271), 'numpy.nonzero', 'np.nonzero', (['mask'], {}), '(mask)\n', (22265, 22271), True, 'import numpy as np\n'), ((2073, 2098), 'copy.deepcopy', 'copy.deepcopy', (['data.index'], {}), '(data.index)\n', (2086, 2098), False, 'import copy\n'), ((2309, 2345), 'numpy.unique', 'np.unique', (['[i for i in data]'], {'axis': '(0)'}), '([i for i in data], axis=0)\n', (2318, 2345), True, 'import numpy as np\n'), ((9617, 9642), 'copy.deepcopy', 'copy.deepcopy', (['data.index'], {}), '(data.index)\n', (9630, 9642), False, 'import copy\n'), ((19137, 19168), 'numpy.zeros', 'np.zeros', (['mat_shape'], {'dtype': 'bool'}), '(mat_shape, dtype=bool)\n', (19145, 19168), True, 'import numpy as np\n'), ((19298, 19317), 'numpy.zeros', 'np.zeros', (['mat_shape'], {}), '(mat_shape)\n', (19306, 19317), True, 'import numpy as np\n'), ((25621, 25655), 'numpy.argwhere', 'np.argwhere', (['(train_idx == entry[0])'], {}), '(train_idx == entry[0])\n', (25632, 25655), True, 'import numpy as np\n'), ((25790, 25821), 'numpy.argwhere', 'np.argwhere', (['(train_idx == entry)'], {}), '(train_idx == entry)\n', (25801, 25821), True, 'import numpy as np\n')]
|
import platform as p
import uuid
import hashlib
def basic():
sb = []
sb.append(p.node())
sb.append( ''.join([ x for x in p.architecture() ]) )
sb.append(p.machine())
sb.append(p.processor())
sb.append(p.system())
sb.append(str(uuid.getnode())) # MAC address
text = '.'.join(sb)
return text
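# The hashlib import above is otherwise unused; a minimal sketch (an assumption,
# not part of the original) of how the fingerprint string could be reduced to a
# stable, fixed-length machine identifier:
def machine_id():
    # hash the concatenated platform / MAC string
    return hashlib.sha256(basic().encode('utf-8')).hexdigest()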
|
[
"platform.node",
"uuid.getnode",
"platform.system",
"platform.architecture",
"platform.processor",
"platform.machine"
] |
[((89, 97), 'platform.node', 'p.node', ([], {}), '()\n', (95, 97), True, 'import platform as p\n'), ((171, 182), 'platform.machine', 'p.machine', ([], {}), '()\n', (180, 182), True, 'import platform as p\n'), ((198, 211), 'platform.processor', 'p.processor', ([], {}), '()\n', (209, 211), True, 'import platform as p\n'), ((227, 237), 'platform.system', 'p.system', ([], {}), '()\n', (235, 237), True, 'import platform as p\n'), ((257, 271), 'uuid.getnode', 'uuid.getnode', ([], {}), '()\n', (269, 271), False, 'import uuid\n'), ((135, 151), 'platform.architecture', 'p.architecture', ([], {}), '()\n', (149, 151), True, 'import platform as p\n')]
|
url = "https://www.delish.com/cooking/recipe-ideas/recipes/a53823/easy-pad-thai-recipe/"
url2 = "https://www.allrecipes.com/recipe/92462/slow-cooker-texas-pulled-pork/"
# opener = urllib.URLopener()
# opener.addheader(('User-Agent', 'Mozilla/5.0'))
# f = urllib.urlopen(url)
import requests
import html2text
h = html2text.HTML2Text()
h.ignore_links = True
f = requests.get(url2)
g = h.handle(f.text)
arrayOflines = g.split("\n")
isPrinting = False
chunk = []
chunks = []
for line in arrayOflines:
if(len(line) != 0):
chunk.append(line)
else:
chunks.append(chunk)
        chunk = []
# keep the final chunk when the page does not end with a blank line
if chunk:
    chunks.append(chunk)
print(chunks)
for c in chunks:
print(c)
print("\n \n")
# if 'ingredients' in line.lower() and len(line) < 15:
# print(line)
# if "ingredients" in line and len(line) < :
# print(len(line))
# isPrinting = True
# if(isPrinting):
# print(line)
# if(len(line) == 0):
# isPrinting = False
# print(arrayOflines)
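# A minimal sketch (an assumption, not part of the original script) of the
# ingredient-section heuristic outlined in the commented-out code above:
# pick the first chunk whose opening line is a short heading containing "ingredient".
def find_ingredient_chunk(all_chunks, max_heading_len=15):
    for candidate in all_chunks:
        if candidate and 'ingredient' in candidate[0].lower() and len(candidate[0]) < max_heading_len:
            return candidate
    return None
# print(find_ingredient_chunk(chunks))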
|
[
"html2text.HTML2Text",
"requests.get"
] |
[((315, 336), 'html2text.HTML2Text', 'html2text.HTML2Text', ([], {}), '()\n', (334, 336), False, 'import html2text\n'), ((363, 381), 'requests.get', 'requests.get', (['url2'], {}), '(url2)\n', (375, 381), False, 'import requests\n')]
|
#!/usr/bin/env python
from __future__ import print_function
import inspect
import logging
import os
import platform
import sys
from time import sleep
from flaky import flaky
import pytest
import requests
from jira_test_manager import JiraTestManager
# _non_parallel is used to prevent some tests from failing due to concurrency
# issues because detox, Travis or Jenkins can run tests in parallel for multiple
# python versions.
# The current workaround is to run these problematic tests only on py27
_non_parallel = True
if platform.python_version() < '3':
_non_parallel = False
try:
import unittest2 as unittest
except ImportError:
import pip
if hasattr(sys, 'real_prefix'):
pip.main(['install', '--upgrade', 'unittest2'])
else:
pip.main(['install', '--upgrade', '--user', 'unittest2'])
import unittest2 as unittest
else:
import unittest
cmd_folder = os.path.abspath(os.path.join(os.path.split(inspect.getfile(
inspect.currentframe()))[0], ".."))
if cmd_folder not in sys.path:
sys.path.insert(0, cmd_folder)
import jira # noqa
from jira import Role, Issue, JIRA, JIRAError, Project # noqa
from jira.resources import Resource, cls_for_resource # noqa
TEST_ROOT = os.path.dirname(__file__)
TEST_ICON_PATH = os.path.join(TEST_ROOT, 'icon.png')
TEST_ATTACH_PATH = os.path.join(TEST_ROOT, 'tests.py')
OAUTH = False
CONSUMER_KEY = 'oauth-consumer'
KEY_CERT_FILE = '/home/bspeakmon/src/atlassian-oauth-examples/rsa.pem'
KEY_CERT_DATA = None
try:
with open(KEY_CERT_FILE, 'r') as cert:
KEY_CERT_DATA = cert.read()
OAUTH = True
except Exception:
pass
if 'CI_JIRA_URL' in os.environ:
not_on_custom_jira_instance = pytest.mark.skipif(True, reason="Not applicable for custom JIRA instance")
logging.info('Picked up custom JIRA engine.')
else:
def noop(arg):
return arg
not_on_custom_jira_instance = noop
def jira_servicedesk_detection():
if 'CI_JIRA_URL' in os.environ:
url = os.environ['CI_JIRA_URL']
else:
url = 'https://pycontribs.atlassian.net'
url += '/rest/servicedeskapi/info'
return requests.get(url).status_code != 200
jira_servicedesk = pytest.mark.skipif(jira_servicedesk_detection(), reason="JIRA Service Desk is not available.")
@flaky
@jira_servicedesk
class ServiceDeskTests(unittest.TestCase):
def setUp(self):
self.test_manager = JiraTestManager()
self.jira = self.test_manager.jira_admin
self.desk = self.jira.desk
self.test_fullname_a = "TestCustomerFullName %s" % self.test_manager.project_a
self.test_email_a = "<EMAIL>" % self.test_manager.project_a
self.test_fullname_b = "TestCustomerFullName %s" % self.test_manager.project_b
self.test_email_b = "<EMAIL>" % self.test_manager.project_b
self.test_organization_name_a = "test_organization_%s" % self.test_manager.project_a
self.test_organization_name_b = "test_organization_%s" % self.test_manager.project_b
def test_create_and_delete_customer(self):
try:
self.jira.delete_user(self.test_email_a)
except JIRAError:
pass
customer = self.desk.create_customer(self.test_email_a, self.test_fullname_a)
self.assertEqual(customer.emailAddress, self.test_email_a)
self.assertEqual(customer.displayName, self.test_fullname_a)
result = self.jira.delete_user(self.test_email_a)
self.assertTrue(result)
def test_get_servicedesk_info(self):
result = self.desk.servicedesk_info()
self.assertNotEqual(result, False)
def test_create_and_delete_organization(self):
organization = self.desk.create_organization(self.test_organization_name_a)
self.assertEqual(organization.name, self.test_organization_name_a)
result = self.desk.delete_organization(organization.id)
self.assertTrue(result)
def test_get_organization(self):
organization = self.desk.create_organization(self.test_organization_name_a)
self.assertEqual(organization.name, self.test_organization_name_a)
result = self.desk.organization(organization.id)
self.assertEqual(result.id, organization.id)
self.assertEqual(result.name, self.test_organization_name_a)
result = self.desk.delete_organization(organization.id)
self.assertTrue(result)
def test_add_users_to_organization(self):
organization = self.desk.create_organization(self.test_organization_name_a)
self.assertEqual(organization.name, self.test_organization_name_a)
try:
self.jira.delete_user(self.test_email_a)
except JIRAError:
pass
try:
self.jira.delete_user(self.test_email_b)
except JIRAError:
pass
customer_a = self.desk.create_customer(self.test_email_a, self.test_fullname_a)
self.assertEqual(customer_a.emailAddress, self.test_email_a)
self.assertEqual(customer_a.displayName, self.test_fullname_a)
customer_b = self.desk.create_customer(self.test_email_b, self.test_fullname_b)
self.assertEqual(customer_b.emailAddress, self.test_email_b)
self.assertEqual(customer_b.displayName, self.test_fullname_b)
result = self.desk.add_users_to_organization(organization.id, [self.test_email_a, self.test_email_b])
self.assertTrue(result)
result = self.jira.delete_user(self.test_email_a)
self.assertTrue(result)
result = self.jira.delete_user(self.test_email_b)
self.assertTrue(result)
result = self.desk.delete_organization(organization.id)
self.assertTrue(result)
def test_remove_users_from_organization(self):
organization = self.desk.create_organization(self.test_organization_name_a)
self.assertEqual(organization.name, self.test_organization_name_a)
try:
self.jira.delete_user(self.test_email_a)
except JIRAError:
pass
try:
self.jira.delete_user(self.test_email_b)
except JIRAError:
pass
customer_a = self.desk.create_customer(self.test_email_a, self.test_fullname_a)
self.assertEqual(customer_a.emailAddress, self.test_email_a)
self.assertEqual(customer_a.displayName, self.test_fullname_a)
customer_b = self.desk.create_customer(self.test_email_b, self.test_fullname_b)
self.assertEqual(customer_b.emailAddress, self.test_email_b)
self.assertEqual(customer_b.displayName, self.test_fullname_b)
result = self.desk.add_users_to_organization(organization.id, [self.test_email_a, self.test_email_b])
self.assertTrue(result)
result = self.desk.remove_users_from_organization(organization.id, [self.test_email_a, self.test_email_b])
self.assertTrue(result)
result = self.jira.delete_user(self.test_email_a)
self.assertTrue(result)
result = self.jira.delete_user(self.test_email_b)
self.assertTrue(result)
result = self.desk.delete_organization(organization.id)
self.assertTrue(result)
def test_get_organizations(self):
organization_a = self.desk.create_organization(self.test_organization_name_a)
self.assertEqual(organization_a.name, self.test_organization_name_a)
organization_b = self.desk.create_organization(self.test_organization_name_b)
self.assertEqual(organization_b.name, self.test_organization_name_b)
organizations = self.desk.organizations(0, 1)
self.assertEqual(len(organizations), 1)
result = self.desk.delete_organization(organization_a.id)
self.assertTrue(result)
result = self.desk.delete_organization(organization_b.id)
self.assertTrue(result)
def test_get_users_in_organization(self):
organization = self.desk.create_organization(self.test_organization_name_a)
self.assertEqual(organization.name, self.test_organization_name_a)
try:
self.jira.delete_user(self.test_email_a)
except JIRAError:
pass
try:
self.jira.delete_user(self.test_email_b)
except JIRAError:
pass
customer_a = self.desk.create_customer(self.test_email_a, self.test_fullname_a)
self.assertEqual(customer_a.emailAddress, self.test_email_a)
self.assertEqual(customer_a.displayName, self.test_fullname_a)
customer_b = self.desk.create_customer(self.test_email_b, self.test_fullname_b)
self.assertEqual(customer_b.emailAddress, self.test_email_b)
self.assertEqual(customer_b.displayName, self.test_fullname_b)
result = self.desk.add_users_to_organization(organization.id, [self.test_email_a, self.test_email_b])
self.assertTrue(result)
result = self.desk.get_users_from_organization(organization.id)
self.assertEqual(len(result), 2)
result = self.jira.delete_user(self.test_email_a)
self.assertTrue(result)
result = self.jira.delete_user(self.test_email_b)
self.assertTrue(result)
result = self.desk.delete_organization(organization.id)
self.assertTrue(result)
def test_service_desks(self):
service_desks = self.desk.service_desks()
self.assertGreater(len(service_desks), 0)
def test_servicedesk(self):
service_desks = self.desk.service_desks()
self.assertGreater(len(service_desks), 0)
service_desk = self.desk.service_desk(service_desks[0].id)
self.assertEqual(service_desk.id, service_desks[0].id)
def test_request_types(self):
service_desks = self.desk.service_desks()
self.assertGreater(len(service_desks), 0)
request_types = self.desk.request_types(service_desks[0].id)
self.assertGreater(len(request_types), 0)
def test_request_type(self):
service_desks = self.desk.service_desks()
self.assertGreater(len(service_desks), 0)
request_types = self.desk.request_types(service_desks[0].id)
self.assertGreater(len(request_types), 0)
request_type = self.desk.request_type(service_desks[0].id, request_types[0].id)
self.assertEqual(request_type.id, request_types[0].id)
self.assertEqual(request_type.name, request_types[0].name)
def test_request_type_by_name(self):
service_desks = self.desk.service_desks()
self.assertGreater(len(service_desks), 0)
request_types = self.desk.request_types(service_desks[0].id)
self.assertGreater(len(request_types), 0)
request_type_by_name = self.desk.request_type_by_name(service_desks[0].id, request_types[0].name)
self.assertEqual(request_types[0].id, request_type_by_name.id)
self.assertEqual(request_types[0].name, request_type_by_name.name)
def test_create_and_delete_customer_request_with_prefetch(self):
service_desks = self.desk.service_desks()
self.assertGreater(len(service_desks), 0)
request_types = self.desk.request_types(service_desks[0].id)
self.assertGreater(len(request_types), 0)
fields = {
"serviceDeskId": int(service_desks[0].id),
"requestTypeId": int(request_types[0].id),
"raiseOnBehalfOf": self.test_manager.CI_JIRA_USER,
"requestFieldValues": {
"summary": "Request summary",
"description": "Request description"
}
}
request = self.desk.create_request(fields, prefetch=True)
self.jira.delete_issue(request.id)
self.assertIsNotNone(request.id)
self.assertIsNotNone(request.key)
self.assertEqual(request.fields.summary, "Request summary")
self.assertEqual(request.fields.description, "Request description")
def test_create_and_delete_customer_request_without_prefetch(self):
service_desks = self.desk.service_desks()
self.assertGreater(len(service_desks), 0)
request_types = self.desk.request_types(service_desks[0].id)
self.assertGreater(len(request_types), 0)
fields = {
"serviceDeskId": int(service_desks[0].id),
"requestTypeId": int(request_types[0].id),
"raiseOnBehalfOf": self.test_manager.CI_JIRA_USER,
"requestFieldValues": {
"summary": "Request summary",
"description": "Request description"
}
}
request = self.desk.create_request(fields, prefetch=False)
self.jira.delete_issue(request.id)
self.assertIsNotNone(request.id)
self.assertIsNotNone(request.key)
self.assertEqual(request.fields.summary, "Request summary")
self.assertEqual(request.fields.description, "Request description")
def test_get_customer_request_by_key_or_id(self):
service_desks = self.desk.service_desks()
self.assertGreater(len(service_desks), 0)
request_types = self.desk.request_types(service_desks[0].id)
self.assertGreater(len(request_types), 0)
fields = {
"serviceDeskId": int(service_desks[0].id),
"requestTypeId": int(request_types[0].id),
"raiseOnBehalfOf": self.test_manager.CI_JIRA_USER,
"requestFieldValues": {
"summary": "Request summary",
"description": "Request description"
}
}
request = self.desk.create_request(fields, prefetch=False)
expand = 'serviceDesk,requestType,participant,sla,status'
request_by_key = self.desk.request(request.key, expand=expand)
self.assertEqual(request.id, request_by_key.id)
self.assertEqual(request.key, request_by_key.key)
self.assertEqual(request_by_key.fields.summary, "Request summary")
self.assertEqual(request_by_key.fields.description, "Request description")
expand = 'serviceDesk,requestType,participant,sla,status'
request_by_id = self.desk.request(request.id, expand=expand)
self.jira.delete_issue(request.id)
self.assertEqual(request.id, request_by_id.id)
self.assertEqual(request.key, request_by_id.key)
self.assertEqual(request_by_id.fields.summary, "Request summary")
self.assertEqual(request_by_id.fields.description, "Request description")
def test_get_my_customer_requests(self):
service_desks = self.desk.service_desks()
self.assertGreater(len(service_desks), 0)
request_types = self.desk.request_types(service_desks[0].id)
self.assertGreater(len(request_types), 0)
fields = {
"serviceDeskId": int(service_desks[0].id),
"requestTypeId": int(request_types[0].id),
"raiseOnBehalfOf": self.test_manager.CI_JIRA_USER,
"requestFieldValues": {
"summary": "Request summary",
"description": "Request description"
}
}
request1 = self.desk.create_request(fields, prefetch=False)
fields = {
"serviceDeskId": int(service_desks[0].id),
"requestTypeId": int(request_types[0].id),
"raiseOnBehalfOf": self.test_manager.CI_JIRA_ADMIN,
"requestFieldValues": {
"summary": "Request summary",
"description": "Request description"
}
}
request2 = self.desk.create_request(fields, prefetch=False)
result = self.desk.my_customer_requests(request_ownership='OWNED_REQUESTS',
servicedesk_id=int(service_desks[0].id),
request_type_id=int(request_types[0].id))
count = 0
requests = (request1.id, request2.id)
for i in result:
if i.id in requests:
count += 1
self.assertEqual(count, 1)
result = self.desk.my_customer_requests(request_ownership='PARTICIPATED_REQUESTS',
servicedesk_id=int(service_desks[0].id),
request_type_id=int(request_types[0].id))
count = 0
requests_list = (request1.id, request2.id)
for i in result:
if i.id in requests_list:
count += 1
self.jira.delete_issue(request1.id)
self.jira.delete_issue(request2.id)
self.assertEqual(count, 0)
def test_request_comments(self):
service_desks = self.desk.service_desks()
self.assertGreater(len(service_desks), 0)
request_types = self.desk.request_types(service_desks[0].id)
self.assertGreater(len(request_types), 0)
fields = {
"serviceDeskId": int(service_desks[0].id),
"requestTypeId": int(request_types[0].id),
"raiseOnBehalfOf": self.test_manager.CI_JIRA_USER,
"requestFieldValues": {
"summary": "Request summary",
"description": "Request description"
}
}
request = self.desk.create_request(fields, prefetch=False)
self.jira.add_comment(request.id, "Public comment #1", is_internal=False)
self.jira.add_comment(request.id, "Internal comment #1", is_internal=True)
self.jira.add_comment(request.id, "Public comment #2", is_internal=False)
self.jira.add_comment(request.id, "Public comment #3", is_internal=False)
sleep(1)
public_comments = self.desk.request_comments(request.id, public=True, internal=False)
internal_comments = self.desk.request_comments(request.id, public=False, internal=True)
all_comments = self.desk.request_comments(request.id)
self.assertEqual(len(public_comments), 3)
self.assertEqual(len(internal_comments), 1)
self.assertEqual(len(all_comments), 4)
for comment in public_comments:
self.assertEqual(comment.public, True)
for comment in internal_comments:
self.assertEqual(comment.public, False)
self.jira.delete_issue(request.id)
def test_create_attachment(self):
service_desks = self.desk.service_desks()
self.assertGreater(len(service_desks), 0)
request_types = self.desk.request_types(service_desks[0].id)
self.assertGreater(len(request_types), 0)
fields = {
"serviceDeskId": int(service_desks[0].id),
"requestTypeId": int(request_types[0].id),
"raiseOnBehalfOf": self.test_manager.CI_JIRA_USER,
"requestFieldValues": {
"summary": "Request summary",
"description": "Request description"
}
}
request = self.desk.create_request(fields)
tmp_attachment = self.desk.attach_temporary_file(service_desks[0].id, open(TEST_ICON_PATH, 'rb'), "test.png")
self.assertEqual(len(tmp_attachment.temporaryAttachments), 1)
self.assertEqual(tmp_attachment.temporaryAttachments[0].fileName, 'test.png')
request_attachment = self.desk.servicedesk_attachment(request.id, tmp_attachment, is_public=False,
comment='Comment text')
self.jira.delete_issue(request.id)
self.assertEqual(request_attachment.comment.body, 'Comment text\n\n!test.png|thumbnail!')
if hasattr(request_attachment.attachments, 'values'):
# For Jira Servicedesk Cloud
self.assertGreater(len(request_attachment.attachments.values), 0)
self.assertEqual(request_attachment.attachments.values[0].filename, 'test.png')
self.assertGreater(request_attachment.attachments.values[0].size, 0)
else:
# For Jira Servicedesk Server
self.assertGreater(len(request_attachment.attachments), 0)
self.assertEqual(request_attachment.attachments[0].filename, 'test.png')
self.assertGreater(request_attachment.attachments[0].size, 0)
def test_attach_temporary_file(self):
service_desks = self.desk.service_desks()
self.assertGreater(len(service_desks), 0)
tmp_attachment = self.desk.attach_temporary_file(service_desks[0].id, open(TEST_ICON_PATH, 'rb'), "test.png")
self.assertEqual(len(tmp_attachment.temporaryAttachments), 1)
self.assertEqual(tmp_attachment.temporaryAttachments[0].fileName, 'test.png')
def test_create_customer_request(self):
try:
self.jira.create_project('TESTSD', template_name='IT Service Desk')
except JIRAError:
pass
service_desk = self.desk.service_desks()[0]
request_type = self.desk.request_types(service_desk.id)[0]
request = self.desk.create_customer_request(dict(
serviceDeskId=service_desk.id,
requestTypeId=int(request_type.id),
requestFieldValues=dict(
summary='Ticket title here',
description='Ticket body here'
)
))
self.assertEqual(request.fields.summary, 'Ticket title here')
self.assertEqual(request.fields.description, 'Ticket body here')
if __name__ == '__main__':
# when running tests we expect various errors and we don't want to display them by default
logging.getLogger("requests").setLevel(logging.FATAL)
logging.getLogger("urllib3").setLevel(logging.FATAL)
logging.getLogger("jira").setLevel(logging.FATAL)
# j = JIRA("https://issues.citrite.net")
# print(j.session())
dirname = "test-reports-%s%s" % (sys.version_info[0], sys.version_info[1])
unittest.main()
# pass
|
[
"logging.getLogger",
"sys.path.insert",
"unittest2.main",
"jira_test_manager.JiraTestManager",
"inspect.currentframe",
"os.path.join",
"time.sleep",
"requests.get",
"os.path.dirname",
"pytest.mark.skipif",
"logging.info",
"platform.python_version",
"pip.main"
] |
[((1265, 1290), 'os.path.dirname', 'os.path.dirname', (['__file__'], {}), '(__file__)\n', (1280, 1290), False, 'import os\n'), ((1308, 1343), 'os.path.join', 'os.path.join', (['TEST_ROOT', '"""icon.png"""'], {}), "(TEST_ROOT, 'icon.png')\n", (1320, 1343), False, 'import os\n'), ((1363, 1398), 'os.path.join', 'os.path.join', (['TEST_ROOT', '"""tests.py"""'], {}), "(TEST_ROOT, 'tests.py')\n", (1375, 1398), False, 'import os\n'), ((528, 553), 'platform.python_version', 'platform.python_version', ([], {}), '()\n', (551, 553), False, 'import platform\n'), ((1074, 1104), 'sys.path.insert', 'sys.path.insert', (['(0)', 'cmd_folder'], {}), '(0, cmd_folder)\n', (1089, 1104), False, 'import sys\n'), ((1733, 1807), 'pytest.mark.skipif', 'pytest.mark.skipif', (['(True)'], {'reason': '"""Not applicable for custom JIRA instance"""'}), "(True, reason='Not applicable for custom JIRA instance')\n", (1751, 1807), False, 'import pytest\n'), ((1812, 1857), 'logging.info', 'logging.info', (['"""Picked up custom JIRA engine."""'], {}), "('Picked up custom JIRA engine.')\n", (1824, 1857), False, 'import logging\n'), ((21727, 21742), 'unittest2.main', 'unittest.main', ([], {}), '()\n', (21740, 21742), True, 'import unittest2 as unittest\n'), ((2434, 2451), 'jira_test_manager.JiraTestManager', 'JiraTestManager', ([], {}), '()\n', (2449, 2451), False, 'from jira_test_manager import JiraTestManager\n'), ((17554, 17562), 'time.sleep', 'sleep', (['(1)'], {}), '(1)\n', (17559, 17562), False, 'from time import sleep\n'), ((2162, 2179), 'requests.get', 'requests.get', (['url'], {}), '(url)\n', (2174, 2179), False, 'import requests\n'), ((21407, 21436), 'logging.getLogger', 'logging.getLogger', (['"""requests"""'], {}), "('requests')\n", (21424, 21436), False, 'import logging\n'), ((21465, 21493), 'logging.getLogger', 'logging.getLogger', (['"""urllib3"""'], {}), "('urllib3')\n", (21482, 21493), False, 'import logging\n'), ((21522, 21547), 'logging.getLogger', 'logging.getLogger', (['"""jira"""'], {}), "('jira')\n", (21539, 21547), False, 'import logging\n'), ((730, 777), 'pip.main', 'pip.main', (["['install', '--upgrade', 'unittest2']"], {}), "(['install', '--upgrade', 'unittest2'])\n", (738, 777), False, 'import pip\n'), ((804, 861), 'pip.main', 'pip.main', (["['install', '--upgrade', '--user', 'unittest2']"], {}), "(['install', '--upgrade', '--user', 'unittest2'])\n", (812, 861), False, 'import pip\n'), ((1003, 1025), 'inspect.currentframe', 'inspect.currentframe', ([], {}), '()\n', (1023, 1025), False, 'import inspect\n')]
|
# -*- coding: utf-8 -*-
from __future__ import absolute_import, print_function, unicode_literals
import datetime
import decimal
import platform
import sys
import types
from itertools import chain
#stripped version of SIX
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
PY_35 = sys.version_info >= (3, 5)
PY_36 = sys.version_info >= (3, 6)
PY_37 = sys.version_info >= (3, 7)
WINDOWS = platform.system() == 'Windows'
LINUX = platform.system() == 'Linux'
MACOS = platform.system() == 'Darwin'
JYTHON = sys.platform.startswith('java')
if PY3:
string_types = str,
integer_types = int,
class_types = type,
text_type = str
binary_type = bytes
none_type = type(None)
import io
StringIO = io.StringIO
BytesIO = io.BytesIO
memoryview = memoryview
buffer_types = (bytes, bytearray, memoryview)
else:
string_types = basestring,
integer_types = (int, long)
class_types = (type, types.ClassType)
text_type = unicode
binary_type = str
none_type = types.NoneType
import StringIO
StringIO = BytesIO = StringIO.StringIO
# memoryview and buffer are not strictly equivalent, but should be fine for
# django core usage (mainly BinaryField). However, Jython doesn't support
# buffer (see http://bugs.jython.org/issue1521), so we have to be careful.
if JYTHON:
memoryview = memoryview
else:
memoryview = buffer
buffer_types = (bytearray, memoryview, buffer)
iterable_types = (list, tuple, set, frozenset, types.GeneratorType, chain)
protected_types = tuple(
chain(string_types, integer_types,
(float, decimal.Decimal, datetime.date, datetime.datetime,
datetime.time, bool, none_type)))
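# A minimal usage sketch (added for illustration, not part of the original module):
# callers can check whether a value is already of a "protected" type and can be
# passed through unchanged when coercing data to text.
def is_protected_type(obj):
    return isinstance(obj, protected_types)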
|
[
"itertools.chain",
"platform.system",
"sys.platform.startswith"
] |
[((519, 550), 'sys.platform.startswith', 'sys.platform.startswith', (['"""java"""'], {}), "('java')\n", (542, 550), False, 'import sys\n'), ((403, 420), 'platform.system', 'platform.system', ([], {}), '()\n', (418, 420), False, 'import platform\n'), ((442, 459), 'platform.system', 'platform.system', ([], {}), '()\n', (457, 459), False, 'import platform\n'), ((479, 496), 'platform.system', 'platform.system', ([], {}), '()\n', (494, 496), False, 'import platform\n'), ((1583, 1713), 'itertools.chain', 'chain', (['string_types', 'integer_types', '(float, decimal.Decimal, datetime.date, datetime.datetime, datetime.time,\n bool, none_type)'], {}), '(string_types, integer_types, (float, decimal.Decimal, datetime.date,\n datetime.datetime, datetime.time, bool, none_type))\n', (1588, 1713), False, 'from itertools import chain\n')]
|
import argparse
from sklearn.decomposition import LatentDirichletAllocation as LDA
import pickle
from biom import load_table
def main(args):
    model = LDA(n_components=args.n_latent, max_iter=args.iterations,
                batch_size=args.batch_size, n_jobs=args.n_jobs,
                verbose=1, learning_method='online')
table = load_table(args.train_biom)
X = table.matrix_data.T
model.fit(X)
with open(args.model_checkpoint, 'wb') as f:
pickle.dump(model, f)
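# A small helper sketch (an assumption, not part of the original script) showing how
# the saved checkpoint could be reloaded and applied to another biom table:
def load_and_transform(model_checkpoint, biom_path):
    with open(model_checkpoint, 'rb') as f:
        model = pickle.load(f)
    table = load_table(biom_path)
    # rows are samples, columns are topic proportions
    return model.transform(table.matrix_data.T)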
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('--train-biom', help='Training biom file',
required=True)
parser.add_argument('--n-latent', type=int, help='Number of components')
parser.add_argument('--iterations', type=int,
default=10000, required=False,
help='Number of iterations.')
parser.add_argument('--batch-size', type=int,
default=256, required=False,
help='Batch size')
parser.add_argument('--n-jobs', type=int,
default=-1, required=False,
help='Number of concurrent jobs.')
parser.add_argument('--model-checkpoint',
required=True,
help='Location of saved model.')
args = parser.parse_args()
main(args)
|
[
"biom.load_table",
"sklearn.decomposition.LatentDirichletAllocation",
"pickle.dump",
"argparse.ArgumentParser"
] |
[((155, 253), 'sklearn.decomposition.LatentDirichletAllocation', 'LDA', ([], {'n_components': 'args.n_latent', 'max_iter': 'args.iterations', 'verbose': '(1)', 'learning_method': '"""online"""'}), "(n_components=args.n_latent, max_iter=args.iterations, verbose=1,\n learning_method='online')\n", (158, 253), True, 'from sklearn.decomposition import LatentDirichletAllocation as LDA\n'), ((278, 305), 'biom.load_table', 'load_table', (['args.train_biom'], {}), '(args.train_biom)\n', (288, 305), False, 'from biom import load_table\n'), ((472, 497), 'argparse.ArgumentParser', 'argparse.ArgumentParser', ([], {}), '()\n', (495, 497), False, 'import argparse\n'), ((408, 429), 'pickle.dump', 'pickle.dump', (['model', 'f'], {}), '(model, f)\n', (419, 429), False, 'import pickle\n')]
|
import argparse
from datetime import datetime
import os
from catalyst import dl, utils
from catalyst.contrib.data import AllTripletsSampler
from catalyst.contrib.losses import TripletMarginLossWithSampler
from catalyst.data import BatchBalanceClassSampler
from torch import nn, optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from src.modules import resnet9
from src.settings import LOGS_ROOT
class CustomRunner(dl.Runner):
def handle_batch(self, batch) -> None:
images, targets = batch
embeddings, logits = self.model(images)
self.batch = {
"embeddings": embeddings,
"targets": targets,
"logits": logits,
}
def get_loggers(self):
return {
"console": dl.ConsoleLogger(),
"wandb": dl.WandbLogger(project="wandb_test", name="experiment_1"),
}
def main(use_ml: bool = False):
# data
transform_train = transforms.Compose(
[
transforms.RandomCrop(32, padding=4),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
]
)
transform_valid = transforms.Compose(
[
transforms.ToTensor(),
transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
]
)
train_dataset = datasets.CIFAR10(
os.getcwd(), train=True, download=True, transform=transform_train
)
valid_dataset = datasets.CIFAR10(
os.getcwd(), train=False, download=True, transform=transform_valid
)
# loaders
labels = train_dataset.targets
sampler = BatchBalanceClassSampler(labels=labels, num_classes=10, num_samples=10)
bs = sampler.batch_size
loaders = {
"train": DataLoader(train_dataset, batch_sampler=sampler, num_workers=4),
"valid": DataLoader(valid_dataset, batch_size=bs, num_workers=4, shuffle=False),
}
# model
model = resnet9(in_channels=3, num_classes=10)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)
# optimizer = optim.Adam(model.parameters(), lr=1e-3)
# scheduler = optim.lr_scheduler.MultiStepLR(optimizer, [5, 8], gamma=0.3)
criterion_ce = nn.CrossEntropyLoss()
sampler_inbatch = AllTripletsSampler()
criterion_ml = TripletMarginLossWithSampler(margin=0.5, sampler_inbatch=sampler_inbatch)
criterion = {"ce": criterion_ce, "ml": criterion_ml}
# runner
runner = CustomRunner()
# callbacks
callbacks = [
dl.CriterionCallback(
input_key="logits",
target_key="targets",
metric_key="loss_ce",
criterion_key="ce",
),
dl.AccuracyCallback(input_key="logits", target_key="targets", topk=(1, 3, 5)),
dl.BackwardCallback(metric_key="loss" if use_ml else "loss_ce"),
dl.OptimizerCallback(metric_key="loss" if use_ml else "loss_ce"),
dl.SchedulerCallback(),
]
if use_ml:
callbacks.extend(
[
dl.ControlFlowCallbackWrapper(
base_callback=dl.CriterionCallback(
input_key="embeddings",
target_key="targets",
metric_key="loss_ml",
criterion_key="ml",
),
loaders=["train"],
),
dl.ControlFlowCallbackWrapper(
base_callback=dl.MetricAggregationCallback(
metric_key="loss",
metrics=["loss_ce", "loss_ml"],
mode="mean",
),
loaders=["train"],
),
]
)
# train
strtime = datetime.now().strftime("%Y%m%d-%H%M%S")
ml_flag = int(use_ml)
runner.train(
model=model,
criterion=criterion,
optimizer=optimizer,
scheduler=scheduler,
loaders=loaders,
num_epochs=200,
callbacks=callbacks,
logdir=f"{LOGS_ROOT}/image-ml{ml_flag}-{strtime}",
valid_loader="valid",
valid_metric="accuracy01",
minimize_valid_metric=False,
verbose=True,
load_best_on_end=True,
)
# evaluate
metrics = runner.evaluate_loader(
loader=loaders["valid"],
callbacks=[
dl.AccuracyCallback(input_key="logits", target_key="targets", topk=(1, 3, 5)),
dl.PrecisionRecallF1SupportCallback(
input_key="logits", target_key="targets", num_classes=10
),
],
)
print(metrics)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
utils.boolean_flag(parser, "use-ml", default=False)
args = parser.parse_args()
main(args.use_ml)
|
[
"torch.nn.CrossEntropyLoss",
"catalyst.dl.MetricAggregationCallback",
"catalyst.contrib.data.AllTripletsSampler",
"catalyst.dl.SchedulerCallback",
"argparse.ArgumentParser",
"catalyst.dl.OptimizerCallback",
"torchvision.transforms.ToTensor",
"catalyst.dl.ConsoleLogger",
"src.modules.resnet9",
"torchvision.transforms.RandomHorizontalFlip",
"catalyst.contrib.losses.TripletMarginLossWithSampler",
"torchvision.transforms.Normalize",
"catalyst.dl.BackwardCallback",
"catalyst.data.BatchBalanceClassSampler",
"torch.optim.lr_scheduler.CosineAnnealingLR",
"catalyst.dl.PrecisionRecallF1SupportCallback",
"catalyst.dl.WandbLogger",
"os.getcwd",
"torchvision.transforms.RandomCrop",
"datetime.datetime.now",
"catalyst.utils.boolean_flag",
"torch.utils.data.DataLoader",
"catalyst.dl.AccuracyCallback",
"catalyst.dl.CriterionCallback"
] |
[((1729, 1800), 'catalyst.data.BatchBalanceClassSampler', 'BatchBalanceClassSampler', ([], {'labels': 'labels', 'num_classes': '(10)', 'num_samples': '(10)'}), '(labels=labels, num_classes=10, num_samples=10)\n', (1753, 1800), False, 'from catalyst.data import BatchBalanceClassSampler\n'), ((2047, 2085), 'src.modules.resnet9', 'resnet9', ([], {'in_channels': '(3)', 'num_classes': '(10)'}), '(in_channels=3, num_classes=10)\n', (2054, 2085), False, 'from src.modules import resnet9\n'), ((2102, 2123), 'torch.nn.CrossEntropyLoss', 'nn.CrossEntropyLoss', ([], {}), '()\n', (2121, 2123), False, 'from torch import nn, optim\n'), ((2227, 2285), 'torch.optim.lr_scheduler.CosineAnnealingLR', 'optim.lr_scheduler.CosineAnnealingLR', (['optimizer'], {'T_max': '(200)'}), '(optimizer, T_max=200)\n', (2263, 2285), False, 'from torch import nn, optim\n'), ((2443, 2464), 'torch.nn.CrossEntropyLoss', 'nn.CrossEntropyLoss', ([], {}), '()\n', (2462, 2464), False, 'from torch import nn, optim\n'), ((2487, 2507), 'catalyst.contrib.data.AllTripletsSampler', 'AllTripletsSampler', ([], {}), '()\n', (2505, 2507), False, 'from catalyst.contrib.data import AllTripletsSampler\n'), ((2527, 2600), 'catalyst.contrib.losses.TripletMarginLossWithSampler', 'TripletMarginLossWithSampler', ([], {'margin': '(0.5)', 'sampler_inbatch': 'sampler_inbatch'}), '(margin=0.5, sampler_inbatch=sampler_inbatch)\n', (2555, 2600), False, 'from catalyst.contrib.losses import TripletMarginLossWithSampler\n'), ((4886, 4911), 'argparse.ArgumentParser', 'argparse.ArgumentParser', ([], {}), '()\n', (4909, 4911), False, 'import argparse\n'), ((4916, 4967), 'catalyst.utils.boolean_flag', 'utils.boolean_flag', (['parser', '"""use-ml"""'], {'default': '(False)'}), "(parser, 'use-ml', default=False)\n", (4934, 4967), False, 'from catalyst import dl, utils\n'), ((1474, 1485), 'os.getcwd', 'os.getcwd', ([], {}), '()\n', (1483, 1485), False, 'import os\n'), ((1592, 1603), 'os.getcwd', 'os.getcwd', ([], {}), '()\n', (1601, 1603), False, 'import os\n'), ((1862, 1925), 'torch.utils.data.DataLoader', 'DataLoader', (['train_dataset'], {'batch_sampler': 'sampler', 'num_workers': '(4)'}), '(train_dataset, batch_sampler=sampler, num_workers=4)\n', (1872, 1925), False, 'from torch.utils.data import DataLoader\n'), ((1944, 2014), 'torch.utils.data.DataLoader', 'DataLoader', (['valid_dataset'], {'batch_size': 'bs', 'num_workers': '(4)', 'shuffle': '(False)'}), '(valid_dataset, batch_size=bs, num_workers=4, shuffle=False)\n', (1954, 2014), False, 'from torch.utils.data import DataLoader\n'), ((2743, 2852), 'catalyst.dl.CriterionCallback', 'dl.CriterionCallback', ([], {'input_key': '"""logits"""', 'target_key': '"""targets"""', 'metric_key': '"""loss_ce"""', 'criterion_key': '"""ce"""'}), "(input_key='logits', target_key='targets', metric_key=\n 'loss_ce', criterion_key='ce')\n", (2763, 2852), False, 'from catalyst import dl, utils\n'), ((2916, 2993), 'catalyst.dl.AccuracyCallback', 'dl.AccuracyCallback', ([], {'input_key': '"""logits"""', 'target_key': '"""targets"""', 'topk': '(1, 3, 5)'}), "(input_key='logits', target_key='targets', topk=(1, 3, 5))\n", (2935, 2993), False, 'from catalyst import dl, utils\n'), ((3003, 3066), 'catalyst.dl.BackwardCallback', 'dl.BackwardCallback', ([], {'metric_key': "('loss' if use_ml else 'loss_ce')"}), "(metric_key='loss' if use_ml else 'loss_ce')\n", (3022, 3066), False, 'from catalyst import dl, utils\n'), ((3076, 3140), 'catalyst.dl.OptimizerCallback', 'dl.OptimizerCallback', ([], {'metric_key': "('loss' if use_ml else 
'loss_ce')"}), "(metric_key='loss' if use_ml else 'loss_ce')\n", (3096, 3140), False, 'from catalyst import dl, utils\n'), ((3150, 3172), 'catalyst.dl.SchedulerCallback', 'dl.SchedulerCallback', ([], {}), '()\n', (3170, 3172), False, 'from catalyst import dl, utils\n'), ((796, 814), 'catalyst.dl.ConsoleLogger', 'dl.ConsoleLogger', ([], {}), '()\n', (812, 814), False, 'from catalyst import dl, utils\n'), ((837, 894), 'catalyst.dl.WandbLogger', 'dl.WandbLogger', ([], {'project': '"""wandb_test"""', 'name': '"""experiment_1"""'}), "(project='wandb_test', name='experiment_1')\n", (851, 894), False, 'from catalyst import dl, utils\n'), ((1015, 1051), 'torchvision.transforms.RandomCrop', 'transforms.RandomCrop', (['(32)'], {'padding': '(4)'}), '(32, padding=4)\n', (1036, 1051), False, 'from torchvision import datasets, transforms\n'), ((1065, 1098), 'torchvision.transforms.RandomHorizontalFlip', 'transforms.RandomHorizontalFlip', ([], {}), '()\n', (1096, 1098), False, 'from torchvision import datasets, transforms\n'), ((1112, 1133), 'torchvision.transforms.ToTensor', 'transforms.ToTensor', ([], {}), '()\n', (1131, 1133), False, 'from torchvision import datasets, transforms\n'), ((1147, 1218), 'torchvision.transforms.Normalize', 'transforms.Normalize', (['(0.4914, 0.4822, 0.4465)', '(0.2023, 0.1994, 0.201)'], {}), '((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.201))\n', (1167, 1218), False, 'from torchvision import datasets, transforms\n'), ((1302, 1323), 'torchvision.transforms.ToTensor', 'transforms.ToTensor', ([], {}), '()\n', (1321, 1323), False, 'from torchvision import datasets, transforms\n'), ((1337, 1408), 'torchvision.transforms.Normalize', 'transforms.Normalize', (['(0.4914, 0.4822, 0.4465)', '(0.2023, 0.1994, 0.201)'], {}), '((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.201))\n', (1357, 1408), False, 'from torchvision import datasets, transforms\n'), ((3982, 3996), 'datetime.datetime.now', 'datetime.now', ([], {}), '()\n', (3994, 3996), False, 'from datetime import datetime\n'), ((4592, 4669), 'catalyst.dl.AccuracyCallback', 'dl.AccuracyCallback', ([], {'input_key': '"""logits"""', 'target_key': '"""targets"""', 'topk': '(1, 3, 5)'}), "(input_key='logits', target_key='targets', topk=(1, 3, 5))\n", (4611, 4669), False, 'from catalyst import dl, utils\n'), ((4683, 4781), 'catalyst.dl.PrecisionRecallF1SupportCallback', 'dl.PrecisionRecallF1SupportCallback', ([], {'input_key': '"""logits"""', 'target_key': '"""targets"""', 'num_classes': '(10)'}), "(input_key='logits', target_key=\n 'targets', num_classes=10)\n", (4718, 4781), False, 'from catalyst import dl, utils\n'), ((3316, 3428), 'catalyst.dl.CriterionCallback', 'dl.CriterionCallback', ([], {'input_key': '"""embeddings"""', 'target_key': '"""targets"""', 'metric_key': '"""loss_ml"""', 'criterion_key': '"""ml"""'}), "(input_key='embeddings', target_key='targets',\n metric_key='loss_ml', criterion_key='ml')\n", (3336, 3428), False, 'from catalyst import dl, utils\n'), ((3684, 3780), 'catalyst.dl.MetricAggregationCallback', 'dl.MetricAggregationCallback', ([], {'metric_key': '"""loss"""', 'metrics': "['loss_ce', 'loss_ml']", 'mode': '"""mean"""'}), "(metric_key='loss', metrics=['loss_ce',\n 'loss_ml'], mode='mean')\n", (3712, 3780), False, 'from catalyst import dl, utils\n')]
|
import pandas as pd
import rapidfuzz
import math
import numpy as np
# ------------------------- #
# --------- DATA ---------- #
# ------------------------- #
# Read in mock census and PES data
CEN = pd.read_csv('Data/Mock_Rwanda_Data_Census.csv')
PES = pd.read_csv('Data/Mock_Rwanda_Data_Pes.csv')
# select needed columns
CEN = CEN[['id_indi_cen', 'firstnm_cen', 'lastnm_cen', 'age_cen', 'month_cen', 'year_cen', 'sex_cen', 'province_cen']]
PES = PES[['id_indi_pes', 'firstnm_pes', 'lastnm_pes', 'age_pes', 'month_pes', 'year_pes', 'sex_pes', 'province_pes']]
# ----------------------------- #
# --------- BLOCKING ---------- #
# ----------------------------- #
# Block on province geographic variable
BP1 = 'province'
# Combine
for i, BP in enumerate([BP1], 1):
if i == 1:
combined_blocks = PES.merge(CEN, left_on = BP + '_pes', right_on = BP + '_cen', how = 'inner').drop_duplicates(['id_indi_cen', 'id_indi_pes'])
print("1" + str(combined_blocks.count()))
# Count
len(combined_blocks) # 50042
# -------------------------------------------------- #
# --------------- AGREEMENT VECTORS ---------------- #
# -------------------------------------------------- #
# Agreement vector is created which is then inputted into the EM Algorithm.
# Set v1, v2,... vn as the agreement variables
# Select agreement variables
v1 = 'firstnm'
v2 = 'lastnm'
v3 = 'month'
v4 = 'year'
v5 = 'sex'
# All agreement variables used to calculate match weights & probabilities
all_variables = [v1, v2, v3, v4, v5]
# Variables using partial agreement (string similarity)
edit_distance_variables = [v1, v2]
dob_variables = [v3, v4]
remaining_variables = [v5]
# Cut off values for edit distance variables
cutoff_values = [0.45, 0.45]
# Replace NaN with blank spaces to assure the right data types for string similarity metrics
for variable in edit_distance_variables:
cen_var = variable+ '_cen'
pes_var = variable + '_pes'
combined_blocks[cen_var] = combined_blocks[cen_var].fillna("")
combined_blocks[pes_var] = combined_blocks[pes_var].fillna("")
def SLD(s,t):
# Computing the standardised levenshtein edit distance between two strings
    # using the rapidfuzz string matching library for its fast string comparisons
# Dividing result by 100 to return a score between 0 and 1
standardised = (rapidfuzz.string_metric.normalized_levenshtein(s, t)/100)
return standardised;
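# Illustrative check (not part of the original): near-identical strings such as
# "jon" / "john" score close to 1, while unrelated strings score close to 0;
# anything at or below the cutoff defined later is treated as disagreement.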
# Create forename/ last name Edit Distance score columns for all pairs
combined_blocks['firstnm_agreement'] = combined_blocks.apply(lambda x: SLD(x['firstnm_pes'], x['firstnm_cen']), axis=1)
combined_blocks['lastnm_agreement'] = combined_blocks.apply(lambda x: SLD(x['lastnm_pes'], x['lastnm_cen']), axis=1)
# --------------------------------------------------------- #
# ---------------- INITIAL M & U VALUES ------------------- #
# --------------------------------------------------------- #
# Read in M and U values
m_values = pd.read_csv('Data/m_values.csv')
u_values = pd.read_csv('Data/u_values.csv')
# Save individual M values from file
FN_M = m_values[m_values.variable == 'firstnm'].iloc[0][1]
SN_M = m_values[m_values.variable == 'lastnm'].iloc[0][1]
SEX_M = m_values[m_values.variable == 'sex'].iloc[0][1]
MONTH_M = m_values[m_values.variable == 'month'].iloc[0][1]
YEAR_M = m_values[m_values.variable == 'year'].iloc[0][1]
# Save individual U values from file
FN_U = u_values[u_values.variable == 'firstnm'].iloc[0][1]
SN_U = u_values[u_values.variable == 'lastnm'].iloc[0][1]
SEX_U = u_values[u_values.variable == 'sex'].iloc[0][1]
MONTH_U = u_values[u_values.variable == 'month'].iloc[0][1]
YEAR_U = u_values[u_values.variable == 'year'].iloc[0][1]
# Add M values to unlinked data
combined_blocks['firstnm_m'] = FN_M
combined_blocks['lastnm_m'] = SN_M
combined_blocks['sex_m'] = SEX_M
combined_blocks['month_m'] = MONTH_M
combined_blocks['year_m'] = YEAR_M
# Add U values to unlinked data
combined_blocks['firstnm_u'] = FN_U
combined_blocks['lastnm_u'] = SN_U
combined_blocks['sex_u'] = SEX_U
combined_blocks['month_u'] = MONTH_U
combined_blocks['year_u'] = YEAR_U
# Add Agreement / Disagreement Weights
for var in all_variables:
# apply calculations: agreement weight = log base 2 (m/u)
combined_blocks[var + "_agreement_weight"] = combined_blocks.apply(lambda x: (math.log2(x[var + "_m"] / x[var + "_u"])), axis = 1)
# disagreement weight = log base 2 ((1-m)/(1-u))
combined_blocks[var + "_disagreement_weight"] = combined_blocks.apply(lambda x: (math.log2((1 - x[var + "_m"]) / (1 - x[var + "_u"]))), axis = 1)
# show sample of agreement/disagreement weights calculated
print(combined_blocks[[var + "_m", var + "_u", var + "_agreement_weight", var + "_disagreement_weight"]].head(1))
'''
Alter the M and U values above (i.e. FN_M, FN_U etc. currently lines 100 - 112) to see the effect on variable agreement/disagreement weights
'''
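# Worked example with illustrative values (not taken from the data): with m = 0.95
# and u = 0.05 an agreeing pair gains about 4.25 and a disagreeing pair loses about
# 4.25, so more discriminating variables receive larger weights.
example_agreement_weight = math.log2(0.95 / 0.05)               # ~= 4.25
example_disagreement_weight = math.log2((1 - 0.95) / (1 - 0.05))  # ~= -4.25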
# --------------------------------------------------- #
# ------------------ MATCH SCORES ------------------ #
# --------------------------------------------------- #
''' An agreement value between 0 and 1 is calculated for each agreeement variable '''
''' This is done for every candidate record pair '''
# --------------------------------------- #
# ------------- DOB SCORE -------------- #
# --------------------------------------- #
# Partial scores
combined_blocks['month_agreement'] = np.where(combined_blocks['month_pes'] == combined_blocks['month_cen'], 1/3, 0)
combined_blocks['year_agreement'] = np.where(combined_blocks['year_pes'] == combined_blocks['year_cen'], 1/2, 0)
# Compute final Score and drop extra score columns
dob_score_columns = ['month_agreement', 'year_agreement']
combined_blocks['DOB_agreement'] = combined_blocks[dob_score_columns].sum(axis=1)
# combined_blocks = combined_blocks.drop(dob_score_columns, axis = 1)
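# Illustrative example (not part of the original): a pair agreeing on month only scores
# 1/3, on year only 1/2, and on both 5/6 (day of birth is not available in the data).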
# ---------------------------------------- #
# ---------- PARTIAL CUT OFFS ------------ #
# ---------------------------------------- #
# All partial variables except DOB
for variable, cutoff in zip(edit_distance_variables, cutoff_values):
# If agreement below a certain level, set agreement to 0. Else, leave agreeement as it is
combined_blocks[variable + '_agreement'] = np.where(combined_blocks[variable + "_agreement"] <= cutoff, 0, combined_blocks[variable + "_agreement"])
# Remaining variables (no partial scores)
for variable in remaining_variables:
# Calculate 1/0 Agreement Score (no partial scoring)
combined_blocks[variable + '_agreement'] = np.where(combined_blocks[variable + "_cen"] == combined_blocks[variable + "_pes"], 1, 0)
# ------------------------------------------------------------------ #
# ------------------------- WEIGHTS ------------------------------- #
# ------------------------------------------------------------------ #
# Start by giving all records agreement weights
for variable in all_variables:
combined_blocks[variable + "_weight"] = combined_blocks[variable + "_agreement_weight"]
# Update for partial agreement / disagreement (only when agreement < 1)
# source: https://www.census.gov/content/dam/Census/library/working-papers/1991/adrm/rr91-9.pdf
# weight = Agreement_Weight if Agreement = 1, and
# MAX{(Agreement_Weight - (Agreement_Weight - Disgreement_Weight)*(1-Agreement)*(9/2)), Disgreement_Weight} if 0 <= Agreement < 1.
for variable in all_variables:
combined_blocks[variable + "_weight"] = np.where(combined_blocks[variable + "_agreement"] < 1,
np.maximum(((combined_blocks[variable + "_agreement_weight"]) -
((combined_blocks[variable + "_agreement_weight"] - combined_blocks[variable + "_disagreement_weight"]) *
(1 - combined_blocks[variable + "_agreement"]) * (9/2))),
combined_blocks[variable + "_disagreement_weight"]),
combined_blocks[variable + "_weight"])
# Set weights to 0 (instead of disagreement_weight) if there is missingness in PES or CEN variable (agreement == 0 condition needed for DOB)
for variable in all_variables:
combined_blocks[variable + "_weight"] = np.where(combined_blocks[variable + '_pes'].isnull() | combined_blocks[variable + '_cen'].isnull() &
(combined_blocks[variable + '_agreement'] == 0), 0,
combined_blocks[variable + '_weight'])
# Sum column wise across the above columns - create match score
combined_blocks["match_score"] = combined_blocks[['firstnm_weight', 'lastnm_weight', 'month_weight', 'year_weight', 'sex_weight']].sum(axis=1)
# ------------------------------------------------------------------ #
# ----------------------- ADJUSTMENTS ----------------------------- #
# ------------------------------------------------------------------ #
# To reduce false matches going to clerical, if ages are dissimilar set score to 0
combined_blocks['match_score'] = np.where(combined_blocks['age_pes'].notnull() &
                                          combined_blocks['age_cen'].notnull() &
                                          ((combined_blocks['age_pes'] - combined_blocks['age_cen']).abs() > 5),
                                          0, combined_blocks['match_score'])
''' let's view some example clusters produced to check if the scores assigned are sensible'''
# high-scoring candidate record pairs
cen_vars = [s + '_cen' for s in all_variables]
pes_vars = [s + '_pes' for s in all_variables]
display(combined_blocks[cen_vars + pes_vars + ['match_score']].sort_values(by=['match_score'], ascending=False).head(50))
# and low-scoring candidate pairs
display(combined_blocks[cen_vars + pes_vars + ['match_score']].sort_values(by=['match_score']).head(50))
# -------------------------------------- #
# -------------- SAVE ----------------- #
# -------------------------------------- #
combined_blocks.to_csv('Data/Probabilistic_Scores.csv')
|
[
"rapidfuzz.string_metric.normalized_levenshtein",
"pandas.read_csv",
"numpy.where",
"math.log2",
"numpy.maximum"
] |
[((209, 256), 'pandas.read_csv', 'pd.read_csv', (['"""Data/Mock_Rwanda_Data_Census.csv"""'], {}), "('Data/Mock_Rwanda_Data_Census.csv')\n", (220, 256), True, 'import pandas as pd\n'), ((263, 307), 'pandas.read_csv', 'pd.read_csv', (['"""Data/Mock_Rwanda_Data_Pes.csv"""'], {}), "('Data/Mock_Rwanda_Data_Pes.csv')\n", (274, 307), True, 'import pandas as pd\n'), ((2967, 2999), 'pandas.read_csv', 'pd.read_csv', (['"""Data/m_values.csv"""'], {}), "('Data/m_values.csv')\n", (2978, 2999), True, 'import pandas as pd\n'), ((3011, 3043), 'pandas.read_csv', 'pd.read_csv', (['"""Data/u_values.csv"""'], {}), "('Data/u_values.csv')\n", (3022, 3043), True, 'import pandas as pd\n'), ((5443, 5528), 'numpy.where', 'np.where', (["(combined_blocks['month_pes'] == combined_blocks['month_cen'])", '(1 / 3)', '(0)'], {}), "(combined_blocks['month_pes'] == combined_blocks['month_cen'], 1 / 3, 0\n )\n", (5451, 5528), True, 'import numpy as np\n'), ((5560, 5638), 'numpy.where', 'np.where', (["(combined_blocks['year_pes'] == combined_blocks['year_cen'])", '(1 / 2)', '(0)'], {}), "(combined_blocks['year_pes'] == combined_blocks['year_cen'], 1 / 2, 0)\n", (5568, 5638), True, 'import numpy as np\n'), ((6287, 6396), 'numpy.where', 'np.where', (["(combined_blocks[variable + '_agreement'] <= cutoff)", '(0)', "combined_blocks[variable + '_agreement']"], {}), "(combined_blocks[variable + '_agreement'] <= cutoff, 0,\n combined_blocks[variable + '_agreement'])\n", (6295, 6396), True, 'import numpy as np\n'), ((6581, 6673), 'numpy.where', 'np.where', (["(combined_blocks[variable + '_cen'] == combined_blocks[variable + '_pes'])", '(1)', '(0)'], {}), "(combined_blocks[variable + '_cen'] == combined_blocks[variable +\n '_pes'], 1, 0)\n", (6589, 6673), True, 'import numpy as np\n'), ((2347, 2399), 'rapidfuzz.string_metric.normalized_levenshtein', 'rapidfuzz.string_metric.normalized_levenshtein', (['s', 't'], {}), '(s, t)\n', (2393, 2399), False, 'import rapidfuzz\n'), ((7600, 7893), 'numpy.maximum', 'np.maximum', (["(combined_blocks[variable + '_agreement_weight'] - (combined_blocks[\n variable + '_agreement_weight'] - combined_blocks[variable +\n '_disagreement_weight']) * (1 - combined_blocks[variable + '_agreement'\n ]) * (9 / 2))", "combined_blocks[variable + '_disagreement_weight']"], {}), "(combined_blocks[variable + '_agreement_weight'] - (\n combined_blocks[variable + '_agreement_weight'] - combined_blocks[\n variable + '_disagreement_weight']) * (1 - combined_blocks[variable +\n '_agreement']) * (9 / 2), combined_blocks[variable +\n '_disagreement_weight'])\n", (7610, 7893), True, 'import numpy as np\n'), ((4339, 4379), 'math.log2', 'math.log2', (["(x[var + '_m'] / x[var + '_u'])"], {}), "(x[var + '_m'] / x[var + '_u'])\n", (4348, 4379), False, 'import math\n'), ((4535, 4587), 'math.log2', 'math.log2', (["((1 - x[var + '_m']) / (1 - x[var + '_u']))"], {}), "((1 - x[var + '_m']) / (1 - x[var + '_u']))\n", (4544, 4587), False, 'import math\n')]
|
import os
import pandas as pd
import spacy
from sklearn.feature_extraction.text import CountVectorizer
import datetime
import numpy as np
from processing import get_annee_scolaire
if __name__ == "__main__":
#print("files", os.listdir("data_processed"))
##########################
    # Load the data
##########################
path_g = os.path.join("data_processed", "greves.pk")
g = pd.read_pickle(path_g)
g["ind"] = g.ind.map(lambda x: 1 if x == "GREVE" else 0)
g = g[["taux_grevistes", "nos", "ind", "greves_manquantes"]]
path_m = os.path.join("data_processed", "menus.pk")
m = pd.read_pickle(path_m)
path_fe = os.path.join("data_processed", "frequentation_effectif.pk")
fe = pd.read_pickle(path_fe)
path_ferie = os.path.join("data_processed", "feries.pk")
feries = pd.read_pickle(path_ferie)
path_vacs = os.path.join("data_processed", "vacances.pk")
vacances = pd.read_pickle(path_vacs)
path_epidemies = os.path.join("data_processed", "epidemies.pk")
epidemies = pd.read_pickle(path_epidemies)
path_religions = os.path.join("data_processed", "religions.pk")
religions = pd.read_pickle(path_religions)
##########################
    # Join the different databases on date
##########################
df = fe.groupby("date")[["prevision", "reel", "effectif"]].sum().join(g).join(m).join(feries).join(vacances).join(epidemies).join(religions)
##########################
    # Replace missing values
##########################
for col in df.isnull().sum()[df.isnull().sum()>0].index.drop("menu"):
df[col] = df[col].fillna(0)
df["menu"] = df["menu"].map(lambda x: x if type(x) == list else [])
####################################
    # Add day, month, week, school year and Christmas-meal features
####################################
dic_jour = {0: "Lundi", 1: "Mardi", 2: "Mercredi", 3: "Jeudi", 4: "Vendredi", 5: "Samedi", 6: "Dimanche"}
dic_mois = {1: "Janvier", 2: "Fevrier", 3: "Mars", 4: "Avril", 5: "Mai", 6: "Juin", 7: "Juillet", 8: "Aout",
9: "Septembre", 10: "Octobre", 11: "Novembre", 12: "Decembre"}
df["jour"] = df.index.weekday
df["jour"] = df["jour"].apply(lambda x: dic_jour[x])
df["semaine"] = df.index.week
df["mois"] = df.index.month
df["mois"] = df["mois"].apply(lambda x: dic_mois[x])
df["annee_scolaire"] = df.index.to_series().map(get_annee_scolaire)
date_repas_noel = ["2012-12-20", "2013-12-19", "2014-12-18", "2015-12-17", "2016-12-15",
"2017-12-21", "2018-12-20"]
l_noel = [datetime.datetime.strptime(x, '%Y-%m-%d') for x in date_repas_noel]
df_noel = pd.DataFrame(l_noel, columns=["date"])
df_noel["repas_noel"] = 1
df = df.join(df_noel.set_index("date"))
df["repas_noel"] = df["repas_noel"].fillna(0)
####################################
    # Add food waste
####################################
assert df.isnull().sum().sum() == 0
df["gaspillage_volume"] = df["prevision"] - df["reel"]
df["gaspillage_pourcentage"] = 100 * (df["prevision"] - df["reel"]) / df["prevision"]
####################################
    # Add menu-related variables
####################################
nlp = spacy.load("fr_core_news_sm")
corpus = df['menu'].apply(lambda x: "".join([i + " " for i in x]))
corpus = corpus.dropna()
# stop_word
    liste = ['04', '10', '17', '18225', '2015', '2016', '220gr', '268', '29', '500', '500g', '5kg', '850', '500', '500g',
'5kg', '850', 'ab', 'an', 'au', 'aux', 'avec', 'baut', 'bbc', 'de', 'des', 'du', 'en', 'et', 'gr', 'kg',
'la', 'le', 'les', 'ou', 'par', 's17', 'sa', 'sans', 'ses', 'son']
# Create CountVectorizer object
vectorizer = CountVectorizer(strip_accents='ascii', stop_words=liste, lowercase=True, ngram_range=(1, 1))
# Generate matrix of word vectors
bow_matrix = vectorizer.fit_transform(corpus)
# Convert bow_matrix into a DataFrame
bow_df = pd.DataFrame(bow_matrix.toarray())
# Map the column names to vocabulary
bow_df.columns = vectorizer.get_feature_names()
bow_df.index = df.index
    # pork feature
l_porc = ["carbonara", "carbonata", "cassoulet", "chipo", "chipolatas", "choucroute",
"cordon", "croziflette", "francfort", "jambon", "knacks", "lardons", "porc", "rosette",
"saucisse", "saucisses", "tartiflette"]
df["porc"] = sum([bow_df[alim] for alim in l_porc])
df['porc'] = df['porc'] > 0
df['porc'] = df['porc'].astype('int')
    # meat feature
l_viande = ["roti", "agneau", "blanquette", "boeuf", "boudin", "boulettes",
"bourguignon", "bourguignonne", "canard", "carne", "chapon", "colombo",
"couscous", "dinde", "escalope", "farci", "foie", "kebab", "lapin", "merguez",
"mouton", "napolitaines", "nuggets", "paupiette", "pintade",
"poulet", "steak", "stogonoff", "strogonoff", "tagine", "tajine",
"veau", "viande", "volaile", "volaille", "carbonara", "carbonata", "cassoulet", "chipo", "chipolatas",
"choucroute", "cordon", "croziflette", "francfort", "jambon", "knacks", "lardons", "porc", "rosette",
"saucisse", "saucisses", "tartiflette", "parmentier"]
df["viande"] = sum([bow_df[alim] for alim in l_viande])
df['viande'] = df['viande'] > 0
df['viande'] = df['viande'].astype('int')
df = df.reset_index().rename(columns = {"index":"date"})
l_index = ["2018-01-22", "2017-10-09", "2017-05-09", "2016-10-18", "2016-04-25", "2015-05-26", "2014-11-24",
"2014-05-26", "2014-03-31", "2014-01-20", "2012-01-16", "2012-01-30", "2012-07-02", "2012-10-01",
"2011-01-17", "2011-01-31", "2011-09-13", "2015-06-22", "2015-01-19", "2014-06-30", "2012-06-18",
"2011-06-20"]
index = [datetime.datetime.strptime(x, '%Y-%m-%d') for x in l_index]
for i in index:
df.loc[df[df["date"] == i].index, "viande"] = 1
# special handling of 'lasagnes napolitaines' to avoid confusion with fish lasagne
l_index = ["2016-02-22", "2016-02-04", "2015-11-23", "2015-11-17", "2015-10-05",
"2015-05-04", "2015-01-26", "2014-12-15", "2013-09-23", "2012-10-09", "2012-05-21", "2012-02-27",
"2011-11-03", "2011-09-05", "2011-05-09", "2012-12-10", "2013-12-02", "2014-05-12", "2016-05-09"]
index = [datetime.datetime.strptime(x, '%Y-%m-%d') for x in l_index]
for i in index:
df.loc[df[df["date"] == i].index, "viande"] = 1
# special handling of terms that can refer to either fish or meat (sauté, chili, pot-au-feu, bolognaise, stuffed courgette, ravioli)
l_index = ["2016-01-28", "2016-03-17", "2016-03-07", "2015-09-15", "2012-12-06", "2012-05-03", "2012-02-09",
"2011-11-03",
"2011-09-13", "2011-06-07", "2011-04-04", "2014-06-12", "2012-11-12", "2015-06-22"]
index = [datetime.datetime.strptime(x, '%Y-%m-%d') for x in l_index]
for i in index:
df.loc[df[df["date"] == i].index, "viande"] = 1
# special handling for vegetable parmentier and soy steak
l_index = ["2019-11-25", "2014-06-20"]
index = [datetime.datetime.strptime(x, '%Y-%m-%d') for x in l_index]
for i in index:
df.loc[df[df["date"] == i].index, "viande"] = 0
# feature poisson
l_poisson = ["poissons", "sardines", "perray", "thon", "calamar", "lieu", "colin", "crabe", "crevette", "crustace",
"dorade", "maquereau", "poisson", "rillette", "sardine", "saumon"]
df["poisson"] = sum([bow_df[alim] for alim in l_poisson])
df['poisson'] = df['poisson'] > 0
df['poisson'] = df['poisson'].astype('int')
df.loc[(df['viande'] == 1) & (df['poisson'] == 1), 'poisson'] = 0  # a menu flagged as meat is not also counted as fish
# special handling: fish parmentier, fish nuggets, soy steak and 'salé au thon', salmon carbonara
l_index = ["2019-05-17", "2019-05-17", "2019-02-01", "2018-11-23", "2018-10-19", "2018-09-14", "2018-06-05",
"2018-03-27", "2018-01-16", "2017-12-01", "2017-09-22", "2017-05-05", "2016-05-03", "2016-02-26",
"2016-01-15", "2015-11-20", "2015-09-22", "2015-09-08", "2015-06-05", "2014-09-08", "2014-03-25",
"2014-02-18", "2014-01-24", "2013-12-10", "2013-11-29", "2013-10-01", "2012-12-14", "2012-10-19",
"2012-09-21", "2012-03-16", "2012-01-20", "2011-09-09", "2011-03-18", "2019-03-08"]
index = [datetime.datetime.strptime(x, '%Y-%m-%d') for x in l_index]
for i in index:
df.loc[df[df["date"] == i].index, "viande"] = 0
df.loc[df[df["date"] == i].index, "poisson"] = 1
# special handling: seafood paella, fillet
l_index = ['2011-01-10', '2012-01-09', '2011-01-07', "2012-01-06"]
index = [datetime.datetime.strptime(x, '%Y-%m-%d') for x in l_index]
for i in index:
df.loc[df[df["date"] == i].index, "poisson"] = 1
# 2 menus: vegetarian and meat; treated as a vegetarian menu
l_index = ["2015-11-13", "2015-09-11"]
index = [datetime.datetime.strptime(x, '%Y-%m-%d') for x in l_index]
for i in index:
df.loc[df[df["date"] == i].index, "poisson"] = 0
df.loc[df[df["date"] == i].index, "viande"] = 0
# 2 menus: fish and meat; treated as a fish menu
l_index = ["2015-11-20", "2015-10-16", "2015-10-02", "2015-09-25", "2015-09-18", "2015-09-04", "2015-06-25",
"2015-06-11"]
index = [datetime.datetime.strptime(x, '%Y-%m-%d') for x in l_index]
for i in index:
df.loc[df[df["date"] == i].index, "poisson"] = 1
df.loc[df[df["date"] == i].index, "viande"] = 0
# unknown menu, but most likely with meat according to the model
df.loc[df[df["date"] == datetime.datetime.strptime("2015-10-15", "%Y-%m-%d")].index, "viande"] = 1
# feature bio
df['bio'] = bow_df["bio"]
# set date as index
df = df.set_index("date")
###############################################################
# Add the first 4 and last 4 days of each school year (high uncertainty)
#############################################################
ind = []
temp = []
subset = df.copy()
#print("subset", subset["annee_scolaire"].unique()[1:])
for i in range(1, 5):
for annee in subset["annee_scolaire"].unique()[1:]:
temp.append(min(subset[(subset.index.year == min(subset[subset["annee_scolaire"] == annee].index.year)) & (
subset["annee_scolaire"] == annee)].index))
df.loc[temp, "4_premiers_jours"] = 1
ind.append(temp)
subset.drop(temp, inplace=True)
temp = []
for i in range(1, 5):
for annee in subset["annee_scolaire"].unique()[:-1]:
temp.append(max(subset[(subset.index.year == max(subset[subset["annee_scolaire"] == annee].index.year)) & (
subset["annee_scolaire"] == annee)].index))
df.loc[temp, "4_derniers_jours"] = 1
ind.append(temp)
subset.drop(temp, inplace=True)
temp = []
df["4_derniers_jours"].fillna(0, inplace=True)
df["4_premiers_jours"].fillna(0, inplace=True)
####################################
# Checks (length and missing values)
####################################
assert len(df) == 1188
df.to_pickle("data_processed/global.pk")
df.to_excel("data_processed/global.xlsx")
|
[
"pandas.read_pickle",
"sklearn.feature_extraction.text.CountVectorizer",
"spacy.load",
"datetime.datetime.strptime",
"os.path.join",
"pandas.DataFrame"
] |
[((368, 411), 'os.path.join', 'os.path.join', (['"""data_processed"""', '"""greves.pk"""'], {}), "('data_processed', 'greves.pk')\n", (380, 411), False, 'import os\n'), ((420, 442), 'pandas.read_pickle', 'pd.read_pickle', (['path_g'], {}), '(path_g)\n', (434, 442), True, 'import pandas as pd\n'), ((583, 625), 'os.path.join', 'os.path.join', (['"""data_processed"""', '"""menus.pk"""'], {}), "('data_processed', 'menus.pk')\n", (595, 625), False, 'import os\n'), ((634, 656), 'pandas.read_pickle', 'pd.read_pickle', (['path_m'], {}), '(path_m)\n', (648, 656), True, 'import pandas as pd\n'), ((672, 731), 'os.path.join', 'os.path.join', (['"""data_processed"""', '"""frequentation_effectif.pk"""'], {}), "('data_processed', 'frequentation_effectif.pk')\n", (684, 731), False, 'import os\n'), ((741, 764), 'pandas.read_pickle', 'pd.read_pickle', (['path_fe'], {}), '(path_fe)\n', (755, 764), True, 'import pandas as pd\n'), ((783, 826), 'os.path.join', 'os.path.join', (['"""data_processed"""', '"""feries.pk"""'], {}), "('data_processed', 'feries.pk')\n", (795, 826), False, 'import os\n'), ((840, 866), 'pandas.read_pickle', 'pd.read_pickle', (['path_ferie'], {}), '(path_ferie)\n', (854, 866), True, 'import pandas as pd\n'), ((884, 929), 'os.path.join', 'os.path.join', (['"""data_processed"""', '"""vacances.pk"""'], {}), "('data_processed', 'vacances.pk')\n", (896, 929), False, 'import os\n'), ((945, 970), 'pandas.read_pickle', 'pd.read_pickle', (['path_vacs'], {}), '(path_vacs)\n', (959, 970), True, 'import pandas as pd\n'), ((993, 1039), 'os.path.join', 'os.path.join', (['"""data_processed"""', '"""epidemies.pk"""'], {}), "('data_processed', 'epidemies.pk')\n", (1005, 1039), False, 'import os\n'), ((1056, 1086), 'pandas.read_pickle', 'pd.read_pickle', (['path_epidemies'], {}), '(path_epidemies)\n', (1070, 1086), True, 'import pandas as pd\n'), ((1109, 1155), 'os.path.join', 'os.path.join', (['"""data_processed"""', '"""religions.pk"""'], {}), "('data_processed', 'religions.pk')\n", (1121, 1155), False, 'import os\n'), ((1172, 1202), 'pandas.read_pickle', 'pd.read_pickle', (['path_religions'], {}), '(path_religions)\n', (1186, 1202), True, 'import pandas as pd\n'), ((2725, 2763), 'pandas.DataFrame', 'pd.DataFrame', (['l_noel'], {'columns': "['date']"}), "(l_noel, columns=['date'])\n", (2737, 2763), True, 'import pandas as pd\n'), ((3323, 3352), 'spacy.load', 'spacy.load', (['"""fr_core_news_sm"""'], {}), "('fr_core_news_sm')\n", (3333, 3352), False, 'import spacy\n'), ((3843, 3939), 'sklearn.feature_extraction.text.CountVectorizer', 'CountVectorizer', ([], {'strip_accents': '"""ascii"""', 'stop_words': 'liste', 'lowercase': '(True)', 'ngram_range': '(1, 1)'}), "(strip_accents='ascii', stop_words=liste, lowercase=True,\n ngram_range=(1, 1))\n", (3858, 3939), False, 'from sklearn.feature_extraction.text import CountVectorizer\n'), ((2642, 2683), 'datetime.datetime.strptime', 'datetime.datetime.strptime', (['x', '"""%Y-%m-%d"""'], {}), "(x, '%Y-%m-%d')\n", (2668, 2683), False, 'import datetime\n'), ((5969, 6010), 'datetime.datetime.strptime', 'datetime.datetime.strptime', (['x', '"""%Y-%m-%d"""'], {}), "(x, '%Y-%m-%d')\n", (5995, 6010), False, 'import datetime\n'), ((6539, 6580), 'datetime.datetime.strptime', 'datetime.datetime.strptime', (['x', '"""%Y-%m-%d"""'], {}), "(x, '%Y-%m-%d')\n", (6565, 6580), False, 'import datetime\n'), ((7097, 7138), 'datetime.datetime.strptime', 'datetime.datetime.strptime', (['x', '"""%Y-%m-%d"""'], {}), "(x, '%Y-%m-%d')\n", (7123, 7138), False, 'import datetime\n'), ((7361, 
7402), 'datetime.datetime.strptime', 'datetime.datetime.strptime', (['x', '"""%Y-%m-%d"""'], {}), "(x, '%Y-%m-%d')\n", (7387, 7402), False, 'import datetime\n'), ((8694, 8735), 'datetime.datetime.strptime', 'datetime.datetime.strptime', (['x', '"""%Y-%m-%d"""'], {}), "(x, '%Y-%m-%d')\n", (8720, 8735), False, 'import datetime\n'), ((9027, 9068), 'datetime.datetime.strptime', 'datetime.datetime.strptime', (['x', '"""%Y-%m-%d"""'], {}), "(x, '%Y-%m-%d')\n", (9053, 9068), False, 'import datetime\n'), ((9290, 9331), 'datetime.datetime.strptime', 'datetime.datetime.strptime', (['x', '"""%Y-%m-%d"""'], {}), "(x, '%Y-%m-%d')\n", (9316, 9331), False, 'import datetime\n'), ((9712, 9753), 'datetime.datetime.strptime', 'datetime.datetime.strptime', (['x', '"""%Y-%m-%d"""'], {}), "(x, '%Y-%m-%d')\n", (9738, 9753), False, 'import datetime\n'), ((10003, 10055), 'datetime.datetime.strptime', 'datetime.datetime.strptime', (['"""2015-10-15"""', '"""%Y-%m-%d"""'], {}), "('2015-10-15', '%Y-%m-%d')\n", (10029, 10055), False, 'import datetime\n')]
|
from torch import nn
from torchvision.models.detection.backbone_utils import resnet_fpn_backbone
from torchvision.models.utils import load_state_dict_from_url
from .utils import pooling
from .utils.class_head import ClassificationHead
class FasterRCNN_FPN(nn.Module):
"""
A Faster R-CNN FPN inspired parking lot classifier.
Passes the whole image through a CNN -->
pools ROIs from the feature pyramid --> passes
each ROI separately through a classification head.
"""
def __init__(self, roi_res=7, pooling_type='square'):
super().__init__()
# backbone
# by default, uses frozen batchnorm and 3 trainable layers
self.backbone = resnet_fpn_backbone('resnet50', pretrained=True)
hidden_dim = 256
# pooling
self.roi_res = roi_res
self.pooling_type = pooling_type
# classification head
in_channels = hidden_dim * self.roi_res**2
self.class_head = ClassificationHead(in_channels)
# load coco weights
# url taken from: https://github.com/pytorch/vision/blob/master/torchvision/models/detection/faster_rcnn.py
weights_url = 'https://download.pytorch.org/models/fasterrcnn_resnet50_fpn_coco-258fb6c6.pth'
state_dict = load_state_dict_from_url(weights_url, progress=False)
self.load_state_dict(state_dict, strict=False)
def forward(self, image, rois):
# get backbone features
features = self.backbone(image[None])
# pool ROIs from features pyramid
features = list(features.values())
features = pooling.pool_FPN_features(features, rois, self.roi_res, self.pooling_type)
# pass pooled ROIs through classification head to get class logits
features = features.flatten(1)
class_logits = self.class_head(features)
return class_logits
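if __name__ == "__main__":
    # Illustrative usage sketch, not part of the original file. It assumes the module is run
    # as part of its package (e.g. `python -m <package>.<module>`) so the relative imports
    # resolve, and that ROIs are (x1, y1, x2, y2) boxes in image coordinates -- the exact
    # format expected by pooling.pool_FPN_features is defined elsewhere in the repository.
    # Note: constructing the model downloads the pretrained COCO weights on first use.
    import torch
    model = FasterRCNN_FPN(roi_res=7, pooling_type='square').eval()
    image = torch.rand(3, 800, 800)  # single CHW RGB image
    rois = torch.tensor([[0.0, 0.0, 100.0, 100.0],
                         [50.0, 50.0, 200.0, 200.0]])  # hypothetical parking-spot boxes
    with torch.no_grad():
        class_logits = model(image, rois)  # one row of class logits per ROI
    print(class_logits.shape)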
|
[
"torchvision.models.utils.load_state_dict_from_url",
"torchvision.models.detection.backbone_utils.resnet_fpn_backbone"
] |
[((698, 746), 'torchvision.models.detection.backbone_utils.resnet_fpn_backbone', 'resnet_fpn_backbone', (['"""resnet50"""'], {'pretrained': '(True)'}), "('resnet50', pretrained=True)\n", (717, 746), False, 'from torchvision.models.detection.backbone_utils import resnet_fpn_backbone\n'), ((1295, 1348), 'torchvision.models.utils.load_state_dict_from_url', 'load_state_dict_from_url', (['weights_url'], {'progress': '(False)'}), '(weights_url, progress=False)\n', (1319, 1348), False, 'from torchvision.models.utils import load_state_dict_from_url\n')]
|
from resources import Resources
import configparser
def getConfigEntry(group, item):
entry = None
    if group is not None and item is not None:
        config = configparser.ConfigParser()
        try:
            config.read(Resources.getConfigFile())
        except FileNotFoundError as err:
            print("ERROR: File '" + Resources.getConfigFile() + "' NOT found! " + err.strerror)
            config = None
if config is not None and group in config:
entry = config[group].getint(item)
return entry
def getConfigEntryOrDefault(group, item, defaultValue=None):
    entry = getConfigEntry(group, item)
if entry is None:
entry = defaultValue
return entry
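if __name__ == "__main__":
    # Illustrative usage sketch, not part of the original file: the "camera" section and
    # "width" option are hypothetical names; the real ones depend on the config file
    # returned by Resources.getConfigFile().
    print(getConfigEntryOrDefault("camera", "width", 640))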
|
[
"resources.Resources.getConfigFile",
"configparser.ConfigParser"
] |
[((160, 187), 'configparser.ConfigParser', 'configparser.ConfigParser', ([], {}), '()\n', (185, 187), False, 'import configparser\n'), ((225, 250), 'resources.Resources.getConfigFile', 'Resources.getConfigFile', ([], {}), '()\n', (248, 250), False, 'from resources import Resources\n'), ((323, 348), 'resources.Resources.getConfigFile', 'Resources.getConfigFile', ([], {}), '()\n', (346, 348), False, 'from resources import Resources\n')]
|
# -*- coding: utf-8 -*-
import os
import configparser
import glob
import sqlite3
import traceback
import json
from bottle import route, run, auth_basic, abort, response
from sqlite3 import OperationalError
config = configparser.ConfigParser()
config.read('config/config.ini')
def VERIFY(username, password):
return username == config['web']['user'] and password == config['web']['pass']
@route('/')
@auth_basic(VERIFY)
def index():
return """<!DOCTYPE html>
<html>
<head>
<title>Yu-Console</title>
</head>
<body>
<ul>
<li><a href="panic-log">PANIC LOG</a></li>
</ul>
</body>
</html>
"""
@route('/panic-log')
@route('/panic-log/')
@auth_basic(VERIFY)
def list_panicLog():
panicLogList = glob.glob("panic-log/PANIC-*.TXT")
output = "<!DOCTYPE html><html><head><title>PANICLOG</title></head><body><h1>PANICLOG</h1><ul>"
link = ""
for l in panicLogList:
link = l.replace('panic-log/PANIC-', '').replace('.TXT', '')
output += f'<li><a href="/panic-log/{link}">{link}</a></li>'
output += '</ul></body></html>\n'
return output
@route('/db')
@route('/db/')
@auth_basic(VERIFY)
def list_table():
return "Underconstruction"
@route('/db/<table:re:[a-z_]+>')
@auth_basic(VERIFY)
def list_dbtable(table):
try:
conn = sqlite3.connect('Yu_{}.db'.format(config['instance']['address']))
c = conn.cursor()
output = f"<!DOCTYPE html><html><head><title>TABLE SHOW: {table}</title></head><body><h1>TABLE SHOW: {table}</h1><table>"
output += "<tr>"
for tn in c.execute(f"PRAGMA table_info('{table}')"):
output += f"<th>{tn[1]}</th>"
output += "</tr>"
for tb in c.execute(f"SELECT * FROM {table}"):
            output += "<tr>"
            for i in tb:
                output += f"<td>{i}</td>"
            output += "</tr>"
output += "</table></body>"
return output
    except Exception:
traceback.print_exc()
abort(404, "TABLE NOT FOUND")
finally:
conn.close()
@route('/user-memos/<date:re:[0-9_+]+>')
@auth_basic(VERIFY)
def list_usermemos(date):
try:
conn = sqlite3.connect('Yu_{}.db'.format(config['instance']['address']))
c = conn.cursor()
c.execute('SELECT * FROM user_memos WHERE memo_time = ?', (date, ))
memoRaw = c.fetchone()
        if memoRaw is None:
abort(404, "This memo time was not found")
else:
            memo = json.loads(memoRaw[2])
output = f"<!DOCTYPE html><html><head><title>UESR MEMO SHOW: {date}</title></head><body><h1>UESR MEMO SHOW: {date}</h1><table><tr><th>User</th><th>Memo</th></tr>"
for me in memo:
output += f"<tr><td><a href=\"https://{config['instance']['address']}/@{me['from']}\">@{me['from']}</a></td><td>{me['body']}</td></tr>"
output += "</table></body>\n"
return output
except OperationalError:
traceback.print_exc()
abort(500, "INTERNAL SERVER ERROR")
finally:
conn.close()
@route('/panic-log/<panicdate:int>')
@auth_basic(VERIFY)
def show_panicLog(panicdate):
if os.path.isdir('panic-log') and os.path.isfile(f'panic-log/PANIC-{str(panicdate)}.TXT'):
with open(f'panic-log/PANIC-{str(panicdate)}.TXT', encoding="utf-8") as panic:
txtRaw = panic.read()
response.content_type = "text/plain"
return txtRaw
else:
abort(404, "PANIC LOG NOT FOUND")
def WEBRUN():
run(port=7878)
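if __name__ == "__main__":
    # Entry-point guard added for illustration (an assumption -- the original project may
    # import this module and call WEBRUN() from elsewhere).
    WEBRUN()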
|
[
"json.loads",
"configparser.ConfigParser",
"bottle.route",
"traceback.print_exc",
"os.path.isdir",
"bottle.abort",
"bottle.auth_basic",
"bottle.run",
"glob.glob"
] |
[((217, 244), 'configparser.ConfigParser', 'configparser.ConfigParser', ([], {}), '()\n', (242, 244), False, 'import configparser\n'), ((396, 406), 'bottle.route', 'route', (['"""/"""'], {}), "('/')\n", (401, 406), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((408, 426), 'bottle.auth_basic', 'auth_basic', (['VERIFY'], {}), '(VERIFY)\n', (418, 426), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((671, 690), 'bottle.route', 'route', (['"""/panic-log"""'], {}), "('/panic-log')\n", (676, 690), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((692, 712), 'bottle.route', 'route', (['"""/panic-log/"""'], {}), "('/panic-log/')\n", (697, 712), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((714, 732), 'bottle.auth_basic', 'auth_basic', (['VERIFY'], {}), '(VERIFY)\n', (724, 732), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((1145, 1157), 'bottle.route', 'route', (['"""/db"""'], {}), "('/db')\n", (1150, 1157), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((1159, 1172), 'bottle.route', 'route', (['"""/db/"""'], {}), "('/db/')\n", (1164, 1172), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((1174, 1192), 'bottle.auth_basic', 'auth_basic', (['VERIFY'], {}), '(VERIFY)\n', (1184, 1192), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((1244, 1275), 'bottle.route', 'route', (['"""/db/<table:re:[a-z_]+>"""'], {}), "('/db/<table:re:[a-z_]+>')\n", (1249, 1275), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((1277, 1295), 'bottle.auth_basic', 'auth_basic', (['VERIFY'], {}), '(VERIFY)\n', (1287, 1295), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((2079, 2118), 'bottle.route', 'route', (['"""/user-memos/<date:re:[0-9_+]+>"""'], {}), "('/user-memos/<date:re:[0-9_+]+>')\n", (2084, 2118), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((2120, 2138), 'bottle.auth_basic', 'auth_basic', (['VERIFY'], {}), '(VERIFY)\n', (2130, 2138), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((3107, 3142), 'bottle.route', 'route', (['"""/panic-log/<panicdate:int>"""'], {}), "('/panic-log/<panicdate:int>')\n", (3112, 3142), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((3144, 3162), 'bottle.auth_basic', 'auth_basic', (['VERIFY'], {}), '(VERIFY)\n', (3154, 3162), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((773, 807), 'glob.glob', 'glob.glob', (['"""panic-log/PANIC-*.TXT"""'], {}), "('panic-log/PANIC-*.TXT')\n", (782, 807), False, 'import glob\n'), ((3547, 3561), 'bottle.run', 'run', ([], {'port': '(7878)'}), '(port=7878)\n', (3550, 3561), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((3200, 3226), 'os.path.isdir', 'os.path.isdir', (['"""panic-log"""'], {}), "('panic-log')\n", (3213, 3226), False, 'import os\n'), ((3494, 3527), 'bottle.abort', 'abort', (['(404)', '"""PANIC LOG NOT FOUND"""'], {}), "(404, 'PANIC LOG NOT FOUND')\n", (3499, 3527), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((1983, 2004), 'traceback.print_exc', 'traceback.print_exc', ([], {}), '()\n', (2002, 2004), False, 'import traceback\n'), ((2013, 2042), 'bottle.abort', 'abort', (['(404)', '"""TABLE NOT FOUND"""'], {}), "(404, 'TABLE NOT FOUND')\n", (2018, 2042), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((2428, 2470), 
'bottle.abort', 'abort', (['(404)', '"""This memo time was not found"""'], {}), "(404, 'This memo time was not found')\n", (2433, 2470), False, 'from bottle import route, run, auth_basic, abort, response\n'), ((2504, 2544), 'json.loads', 'json.loads', (['memoRaw[2]'], {'encoding': '"""utf-8"""'}), "(memoRaw[2], encoding='utf-8')\n", (2514, 2544), False, 'import json\n'), ((3005, 3026), 'traceback.print_exc', 'traceback.print_exc', ([], {}), '()\n', (3024, 3026), False, 'import traceback\n'), ((3035, 3070), 'bottle.abort', 'abort', (['(500)', '"""INTERNAL SERVER ERROR"""'], {}), "(500, 'INTERNAL SERVER ERROR')\n", (3040, 3070), False, 'from bottle import route, run, auth_basic, abort, response\n')]
|