title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
Send a 2D list/array from Raspberry Pi to Arduino with the I2C protocol
| 39,449,076 |
<p>I'm working on computer vision (OpenCV) in Python, and I have a result from the image processing: this result is a 2D list/array that should go to the Arduino over the I2C bus. I realized that there is a library called smbus that interfaces the Raspberry Pi with the I2C ports to send and receive data, so I searched the reference pages for an explanation of this library, but I didn't find anything useful. All I found is these sites, which don't give enough information:</p>
<p><a href="http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/plain/Documentation/i2c/smbus-protocol" rel="nofollow">http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/plain/Documentation/i2c/smbus-protocol</a></p>
<p><a href="http://wiki.erazor-zone.de/wiki:linux:python:smbus:doc" rel="nofollow">http://wiki.erazor-zone.de/wiki:linux:python:smbus:doc</a></p>
<p>So what I really need is an explanation of how to send 2D arrays, like (x, y) coordinates,
from the Pi to the Arduino over the I2C bus.</p>
<p>Thanks in advance.</p>
| 0 |
2016-09-12T11:15:14Z
| 39,478,707 |
<p>Check <a href="https://github.com/carlos-jenkins/smart-lights/tree/master/semaphore/chip/drivers/python/grid_io" rel="nofollow">this repository</a>. We connected an NTC CHIP to an Adafruit Trinket using I2C, but connecting the Pi to an Arduino should be very much the same.</p>
<p>The key file is the <a href="https://github.com/carlos-jenkins/smart-lights/blob/master/semaphore/chip/drivers/python/grid_io/i2c.py" rel="nofollow">I2C.py</a> file, which uses SMBus.</p>
<p>To talk to the Trinket Pro (ATMega328) we used the <a href="https://github.com/carlos-jenkins/smart-lights/blob/master/semaphore/chip/drivers/python/grid_io/trinket.py" rel="nofollow">trinket.py</a> file.</p>
<p>The corresponding client code on the ATMega328, using the Arduino library (especially Wire.h), is located in the <a href="https://github.com/carlos-jenkins/smart-lights/blob/master/semaphore/trinket/firmware/hwthontrinket/hwthontrinket.ino" rel="nofollow">hwthontrinket.ino</a> file.</p>
<p>Finally, you can check how to use the classes in the <a href="https://github.com/carlos-jenkins/smart-lights/blob/master/semaphore/chip/drivers/python/test/test_trinket_slave.py#L24" rel="nofollow">test files</a>. You basically need to pass the bus number and the address of the device.</p>
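<p>To make the idea concrete with smbus itself: a 2D list of (x, y) coordinates can be flattened into a list of bytes and sent with <code>write_i2c_block_data</code> (SMBus block transfers are limited to 32 bytes, so larger arrays have to be chunked). This is only a minimal sketch - the slave address <code>0x04</code> and the one-byte-per-value encoding are assumptions, not something taken from the linked project:</p>
<pre><code>import smbus

bus = smbus.SMBus(1)       # I2C bus 1 on recent Raspberry Pi models
ADDRESS = 0x04             # assumed address of the Arduino slave

coords = [[10, 20], [30, 40], [50, 60]]        # 2D list of (x, y) pairs
flat = [v for pair in coords for v in pair]    # flatten; each value must fit in 0..255

# the second argument (0 here) is the SMBus command/register byte
bus.write_i2c_block_data(ADDRESS, 0, flat)
</code></pre>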
| 0 |
2016-09-13T20:45:00Z
|
[
"python",
"opencv",
"raspbian",
"i2c",
"smbus"
] |
Numpy computes wrong eigenvectors
| 39,449,091 |
<p>I was computing the eigenvectors of a matrix using numpy and I was getting some weird results. Then I decided to use Matlab and everything was fine.</p>
<pre><code>L = np.array(([2,-1,-1],[-1,2,-1],[-1,-1,2]))
Lam,U = np.linalg.eig(L) #compute eigenvalues and vectors
#sort by ascending eigenvalues
I = [i[0] for i in sorted(zip(xrange(len(Lam)),Lam),key=lambda x: x[1])]
Lam = Lam[I]
U = U[:,I]
print U.dot(U.T)
>> [[ 1.09 -0.24 0.15]
[-0.24 1.15 0.09]
[ 0.15 0.09 0.76]]
</code></pre>
<p>The result was weird because I was expecting <code>U.dot(U.T) = I</code>. In matlab:</p>
<pre><code>L = [2,-1,-1;-1,2,-1;-1,-1,2]
[V,D] = eig(L)
V*V'
ans =
1.0000 0.0000 -0.0000
0.0000 1.0000 0.0000
0.0000 0.0000 1.0000
</code></pre>
<p>By the way U:</p>
<pre><code>[[-0.58 0.82 0.29]
[-0.58 -0.41 -0.81]
[-0.58 -0.41 0.51]]
</code></pre>
<p>and V:</p>
<pre><code>0.5774 0.7634 0.2895
0.5774 -0.6325 0.5164
0.5774 -0.1310 -0.8059
</code></pre>
<p>What's going on?</p>
| 1 |
2016-09-12T11:16:16Z
| 39,449,374 |
<p>While it is true that there is an orthonormal basis of eigenvectors of a symmetric matrix, there is no guarantee that Numpy will return that basis. It will return <em>any</em> basis of eigenvectors, and there is nothing wrong with this approach.</p>
<p>The matrix you are looking at has two eigenspaces: A two-dimensional one for the eigenvalue 3, and the one-dimensional kernel. For the kernel, the eigenvector is determined up to a constant factor. For the two-dimensional space, however, you have some freedom to choose a basis, and there is no guarantee that it is orthonormal.</p>
<p>To get an orthonormal basis of eigenvectors of a symmetric matrix, you can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html" rel="nofollow"><code>numpy.linalg.eigh()</code></a>.</p>
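<p>With the matrix from the question this gives an orthonormal <code>U</code> directly (and since <code>eigh</code> returns the eigenvalues in ascending order, the manual sorting step becomes unnecessary):</p>
<pre><code>Lam, U = np.linalg.eigh(L)  # ascending eigenvalues, orthonormal eigenvectors
print U.dot(U.T)            # now (numerically) the identity matrix
</code></pre>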
| 5 |
2016-09-12T11:33:59Z
|
[
"python",
"matlab",
"numpy"
] |
Numpy computes wrong eigenvectors
| 39,449,091 |
<p>I was computing the eigenvectors of a matrix using numpy and I was getting some weird results. Then I decided to use Matlab and everything was fine.</p>
<pre><code>L = np.array(([2,-1,-1],[-1,2,-1],[-1,-1,2]))
Lam,U = np.linalg.eig(L) #compute eigenvalues and vectors
#sort by ascending eigenvalues
I = [i[0] for i in sorted(zip(xrange(len(Lam)),Lam),key=lambda x: x[1])]
Lam = Lam[I]
U = U[:,I]
print U.dot(U.T)
>> [[ 1.09 -0.24 0.15]
[-0.24 1.15 0.09]
[ 0.15 0.09 0.76]]
</code></pre>
<p>The result was weird because I was expecting <code>U.dot(U.T) = I</code>. In matlab:</p>
<pre><code>L = [2,-1,-1;-1,2,-1;-1,-1,2]
[V,D] = eig(L)
V*V'
ans =
1.0000 0.0000 -0.0000
0.0000 1.0000 0.0000
0.0000 0.0000 1.0000
</code></pre>
<p>By the way U:</p>
<pre><code>[[-0.58 0.82 0.29]
[-0.58 -0.41 -0.81]
[-0.58 -0.41 0.51]]
</code></pre>
<p>and V:</p>
<pre><code>0.5774 0.7634 0.2895
0.5774 -0.6325 0.5164
0.5774 -0.1310 -0.8059
</code></pre>
<p>What's going on?</p>
| 1 |
2016-09-12T11:16:16Z
| 39,449,777 |
<p>I think the problem is that your matrix is NOT invertible:</p>
<pre><code>np.linalg.det(L)  # returns 0
Lam               # returns [3, 3, 0]
</code></pre>
<p>So:</p>
<pre><code>det(L * U) = det(L) * det(U) = 0 != det(I) = 1
</code></pre>
| -2 |
2016-09-12T11:56:36Z
|
[
"python",
"matlab",
"numpy"
] |
Explode pandas DataFrame by unique elements in columns (strings) and create a contingency table?
| 39,449,235 |
<p>I have a pandas dataframe that I would like to do some analysis on, it looks like this:</p>
<pre><code>from pandas import DataFrame
a = DataFrame([{'var1': 'K802', 'var2': 'No Concatenation', 'var3':'73410'},
{'var1': 'O342,O820,Z370', 'var2': '59514,01968', 'var3':'146010'},
{'var1': 'Z094', 'var2': 'No Concatenation', 'var3':'233210'},
{'var1': 'N920', 'var2': '58120', 'var3':'130910'},
{'var1': 'S801,W2064,I219', 'var2': 'No Concatenation', 'var3':'93630'},
{'var1': 'O987,O820,Z302,Z370', 'var2': '59514,01968,58611', 'var3':'146010'},
{'var1': 'O987,O820,Z302,Z370,E115', 'var2': '59514,01968,58611', 'var3':'146020'},
{'var1': 'N359,N319,J459', 'var2': '52281', 'var3':'113720'},
{'var1': 'O342,O343,O820,Z370', 'var2': '59514,01968,59871', 'var3':'146010'},
{'var1': 'J459,C449,E785,I10', 'var2': 'No Concatenation', 'var3':'43810'},
{'var1': 'Z380,C780,C189,I270,J449,Z933', 'var2': 'No Concatenation', 'var3':'157520'}])
print a.var1
0 K802
1 O342,O820,Z370
2 Z094
3 N920
4 S801,W2064,I219
5 O987,O820,Z302,Z370
6 O987,O820,Z302,Z370,E115
7 N359,N319,J459
8 O342,O343,O820,Z370
9 J459,C449,E785,I10
10 Z380,C780,C189,I270,J449,Z933
Name: var1, dtype: object
</code></pre>
<p>It has been truncated as the csv file it came from has 1 million plus rows. The goal is to end up with something like this:</p>
<pre><code>b = DataFrame([{'K802':1, 'O342': 0, 'O820':0, 'Z370':0, 'Z094': 0, 'N920':0, 'S801':0, 'W2064': 0, 'I219':0},
{'K802':0, 'O342': 1, 'O820':1, 'Z370':1, 'Z094': 0, 'N920':0, 'S801':0, 'W2064': 0, 'I219':0},
{'K802':0, 'O342': 0, 'O820':0, 'Z370':0, 'Z094': 1, 'N920':0, 'S801':1, 'W2064': 0, 'I219':0},
{'K802':0, 'O342': 0, 'O820':0, 'Z370':0, 'Z094': 0, 'N920':1, 'S801':0, 'W2064': 0, 'I219':0},
{'K802':0, 'O342': 0, 'O820':0, 'Z370':0, 'Z094': 0, 'N920':0, 'S801':1, 'W2064': 1, 'I219':1}])
print b
I219 K802 N920 O342 O820 S801 W2064 Z094 Z370
0 0 1 0 0 0 0 0 0 0
1 0 0 0 1 1 0 0 0 1
2 0 0 0 0 0 1 0 1 0
3 0 0 1 0 0 0 0 0 0
4 1 0 0 0 0 1 1 0 0
...
</code></pre>
<p>Basically, I would like to get a new column for each unique entry in the rows of <code>a.var1</code>, then populate the columns with either a <code>1</code> if it is present in that row or a <code>0</code> if not. I need to do this for <code>var1</code>, <code>var2</code>, and <code>var3</code> separately, then join the three by the indices of the original <code>a</code> so that I can calculate frequencies and maybe run some logistic regression.
I am new to pandas, and cannot seem to figure out how to do this efficiently.</p>
<p>Any help would be appreciated.</p>
| 2 |
2016-09-12T11:24:23Z
| 39,449,569 |
<p>You can use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.get_dummies.html" rel="nofollow"><code>get_dummies</code></a> method defined on <code>pd.Series</code>. It is more straightforward to use than the <code>pd.get_dummies</code> function for this use case. You can then use <code>pd.concat</code> to combine the resulting dfs.</p>
<pre><code>pd.concat([a[col].str.get_dummies(',') for col in a], axis=1)
Out:
C189 C449 C780 E115 E785 I10 I219 I270 J449 J459 ... \
0 0 0 0 0 0 0 0 0 0 0 ...
1 0 0 0 0 0 0 0 0 0 0 ...
2 0 0 0 0 0 0 0 0 0 0 ...
3 0 0 0 0 0 0 0 0 0 0 ...
4 0 0 0 0 0 0 1 0 0 0 ...
5 0 0 0 0 0 0 0 0 0 0 ...
6 0 0 0 1 0 0 0 0 0 0 ...
7 0 0 0 0 0 0 0 0 0 1 ...
8 0 0 0 0 0 0 0 0 0 0 ...
9 0 1 0 0 1 1 0 0 0 1 ...
10 1 0 1 0 0 0 0 1 1 0 ...
</code></pre>
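<p>If you also need to know which original column each indicator came from (the same code could in principle occur in more than one variable), one option is to pass <code>keys</code> to <code>pd.concat</code>, which builds a hierarchical column index:</p>
<pre><code>dummies = pd.concat([a[col].str.get_dummies(',') for col in a],
                    axis=1, keys=a.columns)
dummies['var1']  # just the indicator columns derived from var1
</code></pre>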
| 3 |
2016-09-12T11:45:02Z
|
[
"python",
"pandas",
"join",
"split"
] |
Completing a curve using a function
| 39,449,251 |
<p>I have developed a model in Python to simulate something physical. This model outputs angular velocity over a distance range. For various reasons the model does not give results for small distance values, but it has been suggested to me that I complete the curve using a function which we believe approximates the small distance regime. </p>
<p>My question is, how can I actually implement this? It is not obvious to me how to tie together the curve that results from the model with the known analytic function.</p>
| -2 |
2016-09-12T11:26:03Z
| 39,449,316 |
<p>You can make a piecewise function. For example if you know that your angular velocity function is valid for <code>r > 10</code>, then you could do something like</p>
<pre><code>def angular_velocity(r):
    if r > 10:
        return your_analytical_function(r)
    else:
        return some_alternative_for_small_distance(r)
</code></pre>
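<p>If <code>r</code> is a numpy array rather than a scalar, the same idea can be vectorized with <code>numpy.piecewise</code> (the two functions are placeholders, as above):</p>
<pre><code>import numpy as np

result = np.piecewise(r, [r > 10, r <= 10],
                      [your_analytical_function,
                       some_alternative_for_small_distance])
</code></pre>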
<p>If your question is more of how to determine an equation for small ranges, then that would really depend on your data and model. There are ways you can <a href="https://stackoverflow.com/questions/2745329/how-to-make-scipy-interpolate-give-an-extrapolated-result-beyond-the-input-range">extrapolate</a> your data, but in general you should be <a href="https://xkcd.com/605/" rel="nofollow">wary of extrapolation</a>.</p>
| 1 |
2016-09-12T11:30:29Z
|
[
"python",
"plot",
"model"
] |
Resize without changing the structure of the image
| 39,449,259 |
<p>I have an image here (DMM_a01_s01_e01_sdepth.PNG, it is basically a human depth map or something, I don't really know the details :( ):</p>
<p><a href="http://i.stack.imgur.com/2Ux3i.png" rel="nofollow"><img src="http://i.stack.imgur.com/2Ux3i.png" alt="DMM_a01_s01_e01_sdepth.PNG"></a></p>
<p>It's very small (54x102) so here is a visualization:
<a href="http://i.stack.imgur.com/LRSJV.png" rel="nofollow"><img src="http://i.stack.imgur.com/LRSJV.png" alt="enter image description here"></a></p>
<p>But when I tried to resize it to 20x20 using this piece of code that I've made:</p>
<pre><code>from scipy import misc
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import numpy as np
import math
import cv2
im = misc.imread('DMM_a01_s01_e01_sdepth.PNG')
def rgb2gray(rgb):
return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])
if len(im.shape) ==3:
im = rgb2gray(im) # Convert RGB to grayscale
# Show image
plt.imshow(im, cmap = cm.Greys_r)
plt.show()
# Resize image
boxSize = 20
newImage= misc.imresize(im, (boxSize,boxSize), interp="bicubic")
plt.imshow(newImage, cmap = cm.Greys_r)
plt.show()
</code></pre>
<p>, the resized image is no longer the same as the original one:</p>
<p><a href="http://i.stack.imgur.com/GKLye.png" rel="nofollow"><img src="http://i.stack.imgur.com/GKLye.png" alt="enter image description here"></a></p>
<p>How do I resize and still keep the structure of the image? Please help me, thank you very much :) </p>
| 0 |
2016-09-12T11:26:32Z
| 39,449,474 |
<p>What you are asking for is impossible. Resizing an image is a destructive operation. You have 54x102 pixels (5508 pixels of data) and you are trying to fit that amount of data into a 20x20 image - that's just 400 pixels! You'll always lose some detail, structure etc., depending on the algorithm used - in this case scipy's.</p>
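<p>That said, if the main concern is interpolation smearing the discrete depth values (rather than the unavoidable loss of resolution), nearest-neighbour resampling at least avoids inventing new intermediate values - a one-line change to the question's code:</p>
<pre><code>newImage = misc.imresize(im, (boxSize, boxSize), interp="nearest")
</code></pre>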
| 0 |
2016-09-12T11:40:07Z
|
[
"python",
"image-processing"
] |
Django: Database Model For Prize Structure
| 39,449,331 |
<p>I'm trying to figure out the proper way to go about structuring a model for a contest's payout/prize structure, an example is below</p>
<p>1st: $50000
2nd: $10000
3rd-10th: $1000
10th-70th: $500
70th-150th: $25
150th-400th: $1</p>
<p>My first thought was to design it like this:</p>
<pre><code>class Prize(models.Model):
place=models.IntegerField()
prize=models.IntegerField()
</code></pre>
<p>The issue for me is that once you get to the lower tiers, you begin having multiple repetitive entries. So from 150th-400th I would have 250 identical entries. I'm wondering if there is a smarter way to go about this. Thanks.</p>
| -1 |
2016-09-12T11:31:11Z
| 39,449,397 |
<p>First of all, never use integers to store prices. Always use a decimal field. If you have never tried it, there is a thing to notice: you have to supply <code>max_digits</code> and <code>decimal_places</code>. For example:</p>
<pre><code>prize = models.DecimalField(max_digits=10, decimal_places=2)
</code></pre>
<p>With those settings you can only use 8 integer digits (not 10) plus the 2 decimal places, since <code>max_digits</code> counts the decimal places too.</p>
<p>To answer your question: add a <strong>from range</strong> and a <strong>to range</strong> field.
Then also add <code>unique_together</code> in <code>Meta</code>, so you will not have duplicates for the same ranges with the same prize.</p>
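<p>A minimal sketch of what such a model could look like (the field names <code>place_from</code> and <code>place_to</code> are just illustrative):</p>
<pre><code>from django.db import models

class Prize(models.Model):
    place_from = models.IntegerField()   # first place covered by this tier, e.g. 150
    place_to = models.IntegerField()     # last place covered by this tier, e.g. 400
    prize = models.DecimalField(max_digits=10, decimal_places=2)

    class Meta:
        unique_together = ('place_from', 'place_to')
</code></pre>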
| 0 |
2016-09-12T11:35:53Z
|
[
"python",
"django",
"django-models"
] |
16-bit color images with pyinsane
| 39,449,350 |
<p>pyinsane's scan sessions return a list of 8-bit PIL images by default. This is true, even when the scan has been done in 16-bit mode (for instance using the transparency unit). Is there any way to get 16-bit images (I suppose PIL does not support that) or the original raw data out of pyinsane?</p>
<p>Here is the sample code I am currently using, which returns images with 8-bit colour depth:</p>
<pre><code>import pyinsane.abstract as pyinsane
device = pyinsane.get_devices()[0]
device.options['resolution'].value = 1200
device.options['mode'].value = 'Color'
device.options['source'].value = 'Transparency Unit'
scan_session = device.scan(multiple=False)
try:
while True:
scan_session.scan.read()
except EOFError:
pass
image = scan_session.images[0]
</code></pre>
| 0 |
2016-09-12T11:32:14Z
| 39,920,000 |
<p>You're right, this is a limitation from Pillow (PIL). You can actually see the conversion from raw to PIL Image here : <a href="https://github.com/jflesch/pyinsane/blob/stable/src/pyinsane2/sane/abstract.py#L161" rel="nofollow">https://github.com/jflesch/pyinsane/blob/stable/src/pyinsane2/sane/abstract.py#L161</a></p>
<p>If you really need this extra data, I guess the only option is to use the Sane API directly and do your own conversions:</p>
<pre><code>import pyinsane.sane.rawapi
pyinsane.sane.rawapi.sane_init()
(...)
pyinsane.sane.rawapi.sane_exit()
</code></pre>
<p>Unfortunately, doing this will make you lose the Windows portability (WIA support), and this part of Pyinsane is not documented at all. However, pyinsane.sane.rawapi provides the Sane C API with only minor transformations to make it more Python-friendly. So I guess you can just refer to the Sane documentation for information: <a href="http://www.sane-project.org/html/doc009.html" rel="nofollow">http://www.sane-project.org/html/doc009.html</a> .</p>
| 0 |
2016-10-07T14:44:25Z
|
[
"python",
"pillow",
"scanning",
"pyinsane"
] |
How do I replace values in 2D numpy array using a dictionary of {value:(row#,column#)} pairs
| 39,449,394 |
<pre><code> import numpy as np
</code></pre>
<p>the array looks like so:</p>
<pre><code> array = np.zeros((10,10))
array =
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
</code></pre>
<p>the dictionary is like this:</p>
<pre><code> dict = {72: (3, 4), 11: (1, 5), 10: (2, 4), 43: (2, 3), 22: (24,35), 11: (8, 9)}
</code></pre>
<p>I want to iterate over the array and replace any grid points that match the grid coordinates in the dictionary with the corresponding value from the dictionary.</p>
<p>I am after an output like this:</p>
<pre><code> [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 11. 0. 0. 0. 0.]
[ 0. 0. 0. 43. 10. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 72. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 11.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
</code></pre>
<p>** I have edited the question to provide co-ordinates that sit within the array, with one exception. I also provided an example of the desired output.</p>
| 0 |
2016-09-12T11:35:46Z
| 39,449,633 |
<p>I hope I understood your question correctly.</p>
<pre><code>array = np.zeros((10,10))
data = {72: (3, 4), 11: (1, 5), 10: (2, 4), 43: (2, 3), 22: (24,35)}
for value, (row, col) in data.items():
    try:
        array[row, col] = float(value)
    except IndexError:
        pass
print array
</code></pre>
<p>I changed the indices such that they fit into your 10 x 10 array (I assume you work with a bigger array in your real example). </p>
<p>I iterate over all (value, coordinate) pairs in the dictionary. The program then tries to set each value in the array at the given coordinates.
IndexErrors are passed over in case some coordinates are outside the array (like the last one in this example).</p>
<p><strong>EDIT</strong></p>
<p>This solution only works if your keys are unique. If they are not I would recommend the solution of @Osssan.</p>
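<p>For a large dictionary, the same replacement can also be done in a single vectorized assignment with fancy indexing instead of a Python loop (a sketch, again assuming unique keys; the boolean mask replaces the try/except):</p>
<pre><code>coords = np.array(data.values())            # shape (n, 2) array of co-ordinates
vals = np.array(data.keys(), dtype=float)   # matching values
ok = (coords >= 0).all(axis=1) & (coords < array.shape).all(axis=1)
array[coords[ok, 0], coords[ok, 1]] = vals[ok]
</code></pre>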
| 3 |
2016-09-12T11:48:49Z
|
[
"python",
"numpy",
"dictionary"
] |
How do I replace values in 2D numpy array using a dictionary of {value:(row#,column#)} pairs
| 39,449,394 |
<pre><code> import numpy as np
</code></pre>
<p>the array looks like so:</p>
<pre><code> array = np.zeros((10,10))
array =
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
</code></pre>
<p>the dictionary is like this:</p>
<pre><code> dict = {72: (3, 4), 11: (1, 5), 10: (2, 4), 43: (2, 3), 22: (24,35), 11: (8, 9)}
</code></pre>
<p>I want to iterate over the array and replace any grid points that match the grid coordinates in the dictionary with the corresponding value from the dictionary.</p>
<p>I am after an output like this:</p>
<pre><code> [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 11. 0. 0. 0. 0.]
[ 0. 0. 0. 43. 10. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 72. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 11.]
[ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
</code></pre>
<p>** I have edited the question to provide co-ordinates that sit within the array, with one exception. I also provided an example of the desired output.</p>
| 0 |
2016-09-12T11:35:46Z
| 39,450,333 |
<p>We need to invert the mapping from values => co-ordinates to co-ordinates => values before the replacement in the array. I have edited the dictionary entries for demo purposes and, as pointed out in the comments, the dictionary co-ordinate entries should be less than the dimensions of the array.</p>
<pre><code>import numpy as np
arrObj = np.zeros((10,10))
arrObj
# [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
# [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
# [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
# [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
# [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
# [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
# [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
# [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
# [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
# [ 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
#copy of the array for replacement (a plain assignment would only alias the original)
replaceArrObj=arrObj.copy()
#ensure co-ordinates in the dictionary could be indexed in array
#current mapping: values => co-ordinates
dictObj = {1.0:(0.0,0.0),2.0:(1.0,1.0),3.0: (2.0, 2.0), 4.0: (3.0, 3.0),5.0:(4.0,4.0), 6.0: (5.0, 5.0), 7.0: (6.0, 6.0), 8.0: (7.0,7.0), 9.0: (8.0,8.0),
10.0: (9.0,9.0)}
dictObj
#{1.0: (0.0, 0.0),
# 2.0: (1.0, 1.0),
# 3.0: (2.0, 2.0),
# 4.0: (3.0, 3.0),
# 5.0: (4.0, 4.0),
# 6.0: (5.0, 5.0),
# 7.0: (6.0, 6.0),
# 8.0: (7.0, 7.0),
# 9.0: (8.0, 8.0),
# 10.0: (9.0, 9.0)}
</code></pre>
<p><strong>Invert Mapping:</strong></p>
<pre><code>#invert mapping of dictionary: co-ordinates => values
inv_dictObj = {v: k for k, v in dictObj.items()}
inv_dictObj
#{(0.0, 0.0): 1.0,
# (1.0, 1.0): 2.0,
# (2.0, 2.0): 3.0,
# (3.0, 3.0): 4.0,
# (4.0, 4.0): 5.0,
# (5.0, 5.0): 6.0,
# (6.0, 6.0): 7.0,
# (7.0, 7.0): 8.0,
# (8.0, 8.0): 9.0,
# (9.0, 9.0): 10.0}
</code></pre>
<p><strong>Replacement:</strong></p>
<pre><code>#Replace values from dictionary at corresponding co-ordinates
for (i, j), value in inv_dictObj.items():
    replaceArrObj[i, j] = value
replaceArrObj
#array([[ 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
# [ 0., 2., 0., 0., 0., 0., 0., 0., 0., 0.],
# [ 0., 0., 3., 0., 0., 0., 0., 0., 0., 0.],
# [ 0., 0., 0., 4., 0., 0., 0., 0., 0., 0.],
# [ 0., 0., 0., 0., 5., 0., 0., 0., 0., 0.],
# [ 0., 0., 0., 0., 0., 6., 0., 0., 0., 0.],
# [ 0., 0., 0., 0., 0., 0., 7., 0., 0., 0.],
# [ 0., 0., 0., 0., 0., 0., 0., 8., 0., 0.],
# [ 0., 0., 0., 0., 0., 0., 0., 0., 9., 0.],
# [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 10.]])
</code></pre>
<p><strong>Type Conversion:</strong></p>
<p>You should not face any errors/warnings as long as both the array co-ordinates and the dictionary entries have the same type.
You can additionally enforce a specific type conversion if you prefer int/float:</p>
<pre><code>#float to int conversion in array
replaceArrObj.astype(int)
#array([[ 1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
# [ 0, 2, 0, 0, 0, 0, 0, 0, 0, 0],
# [ 0, 0, 3, 0, 0, 0, 0, 0, 0, 0],
# [ 0, 0, 0, 4, 0, 0, 0, 0, 0, 0],
# [ 0, 0, 0, 0, 5, 0, 0, 0, 0, 0],
# [ 0, 0, 0, 0, 0, 6, 0, 0, 0, 0],
# [ 0, 0, 0, 0, 0, 0, 7, 0, 0, 0],
# [ 0, 0, 0, 0, 0, 0, 0, 8, 0, 0],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 9, 0],
# [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 10]])
#float to int conversion in dictionary, where k refers to key items and v to value items
int_dictObj = { (int(k[0]),int(k[1])):int(v) for k,v in inv_dictObj.items()}
int_dictObj
#{(0, 0): 1,
# (1, 1): 2,
# (2, 2): 3,
# (3, 3): 4,
# (4, 4): 5,
# (5, 5): 6,
# (6, 6): 7,
# (7, 7): 8,
# (8, 8): 9,
# (9, 9): 10}
</code></pre>
| 1 |
2016-09-12T12:27:34Z
|
[
"python",
"numpy",
"dictionary"
] |
PyQt, How to change BoxLayout's weight (size)
| 39,449,531 |
<p>I have a GUI layout as in the picture.
<a href="http://i.stack.imgur.com/hFAR1.png" rel="nofollow"><img src="http://i.stack.imgur.com/hFAR1.png" alt="enter image description here"></a></p>
<p>So currently Sublayout1 and Sublayout2 are equal in size. However, I want to make Sublayout1's width smaller and stretch Sublayout2's width bigger. Is there a way to do it dynamically instead of setting a fixed size, like on Android Studio where you can set the weight of the elements within the layout? </p>
<p>Also, changing the sizes shouldn't affect the Bottomlayout. Many thanks for your help. A snippet of the code is:</p>
<pre><code>sublayout1 = QtGui.QVBoxLayout()
sublayout2 = QtGui.QVBoxLayout()
plotBox = QtGui.QHBoxLayout()
plotBox.addLayout(sublayout1)
plotBox.addLayout(sublayout2)
bottomlayout = QtGui.QHBoxLayout()
mainlayout.addLayout(plotBox)
mainlayout.addLayout(bottomlayout)
</code></pre>
| 0 |
2016-09-12T11:42:50Z
| 39,449,710 |
<p>Use the <code>stretch</code> parameter passed to <a href="http://doc.qt.io/qt-5/qboxlayout.html#addLayout" rel="nofollow"><code>QBoxLayout::addLayout</code></a></p>
<pre><code>sublayout1 = QtGui.QVBoxLayout()
sublayout2 = QtGui.QVBoxLayout()
plotBox = QtGui.QHBoxLayout()
plotBox.addLayout(sublayout1, 1)
plotBox.addLayout(sublayout2, 2)
bottomlayout = QtGui.QHBoxLayout()
mainlayout.addLayout(plotBox)
mainlayout.addLayout(bottomlayout)
</code></pre>
<p>In the above <code>sublayout2</code> has a stretch twice that of <code>sublayout1</code> and so should be allocated more space.</p>
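<p>If you later need to change the weights at runtime, <code>QBoxLayout</code> also provides a <code>setStretch</code> method taking the index of the layout item and the new stretch factor (a small sketch continuing the example above):</p>
<pre><code>plotBox.setStretch(0, 1)  # sublayout1
plotBox.setStretch(1, 3)  # sublayout2 now gets three times the width
</code></pre>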
| 2 |
2016-09-12T11:53:05Z
|
[
"python",
"qt",
"pyqt"
] |
python SyntaxError: invalid syntax %matplotlib inline
| 39,449,549 |
<p>I got this error in my python script:</p>
<pre><code>%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from utils import progress_bar_downloader
import os
#Hosting files on my dropbox since downloading from google code is painful
#Original project hosting is here: https://code.google.com/p/hmm-speech-recognition/downloads/list
#Audio is included in the zip file
link = 'https://dl.dropboxusercontent.com/u/15378192/audio.tar.gz'
dlname = 'audio.tar.gz'
if not os.path.exists('./%s' % dlname):
progress_bar_downloader(link, dlname)
os.system('tar xzf %s' % dlname)
else:
print('%s already downloaded!' % dlname)
</code></pre>
<p>I want to use matplotlib, but it gives a syntax error.
I tried <code>sudo apt-get install python-matplotlib</code>.</p>
| 1 |
2016-09-12T11:43:51Z
| 39,449,602 |
<p>"%matplotlib inline" isn't valid python code, so you can't put it in a script.</p>
<p>I assume you're using a Jupyter notebook? If so, put it in the first cell and all should work.</p>
| 1 |
2016-09-12T11:47:33Z
|
[
"python"
] |
python SyntaxError: invalid syntax %matplotlib inline
| 39,449,549 |
<p>I got this error in my python script:</p>
<pre><code>%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from utils import progress_bar_downloader
import os
#Hosting files on my dropbox since downloading from google code is painful
#Original project hosting is here: https://code.google.com/p/hmm-speech-recognition/downloads/list
#Audio is included in the zip file
link = 'https://dl.dropboxusercontent.com/u/15378192/audio.tar.gz'
dlname = 'audio.tar.gz'
if not os.path.exists('./%s' % dlname):
progress_bar_downloader(link, dlname)
os.system('tar xzf %s' % dlname)
else:
print('%s already downloaded!' % dlname)
</code></pre>
<p>I want to use matplotlib, but it gives a syntax error.
I tried <code>sudo apt-get install python-matplotlib</code>.</p>
| 1 |
2016-09-12T11:43:51Z
| 39,449,990 |
<p>If you are not using a Jupyter/IPython notebook, just comment out (or delete) the line; everything will work fine and a separate plot window will be opened when you run your python script from the console. </p>
<p>If you <em>are</em> using a Jupyter/IPython notebook, the very first python code cell in your notebook should have the line "%matplotlib inline" for you to be able to view any plot.</p>
| 0 |
2016-09-12T12:09:56Z
|
[
"python"
] |
Using Google Maps API with custom tiles
| 39,449,597 |
<p>So, the basic gist is: I have my own tiles, not of the real world, that I'd like to display with the Google Maps viewer. I've found examples of how to split an existing single image into tiles for use with it, but nothing that deals with setting up your own tiler.</p>
<p>I have map data such as this:
<a href="https://dl.dropboxusercontent.com/u/44766482/superimage/index.html" rel="nofollow">https://dl.dropboxusercontent.com/u/44766482/superimage/index.html</a>
Which right now is just a bunch of 1600x1600 images in an html table. This naive method works, but I'd like to switch to the more robust google api for better zooming and smarter streaming of the image data.</p>
<p>I've been unable to find a good example of how to generate your own tiles for the zoom levels and bring it together with some html/js.</p>
<p>Some more information on the goal:
I have a python script that can output tiles of the map at any size and any zoom level. I'd like to bundle those together into a Google Maps API website, but I have not found a good example or documentation of how to do that. For one, I can only find examples of how the zoom levels work for the real-world map, not for a custom one.</p>
<p>Edit:</p>
<p>I got most things working as I want them, but I'm still confused about the "center" that can be set, as it's in lat and lng, which don't apply here. I'd also like to set boundaries, as currently it tries to load .png files outside of the map's range.</p>
<p>My current progress:
<a href="https://dl.dropboxusercontent.com/u/44766482/googlemapspreview/index.html" rel="nofollow">https://dl.dropboxusercontent.com/u/44766482/googlemapspreview/index.html</a></p>
| 0 |
2016-09-12T11:47:09Z
| 39,449,979 |
<p>I think what you are looking for is the Google Maps ImageMapType:</p>
<p><a href="https://developers.google.com/maps/documentation/javascript/maptypes#ImageMapTypes" rel="nofollow">https://developers.google.com/maps/documentation/javascript/maptypes#ImageMapTypes</a></p>
| 0 |
2016-09-12T12:09:32Z
|
[
"javascript",
"python",
"html",
"google-maps"
] |
Using Google Maps API with custom tiles
| 39,449,597 |
<p>So, the basic gist is: I have my own tiles, not of the real world, that I'd like to display with the Google Maps viewer. I've found examples of how to split an existing single image into tiles for use with it, but nothing that deals with setting up your own tiler.</p>
<p>I have map data such as this:
<a href="https://dl.dropboxusercontent.com/u/44766482/superimage/index.html" rel="nofollow">https://dl.dropboxusercontent.com/u/44766482/superimage/index.html</a>
Which right now is just a bunch of 1600x1600 images in an html table. This naive method works, but I'd like to switch to the more robust google api for better zooming and smarter streaming of the image data.</p>
<p>I've been unable to find a good example of how to generate your own tiles for the zoom levels and bring it together with some html/js.</p>
<p>Some more information on the goal:
I have a python script that can output tiles of the map at any size and any zoom level. I'd like to bundle those together into a Google Maps API website, but I have not found a good example or documentation of how to do that. For one, I can only find examples of how the zoom levels work for the real-world map, not for a custom one.</p>
<p>Edit:</p>
<p>I got most things working as I want them, but I'm still confused about the "center" that can be set, as it's in lat and lng, which don't apply here. I'd also like to set boundaries, as currently it tries to load .png files outside of the map's range.</p>
<p>My current progress:
<a href="https://dl.dropboxusercontent.com/u/44766482/googlemapspreview/index.html" rel="nofollow">https://dl.dropboxusercontent.com/u/44766482/googlemapspreview/index.html</a></p>
| 0 |
2016-09-12T11:47:09Z
| 39,516,370 |
<p>Basically, each tile at a given zoom level is the combination of the four corresponding tiles from the next (more detailed) zoom level. A Projection function can be skipped to get an orthogonal mapping.</p>
| 0 |
2016-09-15T16:46:53Z
|
[
"javascript",
"python",
"html",
"google-maps"
] |
Can't get all names with selenium from Facebook - Python 3
| 39,449,705 |
<p>I'm creating a scraper using Python and Selenium to improve my ability with Python.
I'm having trouble with selecting certain elements.</p>
<p>On Facebook I'm trying to scrape the list of someone's friends.
This is the piece of code I wrote:</p>
<pre><code>nomifb = driver.find_elements_by_xpath("//div[@class='fsl fwb fcb']")
print("We've found " + str(len(nomifb)) + " friends!")"
file = open(filename, "w")
for i in range(len(nomifb)):
nomifb_txt.append(nomifb[i].text)
nomifb_txt.sort()
for i in range(len(nomifb)):
file.write(nomifb_txt[i] + "\n")
file.close()
</code></pre>
<p>I get the divs that contain the names using the "fsl fwb fcb" classes. That seems to work on people without a lot of friends.
On people with over 400 friends it seems to miss about 5% of them, and I can't figure out why :/ </p>
<p>Is there any way to improve the find_elements call so that I get every person's name? </p>
<p>I checked with Chrome to see if there was any problem with my script, but it seems that the "fsl fwb fcb" class combination is used fewer times than the total number of friends, and there lies the problem.</p>
<p>I also seem to have a too-hacky solution for the scrolldown: it scrolls down until it finds</p>
<pre><code>element = driver.find_element_by_xpath("//*[contains(text(), 'Altre informazioni su')]")
</code></pre>
<p>Which is "Other information about" but as you can see it lack the support for other languages.
How would i go about selecting it in a better way? </p>
<p>Sorry for the newbie question, hope you can spare a moment to teach me better :)</p>
<p>Cheers </p>
<p>EDIT: I seem to have fixed it! The problem was that Facebook counts deleted accounts in the friend total, and that's where the discrepancy came from. </p>
| 0 |
2016-09-12T11:52:49Z
| 39,726,176 |
<p>You can also try locating the element by its name attribute:
<code>element = driver.find_element_by_name("value of the name attribute")</code></p>
| 0 |
2016-09-27T13:37:26Z
|
[
"python",
"facebook",
"python-3.x",
"selenium",
"screen-scraping"
] |
Python: module not found after Anaconda installation
| 39,449,725 |
<p>I've successfully installed Python 2.7 and Anaconda, but when I try to import a library I always get this error:</p>
<pre><code>>>> import scipy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named scipy
</code></pre>
<p>I've set up the <code>PYTHONHOME</code> to <code>C:\Python27</code> and <code>PYTHONPATH</code> to <code>C:\Python27\Lib</code>. </p>
<p><strong>EDIT : content of PATH</strong></p>
<p>In my $PATH variable i have <code>C:\Users\Mattia\Anaconda2</code>, <code>C:\Users\Mattia\Anaconda2\Scripts</code> and <code>C:\Users\Mattia\Anaconda2\Library\bin</code>.</p>
<p>Do I have to set any other env variables?</p>
| 0 |
2016-09-12T11:53:31Z
| 39,449,767 |
<p>Try to install <code>scipy</code> again with:</p>
<pre><code>conda install numpy scipy
</code></pre>
| 0 |
2016-09-12T11:56:07Z
|
[
"python",
"windows",
"python-2.7",
"anaconda"
] |
Python: module not found after Anaconda installation
| 39,449,725 |
<p>I've successfully installed Python 2.7 and Anaconda, but when I try to import a library I always get this error:</p>
<pre><code>>>> import scipy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named scipy
</code></pre>
<p>I've set up the <code>PYTHONHOME</code> to <code>C:\Python27</code> and <code>PYTHONPATH</code> to <code>C:\Python27\Lib</code>. </p>
<p><strong>EDIT : content of PATH</strong></p>
<p>In my $PATH variable i have <code>C:\Users\Mattia\Anaconda2</code>, <code>C:\Users\Mattia\Anaconda2\Scripts</code> and <code>C:\Users\Mattia\Anaconda2\Library\bin</code>.</p>
<p>Do I have to set any other env variables?</p>
| 0 |
2016-09-12T11:53:31Z
| 39,450,523 |
<p>As pointed out by @Mr.F, the error was caused by the presence of <code>PYTHONPATH</code> and <code>PYTHONHOME</code>. After deleting them I was able to use the Anaconda version of Python.</p>
| 0 |
2016-09-12T12:37:53Z
|
[
"python",
"windows",
"python-2.7",
"anaconda"
] |
Python: module not found after Anaconda installation
| 39,449,725 |
<p>I've successfully installed Python 2.7 and Anaconda, but when I try to import a library I always get this error:</p>
<pre><code>>>> import scipy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named scipy
</code></pre>
<p>I've set up the <code>PYTHONHOME</code> to <code>C:\Python27</code> and <code>PYTHONPATH</code> to <code>C:\Python27\Lib</code>. </p>
<p><strong>EDIT : content of PATH</strong></p>
<p>In my $PATH variable i have <code>C:\Users\Mattia\Anaconda2</code>, <code>C:\Users\Mattia\Anaconda2\Scripts</code> and <code>C:\Users\Mattia\Anaconda2\Library\bin</code>.</p>
<p>Do I have to set any other env variables?</p>
| 0 |
2016-09-12T11:53:31Z
| 39,469,995 |
<p>The problem is that you should not have either <code>PYTHONPATH</code> or <code>PYTHONHOME</code> set. They are both pointing to a non-Anaconda Python installation, I believe. Anaconda will install (by default) into a directory called <code>Anaconda</code>, either at <code>C:\Anaconda</code> or at <code>C:\Users\USERNAME\Anaconda</code> (IIRC). It is generally recommended that you never set <code>PYTHONPATH</code> or <code>PYTHONHOME</code>, except as a last resort, exactly because of these kinds of problems.</p>
<p>You can see which Python interpreter you're running by doing:</p>
<pre><code>>>> import sys
>>> sys.executable
</code></pre>
<p>And then you can see what directories are ending up in your Python library path (where <code>import</code> statements will look for packages, such as <code>scipy</code> and <code>numpy</code>) by doing one of the following:</p>
<pre><code>>>> import sys
>>> sys.path
</code></pre>
<p>or the more readable version:</p>
<pre><code>>>> import sys
>>> for p in sys.path:
... print p
</code></pre>
| 0 |
2016-09-13T12:22:43Z
|
[
"python",
"windows",
"python-2.7",
"anaconda"
] |
aiohttp: how to retrieve the data (body) in an aiohttp server from requests.get
| 39,449,739 |
<p>Could you please advise on the following?</p>
<p>On the localhost:8900 there is aiohttp server running</p>
<p>When I make a request like the following from Python (using the python2 <code>requests</code> module):
<code>
requests.get("http://127.0.0.1:8900/api/bgp/show-route", data={'topo':"switzerland", 'pop':"zrh", 'prefix':"1.1.1.1/32"})
</code></p>
<p>And there is a route defined in the aiohttp server
<code>
app.router.add_route("GET", "/api/bgp/show-route", api_bgp_show_route)
</code></p>
<p>which is being handled like
<code>
def api_bgp_show_route(request):
pass
</code></p>
<p>The question is: how do I retrieve the data part of the request on the server side? Meaning <code>{'topo':"switzerland", 'pop':"zrh", 'prefix':"1.1.1.1/32"}</code></p>
| 2 |
2016-09-12T11:54:16Z
| 39,452,219 |
<p>Ah, the <code>data</code> part is accessed like this: </p>
<pre><code>await request.json()
</code></pre>
<p>You can find this in official <a href="http://aiohttp.readthedocs.io/en/stable/web_reference.html#aiohttp.web.Request.json" rel="nofollow">aiohttp docs</a></p>
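<p>For completeness, a minimal handler sketch (the echo response is purely illustrative). One caveat: requests' <code>data=</code> argument sends the payload form-encoded, which on the aiohttp side is read with <code>await request.post()</code>; <code>await request.json()</code> matches a JSON body, i.e. requests' <code>json=</code> argument:</p>
<pre><code>from aiohttp import web

async def api_bgp_show_route(request):
    form = await request.post()        # form-encoded body (requests' data=...)
    # payload = await request.json()   # use this if the client sends json=...
    return web.json_response({'topo': form['topo']})
</code></pre>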
| 3 |
2016-09-12T14:03:47Z
|
[
"python",
"rest",
"python-3.x",
"aiohttp"
] |
Reusing Mock to create attribute mocks unittest.patch
| 39,449,844 |
<pre><code>from unittest.mock import patch
class A:
def b(self):
return 'HAHAHA A'
a = A()
with patch('__main__.A') as a_mock:
a_mock.b.return_value = 'not working'
print(a.b())
HAHAHA A
>>>
</code></pre>
<p>Why doesn't it print <code>'not working'</code>? What is <code>a_mock</code> for, then?</p>
| 0 |
2016-09-12T12:00:09Z
| 39,449,878 |
<p>You replaced the <em>whole class</em> with your patch, not the method on the existing class. <code>a = A()</code> created an instance of <code>A</code> before you replaced the class, so <code>a.__class__</code> still references the actual class, not the mock.</p>
<p>Mocking can only ever replace one reference at a time, and not the object referenced. Before the patch, both the names <code>A</code> and the attribute <code>a.__class__</code> are references to the class object. You then patched only the <code>A</code> reference, leaving <code>a.__class__</code> in place.</p>
<p>In other words, <code>a.__class__</code> is not patched, only <code>A</code>, and <code>A().b()</code> <em>would</em> print <code>not working</code>.</p>
<p>You'd have to patch <em>just</em> the method on the class, so that <code>a.__class__</code> still references <code>A</code>, and <code>a.b</code> will resolve to the patched <code>A.b</code> mock:</p>
<pre><code>with patch('__main__.A.b') as b_mock:
b_mock.return_value = 'working as long as you patch the right object'
print(a.b())
</code></pre>
<p>Demo:</p>
<pre><code>>>> with patch('__main__.A.b') as b_mock:
... b_mock.return_value = 'working as long as you patch the right object'
... print(a.b())
...
working as long as you patch the right object
</code></pre>
<p>You can't, unfortunately, patch the <code>a.__class__</code> reference with a <code>Mock</code>; Python only lets you use actual classes for that attribute.</p>
<pre><code>>>> with patch('__main__.a.__class__') as a_class_mock:
... a_class_mock.b.return_value = 'working as long as you patch the right object'
... print(a.b())
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/mjpieters/Development/Library/buildout.python/parts/opt/lib/python3.5/unittest/mock.py", line 1312, in __enter__
setattr(self.target, self.attribute, new_attr)
TypeError: __class__ must be set to a class, not 'MagicMock' object
</code></pre>
| 2 |
2016-09-12T12:02:47Z
|
[
"python",
"mocking",
"python-unittest"
] |
Reusing Mock to create attribute mocks unittest.patch
| 39,449,844 |
<pre><code>from unittest.mock import patch
class A:
def b(self):
return 'HAHAHA A'
a = A()
with patch('__main__.A') as a_mock:
a_mock.b.return_value = 'not working'
print(a.b())
HAHAHA A
>>>
</code></pre>
<p>Why doesn't it print <code>'not working'</code>? What is <code>a_mock</code> for, then?</p>
| 0 |
2016-09-12T12:00:09Z
| 39,449,998 |
<p>You may want to patch your instance instead of your class:</p>
<pre><code>with patch('__main__.a') as a_mock:
a_mock.b.return_value = 'works perfectly ;)'
print(a.b())
</code></pre>
| 0 |
2016-09-12T12:10:08Z
|
[
"python",
"mocking",
"python-unittest"
] |
optimization & numpy vectorization: repeat set of arrays
| 39,449,853 |
<p>I am getting used to the awesomeness of <code>numpy.apply_along_axis</code> and I was wondering whether I could take the vectorization to the next level - mainly for speed purposes - that is, using the potential of the function by trying to eliminate the for loop I have in the code below. </p>
<pre><code>from pandas import DataFrame
import numpy as np
from time import time
list_away = []
new_data_morphed = DataFrame(np.random.rand(1000, 5)).transpose().as_matrix()
group_by_strat_full = DataFrame([
[0, 1, 4],
[1, 2, 3],
[2, 3, 4],
[0, 1, 4]], columns=['add_mask_1', 'add_mask_2', 'add_mask_3', ])
all_the_core_strats = [lambda x: x.prod(), lambda x: sum(x), ]
def run_strat_mod(df_vals, core_strat, dict_mask_1, dict_mask_2, dict_mask_3, list_away):
slice_df = df_vals[[dict_mask_1, dict_mask_2, dict_mask_3]]
# TODO: this comprehension list should be vectorized
to_append = [np.apply_along_axis(lambda x: u(x), 0, slice_df) for u in core_strat]
list_away.append(to_append)
t1 = time()
results_3 = group_by_strat_full.apply(lambda x: run_strat_mod(
new_data_morphed,
all_the_core_strats,
x['add_mask_1'],
x['add_mask_2'],
x['add_mask_3'],
list_away), axis=1)
t2 = time()
print(abs(t1 - t2))
</code></pre>
<p>In order to do that, I was thinking of repeating the initial set of arrays, that is <code>slice_df</code>, so that I could apply <code>numpy.apply_along_axis</code> to a new <code>all_the_core_strats_mod</code>.</p>
<p>with output, from this:</p>
<pre><code> print(slice_df)
[[[ 0.91302268 0.6172959 0.05478723 ..., 0.37028638 0.52116891
0.14158221]
[ 0.72579223 0.78732047 0.61335979 ..., 0.46359203 0.27593171
0.73700975]
[ 0.21706977 0.87639447 0.44936619 ..., 0.44319643 0.53712003
0.8071096 ]]
</code></pre>
<p>to this:</p>
<pre><code>slice_df = np.array([df_vals[[dict_mask_1, dict_mask_2, dict_mask_3]]] * len(core_strat))
print(slice_df)
[[[ 0.91302268 0.6172959 0.05478723 ..., 0.37028638 0.52116891
0.14158221]
[ 0.72579223 0.78732047 0.61335979 ..., 0.46359203 0.27593171
0.73700975]
[ 0.21706977 0.87639447 0.44936619 ..., 0.44319643 0.53712003
0.8071096 ]]
[[ 0.91302268 0.6172959 0.05478723 ..., 0.37028638 0.52116891
0.14158221]
[ 0.72579223 0.78732047 0.61335979 ..., 0.46359203 0.27593171
0.73700975]
[ 0.21706977 0.87639447 0.44936619 ..., 0.44319643 0.53712003
0.8071096 ]]]
</code></pre>
<p>and then </p>
<pre><code>def all_the_core_strats_mod(x):
return [x[0].prod(), sum(x[1])]
to_append = np.apply_along_axis(all_the_core_strats_mod, 0, slice_df)
</code></pre>
<p>but it's not working as I imagined (applying functions separately to each duplicated block). </p>
<p>any ideas are welcome (the faster the better!)</p>
| 0 |
2016-09-12T12:00:43Z
| 39,457,518 |
<pre><code>def foo(x):
print(x) # add for diagnosis
return [x[0].prod(), x[1].sum()]
</code></pre>
<p>You apply it to a 3d array, which for simplicity I'll use:</p>
<pre><code>In [64]: x=np.arange(2*3*2).reshape(2,3,2)
In [66]: np.apply_along_axis(foo,0,x)
[0 6]
[1 7]
[2 8]
[3 9]
[ 4 10]
[ 5 11]
Out[66]:
array([[[ 0, 1],
[ 2, 3],
[ 4, 5]],
[[ 6, 7],
[ 8, 9],
[10, 11]]])
</code></pre>
<p>So <code>apply_along_axis</code> is passing to <code>foo</code>, <code>x[:,0,0]</code>, <code>x[:,0,1]</code>, <code>x[:,1,0]</code>, etc. Taking <code>prod</code> and <code>sum</code> on those 2 numbers isn't very exciting.</p>
<p><code>apply_along_axis</code> is just a convenient way of doing:</p>
<pre><code>for i in range(x.shape[1]):
for j in range(x.shape[2]):
ret[:,i,j] = foo(x[:,i,j])
</code></pre>
| 0 |
2016-09-12T19:34:09Z
|
[
"python",
"arrays",
"numpy",
"vectorization"
] |
Manual implementation of a Support Vector Machine
| 39,449,904 |
<p>I am wondering whether there is any article where an SVM (Support Vector Machine) is implemented manually in R or Python.</p>
<p>I do <strong>not</strong> want to use a built-in function or package.</p>
<p>The example could be very simple in terms of feature space and linear separable. </p>
<p>I just want to go through the whole process to enhance my understanding.</p>
| -1 |
2016-09-12T12:04:36Z
| 39,498,571 |
<p>The answer to this question is rather broad since there are several possible algorithms in order to train SVMs. Also packages like LibSVM (available both for Python and R) are open-source so you are free to check the code inside.</p>
<p>Hereinafter I will consider the Sequential Minimal Optimization (SMO) algorithm by J. Platt, which is implemented in LibSVM. Manually implementing an algorithm which solves the SVM optimization problem is rather tedious, but if that's your first approach with SVMs I'd suggest the following (albeit simplified) version of the SMO algorithm:</p>
<p><a href="http://cs229.stanford.edu/materials/smo.pdf" rel="nofollow">http://cs229.stanford.edu/materials/smo.pdf</a></p>
<p>This lecture is from Prof. Andrew Ng (Stanford) and he shows a simplified version of the SMO algorithm. I don't know what your theoretical background in SVMs is, but let's just say that the major difference is that the Lagrange multipliers pair (<em>alpha_i</em> and <em>alpha_j</em>) is randomly selected whereas in the original SMO algorithm there's a much harder heuristic involved.<br>
In other terms, thus, this algorithm does not guarantee to converge to a global optimum (which is always true in SVMs for the dataset at hand if trained with a proper algorithm), but this can give you a nice introduction to the optimization problem behind SVMs. </p>
<p>This paper, however, does not show any codes in R or Python but the pseudo-code is rather straightforward and easy to implement (~100 lines of code in either Matlab or Python). Also keep in mind that this training algorithm returns <em>alpha</em> (Lagrange multipliers vector) and <em>b</em> (intercept). You might want to have some additional parameters such as the proper Support Vectors but starting from vector <em>alpha</em> such quantities are rather easy to calculate.</p>
<p>At first, let's suppose you have a <code>LearningSet</code> in which there are as many rows as there are patterns and as many columns as there are features and let also suppose that you have <code>LearningLabels</code> which is a vector with as many elements as there are patterns and this vector (as its name suggests) contains the proper labels for the patterns in <code>LearningSet</code>.<br>
Also recall that <em>alpha</em> has as many elements as there are patterns. </p>
<p>In order to evaluate the Support Vector indices you can check whether element <em>i</em> in <em>alpha</em> is strictly greater than 0: if <code>alpha[i]>0</code> then the <em>i</em>-th pattern from <code>LearningSet</code> is a Support Vector. Similarly, the <em>i</em>-th element from <code>LearningLabels</code> is the related label.</p>
<p>Finally, you might want to evaluate vector <em>w</em>, the free parameters vector. Given that <em>alpha</em> is known, you can apply the following formula </p>
<pre><code>w = ((alpha.*LearningLabels)'*LearningSet)'
</code></pre>
<p>where <code>alpha</code> and <code>LearningLabels</code> are column vectors and <code>LearningSet</code> is the matrix as described above. In the above formula <code>.*</code> is the element-wise product whereas <code>'</code> is the transpose operator.</p>
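<p>In NumPy terms (a sketch under the assumption that <code>alpha</code> and <code>labels</code> are 1-D arrays of length <em>n</em> and <code>X</code> is the <em>n</em>-by-<em>d</em> pattern matrix), the same formula reads:</p>
<pre><code>import numpy as np

def weight_vector(alpha, labels, X):
    # w = sum_i alpha_i * y_i * x_i; only support vectors (alpha_i > 0) contribute
    return ((alpha * labels)[:, np.newaxis] * X).sum(axis=0)
</code></pre>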
| 0 |
2016-09-14T19:50:11Z
|
[
"python",
"svm"
] |
Segmentation fault: 11 when trying to run Pygame
| 39,449,909 |
<p>I am using OSX 10.11.6, Python 2.7.12 and Pygame 1.9.1.
I made this simple program that should display a black rectangle in the middle of a white field. However, when I try to run it I get an error saying:</p>
<pre><code>Segmentation fault: 11
</code></pre>
<p>I have tried several things, but nothing seems to work. Here is my code:</p>
<pre><code>import pygame
pygame.init()
white = (255, 255, 255)
black = (0, 0, 0)
red = (255, 0, 0)
gameDisplay = pygame.display.set_mode((800, 600))
pygame.display.set_caption('Slither')
gameExit = False
while not gameExit:
for event in pygame.event.get():
if event.type == pygame.QUIT:
gameExit = True
gameDisplay.fill(white)
pygame.draw.rect(gameDisplay, black, [400, 300, 20, 20])
pygame.display.update()
pygame.quit()
quit()
</code></pre>
<p>Does someone know how I can solve this issue? Thanks in advance!
Note: I am writing my code in Atom, and running it in Terminal using this command:</p>
<pre><code>$ python2.7-32 slither.py
</code></pre>
| 0 |
2016-09-12T12:04:56Z
| 39,719,491 |
<p>This is due to a flaw in the built-in SDL library that Pygame depends on. Pygame can create a screen, but attempting to touch it will immediately crash it with Segmentation error 11.</p>
<p>From the official SDL website, go to <a href="https://www.libsdl.org/download-1.2.php" rel="nofollow">the download page</a> and get the runtime library 1.2.15 for Mac. Open the .dmg you downloaded and you'll be given an SDL.framework file. Open /Library/Frameworks in Finder and move the framework file there. You may need to select Replace.</p>
| 0 |
2016-09-27T08:13:52Z
|
[
"python",
"terminal",
"pygame"
] |
numpy apply_over_axes forcing keepdims=True?
| 39,449,926 |
<p>I have the following code </p>
<pre><code>import numpy as np
import sys
def barycenter( arr, axis=0 ) :
bc = np.mean( arr, axis, keepdims=False )
print( "src shape:", arr.shape, ", **** trg shape:", bc.shape, "****" )
sys.stdout.flush()
return bc
a = np.array([[[0.1, 0.2, 0.3], [0.2, 0.3, 0.4]],
[[0.4, 0.4, 0.4], [0.7, 0.6, 0.8]]], np.float)
e = barycenter( a, 2 )
print( "direct application =", e, "**** (trg shape =", e.shape, ") ****\n" )
f = np.apply_over_axes( barycenter, a, 2 )
print( "application through apply_over_axes =", f, "**** (trg shape =", f.shape, ") ****\n" )
</code></pre>
<p>which produces the following output</p>
<pre><code>src shape: (2, 2, 3) , **** trg shape: (2, 2) ****
direct application = [[ 0.2 0.3]
[ 0.4 0.7]] **** (trg shape = (2, 2) ) ****
src shape: (2, 2, 3) , **** trg shape: (2, 2) ****
application through apply_over_axes = [[[ 0.2]
[ 0.3]]
[[ 0.4]
[ 0.7]]] **** (trg shape = (2, 2, 1) ) ****
</code></pre>
<p>So the return value of the function <code>barycenter</code> is different from what is obtained with <code>apply_over_axes( barycenter, ...</code>.</p>
<p>Why is that so?</p>
| 0 |
2016-09-12T12:05:51Z
| 39,450,318 |
<p>The result follows directly from the doc:</p>
<blockquote>
<p>func is called as res = func(a, axis), where axis is the first element
of axes. The result res of the function call must have either the same
dimensions as a or one less dimension. If res has one less dimension
than a, a dimension is inserted before axis. The call to func is then
repeated for each axis in axes, with res as the first argument.</p>
</blockquote>
<p>Your func reduces the dimension by 1, so apply_over_axes inserts a dimension.</p>
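<p>If you want the <code>apply_over_axes</code> result to match the direct call, one option is to simply drop the reinserted axis afterwards (a small sketch based on the question's code):</p>
<pre><code>f = np.apply_over_axes(barycenter, a, 2).squeeze(axis=2)
# f.shape is now (2, 2), as in the direct application
</code></pre>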
| 1 |
2016-09-12T12:26:59Z
|
[
"python",
"numpy"
] |
python unittest.TestCase.assertRaises not working
| 39,450,098 |
<p>I'm trying to run tests on my 'add' function in Python, but it is giving an error:</p>
<pre><code>7
E
======================================================================
ERROR: test_upper (__main__.TestStringMethods)
----------------------------------------------------------------------
Traceback (most recent call last):
File "C:/Users/MaZee/PycharmProjects/andela/asdasd.py", line 22, in test_upper
self.assertEqual("Input should be a string:", cm.exception.message , "Input is not a string:")
AttributeError: '_AssertRaisesContext' object has no attribute 'exception'
----------------------------------------------------------------------
Ran 1 test in 0.001s
FAILED (errors=1)
Process finished with exit code 1
</code></pre>
<p>Here's my code : </p>
<pre><code> import unittest
def add(a,b):
"""
Returns the addition value of a and b.
"""
try:
out = a + b
except TypeError:
raise TypeError("Input should be a string:")
print (out)
return
class TestStringMethods(unittest.TestCase):
def test_upper(self):
with self.assertRaises(TypeError) as cm:
add(3,4)
self.assertEqual("Input should be a string:", cm.exception.message , "Input is not a string:")
if __name__ == '__main__':
unittest.main()
</code></pre>
| 2 |
2016-09-12T12:16:12Z
| 39,450,218 |
<p>As the error message is telling you, your <em>assert raises</em> object has no attribute <code>exception</code>. To be more specific, this call: </p>
<pre><code>cm.exception.message
</code></pre>
<p><code>cm</code> is your assert object in this case, and because the code you are testing never actually raises, your <code>cm</code> object will not have the <code>exception</code> attribute you are trying to access.</p>
<p>Now, on to why this is happening. You are trying to test what happens when an <code>exception</code> is being raised in your <code>add</code> method, in order to raise a <code>TypeError</code>. However, if you look at your test case you are passing two valid integers in to the <code>add</code> method. You will not raise an exception because this is a valid test case. </p>
<p>For your unittest, you are looking to test what happens when you <code>raise</code> something, i.e. insert invalid data to the <code>add</code> method. Try your code again, but this time in your unittest, pass the following:</p>
<pre><code>add(5, 'this will raise')
</code></pre>
<p>You will now get your <code>TypeError</code>.</p>
<p>You will also need to perform your assertion validation outside of the context manager: </p>
<pre><code>def test_upper(self):
with self.assertRaises(TypeError) as cm:
add(3, 'this will raise')
self.assertEqual("Input should be a string:", cm.exception.message, "Input is not a string:")
</code></pre>
<p>You will now have another problem. There is no <code>message</code> attribute. You should be checking simply <code>cm.exception</code>. Furthermore, in your <code>add</code> method your string is: </p>
<pre><code>"Input should be a string:"
</code></pre>
<p>However, you are checking that it is: </p>
<pre><code>"Input is not a string:"
</code></pre>
<p>So, once you correct your unittest to use <code>cm.exception</code>, you will now be faced with:</p>
<pre><code>AssertionError: 'Input should be a string:' != TypeError('Input should be a string:',) : Input is not a string:
</code></pre>
<p>So, your assertion should check the exception string by calling <code>str</code> on the <code>cm.exception</code>:</p>
<pre><code>self.assertEqual("Input should be a string:", str(cm.exception), "Input should be a string:")
</code></pre>
<p>So, your full test method should be:</p>
<pre><code>def test_upper(self):
with self.assertRaises(TypeError) as cm:
add(3, 'this will raise')
self.assertEqual("Input should be a string:", str(cm.exception), "Input should be a string:")
</code></pre>
| 4 |
2016-09-12T12:22:14Z
|
[
"python",
"pycharm"
] |
Looping over widgets inside widgets inside layouts
| 39,450,141 |
<p>Similar question to what we have here <a href="http://stackoverflow.com/questions/5150182/loop-over-widgets-in-pyqt-layout">Loop over widgets in PyQt Layout</a> but a bit more complex...</p>
<p>I have</p>
<pre><code>QVGridLayout
    QGroupBox
        QGridLayout
            QLineEdit
</code></pre>
<p>I'd like to access the QLineEdit but so far I don't know how to access the children of QGroupBox</p>
<pre><code>for i in range(self.GridLayout.count()):
    item = self.GridLayout.itemAt(i)
    for i in range(item.count()):
        lay = item.itemAt(i)
        edit = lay.findChildren(QLineEdit)
        print edit.text()
</code></pre>
<p>Can anyone point me in the right direction?</p>
| 0 |
2016-09-12T12:18:29Z
| 39,453,020 |
<p>Sorted :</p>
<pre><code>for i in range(self.GridLayout.count()):
    item = self.GridLayout.itemAt(i)
    if type(item.widget()) == QGroupBox:
        child = item.widget().children()
</code></pre>
<p>I had to use item.widget() to get access to the QGroupBox.
Hope this helps someone. </p>
| 0 |
2016-09-12T14:45:09Z
|
[
"python",
"pyqt",
"qgroupbox"
] |
Looping over widgets inside widgets inside layouts
| 39,450,141 |
<p>Similar question to what we have here <a href="http://stackoverflow.com/questions/5150182/loop-over-widgets-in-pyqt-layout">Loop over widgets in PyQt Layout</a> but a bit more complex...</p>
<p>I have</p>
<pre><code>QVGridLayout
    QGroupBox
        QGridLayout
            QLineEdit
</code></pre>
<p>I'd like to access the QLineEdit but so far I don't know how to access the children of QGroupBox</p>
<pre><code>for i in range(self.GridLayout.count()):
    item = self.GridLayout.itemAt(i)
    for i in range(item.count()):
        lay = item.itemAt(i)
        edit = lay.findChildren(QLineEdit)
        print edit.text()
</code></pre>
<p>Can anyone point me in the right direction?</p>
| 0 |
2016-09-12T12:18:29Z
| 39,475,208 |
<p>When a widget is added to a layout, it automatically becomes a child of the widget the layout is set on. So the example reduces to a two-liner:</p>
<pre><code>for group in self.GridLayout.parentWidget().findChildren(QGroupBox):
    for edit in group.findChildren(QLineEdit):
        # do stuff with edit
</code></pre>
<p>However, <code>findChildren</code> is <em>recursive</em>, so if all the line-edits are in group-boxes, this can be simplified to a one-liner:</p>
<pre><code>for edit in self.GridLayout.parentWidget().findChildren(QLineEdit):
    # do stuff with edit
</code></pre>
| 1 |
2016-09-13T16:49:37Z
|
[
"python",
"pyqt",
"qgroupbox"
] |
Passing Django Model to Template
| 39,450,186 |
<p>I have a little problem which I can't seem to get my head around. It seems like a very trivial thing but I just can't figure it out. Essentially what I'm trying to do is to create a <code>Project</code> model which holds information on a certain project and then have another model called <code>Link</code> which holds the name of the link such as 'Get Program' or 'Download Release' and have a URLField which holds the URL for those links and is child of <code>Project</code>.</p>
<p>So far my <code>models.py</code> is as below:</p>
<pre><code>from django.db import models


class Project(models.Model):
    name = models.CharField(max_length=255)
    description = models.TextField()

    def get_name(self):
        return self.name

    def get_description(self):
        return self.description

    def __str__(self):
        return self.name


class Link(models.Model):
    project = models.ForeignKey(Project)
    name = models.CharField(max_length=255)
    url = models.URLField()

    def get_name(self):
        return self.name

    def __str__(self):
        return self.name
</code></pre>
<p>The problem is that in my <code>views.py</code> I want to be able to pass Project objects so I can iterate over them and display all links for each project, including the project object's fields such as name and description. So far I've done:</p>
<pre><code>from django.shortcuts import render
from django.http import HttpResponse

from .models import Project, Link


def projects_index(request):
    projects = Project.objects.all()
    links = Link.objects.all()
    context = {
        'projects': projects,
    }
    return render(request, 'projects.html', context)
</code></pre>
| 1 |
2016-09-12T12:20:39Z
| 39,450,279 |
<p>Inside your template <code>projects.html</code> you should be able to loop over the Links for each project with</p>
<pre><code>{% for project in projects %}
    {% for link in project.link_set.all %}
        {{ link }}
    {% endfor %}
{% endfor %}
</code></pre>
| 1 |
2016-09-12T12:24:55Z
|
[
"python",
"django",
"django-models"
] |
Passing Django Model to Template
| 39,450,186 |
<p>I have a little problem which I can't seem to get my head around. It seems like a very trivial thing but I just can't figure it out. Essentially what I'm trying to do is to create a <code>Project</code> model which holds information on a certain project and then have another model called <code>Link</code> which holds the name of the link such as 'Get Program' or 'Download Release' and have a URLField which holds the URL for those links and is child of <code>Project</code>.</p>
<p>So far my <code>models.py</code> is as below:</p>
<pre><code>from django.db import models


class Project(models.Model):
    name = models.CharField(max_length=255)
    description = models.TextField()

    def get_name(self):
        return self.name

    def get_description(self):
        return self.description

    def __str__(self):
        return self.name


class Link(models.Model):
    project = models.ForeignKey(Project)
    name = models.CharField(max_length=255)
    url = models.URLField()

    def get_name(self):
        return self.name

    def __str__(self):
        return self.name
</code></pre>
<p>The problem is that in my <code>views.py</code> I want to be able to pass Project objects so I can iterate over them and display all links for each project, including the project object's fields such as name and description. So far I've done:</p>
<pre><code>from django.shortcuts import render
from django.http import HttpResponse

from .models import Project, Link


def projects_index(request):
    projects = Project.objects.all()
    links = Link.objects.all()
    context = {
        'projects': projects,
    }
    return render(request, 'projects.html', context)
</code></pre>
| 1 |
2016-09-12T12:20:39Z
| 39,450,352 |
<p>You really should do the tutorial. </p>
<pre><code>for project in projects:
    for link in project.link_set.all:
        <...>
</code></pre>
<p>Add a <code>related_name="links"</code> to your foreign key, and the inner loop can be</p>
<pre><code>for link in project.links.all:
</code></pre>
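<p>For reference, the foreign key change that enables this is simply (a sketch):</p>
<pre><code>class Link(models.Model):
    project = models.ForeignKey(Project, related_name='links')
    ...
</code></pre>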
| 0 |
2016-09-12T12:28:55Z
|
[
"python",
"django",
"django-models"
] |
Efficiently read/shift out-of-range datetimes using pandas
| 39,450,309 |
<p>so I have a dataset (a bunch of csv files) which contains (anonymized) datetimes in the following form:</p>
<blockquote>
<p>3202-11-11 14:51:00 EST</p>
</blockquote>
<p>The dates have been shifted by some random time for each entity. So differences in time for a given entity are still meaningful.</p>
<p>When trying to convert using e.g.
<code>pd.to_datetime(['3202-11-11 14:51:00 EST'], format='%Y-%m-%d %H:%M:%S EST')</code>, this will result in 'OutOfBoundsDatetime' error.</p>
<p>For my use case it would be ideal to specify a number of years by which to shift all dates when reading the csv files, s.t. they are within the valid pandas datetime range.</p>
<p>Do you have an idea how this could be solved efficiently? I have to do this on ~40k entities/csv files, with 10 to 10k such dates per csv. (my non-efficient idea: Go through python datetime which works for years till 9999, shift dates there and then convert to pandas datetime)</p>
<p><strong>EDIT</strong>: I also asked this question in IRC #pydata and got this answer (thanks jboy): </p>
<pre><code>>>> from datetime import timedelta
>>> offset = timedelta(days=10000)
>>> df
time
0 3001-01-01 01:00:01
1 3001-01-01 01:00:02
2 3001-01-01 01:00:05
3 3001-01-01 01:00:09
>>> df['time'].map(lambda t: t - offset)
0 2973-08-15 01:00:01
1 2973-08-15 01:00:02
2 2973-08-15 01:00:05
3 2973-08-15 01:00:09
Name: time, dtype: object
</code></pre>
<p>The only thing I have to do differently was:</p>
<pre><code> df['time'].map(lambda t: datetime.datetime.strptime(t, '%Y-%m-%d %H:%M:%S EST')-offset)
</code></pre>
<p>Because my time column was still str and not datetime.datetime.</p>
| 1 |
2016-09-12T12:26:33Z
| 39,450,551 |
<p>One thing you could do is to work on this at the string level, deducting some number of years (in the following, 1200):</p>
<pre><code>s = '3202-11-11 14:51:00 EST'
In [21]: pd.to_datetime(str(int(s[: 4]) - 1200) + s[4: ])
Out[21]: Timestamp('2002-11-11 14:51:00')
</code></pre>
<p>You can also vectorize this. Say you start with</p>
<pre><code>dates = pd.Series([s, s])
</code></pre>
<p>Then you can use</p>
<pre><code>>>> pd.to_datetime((dates.str[: 4].astype(int) - 1200).astype(str) + dates.str[4: ])
0 2002-11-11 14:51:00
1 2002-11-11 14:51:00
dtype: datetime64[ns]
</code></pre>
| 0 |
2016-09-12T12:39:55Z
|
[
"python",
"datetime",
"pandas"
] |
Efficiently read/shift out-of-range datetimes using pandas
| 39,450,309 |
<p>so I have a dataset (a bunch of csv files) which contains (anonymized) datetimes in the following form:</p>
<blockquote>
<p>3202-11-11 14:51:00 EST</p>
</blockquote>
<p>The dates have been shifted by some random time for each entity. So differences in time for a given entity are still meaningful.</p>
<p>When trying to convert using e.g.
<code>pd.to_datetime(['3202-11-11 14:51:00 EST'], format='%Y-%m-%d %H:%M:%S EST')</code>, this will result in 'OutOfBoundsDatetime' error.</p>
<p>For my use case it would be ideal to specify a number of years by which to shift all dates when reading the csv files, s.t. they are within the valid pandas datetime range.</p>
<p>Do you have an idea how this could be solved efficiently? I have to do this on ~40k entities/csv files, with 10 to 10k such dates per csv. (my non-efficient idea: Go through python datetime which works for years till 9999, shift dates there and then convert to pandas datetime)</p>
<p><strong>EDIT</strong>: I also asked this question in IRC #pydata and got this answer (thanks jboy): </p>
<pre><code>>>> from datetime import timedelta
>>> offset = timedelta(days=10000)
>>> df
time
0 3001-01-01 01:00:01
1 3001-01-01 01:00:02
2 3001-01-01 01:00:05
3 3001-01-01 01:00:09
>>> df['time'].map(lambda t: t - offset)
0 2973-08-15 01:00:01
1 2973-08-15 01:00:02
2 2973-08-15 01:00:05
3 2973-08-15 01:00:09
Name: time, dtype: object
</code></pre>
<p>The only thing I have to do differently was:</p>
<pre><code> df['time'].map(lambda t: datetime.datetime.strptime(t, '%Y-%m-%d %H:%M:%S EST')-offset)
</code></pre>
<p>Because my time column was still str and not datetime.datetime.</p>
| 1 |
2016-09-12T12:26:33Z
| 39,451,384 |
<p>The pandas datetime object uses a 64-bit integer to represent time, and since it has nanosecond resolution, the upper limit stops at <code>2262-04-11</code>, as referenced <a href="http://pandas-docs.github.io/pandas-docs-travis/timeseries.html#timeseries-timestamp-limits" rel="nofollow">here</a>.</p>
<p>I'm not sure if you plan on doing any sort of time manipulations on the time objects, but if you just want them represented in the dataframe, I don't see why not use the python datetime object to just represent them as-is without doing any time shifting:</p>
<p><strong>EXAMPLE</strong></p>
<pre><code>from datetime import datetime
s = pd.Series(['3202-11-11 14:51:00 EST', '9999-12-31 12:21:00 EST'])
s = s.apply(lambda x: datetime.strptime(x[:-4], "%Y-%m-%d %H:%M:%S"))
</code></pre>
<p><strong>RETURNS</strong></p>
<pre><code>0 3202-11-11 14:51:00
1 9999-12-31 12:21:00
dtype: object
</code></pre>
<p>Running a quick sniff type check on the first cell:</p>
<pre><code>>>> type(s[0])
<type 'datetime.datetime'>
</code></pre>
| 0 |
2016-09-12T13:23:07Z
|
[
"python",
"datetime",
"pandas"
] |
Weird for loop in python 2.7
| 39,450,444 |
<p>I have a codewars problem and I'm new to Python (less than 24hrs using it).
I'm solving the diamond problem:</p>
<blockquote>
<p>Task:</p>
<p>You need to return a string that, when printed, displays a diamond shape on
the screen using asterisk ("*") characters. Please see provided test
cases for exact output format.</p>
<p>The shape that will be returned from the print method resembles a diamond,
where the number provided as input represents the number of *'s
printed on the middle line. The line above and below will be centered
and will have 2 less *'s than the middle line. This reduction by 2 *'s
for each line continues until a line with a single * is printed at the
top and bottom of the figure.</p>
<p>Return null if input is even number or negative (as it is not possible
to print diamond with even number or negative number).</p>
<p>Please see provided test case(s) for examples.</p>
<p>Python Note</p>
<p>Since print is a reserved word in Python, Python students must
implement the diamond(n) method instead, and return None for invalid
input.</p>
</blockquote>
<p>My code:</p>
<pre><code>def diamond(n):
    retorno = " *\n"
    if n%3 == 0:
        for i in range(n,0,-2):
            retorno += i * "*"
            print(retorno + str(i));
    #return retorno
</code></pre>
<p>Test case:</p>
<pre><code>expected = " *\n"
expected += "***\n"
expected += " *\n"
test.assert_equals(diamond(3), expected)
</code></pre>
<p>The output:</p>
<pre><code> *
***3
*
****1
</code></pre>
<p>How come the first "*" from the initialization of the var is repeating like it's inside the for loop?</p>
| -1 |
2016-09-12T12:33:43Z
| 39,450,624 |
<p>You have </p>
<pre><code>retorno = " *\n"
</code></pre>
<p>and append to it after the first iteration (i = 3):</p>
<pre><code>retorno = " *\n***" # printed
# *
#***
</code></pre>
<p>No newline was appended.
After second iteration (i = 1):</p>
<pre><code>retorno = " *\n****" # printed
# *
#****
</code></pre>
<p>The two printed retornos are exactly what you see.</p>
| 2 |
2016-09-12T12:44:02Z
|
[
"python",
"python-2.7",
"for-loop"
] |
Weird for loop in python 2.7
| 39,450,444 |
<p>I have a codewars problem and I'm new to Python (less than 24hrs using it).
I'm solving the diamond problem:</p>
<blockquote>
<p>Task:</p>
<p>You need to return a string that, when printed, displays a diamond shape on
the screen using asterisk ("*") characters. Please see provided test
cases for exact output format.</p>
<p>The shape that will be returned from the print method resembles a diamond,
where the number provided as input represents the number of *'s
printed on the middle line. The line above and below will be centered
and will have 2 less *'s than the middle line. This reduction by 2 *'s
for each line continues until a line with a single * is printed at the
top and bottom of the figure.</p>
<p>Return null if input is even number or negative (as it is not possible
to print diamond with even number or negative number).</p>
<p>Please see provided test case(s) for examples.</p>
<p>Python Note</p>
<p>Since print is a reserved word in Python, Python students must
implement the diamond(n) method instead, and return None for invalid
input.</p>
</blockquote>
<p>My code:</p>
<pre><code>def diamond(n):
    retorno = " *\n"
    if n%3 == 0:
        for i in range(n,0,-2):
            retorno += i * "*"
            print(retorno + str(i));
    #return retorno
</code></pre>
<p>Test case:</p>
<pre><code>expected = " *\n"
expected += "***\n"
expected += " *\n"
test.assert_equals(diamond(3), expected)
</code></pre>
<p>The output:</p>
<pre><code> *
***3
*
****1
</code></pre>
<p>How come the first "*" from the initialization of the var is repeating like it's inside the for loop?</p>
| -1 |
2016-09-12T12:33:43Z
| 39,450,635 |
<p>It isn't repeating. It starts life as *\n; the first time through the loop you add '***', so it becomes *\n***; the next time, you add '*', so it becomes *\n****, thus the output:</p>
<pre><code>*
***
*
****
</code></pre>
<p>Note also that n%3 is not the way to test for odd numbers, you want n%2==1.</p>
| 1 |
2016-09-12T12:44:35Z
|
[
"python",
"python-2.7",
"for-loop"
] |
Weird for loop in python 2.7
| 39,450,444 |
<p>I have a codewars problem and I'm new to Python (less than 24hrs using it).
I'm solving the diamond problem:</p>
<blockquote>
<p>Task:</p>
<p>You need to return a string that, when printed, displays a diamond shape on
the screen using asterisk ("*") characters. Please see provided test
cases for exact output format.</p>
<p>The shape that will be returned from the print method resembles a diamond,
where the number provided as input represents the number of *'s
printed on the middle line. The line above and below will be centered
and will have 2 less *'s than the middle line. This reduction by 2 *'s
for each line continues until a line with a single * is printed at the
top and bottom of the figure.</p>
<p>Return null if input is even number or negative (as it is not possible
to print diamond with even number or negative number).</p>
<p>Please see provided test case(s) for examples.</p>
<p>Python Note</p>
<p>Since print is a reserved word in Python, Python students must
implement the diamond(n) method instead, and return None for invalid
input.</p>
</blockquote>
<p>My code:</p>
<pre><code>def diamond(n):
    retorno = " *\n"
    if n%3 == 0:
        for i in range(n,0,-2):
            retorno += i * "*"
            print(retorno + str(i));
    #return retorno
</code></pre>
<p>Test case:</p>
<pre><code>expected = " *\n"
expected += "***\n"
expected += " *\n"
test.assert_equals(diamond(3), expected)
</code></pre>
<p>The output:</p>
<pre><code> *
***3
*
****1
</code></pre>
<p>How come the first "*" from the initialization of the var is repeating like it's inside the for loop?</p>
| -1 |
2016-09-12T12:33:43Z
| 39,450,679 |
<p>Ok, let me explain what exactly is happening in your script, step by step:</p>
<p>This is your code with line numbers:</p>
<pre><code>(1)def diamond(n):
(2)    retorno = " *\n"
(3)    if n%3 == 0:
(4)        for i in range(n,0,-2):
(5)            retorno += i * "*"
(6)            print(retorno + str(i))
</code></pre>
<ol>
<li>line 1, n = 3</li>
<li>line 2, n = 3, retorno = " *\n"</li>
<li>line 5, n = 3, i = 3, retorno = " *\n***"</li>
<li>line 6, n = 3, i = 3, retorno = " *\n***", you print " *\n***3"</li>
<li>line 5, n = 3, i = 1, retorno = " *\n****"</li>
<li>line 6, n = 3, i = 1, retorno = " *\n****", you print " *\n****1"</li>
</ol>
<p>So in the end you have printed, in two steps:</p>
<pre><code> *
***3
*
****1
</code></pre>
<p>Please also note that in Python you should not use ";" and that if you want to test if the input number is odd you should use <code>n%2 == 1</code>.</p>
| 2 |
2016-09-12T12:47:05Z
|
[
"python",
"python-2.7",
"for-loop"
] |
Return name of column after comparing cummulative sum with random drawn number
| 39,450,527 |
<p>I have a <code>DataFrame</code> in which the sum of the columns is <code>1</code>, like so:</p>
<pre><code>Out[]:
cod_mun ws_1 1_3 4 5_7 8 9_10 11 12 13 14 15 nd
1100015 0.1379 0.273 0.2199 0.1816 0.0566 0.0447 0.0617 0.0015 0 0.0021 0.0074 0.0137
1100023 0.1132 0.2009 0.185 0.2161 0.1036 0.0521 0.0885 0.0044 0.0038 0.0061 0.0181 0.0082
</code></pre>
<p>I want to draw a random number </p>
<pre><code>import random
prob = random.random()
</code></pre>
<p>And then I want to compare such number with the cummulative sum of the columns <strong>from left to right</strong> and then return the columns' <code>heading</code>.</p>
<p>For example, if <code>prob = 0.24</code> the threshold would reach 0.27 in the second column, <code>0.1379 + 0.273 > 0.24</code>Then I would need to return the name of the column.</p>
<p>It it possible to do that WITHOUT a using 15 <code>elif</code>s?</p>
<p>Such that:</p>
<pre><code>if prob < df.iloc[0]['ws_1']:
    return 'ws_1'
elif prob < df.iloc[0]['ws_1'] + df.iloc[0]['1_3']:
    return '1_3'
elif ...
</code></pre>
| 1 |
2016-09-12T12:38:05Z
| 39,450,618 |
<p>I think you can count <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.cumsum.html" rel="nofollow"><code>DataFrame.cumsum</code></a>, compare with <code>prob</code> and get first column with <code>True</code> value by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.idxmax.html" rel="nofollow"><code>idxmax</code></a>:</p>
<pre><code>df.set_index('cod_mun', inplace=True)
prob = 0.24
print (df.cumsum(axis=1))
ws_1 1_3 4 5_7 8 9_10 11 12 \
cod_mun
1100015 0.1379 0.4109 0.6308 0.8124 0.8690 0.9137 0.9754 0.9769
1100023 0.1132 0.3141 0.4991 0.7152 0.8188 0.8709 0.9594 0.9638
13 14 15 nd
cod_mun
1100015 0.9769 0.9790 0.9864 1.0001
1100023 0.9676 0.9737 0.9918 1.0000
print (df.cumsum(axis=1) > prob)
ws_1 1_3 4 5_7 8 9_10 11 12 13 14 15 \
cod_mun
1100015 False True True True True True True True True True True
1100023 False True True True True True True True True True True
nd
cod_mun
1100015 True
1100023 True
print ((df.cumsum(axis=1) > prob).idxmax(axis=1))
cod_mun
1100015 1_3
1100023 1_3
dtype: object
</code></pre>
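<p>To tie this back to the random draw, a small usage sketch of the same idea:</p>
<pre><code>import random

prob = random.random()
chosen = (df.cumsum(axis=1) > prob).idxmax(axis=1)
print (chosen)
</code></pre>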
| 4 |
2016-09-12T12:43:50Z
|
[
"python",
"pandas",
"random"
] |
Python: split list into indices based on consecutive identical values
| 39,450,575 |
<p>Could you advise me how to write a script to split a list by the number of values? I mean: </p>
<pre><code>my_list =[11,11,11,11,12,12,15,15,15,15,15,15,20,20,20]
</code></pre>
<p>So there are 4 items of 11, 2 of 12, 6 of 15 and 3 of 20.
The next list, for example range(0, 100),
then has to be split into parts of 4, 2, 6 and 3 items.
I counted the identical values and have a function to split a list, but it doesn't work together:</p>
<pre><code>div = Counter(my_list).values()  ## counts same values in the list

def chunk(it, size):
    it = iter(it)
    return iter(lambda: tuple(islice(it, size)), ())
</code></pre>
<p>What do I need:</p>
<pre><code>Out: [[0,1,2,3],[4,5],[6,7,8,9,10,11], etc...]
</code></pre>
| 2 |
2016-09-12T12:41:07Z
| 39,450,720 |
<p>You can use <a href="https://docs.python.org/2/library/functions.html#enumerate" rel="nofollow"><code>enumerate</code></a>, <a href="https://docs.python.org/2/library/itertools.html#itertools.groupby" rel="nofollow"><code>itertools.groupby</code></a>, and <a href="https://docs.python.org/2/library/operator.html" rel="nofollow"><code>operator.itemgetter</code></a>:</p>
<pre><code>In [45]: import itertools
In [46]: import operator
In [47]: [[e[0] for e in d[1]] for d in itertools.groupby(enumerate(my_list), key=operator.itemgetter(1))]
Out[47]: [[0, 1, 2, 3], [4, 5], [6, 7, 8, 9, 10, 11], [12, 13, 14]]
</code></pre>
<p>What this does is as follows:</p>
<ol>
<li><p>First it enumerates the items.</p></li>
<li><p>It groups them, using the second item in each enumeration tuple (the original value).</p></li>
<li><p>In the resulting list per group, it uses the first item in each tuple (the enumeration)</p></li>
</ol>
| 3 |
2016-09-12T12:49:14Z
|
[
"python",
"list",
"split"
] |
Python: split list into indices based on consecutive identical values
| 39,450,575 |
<p>Could you advise me how to write a script to split a list by the number of values? I mean: </p>
<pre><code>my_list =[11,11,11,11,12,12,15,15,15,15,15,15,20,20,20]
</code></pre>
<p>So there are 4 items of 11, 2 of 12, 6 of 15 and 3 of 20.
The next list, for example range(0, 100),
then has to be split into parts of 4, 2, 6 and 3 items.
I counted the identical values and have a function to split a list, but it doesn't work together:</p>
<pre><code>div = Counter(my_list).values()  ## counts same values in the list

def chunk(it, size):
    it = iter(it)
    return iter(lambda: tuple(islice(it, size)), ())
</code></pre>
<p>What do I need:</p>
<pre><code>Out: [[0,1,2,3],[4,5],[6,7,8,9,10,11], etc...]
</code></pre>
| 2 |
2016-09-12T12:41:07Z
| 39,452,018 |
<p>Solution in Python 3 , If you are only using <code>counter</code> :</p>
<pre><code>from collections import Counter

my_list = [11,11,11,11,12,12,15,15,15,15,15,15,20,20,20]
count = Counter(my_list)
div = list(count.keys())  # take only keys
div.sort()
l = []
num = 0
for i in div:
    t = []
    for j in range(count[i]):  # loop number of times it occurs in the list
        t.append(num)
        num += 1
    l.append(t)
print(l)
</code></pre>
<p>Output:</p>
<pre><code>[[0, 1, 2, 3], [4, 5], [6, 7, 8, 9, 10, 11], [12, 13, 14]]
</code></pre>
<p>Alternate Solution using <code>set</code>:</p>
<pre><code>my_list = [11,11,11,11,12,12,15,15,15,15,15,15,20,20,20]
val = set(my_list)  # filter only unique elements
ans = []
num = 0
for i in val:
    temp = []
    for j in range(my_list.count(i)):  # loop till number of occurrences of each unique element
        temp.append(num)
        num += 1
    ans.append(temp)
print(ans)
</code></pre>
<p><strong>EDIT:</strong>
As per the changes required to get the desired output mentioned in the comments by @Protoss Reed:</p>
<pre><code>my_list = [11,11,11,11,12,12,15,15,15,15,15,15,20,20,20]
val = list(set(my_list))  # filter only unique elements
val.sort()  # because set is not sorted by default
ans = []
index = 0
l2 = [54,21,12,45,78,41,235,7,10,4,1,1,897,5,79]
for i in val:
    temp = []
    for j in range(my_list.count(i)):  # loop till number of occurrences of each unique element
        temp.append(l2[index])
        index += 1
    ans.append(temp)
print(ans)
</code></pre>
<p>Output:</p>
<pre><code>[[54, 21, 12, 45], [78, 41], [235, 7, 10, 4, 1, 1], [897, 5, 79]]
</code></pre>
<p>Here I had to convert the <code>set</code> into a <code>list</code> because a <code>set</code> is not sorted; I think the rest is self-explanatory.</p>
<p><strong>Another Solution</strong> if input is not always Sorted (using <code>OrderedDict</code>):</p>
<pre><code>from collections import OrderedDict

v = OrderedDict({})
my_list = [12,12,11,11,11,11,20,20,20,15,15,15,15,15,15]
l2 = [54,21,12,45,78,41,235,7,10,4,1,1,897,5,79]
for i in my_list:  # maintain count in dict
    if i in v:
        v[i] += 1
    else:
        v[i] = 1

ans = []
index = 0
for key, values in v.items():
    temp = []
    for j in range(values):
        temp.append(l2[index])
        index += 1
    ans.append(temp)
print(ans)
</code></pre>
<p>Output:</p>
<pre><code>[[54, 21], [12, 45, 78, 41], [235, 7, 10], [4, 1, 1, 897, 5, 79]]
</code></pre>
<p>Here I use <code>OrderedDict</code> to maintain the order of the input sequence, which is random (unpredictable) in the case of a <code>set</code>.</p>
<p>Although I prefer @Ami Tavory's solution which is more pythonic.</p>
<p>[Extra work: If anybody can convert this solution into <code>list comprehension</code> it will be awesome because i tried but can not convert it to <code>list comprehension</code> and if you succeed please post it in comments it will help me to understand]</p>
| 1 |
2016-09-12T13:55:12Z
|
[
"python",
"list",
"split"
] |
for loop questions python matching excel
| 39,450,609 |
<p>I am trying to go through an entire row of cells and compare it to the row of cells on another excel sheet. I have figured out how to read from the sheet, but there must be something wrong with my for loops and if statements. What I am doing is assigning the first cell and then comparing it to the first cell in the other sheet. If they are equal, I want it to write a ' ' in a different sheet, at that row of the first sheet. If they aren't the same, I want to see if the first 6 digits are the same, and if they are... I want to write "CHECK" in that cell on the first sheet. I want to keep going down the cells in the second sheet until that is all checked, then move on to the next cell in the first sheet and start over. </p>
<p>what is wrong with my code here? it is outputting the wrong outputs into the excel sheet.</p>
<p>thank you.</p>
<pre><code>for row in range(sheet.nrows):
    cell = str(sheet.cell_value(row, 0))
    for row in range(windc_sheet.nrows):
        windcell = str(windc_sheet.cell_value(row, 0));
        if cell == windcell:
            outputsheet.write(row, 1, ' ')
        else:
            sixdig = cell[0:6]
            sixdigwind = windcell[0:6]
            if sixdig == sixdigwind:
                outputsheet.write(row, 1, 'Check')
            else:
                continue;
    else:
        continue
error output onto excel: wanted output:
check check
check
check
check check
check
</code></pre>
<p>two sheets being compared:</p>
<pre><code> 123456-01 123455-01
124445 123336-55
123454-99 123456-02
123455-03 133377-66
1277 199227-22
</code></pre>
| 0 |
2016-09-12T12:43:26Z
| 39,451,714 |
<p>The second, nested iteration over rows is what causes the error. Just try this:</p>
<pre><code>for row in range(sheet.nrows):
    cell = str(sheet.cell_value(row, 0))
    windcell = str(windc_sheet.cell_value(row, 0))
    if cell == windcell:
        outputsheet.write(row, 1, ' ')
    else:
        sixdig = cell[0:6]
        sixdigwind = windcell[0:6]
        if sixdig == sixdigwind:
            outputsheet.write(row, 1, 'Check')
</code></pre>
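<p>If the intent was instead to compare each cell of the first sheet against <em>every</em> cell of the second sheet (as the question describes), a sketch with distinct loop variables would be:</p>
<pre><code>for row in range(sheet.nrows):
    cell = str(sheet.cell_value(row, 0))
    for wrow in range(windc_sheet.nrows):
        windcell = str(windc_sheet.cell_value(wrow, 0))
        if cell == windcell:
            outputsheet.write(row, 1, ' ')
            break
        elif cell[0:6] == windcell[0:6]:
            outputsheet.write(row, 1, 'Check')
            break
</code></pre>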
| 0 |
2016-09-12T13:40:02Z
|
[
"python",
"excel",
"for-loop"
] |
Setting up sockets with Django on nginx
| 39,450,642 |
<p>I have a Django application running on nginx. This application uses sockets, which (as far as I know) should be proxied. So I have trouble configuring nginx and other stuff. The same application works fine on Apache/2.4.7, so I assume it is not a programming mistake.</p>
<p>The socket usage is based on Django Channels, and the backend is very similar to the code from <a href="https://channels.readthedocs.io/en/latest/getting-started.html" rel="nofollow">Channels getting started</a>.</p>
<p>For server configuration I used <a href="https://media.readthedocs.org/pdf/django-websocket-redis/latest/django-websocket-redis.pdf" rel="nofollow">this</a> manual.</p>
<p>In the beginning I had just one problem: I got a 200 response instead of 101 on socket creation. After many manipulations (configuring and installing newer versions) and information gathering I came to the current situation:</p>
<p>I start uwsgi separately for sockets:</p>
<pre><code>uwsgi --virtualenv /home/appname/env/ --http-socket /var/run/redis/redis.sock --http-websock --wsgi-file /home/appname/appname/appname/wsgi.py
</code></pre>
<p>On this step on socket creation <code>var socket = new WebSocket("ws://appname.ch/ws/64");</code> I get </p>
<pre><code>WebSocket connection to 'ws://appname.ch/ws/64' failed: Error during WebSocket handshake: Unexpected response code: 502
</code></pre>
<p>and, sure enough, </p>
<pre><code>2016/09/12 12:00:26 [crit] 30070#0: *2141 connect() to unix:/var/run/redis/redis.sock failed (13: Permission denied) while connecting to upstream, client: 140.70.82.220, server: appname.ch,, request: "GET /ws/64 HTTP/1.1", upstream: "http://unix:/var/run/redis/redis.sock:/ws/64", host: "appname.ch"
</code></pre>
<p>in nginx error log.</p>
<p>After <code>chmod 777 /var/run/redis/redis.sock</code> I get the response</p>
<pre><code>WebSocket connection to 'ws://appname.ch/ws/64' failed: Error during WebSocket handshake: Unexpected response code: 404
</code></pre>
<p>and in uwsgi</p>
<pre><code>[pid: 6572|app: 0|req: 1/1] 0.0.0.0 () {46 vars in 916 bytes} [Mon Sep 12 12:01:29 2016] GET /ws/64 => generated 3357 bytes in 24 msecs (HTTP/1.1 404) 2 headers in 80 bytes (1 switches on core 0)
</code></pre>
<p>nginx.conf file</p>
<pre><code>user www-data;
worker_processes 4;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
##
# Gzip Settings
##
gzip on;
gzip_disable "msie6";
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
</code></pre>
<p>redis.conf</p>
<pre><code>daemonize yes
pidfile /var/run/redis/redis-server.pid
port 6379
unixsocket /var/run/redis/redis.sock
unixsocketperm 777
timeout 0
loglevel notice
logfile /var/log/redis/redis-server.log
databases 16
save 900 1
save 300 10
save 60 10000
rdbcompression yes
dbfilename dump.rdb
dir /var/lib/redis
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
</code></pre>
<p>/etc/nginx/sites-enabled/appname </p>
<pre><code>server {
listen 80;
server_name appname.ch, 177.62.206.170;
#charset koi8-r;
client_max_body_size 8M;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
location / {
include uwsgi_params;
uwsgi_pass unix:///home/appname/appname/app.sock;
#add_header Access-Control-Allow-Origin *;
}
location /ws/ {
#proxy_redirect off;
proxy_pass http://unix:/var/run/redis/redis.sock;
#proxy_http_version 1.1;
#proxy_set_header Upgrade $http_upgrade;
#proxy_set_header Connection "upgrade";
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
}
location /static {
alias /home/appname/appname/static_files;
}
location /media {
alias /home/appname/appname/media;
}
}
</code></pre>
<p>uwsgi.ini</p>
<pre><code>[uwsgi]
chdir=/home/appname/appname
env=DJANGO_SETTINGS_MODULE=appname.settings
wsgi-file=appname/wsgi.py
master=True
pidfile=/home/appname/appname/appname-master.pid
vacuum=True
max-requests=5000
daemonize=/home/appname/appname/uwsgi.log
socket=/home/appname/appname/app.sock
virtualenv=/home/appname/env
uid=appname
gid=appname
</code></pre>
<p>Django app settings.py</p>
<pre><code>"""
Django settings for appname project.
For more information on this file, see
https://docs.djangoproject.com/en/1.7/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.7/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = ''
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
TEMPLATE_DEBUG = DEBUG
ALLOWED_HOSTS = ['.appname.ch', '177.62.206.170', '127.0.0.1']
# Application definition
INSTALLED_APPS = (
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'customers',
'projects',
'moodboard',
'channels',
'debug_toolbar',
'rest_framework',
'appname',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = 'appname.urls'
WSGI_APPLICATION = 'appname.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.7/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'db.sqlite3'),
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.7/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.7/howto/static-files/
STATIC_URL = '/static/'
MEDIA_URL = '/media/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static_root')
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
STATICFILES_DIRS = (
os.path.join(BASE_DIR, 'static_files'),
)
TEMPLATE_DIRS = (
os.path.join(BASE_DIR, 'templates'),
)
AUTH_PROFILE_MODULE = 'customers.Customer'
REST_FRAMEWORK = {
# Use Django's standard `django.contrib.auth` permissions,
# or allow read-only access for unauthenticated users.
'DEFAULT_PERMISSION_CLASSES': [
'rest_framework.permissions.IsAuthenticated',
]
}
LOGIN_REDIRECT_URL = '/accounts/home'
CHANNEL_LAYERS = {
"default": {
"BACKEND": "asgi_redis.RedisChannelLayer",
"CONFIG": {
"hosts": [("localhost", 6379)],
},
"ROUTING": "appname.routing.channel_routing",
},
}
</code></pre>
<p>App urls</p>
<pre><code>from django.conf.urls import patterns, include, url
from django.contrib import admin
from django.contrib.auth import views as auth_views
from projects.views import ProjectViewSet
from customers.views import UserHomeView, RegistrationView, CustomerViewSet, UserViewSet
from moodboard.views import MoodBoardViewSet, BoardItemViewSet, BoardTextViewSet, ShareMoodBoardItem, LiveViewSet
from rest_framework import routers
from django.conf import settings
from django.conf.urls.static import static
router = routers.DefaultRouter()
router.register(r'projects', ProjectViewSet)
router.register(r'moodboards', MoodBoardViewSet)
router.register(r'items', BoardItemViewSet)
router.register(r'texts', BoardTextViewSet)
router.register(r'share', ShareMoodBoardItem)
router.register(r'customers', CustomerViewSet)
router.register(r'users', UserViewSet)
router.register(r'live', LiveViewSet)
urlpatterns = patterns('',
url(r'^$', 'appname.views.home', name='landing_page'),
url(r'^api/', include(router.urls)),
url(r'^accounts/login/$', auth_views.login, name='login'),
url(r'^accounts/logout/$', auth_views.logout, name='logout'),
url(r'^accounts/home/$', UserHomeView.as_view(), name='home'),
url(r'^accounts/register/$', RegistrationView.as_view(), name='registration'),
url(r'^admin/', include(admin.site.urls)),
url(r'^customers/', include('customers.urls')),
url(r'^projects/', include('projects.urls')),
url(r'^moodboard/', include('moodboard.urls')),
url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework'))
)
if settings.DEBUG:
    urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
</code></pre>
<p><code>nginx version: 1.6.2
Redis server version: 2.4.14
uwsgi version: 2.1
Django version: 1.8.0 'final'
Python version: 2.7.3
</code></p>
<p>It seems a 404 should not be a complicated error, but after many days of fixing I have no idea what the problem is, or whether I am on the right track at all. </p>
| 0 |
2016-09-12T12:45:03Z
| 39,451,790 |
<p>First of all, never try to create the socket file manually.
Just set up a path, and it will be created automatically. </p>
<p>This is example nginx and uwsgi conf:</p>
<pre><code>server {
    root /your/django/app/main/folder/;

    # the port your site will be served on
    listen 80;
    server_name your-domain.com *.your-domain.com;  # if you have subdomains
    charset utf-8;

    access_log /path/to/logs/if/you/have/access_log.log;
    error_log /path/to/logs/if/you/have/error_log.log;

    # max upload size
    client_max_body_size 1G;

    location /media/ {
        alias /path/to/django/media/if/exist/;
    }

    location /static/ {
        alias /path/to/django/static/if/exist/;
    }

    # Note the three slashes
    location / {
        uwsgi_pass unix:///home/path/to/sock/file/your-sock.sock;
    }
}
</code></pre>
<p>and this can be your uwsgi config file:</p>
<pre><code># suprasl_uwsgi.ini file
[uwsgi]
uid = www-data
gid = www-data
chmod-socket = 755
chown-socket = www-data

# Django-related settings
# the base directory (full path)
chdir = /your/django/app/main/folder/
# Django's wsgi file
wsgi-file = /your/django/app/main/folder/main-folder/wsgi.py
# the virtualenv (full path)
home = /your/env/folder/

# process-related settings
# master
master = true
# maximum number of worker processes
processes = 2
# the socket (use the full path to be safe)
socket = /home/path/to/sock/file/your-sock.sock
logto = /path/to/logs/if/you/have/uwsgi_logs.log
</code></pre>
<p>All you have to do is just run this command:</p>
<p><code>uwsgi --ini your_uwsgi_file.ini</code> # the --ini option is used to specify a file</p>
| 0 |
2016-09-12T13:43:11Z
|
[
"python",
"django",
"sockets",
"nginx",
"redis"
] |
Python csv reader // how to ignore enclosing char (because sometimes it's missing)
| 39,450,667 |
<p>I am trying to import csv data from files where sometimes the enclosing char " is missing.</p>
<p>So I have rows like this:</p>
<pre><code>"ThinkPad";"2000.00";"EUR"
"MacBookPro";"2200.00;EUR"
# In the second row the closing " after 2200.00 is missing
# also the closing " before EUR" is missing
</code></pre>
<p>Now I am reading the csv data with this:</p>
<pre><code>csv.reader(
    codecs.open(filename, 'r', encoding='latin-1'),
    delimiter=";",
    dialect=csv.excel_tab)
</code></pre>
<p>And the data I get for the second row is this:</p>
<p>["MacBookPro", "2200.00;EUR"]</p>
<p>Aside from pre-processing my csv files with a unix command like sed and removing all closing chars " and relying on the semicolon to seperate the columns, what else can I do?</p>
| 0 |
2016-09-12T12:46:43Z
| 39,450,927 |
<p>If you're looping through all the lines/rows of the file, you can use the string's .replace() function to get rid of the quotes (if you don't need them later on for other purposes).</p>
<pre><code>>>> import csv
>>> import codecs
>>> with codecs.open('eggs.csv', 'r', encoding='latin-1') as csvfile:
...     my_file = csv.reader(csvfile,
...                          delimiter=";",
...                          dialect=csv.excel_tab)
...     for row in my_file:
...         (model, price, currency) = row
...         model = model.replace('"', '')      # str.replace returns a new string
...         price = price.replace('"', '')
...         currency = currency.replace('"', '')
...         print 'Model is: %s (costs %s%s).' % (model, price, currency)
...
Model is: ThinkPad (costs 2000.00EUR).
Model is: MacBookPro (costs 2200.00EUR).
</code></pre>
| 0 |
2016-09-12T13:00:33Z
|
[
"python",
"csv"
] |
Python csv reader // how to ignore enclosing char (because sometimes it's missing)
| 39,450,667 |
<p>I am trying to import csv data from files where sometimes the enclosing char " is missing.</p>
<p>So I have rows like this:</p>
<pre><code>"ThinkPad";"2000.00";"EUR"
"MacBookPro";"2200.00;EUR"
# In the second row the closing " after 2200.00 is missing
# also the closing " before EUR" is missing
</code></pre>
<p>Now I am reading the csv data with this:</p>
<pre><code>csv.reader(
    codecs.open(filename, 'r', encoding='latin-1'),
    delimiter=";",
    dialect=csv.excel_tab)
</code></pre>
<p>And the data I get for the second row is this:</p>
<p>["MacBookPro", "2200.00;EUR"]</p>
<p>Aside from pre-processing my csv files with a unix command like sed and removing all closing chars " and relying on the semicolon to seperate the columns, what else can I do?</p>
| 0 |
2016-09-12T12:46:43Z
| 39,451,085 |
<p>This <em>might</em> work:</p>
<pre><code>import csv
import io
file = io.StringIO(u'''
"ThinkPad";"2000.00";"EUR"
"MacBookPro";"2200.00;EUR"
'''.strip())
reader = csv.reader((line.replace('"', '') for line in file), delimiter=';', quotechar='"')
for row in reader:
    print(row)
</code></pre>
<p>The problem is that if there are any legitimate quoted line, e.g.</p>
<pre><code>"MacBookPro;Awesome Edition";"2200.00";"EUR"
</code></pre>
<p>Or, worse:</p>
<pre><code>"MacBookPro:
Description: Awesome Edition";"2200.00";"EUR"
</code></pre>
<p>Your output is going to produce too few/many columns. But if you <em>know</em> that's not a problem then it will work fine. You could pre-screen the file by adding this before the read part, which would give you the malformed line:</p>
<pre><code>for line in file:
    if line.count(';') != 2:
        raise ValueError('No! This file has broken data on line {!r}'.format(line))

file.seek(0)
</code></pre>
<p>Or alternatively you could screen as you're reading:</p>
<pre><code>for row in reader:
    if any(';' in _ for _ in row):
        print('Error:')
        print(row)
</code></pre>
<p>Ultimately your best option is to fix whatever is producing your garbage csv file.</p>
| 0 |
2016-09-12T13:08:06Z
|
[
"python",
"csv"
] |
How to pass PIL image to Add_Picture in python-pptx
| 39,450,718 |
<p>I'm trying to get the image from the clipboard and I want to add that image with python-pptx.
I don't want to save the image to disk.
I have tried this:</p>
<pre><code>from pptx import Presentation
from PIL import ImageGrab,Image
from pptx.util import Inches
im = ImageGrab.grabclipboard()
prs = Presentation()
title_slide_layout = prs.slide_layouts[0]
slide = prs.slides.add_slide(title_slide_layout)
left = top = Inches(1)
pic = slide.shapes.add_picture(im, left, top)
prs.save('PPT.pptx')
</code></pre>
<p>But Getting this error</p>
<pre><code>File "C:\Python27\lib\site-packages\PIL\Image.py", line 627, in __getattr__
raise AttributeError(name)
AttributeError: read
</code></pre>
<p>What is wrong with this?</p>
| 0 |
2016-09-12T12:49:02Z
| 39,459,980 |
<p>The image needs to be in the form of a stream (i.e. a logical file) object. So you need to "save" it to a memory file first; StringIO is probably what you're looking for.</p>
<p><a href="http://stackoverflow.com/questions/646286/python-pil-how-to-write-png-image-to-string">This other question</a> provides some of the details.</p>
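<p>A minimal sketch of that approach (assuming the clipboard actually holds an image; <code>grabclipboard()</code> can also return a file list or <code>None</code>):</p>
<pre><code>import io
from PIL import ImageGrab
from pptx import Presentation
from pptx.util import Inches

im = ImageGrab.grabclipboard()

# write the PIL image into an in-memory binary stream instead of a disk file
image_stream = io.BytesIO()
im.save(image_stream, format='PNG')
image_stream.seek(0)  # rewind so add_picture() reads from the start

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[0])
pic = slide.shapes.add_picture(image_stream, Inches(1), Inches(1))
prs.save('PPT.pptx')
</code></pre>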
| 0 |
2016-09-12T23:03:35Z
|
[
"python",
"python-imaging-library",
"python-pptx"
] |
Save and restore in CNN
| 39,450,721 |
<p>I am a beginner in TensorFlow and I have one question related to saving and restoring checkpoints in convolutional neural networks. I am trying to create a CNN to classify faces. My question is:</p>
<p>Is it possible, when I add a new class to my dataset, to do partial training? I just want to retrain the new class instead of retraining the whole network. Is it possible to restore the weights and biases from the previous training and train just the new class?</p>
<p>This is what I am using to save: </p>
<pre><code>saver = tf.train.Saver(tf.all_variables())
save = saver.save(session, "/home/owner//tensorflownew_models.ckpt")
print("Model saved in file: %s" % save)
</code></pre>
| 0 |
2016-09-12T12:49:16Z
| 39,450,900 |
<p>Sure. You can use the var_list parameter in tf.train.Optimizer.minimize() to control the weights you want to optimize. If you don't include the variables you restored (or have already trained), they shouldn't get changed.</p>
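<p>A minimal sketch of that idea (the variable names for the new class's layer are hypothetical placeholders):</p>
<pre><code># only the listed variables get updated; everything else stays frozen
new_class_vars = [new_softmax_weights, new_softmax_biases]
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss, var_list=new_class_vars)
</code></pre>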
| 0 |
2016-09-12T12:59:00Z
|
[
"python",
"python-2.7",
"tensorflow",
"conv-neural-network"
] |
Save and restore in CNN
| 39,450,721 |
<p>I am a beginner in TensorFlow and I have one question related to saving and restoring checkpoints in convolutional neural networks. I am trying to create a CNN to classify faces. My question is:</p>
<p>Is it possible, when I add a new class to my dataset, to do partial training? I just want to retrain the new class instead of retraining the whole network. Is it possible to restore the weights and biases from the previous training and train just the new class?</p>
<p>This is what I am using to save: </p>
<pre><code>saver = tf.train.Saver(tf.all_variables())
save = saver.save(session, "/home/owner//tensorflownew_models.ckpt")
print("Model saved in file: %s" % save)
</code></pre>
| 0 |
2016-09-12T12:49:16Z
| 39,479,531 |
<p>Your question has multiple facets. Let's look at each in detail:</p>
<ul>
<li><p><strong>Is it possible to restore the weights and bias from previous training?</strong> Yes. You can create a <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/state_ops.html#Saver" rel="nofollow"><code>tf.train.Saver</code></a> in your new program and use it to load the values of variables from an old checkpoint. If you only want to load some of the variables from an old model, you can specify the subset of variables that you want to restore in the <code>var_list</code> argument to the <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/state_ops.html#Saver.__init__" rel="nofollow"><code>tf.train.Saver</code> constructor</a>. If the variables in your new model have different names, you may need to specify a key-value mapping, as discussed in <a href="http://stackoverflow.com/a/39453210/3574081">this answer</a>.</p></li>
<li><p><strong>Is it possible to add a class to the network?</strong> Yes, although it's a little tricky (and there might be other ways to do it). I assume you have a softmax classifier in your network, which includes a linear layer (matrix multiplication by a weight matrix of size <code>m * C</code>, followed by adding a vector of biases of length <code>C</code>). To add a class, you could create a matrix of size <code>m * C+1</code> and a vector of length <code>C+1</code>, then initialize the first <code>C</code> rows/elements of these from the existing weights using <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/state_ops.html#Variable.scatter_assign" rel="nofollow"><code>tf.Variable.scatter_assign()</code></a>. <a href="http://stackoverflow.com/q/34913762/3574081">This question</a> deals with the same topic.</p></li>
<li><p><strong>Is it possible to do partial training?</strong> Yes. I assume you mean "training only some of the layers, while holding other layers constant." You can do this as <a href="http://stackoverflow.com/a/39450900/3574081">MMN suggests</a> by passing an explicit list of variables to optimize when calling <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/train.html#Optimizer.minimize" rel="nofollow"><code>tf.train.Optimizer.minimize()</code></a>. For example, if you were adding a class as above, you might retrain only the softmax weights and biases, and hold the convolution layers' filters constant. Have a look at the <a href="https://www.tensorflow.org/versions/r0.10/how_tos/image_retraining/index.html" rel="nofollow">tutorial on transfer learning</a> with the pre-trained Inception model for more ideas.</p></li>
</ul>
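<p>For the first point, a minimal sketch of a partial restore (the variable names are illustrative):</p>
<pre><code># restore only the convolution variables from the old checkpoint
saver = tf.train.Saver(var_list=[conv1_weights, conv1_biases, conv2_weights, conv2_biases])
saver.restore(session, "/home/owner/tensorflownew_models.ckpt")
</code></pre>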
| 0 |
2016-09-13T21:50:12Z
|
[
"python",
"python-2.7",
"tensorflow",
"conv-neural-network"
] |
Insert row and col label when dumping matrix data to text file in python
| 39,450,799 |
<p>I want to insert labels to col and row. like below.</p>
<pre><code> x y z
a 1 3 4
b 4 3 2
</code></pre>
<p>Actually I could do to add header like below.</p>
<pre><code>import numpy as np
row_label = ["a", "b"]
col_label = ["x", "y", "z"]
data = np.array([[1,3,4], [4,3,2]])
np.savetxt("out.csv", data, header=",".join(["x", "y", "z"]), delimiter=",", fmt="%.0f", comments='')
</code></pre>
<p>out.csv</p>
<pre><code>x,y,z
1,3,4
4,3,2
</code></pre>
<p>But how do I also add a column label?</p>
| 1 |
2016-09-12T12:54:01Z
| 39,452,254 |
<pre><code>import numpy as np
import pandas as pd
row_label = ["a", "b"]
col_label = ["x", "y", "z"]
data = np.array([[1,3,4], [4,3,2]])
df = pd.DataFrame(data, index=row_label, columns=col_label)
df.to_csv(r'temp.csv', sep='\t')
</code></pre>
<p>temp.csv</p>
<pre><code> x y z
a 1 3 4
b 4 3 2
</code></pre>
| 1 |
2016-09-12T14:05:15Z
|
[
"python",
"numpy"
] |
Control flux in python if/elif continue
| 39,450,887 |
<p>I have an <code>if</code>/<code>elif</code> statement in Python, but I want both branches to be executed. I want to show <code>AD0</code> and <code>AD1</code> every second, but the code is only showing <code>AD1</code> (not entering the <code>elif</code>).</p>
<pre><code>#!/usr/bin/env python
# -*- coding: utf-8 -*-
import serial
from time import sleep
import datetime
now = datetime.datetime.now()
port = '/dev/ttyUSB0'
ser = serial.Serial(port, 9600, timeout=0)
while True:
    arquivo = open('temperatura.txt','r')
    conteudo = arquivo.readlines()
    now = datetime.datetime.now()
    data = ser.read(9999)
    if len(data) > 0:
        if 'AD0'in data:
            temp = data[4:7]
            #print 'Got:', data
            #string = str('A temperatura é',temp,'ºC')
            conteudo.append(now.strftime("%Y-%m-%d %H:%M"))
            conteudo.append(' ')
            conteudo.append('[AD0]A temperatura é')
            conteudo.append(temp)
            conteudo.append('ºC')
            print '[AD0]A temperatura é',temp,'ºC'
            #print string
            conteudo.append('\n')
            arquivo = openarquivo = open('temperatura.txt','w')
            arquivo.writelines(conteudo)
            continue
        elif 'AD1'in data:
            temp = data[4:7]
            #print 'Got:', data
            #string = str('A temperatura é',temp,'ºC')
            conteudo.append(now.strftime("%Y-%m-%d %H:%M"))
            conteudo.append(' ')
            conteudo.append('[AD1]A temperatura é')
            conteudo.append(temp)
            conteudo.append('ºC')
            print '[AD1]A temperatura é',temp,'ºC'
            #print string
            conteudo.append('\n')
            arquivo = openarquivo = open('temperatura.txt','w')
            arquivo.writelines(conteudo)
    sleep(1)
    #print 'not blocked'
    arquivo.close()

ser.close()
</code></pre>
| 0 |
2016-09-12T12:58:21Z
| 39,450,982 |
<p>Python will pick the <em>first</em> matching condition out of a series of <code>if..elif..</code> tests, and will execute just that <em>one</em> block of code.</p>
<p>If your two tests should be <em>independent</em> and both should be considered, replace the <code>elif</code> block with a separate <code>if</code> statement:</p>
<pre><code>if 'AD0'in data:
    # ...

if 'AD1'in data:
    # ...
</code></pre>
<p>Now both conditions will be tested independently and if both match, both will be executed.</p>
| 0 |
2016-09-12T13:03:45Z
|
[
"python",
"if-statement"
] |
Apache Spark Count State Change
| 39,450,896 |
<p>I am new to Apache Spark (Pyspark) and would be glad to get some help resolving this problem. I am currently using Pyspark 1.6 (I had to ditch 2.0 since there is no MQTT support).</p>
<p>So, I have a data frame that has the following information,</p>
<pre><code>+----------+-----------+
| time|door_status|
+----------+-----------+
|1473678846| 2|
|1473678852| 1|
|1473679029| 3|
|1473679039| 3|
|1473679045| 2|
|1473679055| 1|
</code></pre>
<p>This is basically the time and status of the door. I need to calculate the number of times the door opened & closed. So I need to identify the state change and keep independent counters for each status.</p>
<p>Since I am new to this I am finding it extremely hard to conceive any idea of how to implement this. If somebody can suggest an idea & point me in the right direction it will be great. </p>
<p>Thanks in advance!!</p>
| 2 |
2016-09-12T12:58:47Z
| 39,452,029 |
<p>In this case, using accumulators should solve the issue.
Basically, you create three different accumulators for the three statuses.</p>
<pre><code>status_1 = sc.accumulator(0)
status_2 = sc.accumulator(0)
status_3 = sc.accumulator(0)
</code></pre>
<p>then you can do something like the following </p>
<pre><code>if (status == 1):
status_1 += 1
</code></pre>
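<p>Applied across the dataframe, that could look like this (a sketch; accumulator updates are only reliable inside an action such as <code>foreach</code>):</p>
<pre><code>def count_status(row):
    if row.door_status == 1:
        status_1.add(1)
    elif row.door_status == 2:
        status_2.add(1)
    elif row.door_status == 3:
        status_3.add(1)

df.rdd.foreach(count_status)
</code></pre>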
| 0 |
2016-09-12T13:55:33Z
|
[
"python",
"apache-spark",
"pyspark"
] |
Apache Spark Count State Change
| 39,450,896 |
<p>I am new to Apache Spark (Pyspark) and would be glad to get some help resolving this problem. I am currently using Pyspark 1.6 (I had to ditch 2.0 since there is no MQTT support).</p>
<p>So, I have a data frame that has the following information,</p>
<pre><code>+----------+-----------+
| time|door_status|
+----------+-----------+
|1473678846| 2|
|1473678852| 1|
|1473679029| 3|
|1473679039| 3|
|1473679045| 2|
|1473679055| 1|
</code></pre>
<p>This is basically the time and status of the door. I need to calculate the number of times the door opened & closed. So I need to identify the state change and keep independent counters for each status.</p>
<p>Since I am new to this I am finding it extremely hard to conceive any idea of how to implement this. If somebody can suggest an idea & point me in the right direction it will be great. </p>
<p>Thanks in advance!!</p>
| 2 |
2016-09-12T12:58:47Z
| 39,453,568 |
<p>There is no efficient method which can perform an operation like this out-of-the-box. You could use window functions:</p>
<pre><code>from pyspark.sql.window import Window
from pyspark.sql.functions import col, lag
df = sc.parallelize([
(1473678846, 2), (1473678852, 1),
(1473679029, 3), (1473679039, 3),
(1473679045, 2), (1473679055, 1)
]).toDF(["time", "door_status"])
w = Window().orderBy("time")
(df
.withColumn("prev_status", lag("door_status", 1).over(w))
.where(col("door_status") != col("prev_status"))
.groupBy("door_status", "prev_status")
.count())
</code></pre>
<p>but this simply won't scale. You can try <code>mapPartitions</code>. First let's define a function we'll use to map partitions:</p>
<pre><code>from collections import Counter

def process_partition(iter):
    """Given an iterator of (time, state)
    return the first state, the last state and
    a counter of state changes

    >>> process_partition([])
    [(None, None, Counter())]
    >>> process_partition(enumerate([1, 1, 1]))
    [(1, 1, Counter())]
    >>> process_partition(enumerate([1, 2, 3]))
    [(1, 3, Counter({(1, 2): 1, (2, 3): 1}))]
    """
    first = None
    prev = None
    cnt = Counter()

    for i, (_, x) in enumerate(iter):
        # Store the first object per partition
        if i == 0:
            first = x

        # If there is a change of state, update the counter
        if prev is not None and prev != x:
            cnt[(prev, x)] += 1
        prev = x

    return [(first, prev, cnt)]
</code></pre>
<p>and a simple merge function:</p>
<pre><code>def merge(x, y):
    """Given a pair of tuples:
    (first-state, last-state, counter-of-changes)
    return a tuple of the same shape representing aggregated results

    >>> merge((None, None, Counter()), (1, 1, Counter()))
    (None, 1, Counter())
    >>> merge((1, 2, Counter([(1, 2)])), (2, 2, Counter()))
    (None, 2, Counter({(1, 2): 1}))
    >>> merge((1, 2, Counter([(1, 2)])), (3, 2, Counter([(3, 2)])))
    (None, 2, Counter({(1, 2): 1, (2, 3): 1, (3, 2): 1}))
    """
    (_, last_x, cnt_x), (first_y, last_y, cnt_y) = x, y

    # If the previous partition wasn't empty, update the counter
    if last_x is not None and first_y is not None and last_x != first_y:
        cnt_y[(last_x, first_y)] += 1

    # Merge the counters
    cnt_y.update(cnt_x)
    return (None, last_y, cnt_y)
</code></pre>
<p>With these two we can perform an operation like this:</p>
<pre><code>partials = (df.rdd
.mapPartitions(process_partition)
.collect())
reduce(merge, [(None, None, Counter())] + partials)
</code></pre>
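<p>The doctests embedded above give a quick way to sanity-check both helpers before running the job:</p>
<pre><code>import doctest
doctest.testmod()  # verifies process_partition and merge against their docstrings
</code></pre>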
| 1 |
2016-09-12T15:13:24Z
|
[
"python",
"apache-spark",
"pyspark"
] |
Apache Spark Count State Change
| 39,450,896 |
<p>I am new to Apache Spark (Pyspark) and would be glad to get some help resolving this problem. I am currently using Pyspark 1.6 (I had to ditch 2.0 since there is no MQTT support).</p>
<p>So, I have a data frame that has the following information,</p>
<pre><code>+----------+-----------+
| time|door_status|
+----------+-----------+
|1473678846| 2|
|1473678852| 1|
|1473679029| 3|
|1473679039| 3|
|1473679045| 2|
|1473679055| 1|
</code></pre>
<p>This is basically the time and status of the door. I need to calculate the number of times the door opened & closed. So I need to identify the state change and keep independent counters for each status.</p>
<p>Since I am new to this I am finding it extremely hard to conceive any idea of how to implement this. If somebody can suggest an idea & point me in the right direction it will be great. </p>
<p>Thanks in advance!!</p>
| 2 |
2016-09-12T12:58:47Z
| 39,464,578 |
<p>You can try the following solution:</p>
<pre><code>import org.apache.spark.sql.expressions.Window
var data = Seq((1473678846, 2), (1473678852, 1), (1473679029, 3), (1473679039, 3), (1473679045, 2), (1473679055, 1),(1473779045, 1), (1474679055, 2), (1475679055, 1), (1476679055, 3)).toDF("time","door_status")
data.
select(
$"*",
coalesce(lead($"door_status", 1).over(Window.orderBy($"time")), $"door_status").as("next_door_status")
).
groupBy($"door_status").
agg(
sum(($"door_status" !== $"next_door_status").cast("int")).as("door_changes")
).
show
</code></pre>
<p>This is in Scala; you would have to craft the equivalent in Python.</p>
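<p>For reference, a rough PySpark equivalent of the same idea (an untested sketch; <code>data</code> is assumed to be the equivalent PySpark DataFrame):</p>
<pre><code>from pyspark.sql import functions as F
from pyspark.sql.window import Window

w = Window.orderBy("time")
(data
    .withColumn("next_door_status",
                F.coalesce(F.lead("door_status", 1).over(w),
                           F.col("door_status")))
    .groupBy("door_status")
    # count the rows where the status differs from the next one
    .agg(F.sum((F.col("door_status") != F.col("next_door_status"))
               .cast("int")).alias("door_changes"))
    .show())
</code></pre>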
| 0 |
2016-09-13T07:37:37Z
|
[
"python",
"apache-spark",
"pyspark"
] |
Apache Spark Count State Change
| 39,450,896 |
<p>I am new to Apache Spark (Pyspark) and would be glad to get some help resolving this problem. I am currently using Pyspark 1.6 (Had to ditch 2.0 since there is no MQTT support).</p>
<p>So, I have a data frame that has the following information,</p>
<pre><code>+----------+-----------+
| time|door_status|
+----------+-----------+
|1473678846| 2|
|1473678852| 1|
|1473679029| 3|
|1473679039| 3|
|1473679045| 2|
|1473679055| 1|
</code></pre>
<p>This is basically the time and status of the door. I need to calculate the number of times the door opened & closed. So I need to identify the state change and keep independent counters for each status.</p>
<p>Since I am new to this I am finding it extremely hard to conceive any idea of how to implement this. If somebody can suggest an idea & point me in the right direction it will be great. </p>
<p>Thanks in advance!!</p>
| 2 |
2016-09-12T12:58:47Z
| 39,465,339 |
<p>I tried it out in Java, but it should also be possible in Python via the DataFrames API in a comparable way.</p>
<p>Do the following:</p>
<ul>
<li>load your data as dataframe/dataset</li>
<li>register dataframe as temp table</li>
<li>execute this query: <code>SELECT state, COUNT(*) FROM doorstates GROUP BY state</code></li>
</ul>
<p>Make sure to remove the header.</p>
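<p>A minimal PySpark sketch of those steps (sample rows and names are illustrative):</p>
<pre><code>from pyspark.sql import SQLContext

sqlContext = SQLContext(sc)  # assumes an existing SparkContext named sc
df = sqlContext.createDataFrame(
    [(1473678846, 2), (1473678852, 1), (1473679029, 3)],
    ["time", "state"])
df.registerTempTable("doorstates")
sqlContext.sql("SELECT state, COUNT(*) FROM doorstates GROUP BY state").show()
</code></pre>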
| 0 |
2016-09-13T08:21:23Z
|
[
"python",
"apache-spark",
"pyspark"
] |
Ensure the POST data is valid JSON
| 39,451,002 |
<p>I am developing a JSON API with Python Flask.<br>
What I want is to always return JSON, with an error message indicating any error that occurred.</p>
<p>That API also only accepts JSON data in the POST body, but Flask by default returns an HTML 400 error if it can't read the data as JSON.</p>
<p>Preferably, I'd also like not to force the user to send the <code>Content-Type</code> header, and if the content type is <code>raw</code> or <code>text</code>, to try to parse the body as JSON nonetheless.</p>
<p>In short, I need a way to validate that the POST body is JSON, and to handle the error myself.</p>
<p>I've read about adding a decorator to <code>request</code> to do that, but found no comprehensive example.</p>
| 1 |
2016-09-12T13:04:36Z
| 39,451,086 |
<p>You can try to decode the JSON object using the Python json library.
The main idea is to take the plain request body and try to convert it to JSON. E.g.:</p>
<pre><code>import json
from flask import jsonify
...
# somewhere in a view
def view():
    try:
        json.loads(request.get_data())
    except ValueError:
        # not JSON! return an error response
        return jsonify({'error': '...'}), 400
    # do plain stuff
</code></pre>
| 0 |
2016-09-12T13:08:10Z
|
[
"python",
"json",
"flask"
] |
Ensure the POST data is valid JSON
| 39,451,002 |
<p>I am developing a JSON API with Python Flask.<br>
What I want is to always return JSON, with an error message indicating any error that occurred.</p>
<p>That API also only accepts JSON data in the POST body, but Flask by default returns an HTML 400 error if it can't read the data as JSON.</p>
<p>Preferably, I'd also like not to force the user to send the <code>Content-Type</code> header, and if the content type is <code>raw</code> or <code>text</code>, to try to parse the body as JSON nonetheless.</p>
<p>In short, I need a way to validate that the POST body is JSON, and to handle the error myself.</p>
<p>I've read about adding a decorator to <code>request</code> to do that, but found no comprehensive example.</p>
| 1 |
2016-09-12T13:04:36Z
| 39,451,091 |
<p>You have three options:</p>
<ul>
<li><p>Register a <a href="http://flask.pocoo.org/docs/0.11/patterns/errorpages/#error-handlers" rel="nofollow">custom error handler</a> for 400 errors on the API views. Have this error return JSON instead of HTML.</p></li>
<li><p>Set the <a href="https://flask.readthedocs.io/en/latest/api/#flask.Request.on_json_loading_failed" rel="nofollow"><code>Request.on_json_loading_failed</code> method</a> to something that raises a <a href="http://werkzeug.pocoo.org/docs/0.11/exceptions/#werkzeug.exceptions.BadRequest" rel="nofollow"><code>BadRequest</code> exception</a> subclass with a JSON payload. See <a href="http://werkzeug.pocoo.org/docs/0.11/exceptions/#custom-errors" rel="nofollow"><em>Custom Errors</em></a> in the Werkzeug exceptions documentation to see how you can create one.</p></li>
<li><p>Put a <code>try: except</code> around the <code>request.get_json()</code> call, catch the <code>BadRequest</code> exception and raise a new exception with a JSON payload.</p></li>
</ul>
<p>Personally, I'd probably go with the second option:</p>
<pre><code>from werkzeug.exceptions import BadRequest
from flask import json, Request, _request_ctx_stack
class JSONBadRequest(BadRequest):
def get_body(self, environ=None):
"""Get the JSON body."""
return json.dumps({
'code': self.code,
'name': self.name,
'description': self.description,
})
def get_headers(self, environ=None):
"""Get a list of headers."""
return [('Content-Type', 'application/json')]
def on_json_loading_failed(self, e):
ctx = _request_ctx_stack.top
if ctx is not None and ctx.app.config.get('DEBUG', False):
raise JSONBadRequest('Failed to decode JSON object: {0}'.format(e))
raise JSONBadRequest()
Request.on_json_loading_failed = on_json_loading_failed
</code></pre>
<p>Now, every time <code>request.get_json()</code> fails, it'll call your custom <code>on_json_loading_failed</code> method and raise an exception with a JSON payload rather than a HTML payload.</p>
| 4 |
2016-09-12T13:08:21Z
|
[
"python",
"json",
"flask"
] |
Ensure the POST data is valid JSON
| 39,451,002 |
<p>I am developing a JSON API with Python Flask.<br>
What I want is to always return JSON, with an error message indicating any error that occurred.</p>
<p>That API also only accepts JSON data in the POST body, but Flask by default returns an HTML 400 error if it can't read the data as JSON.</p>
<p>Preferably, I'd also like not to force the user to send the <code>Content-Type</code> header, and if the content type is <code>raw</code> or <code>text</code>, to try to parse the body as JSON nonetheless.</p>
<p>In short, I need a way to validate that the POST body is JSON, and to handle the error myself.</p>
<p>I've read about adding a decorator to <code>request</code> to do that, but found no comprehensive example.</p>
| 1 |
2016-09-12T13:04:36Z
| 39,452,642 |
<p>Combining the options <code>force=True</code> and <code>silent=True</code> makes the result of <code>request.get_json</code> <code>None</code> if the data is not parsable; a simple <code>if</code> then allows you to check the parsing.</p>
<pre><code>from flask import Flask
from flask import request

app = Flask(__name__)

@app.route('/foo', methods=['POST'])
def function():
    data = request.get_json(force=True, silent=True)
    print "Data: ", data
    if data is not None:
        return "Is JSON"
    else:
        return "Nope"

if __name__ == "__main__":
    app.run()
</code></pre>
<p>Credits to lapinkoira and Martijn Pieters.</p>
| 0 |
2016-09-12T14:24:35Z
|
[
"python",
"json",
"flask"
] |
Dynamically change combobox size Python Tkinter
| 39,451,019 |
<p>In Python Tkinter is it possible to dynamically change the size of a <code>combobox</code> (the width of the entry widget) based on the length of the selected item?</p>
| 0 |
2016-09-12T13:05:13Z
| 39,453,139 |
<p>Is this what you are looking for:</p>
<pre><code>from tkinter import *
from tkinter import ttk

root = Tk()

defaultCar = StringVar()
defaultCar.set("Mercedes Benz")
carList = ["BMW", "Lamborghini", "Honda"]

def resizeFunc():
    # resize the entry to fit the currently selected/entered value
    newLen = len(cars.get())
    cars.configure(width=newLen + 2)

cars = ttk.Combobox(root,
                    textvariable=defaultCar,
                    values=carList,
                    postcommand=resizeFunc)
cars.pack()

root.mainloop()
</code></pre>
<p>?</p>
| 0 |
2016-09-12T14:50:41Z
|
[
"python",
"tkinter",
"combobox",
"width",
"dynamically-generated"
] |
resample dataframe with python
| 39,451,021 |
<p>I need to resample a dataframe using pandas.DataFrame.resample function like this : </p>
<pre><code>data.set_index('TIMESTAMP').resample('24min', how='sum')
</code></pre>
<p>This works without any problem, but when I try to call the function with 'xmin', where x is a generic argument:</p>
<pre><code> data.set_index('TIMESTAMP').resample('xmin', how='sum')
</code></pre>
<p>It does not work.</p>
<p>Any idea please?</p>
<p>Thank you</p>
<p><strong>EDIT</strong></p>
<pre><code>def ratio_activ_equip(data, date_deb, date_fin, step):
# filtre_site(data, site)
filtre_date(data, date_deb, date_fin)
xmin = 'stepmin'
data.set_index('TIMESTAMP').resample(xmin, how='sum')
res = data.iloc[:,1:10] = data.iloc[:,1:10].divide(data.sum(axis=1), axis=0)
res = data
return res
</code></pre>
<p><strong>EDIT2</strong></p>
<pre><code>def ratio_activ_equip(data, date_deb, date_fin, step): #
# filtre_site(data, site)
filtre_date(data, date_deb, date_fin)
#step = 10
xmin = str(step) + 'min'
data = data.set_index('TIMESTAMP').resample(xmin, how='sum')
data.set_index('TIMESTAMP').resample(xmin, how='sum')
res = data.iloc[:,1:10] = data.iloc[:,1:10].divide(data.sum(axis=1), axis=0)
res = data
return res
</code></pre>
<p>When I call this fucntion like this : </p>
<pre><code>res = ratio_activ_equip(y, '2016-05-10 22:00:00', '2016-05-14 22:00:00', 30)
</code></pre>
<p>I get this error : </p>
<blockquote>
<pre><code>KeyError: 'TIMESTAMP'
</code></pre>
</blockquote>
<p>Any idea please?</p>
| 1 |
2016-09-12T13:05:16Z
| 39,451,045 |
<p>IIUC you need to pass the variable <code>xmin</code>:</p>
<pre><code>xmin = '24min'
data.set_index('TIMESTAMP').resample(xmin, how='sum')
</code></pre>
<p>More general is convert <code>int</code> value to <code>str</code> and add substring <code>min</code>:</p>
<pre><code>step = 10
xmin = str(step) + 'min'
data = data.set_index('TIMESTAMP').resample(xmin, how='sum')
</code></pre>
<hr>
<pre><code>step = 10
data = data.set_index('TIMESTAMP').resample(str(step) + 'min', how='sum')
</code></pre>
| 1 |
2016-09-12T13:06:27Z
|
[
"python",
"pandas",
"dataframe",
"resampling",
"datetimeindex"
] |
Separating decades and centuries in python
| 39,451,104 |
<p>I have a program I'm creating where I need to be able to take a year, as an integer, and separate the century from the decade:</p>
<pre><code>year=1970
decade=70
century=1900
</code></pre>
<p>I want to be able to do this:</p>
<pre><code>year= new Year(1970)
dec= year.decade
cent=year.century
</code></pre>
<p>I know I'll have to create a Year object and implement those two methods, but my issue is in the math, how do I take a year and extract just the decade and/or century?</p>
<p>I had tried passing it to a function as a string then building an integer out of the last two values in the string, but I ran into trouble with data types and want a 'native integer' way of doing this.</p>
<p>Thanks in advance!</p>
<p>Stormy</p>
| 0 |
2016-09-12T13:09:09Z
| 39,451,200 |
<p>Divide the year by 100 (integer only). The result is the century:</p>
<pre><code>1972 / 100 = 19
</code></pre>
<p>(Of course, multiply again with 100 to get 1900).</p>
<p>The remainder is the year within that century:</p>
<pre><code>1972 - (19 * 100) = 72
</code></pre>
<p>Do the same if you want to get the decade of the year with 10:</p>
<pre><code>72 / 10 = 7
</code></pre>
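<p>In Python that arithmetic is just integer division and the modulo operator, e.g.:</p>
<pre><code>year = 1972
century = year // 100 * 100   # 1900
decade = year % 100           # 72
decade_digit = decade // 10   # 7
</code></pre>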
| 2 |
2016-09-12T13:13:28Z
|
[
"python"
] |
Separating decades and centuries in python
| 39,451,104 |
<p>I have a program I'm creating where I need to be able to take a year, as an integer, and separate the century from the decade:</p>
<pre><code>year=1970
decade=70
century=1900
</code></pre>
<p>I want to be able to do this:</p>
<pre><code>year= new Year(1970)
dec= year.decade
cent=year.century
</code></pre>
<p>I know I'll have to create a Year object and implement those two methods, but my issue is in the math, how do I take a year and extract just the decade and/or century?</p>
<p>I had tried passing it to a function as a string then building an integer out of the last two values in the string, but I ran into trouble with data types and want a 'native integer' way of doing this.</p>
<p>Thanks in advance!</p>
<p>Stormy</p>
| 0 |
2016-09-12T13:09:09Z
| 39,451,211 |
<p>Well, you could just convert your year to a string, then take the first two characters and append "00", and take the last two characters. That would look like this:</p>
<pre><code>year = 1970
stringyear = str(year)
century = stringyear[:2] + "00"
century = int(century)
print century
decade = stringyear[2:]
decade = int(decade)
print decade
</code></pre>
<p>There is probably a simpler way, but it works well.</p>
| 0 |
2016-09-12T13:13:52Z
|
[
"python"
] |
Separating decades and centuries in python
| 39,451,104 |
<p>I have a program I'm creating where I need to be able to take a year, as an integer, and separate the century from the decade:</p>
<pre><code>year=1970
decade=70
century=1900
</code></pre>
<p>I want to be able to do this:</p>
<pre><code>year= new Year(1970)
dec= year.decade
cent=year.century
</code></pre>
<p>I know I'll have to create a Year object and implement those two methods, but my issue is in the math, how do I take a year and extract just the decade and/or century?</p>
<p>I had tried passing it to a function as a string then building an integer out of the last two values in the string, but I ran into trouble with data types and want a 'native integer' way of doing this.</p>
<p>Thanks in advance!</p>
<p>Stormy</p>
| 0 |
2016-09-12T13:09:09Z
| 39,451,296 |
<p>you can use <a href="https://docs.python.org/2/library/functions.html?highlight=divmod#divmod" rel="nofollow">divmod</a> builtin function:</p>
<pre><code>>>> divmod(1664, 100)
(16, 64)
</code></pre>
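<p>Applied to the year from the question, a sketch:</p>
<pre><code>>>> century, decade = divmod(1970, 100)
>>> century * 100, decade
(1900, 70)
</code></pre>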
| 0 |
2016-09-12T13:19:03Z
|
[
"python"
] |
Separating decades and centuries in python
| 39,451,104 |
<p>I have a program I'm creating where I need to be able to take a year, as an integer, and separate the century from the decade:</p>
<pre><code>year=1970
decade=70
century=1900
</code></pre>
<p>I want to be able to do this:</p>
<pre><code>year= new Year(1970)
dec= year.decade
cent=year.century
</code></pre>
<p>I know I'll have to create a Year object and implement those two methods, but my issue is in the math, how do I take a year and extract just the decade and/or century?</p>
<p>I had tried passing it to a function as a string then building an integer out of the last two values in the string, but I ran into trouble with data types and want a 'native integer' way of doing this.</p>
<p>Thanks in advance!</p>
<p>Stormy</p>
| 0 |
2016-09-12T13:09:09Z
| 39,550,705 |
<p>Actually, <code>year % 100</code> returns the decade quite nicely.</p>
<p>Here is the finished version of my program that works...</p>
<pre><code>critters = ('Rat', 'ox', 'tiger', 'rabbit', 'dragon', 'snake',
            'horse', 'ram', 'monkey', 'rooster', 'dog', 'pig')

def docrit(yrint):
    modval = yrint % 12
    return critters[modval]

def wieder():
    go = input('Again[y,n]')
    if go == 'y':
        return True
    else:
        return False

def main():
    print('Chinese Zodiac Calculator')
    yob = int(input('year: '))
    cent = int(yob / 100)
    decade = yob % 100
    print('century: ', cent)
    print('decade: ', decade)
    ani = docrit(decade)
    print('the animal is ', ani)
    if wieder():
        main()
    else:
        exit()

def exit():
    print("I'm done. Bye.")
    return 0

main()
</code></pre>
| 0 |
2016-09-17T19:08:58Z
|
[
"python"
] |
Pythonic way to write many try/except clauses?
| 39,451,126 |
<p>I am writing a script to write ~200 cells of somewhat unique data to an excel spreadsheet. My code is following this basic pattern:</p>
<pre><code>try:
sheet.cell(row=9, column=7).value = getData()
except:
sheet.cell(row=9, column=7).value = 'NA'
</code></pre>
<p>Basically if there is an error then write some placeholder to the cell. I was wondering if there is a shorter, more pythonic way to write this than having 200 of these in sequence.</p>
| 1 |
2016-09-12T13:09:57Z
| 39,451,298 |
<p>As I understand it, the exception occurs in the getData() call, so you can wrap the getData function like this:</p>
<pre><code>def get_data_or_default(default="NA"):
try:
return getData()
except Exception:
return default
</code></pre>
<p>And then just use:</p>
<pre><code>sheet.cell(row=9, column=7).value = get_data_or_default()
</code></pre>
| 7 |
2016-09-12T13:19:10Z
|
[
"python",
"excel"
] |
Pythonic way to write many try/except clauses?
| 39,451,126 |
<p>I am writing a script to write ~200 cells of somewhat unique data to an excel spreadsheet. My code is following this basic pattern:</p>
<pre><code>try:
sheet.cell(row=9, column=7).value = getData()
except:
sheet.cell(row=9, column=7).value = 'NA'
</code></pre>
<p>Basically if there is an error then write some placeholder to the cell. I was wondering if there is a shorter, more pythonic way to write this than having 200 of these in sequence.</p>
| 1 |
2016-09-12T13:09:57Z
| 39,451,374 |
<p>Put all your cells into a list, or better, get column/row from nested <code>for i in xrange()</code> loops so that you can write one line of code to perform on all of them.</p>
<pre><code>for r in xrange(0, 10):
for c in xrange(0, 10):
try:
            sheet.cell(row=r, column=c).value = getData()
except BadDataException:
            sheet.cell(row=r, column=c).value = 'NA'
</code></pre>
<p>But as others have mentioned, if you can change getData, have it return the default (or None) instead of raising an Exception. If you can't change getData, check the documentation to see if it has a <code>default</code> option of some kind, like many Python objects (e.g., dictionaries with <code>.get()</code>)</p>
<p><em>UPDATE</em>: Or, see Zakhar's answer to wrap getData if you can't change it yourself.</p>
| 0 |
2016-09-12T13:22:30Z
|
[
"python",
"excel"
] |
Issues when attempting to install NAPALM for use with Ansible
| 39,451,128 |
<p>I have tried to figure this out but have come to a dead end.
I am trying to install NAPALM on Fedora (release 21).</p>
<p>You can install either NAPALM as a whole ('napalm') or sub-packages as needed, i.e.</p>
<p>napalm-ibm
napalm-ios
napalm-junos
etc</p>
<p>When I run "pip3 install napalm" all the packages download/unpack; however, I seem to be getting specific errors related to pyangbind.</p>
<p>The same is true if I try to install a sub-package.</p>
<p>Details of the install log are below.
Any assistance would be much appreciated.</p>
<pre><code>[rbotham@ernie ~]$ pip3 install napalm
Downloading/unpacking napalm
Downloading napalm-1.1.0.tar.gz
Running setup.py (path:/tmp/pip-build-fy54pkbj/napalm/setup.py) egg_info for package napalm
Downloading/unpacking napalm-base (from napalm)
Downloading napalm-base-0.15.0.tar.gz (231kB): 231kB downloaded
Running setup.py (path:/tmp/pip-build-fy54pkbj/napalm-base/setup.py) egg_info for package napalm-base
Downloading/unpacking napalm-eos (from napalm)
Downloading napalm-eos-0.3.0.tar.gz
Running setup.py (path:/tmp/pip-build-fy54pkbj/napalm-eos/setup.py) egg_info for package napalm-eos
Downloading/unpacking napalm-fortios (from napalm)
Downloading napalm-fortios-0.1.1.tar.gz
Running setup.py (path:/tmp/pip-build-fy54pkbj/napalm-fortios/setup.py) egg_info for package napalm-fortios
Downloading/unpacking napalm-ibm (from napalm)
Downloading napalm-ibm-0.1.1.tar.gz
Running setup.py (path:/tmp/pip-build-fy54pkbj/napalm-ibm/setup.py) egg_info for package napalm-ibm
Downloading/unpacking napalm-ios (from napalm)
Downloading napalm-ios-0.2.0.tar.gz
Running setup.py (path:/tmp/pip-build-fy54pkbj/napalm-ios/setup.py) egg_info for package napalm-ios
Downloading/unpacking napalm-iosxr (from napalm)
Downloading napalm-iosxr-0.2.2.tar.gz
Running setup.py (path:/tmp/pip-build-fy54pkbj/napalm-iosxr/setup.py) egg_info for package napalm-iosxr
Downloading/unpacking napalm-junos (from napalm)
Downloading napalm-junos-0.3.0.tar.gz
Running setup.py (path:/tmp/pip-build-fy54pkbj/napalm-junos/setup.py) egg_info for package napalm-junos
warning: no files found matching 'napalm_junos/utils/textfsm_templates/*.tpl'
Downloading/unpacking napalm-nxos (from napalm)
Downloading napalm-nxos-0.3.0.tar.gz
Running setup.py (path:/tmp/pip-build-fy54pkbj/napalm-nxos/setup.py) egg_info for package napalm-nxos
Downloading/unpacking napalm-pluribus (from napalm)
Downloading napalm-pluribus-0.3.0.tar.gz
Running setup.py (path:/tmp/pip-build-fy54pkbj/napalm-pluribus/setup.py) egg_info for package napalm-pluribus
Downloading/unpacking napalm-panos (from napalm)
Downloading napalm-panos-0.1.0.tar.gz
Running setup.py (path:/tmp/pip-build-fy54pkbj/napalm-panos/setup.py) egg_info for package napalm-panos
warning: no files found matching 'include'
Downloading/unpacking jinja2 (from napalm-base->napalm)
Downloading Jinja2-2.8-py2.py3-none-any.whl (263kB): 263kB downloaded
Downloading/unpacking pyangbind (from napalm-base->napalm)
Downloading pyangbind-0.5.8.tar.gz (46kB): 46kB downloaded
Running setup.py (path:/tmp/pip-build-fy54pkbj/pyangbind/setup.py) egg_info for package pyangbind
Traceback (most recent call last):
File "/usr/lib/python3.4/site-packages/pip/download.py", line 274, in get_file_content
f = open(url)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-build-fy54pkbj/pyangbind/requirements.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/tmp/pip-build-fy54pkbj/pyangbind/setup.py", line 8, in <module>
inst_reqs = [str(ir.req) for ir in pip_reqs]
File "/tmp/pip-build-fy54pkbj/pyangbind/setup.py", line 8, in <listcomp>
inst_reqs = [str(ir.req) for ir in pip_reqs]
File "/usr/lib/python3.4/site-packages/pip/req.py", line 1551, in parse_requirements
session=session,
File "/usr/lib/python3.4/site-packages/pip/download.py", line 278, in get_file_content
raise InstallationError('Could not open requirements file: %s' % str(e))
pip.exceptions.InstallationError: Could not open requirements file: [Errno 2] No such file or directory: '/tmp/pip-build-fy54pkbj/pyangbind/requirements.txt'
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "/usr/lib/python3.4/site-packages/pip/download.py", line 274, in get_file_content
f = open(url)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-build-fy54pkbj/pyangbind/requirements.txt'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 17, in <module>
File "/tmp/pip-build-fy54pkbj/pyangbind/setup.py", line 8, in <module>
inst_reqs = [str(ir.req) for ir in pip_reqs]
File "/tmp/pip-build-fy54pkbj/pyangbind/setup.py", line 8, in <listcomp>
inst_reqs = [str(ir.req) for ir in pip_reqs]
File "/usr/lib/python3.4/site-packages/pip/req.py", line 1551, in parse_requirements
session=session,
File "/usr/lib/python3.4/site-packages/pip/download.py", line 278, in get_file_content
raise InstallationError('Could not open requirements file: %s' % str(e))
pip.exceptions.InstallationError: Could not open requirements file: [Errno 2] No such file or directory: '/tmp/pip-build-fy54pkbj/pyangbind/requirements.txt'
</code></pre>
| 0 |
2016-09-12T13:10:01Z
| 39,525,715 |
<p>It looks like you are trying to install under Python 3; NAPALM does not currently support Python 3.
<a href="https://github.com/napalm-automation/napalm/issues/199" rel="nofollow">https://github.com/napalm-automation/napalm/issues/199</a></p>
| 0 |
2016-09-16T07:13:12Z
|
[
"python",
"networking",
"automation",
"ansible"
] |
Efficiency of for loops in python3
| 39,451,346 |
<p>I am currently learning Python (3), having mostly experience with R as main programming language. While in R <code>for</code>-loops have mostly the same functionality as in Python, I was taught to avoid using it for big operations and instead use <code>apply</code>, which is more efficient.</p>
<p>My question is: how efficient are <code>for</code>-loops in Python, are there alternatives and is it worth exploring those possibilities as a Python newbie? </p>
<p>For example: </p>
<pre><code>p = some_candidate_parameter_generator(data)
for i in p:
    fit_model_with_parameter(data, i)
</code></pre>
<p>Bear with me, it is tricky to give an example without going too much into specific code. But this is something that in <code>R</code> I would have written with <code>apply</code>, especially if <code>p</code> is large.</p>
| 2 |
2016-09-12T13:20:47Z
| 39,451,668 |
<p>The comments correctly point out that for loops are "only as efficient as your logic"; however, the <code>range</code> and <code>xrange</code> in Python do have performance implications, and this may be what you had in mind when asking this question. <em>These methods have nothing to do with the intrinsic performance of for loops though.</em></p>
<p>In Python 3, <code>range</code> behaves like Python 2's <code>xrange</code> (and <code>xrange</code> no longer exists); however, in Python versions below 3.0 there was a distinction: <code>range</code> loaded your entire iterable into memory and then iterated over each item, while <code>xrange</code> was more akin to a generator, where each item was loaded into memory only when needed and then removed from memory after it was iterated over.</p>
<p><strong>After your updated question:</strong></p>
<p>In other words, if you have a giant list of items that you need to iterate over via a for loop, it is often more memory efficient to use a generator, not a list or a tuple, etc. Again though, this has nothing to do with how the Python for-loop operates, but more to do with what you're iterating over. If in doubt, use a generator, and your memory-efficiency will be as good as it will get with Python.</p>
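<p>For example (Python 2 shown, where the distinction still exists):</p>
<pre><code>total = sum(x * x for x in xrange(10 ** 6))    # generator: one item in memory at a time
total = sum([x * x for x in xrange(10 ** 6)])  # list: builds all million items first
</code></pre>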
| 1 |
2016-09-12T13:37:36Z
|
[
"python"
] |
How does one identify if python script was run for the first time?
| 39,451,347 |
<p>I've been wondering if there's any way to find out if a python script was run for the first time.</p>
<p>Let's suppose I have this simple program:</p>
<pre><code>import random
import string
from uuid import getnode
def get_mac():
return getnode()
def generate_unique_id():
return ''.join(random.choice(string.ascii_uppercase + string.digits + string.ascii_uppercase) for _ in range(20))
def main():
get_mac()
generate_unique_id()
if __name__ == '__main__':
main()
</code></pre>
<p>I'd like to know if there's any way to find / count the number of times a program is being run on a Windows machine.</p>
<p>Any other workarounds are welcome.</p>
| 0 |
2016-09-12T13:21:06Z
| 39,451,585 |
<p><strong> If you want to keep track of all the times your software was run </strong></p>
<p>There really is no way to do this without writing to a file every time your program starts. That way, you'd simply look up the file to keep track of your program's runs.</p>
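<p>A minimal sketch of that approach (the file name is illustrative):</p>
<pre><code>import os

COUNTER_FILE = 'run_count.txt'

def bump_run_count():
    count = 0
    if os.path.exists(COUNTER_FILE):
        with open(COUNTER_FILE) as f:
            count = int(f.read() or 0)
    count += 1
    with open(COUNTER_FILE, 'w') as f:
        f.write(str(count))
    return count  # 1 means this is the first run
</code></pre>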
<p><strong> If you want to see when your software was last compiled </strong></p>
<p>I guess another thing you could technically do is check the date of your generated compiled python bytecodes, but this would only tell you the last time your program executed, not the first time or the total number of times.</p>
<p><a href="http://stackoverflow.com/questions/237079/how-to-get-file-creation-modification-date-times-in-python">This link shows you how to check for the creation/modification/update times</a>
<a href="http://stackoverflow.com/questions/237079/how-to-get-file-creation-modification-date-times-in-python"></a>.</p>
| 1 |
2016-09-12T13:33:40Z
|
[
"python",
"windows",
"python-3.x"
] |
How to count the number of occurrences in either of two columns
| 39,451,385 |
<p>I have a simple looking problem. I have a dataframe <code>df</code> with two columns. For each of the strings that occurs in either of these columns I would like to count the number of rows which has the symbol in either column.</p>
<p>E.g.</p>
<pre><code>g k
a h
c i
j e
d i
i h
b b
d d
i a
d h
</code></pre>
<p>The following code works but is very inefficient.</p>
<pre><code>for elem in set(df.values.flat):
print elem, len(df.loc[(df[0] == elem) | (df[1] == elem)])
a 2
c 1
b 1
e 1
d 3
g 1
i 4
h 3
k 1
j 1
</code></pre>
<p>This is however very inefficient and my dataframe is large. The inefficiency comes from calling <code>df.loc[(df[0] == elem) | (df[1] == elem)]</code> separately for every distinct symbol in df. </p>
<p>Is there a fast way of doing this?</p>
| 0 |
2016-09-12T13:23:08Z
| 39,451,438 |
<p>OK, this is much trickier than I thought. I'm not sure how this will scale, but if you have a lot of repeating values it will be more efficient than your current method. Basically we can use <code>str.get_dummies</code> and reindex the columns from that result to generate a dummies df for all unique values; we can then use <code>np.maximum</code> on the 2 dfs and <code>sum</code> these:</p>
<pre><code>In [77]:
t="""col1 col2
g k
a h
c i
j e
d i
i h
b b
d d
i a
d h"""
df = pd.read_csv(io.StringIO(t), delim_whitespace=True)
np.maximum(df['col1'].str.get_dummies().reindex_axis(vals, axis=1).fillna(0), df['col2'].str.get_dummies().reindex_axis(vals, axis=1).fillna(0)).sum()
Out[77]:
a 2
b 1
c 1
d 3
e 1
g 1
h 3
i 4
j 1
k 1
dtype: float64
</code></pre>
<p>vals here is just the unique values:</p>
<pre><code>In [80]:
vals = np.unique(df.values)
vals
Out[80]:
array(['a', 'b', 'c', 'd', 'e', 'g', 'h', 'i', 'j', 'k'], dtype=object)
</code></pre>
| 1 |
2016-09-12T13:26:07Z
|
[
"python",
"pandas"
] |
How to count the number of occurrences in either of two columns
| 39,451,385 |
<p>I have a simple looking problem. I have a dataframe <code>df</code> with two columns. For each of the strings that occurs in either of these columns I would like to count the number of rows which has the symbol in either column.</p>
<p>E.g.</p>
<pre><code>g k
a h
c i
j e
d i
i h
b b
d d
i a
d h
</code></pre>
<p>The following code works but is very inefficient.</p>
<pre><code>for elem in set(df.values.flat):
print elem, len(df.loc[(df[0] == elem) | (df[1] == elem)])
a 2
c 1
b 1
e 1
d 3
g 1
i 4
h 3
k 1
j 1
</code></pre>
<p>This is however very inefficient and my dataframe is large. The inefficiency comes from calling <code>df.loc[(df[0] == elem) | (df[1] == elem)]</code> separately for every distinct symbol in df. </p>
<p>Is there a fast way of doing this?</p>
| 0 |
2016-09-12T13:23:08Z
| 39,458,178 |
<p>You can use <code>loc</code> to filter out row level matches from <code>'col2'</code>, append the filtered <code>'col2'</code> values to <code>'col1'</code>, and then call <code>value_counts</code>:</p>
<pre><code>counts = df['col1'].append(df.loc[df['col1'] != df['col2'], 'col2']).value_counts()
</code></pre>
<p>The resulting output:</p>
<pre><code>i 4
d 3
h 3
a 2
j 1
k 1
c 1
g 1
b 1
e 1
</code></pre>
<p>Note: You can add <code>.sort_index()</code> to the end of the counting code if you want the output to appear in alphabetical order.</p>
<p><strong>Timings</strong></p>
<p>Using the following setup to produce a larger sample dataset:</p>
<pre><code>from string import ascii_lowercase
n = 10**5
data = np.random.choice(list(ascii_lowercase), size=(n,2))
df = pd.DataFrame(data, columns=['col1', 'col2'])
def edchum(df):
vals = np.unique(df.values)
count = np.maximum(df['col1'].str.get_dummies().reindex_axis(vals, axis=1).fillna(0), df['col2'].str.get_dummies().reindex_axis(vals, axis=1).fillna(0)).sum()
return count
</code></pre>
<p>I get the following timings:</p>
<pre><code>%timeit df['col1'].append(df.loc[df['col1'] != df['col2'], 'col2']).value_counts()
10 loops, best of 3: 19.7 ms per loop
%timeit edchum(df)
1 loop, best of 3: 3.81 s per loop
</code></pre>
| 2 |
2016-09-12T20:18:43Z
|
[
"python",
"pandas"
] |
Opening and closing a large number of files on python
| 39,451,470 |
<p>I'm writing a program which organizes my school marks, and for every subject I created a .pck file where all the marks of that subject are saved. Since I have to open and pickle.load 10+ files, I decided to make 2 functions, files_open():</p>
<pre><code>subj1 = open(subj1_file)
subj1_marks = pickle.load(subj1)
subj2 = open(subj2_file)
subj2_marks = pickle.load(subj2)
</code></pre>
<p>and file_close():</p>
<pre><code>subj1.close()
subj2.close()
</code></pre>
<p>The problem is that I had to make every variable in file_open() global, and the function is now too long. I tried to avoid that problem by accessing the variables like:</p>
<pre><code>file_open.subj1
</code></pre>
<p>but it doesn't work and I can't understand why.</p>
| 0 |
2016-09-12T13:27:53Z
| 39,451,613 |
<p>since you just want to open, load and close the file afterwards I would suggest a simple helper function:</p>
<pre><code>def load_marks(filename):
with open(filename,"rb") as f: # don't forget to open as binary
marks = pickle.load(f)
return marks
</code></pre>
<p>Use like this:</p>
<pre><code>subj1_marks = load_marks(subj1_file)
</code></pre>
<p>The file is closed when execution leaves the scope of the <code>with</code> block, and your data remains accessible even after the file is closed, which may have been the (unjustified) concern behind your question.</p>
<p>Note: someone suggested that what you really want (maybe) is to save all your data in one big pickle file.
In that case, you could create a dictionary containing your data:</p>
<pre><code>d = dict()
d["mark1"] = subj1_marks
d["mark2"] = subj2_marks
...
</code></pre>
<p>and perform a single <code>pickle.dump()</code> and <code>pickle.load()</code> on the dictionary (if the data is picklable, then a dictionary of this data is also picklable): it is simpler to handle one big file than a lot of them, knowing that you need all of them anyway.</p>
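<p>A minimal sketch of that single-file approach (the file name is illustrative):</p>
<pre><code>import pickle

def save_all_marks(d, filename='marks.pck'):
    with open(filename, 'wb') as f:
        pickle.dump(d, f)

def load_all_marks(filename='marks.pck'):
    with open(filename, 'rb') as f:
        return pickle.load(f)
</code></pre>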
| 1 |
2016-09-12T13:34:52Z
|
[
"python"
] |
How to return the value of a counter in Python when comparing non-identical elements?
| 39,451,485 |
<p>I have the following list:</p>
<pre><code>x = [['A', 'A', 'A', 'A'], ['C', 'T', 'C', 'C'], ['G', 'T', 'C', 'C'], ['T', 'T', 'C', 'C'], ['A', 'T', 'C']]
</code></pre>
<p>I need to compare each element in each sub-list to the others and note the number of changes:</p>
<pre><code>x[0] --> # No change
x[1] --> # 1 change (Only one conversion from C to T (T to C conversion = C to T conversion))
x[2] --> # 3 changes(G to T, T to C, G to C (T to C conversion = C to T conversion))
</code></pre>
<p>....
So, final count for Changes should be [0,1,3,2,3]</p>
| 1 |
2016-09-12T13:28:37Z
| 39,452,294 |
<p>If I understand correctly...</p>
<pre><code>from collections import Counter
from itertools import combinations
x = [['A', 'A', 'A', 'A'],
['C', 'T', 'C', 'C'],
['G', 'T', 'C', 'C'],
['T', 'T', 'C', 'C'],
['A', 'T', 'C', 'Z']]
def divide_and_square(number, divisor):
return (1. * number / divisor) ** 2
# part1
counters = [Counter(sub_list) for sub_list in x]
atgc_counts = [sum(val for key, val in counter.items()
if key.upper() in "ATGC")
for counter in counters]
print(atgc_counts)
# part 2
conversions = []
for sl in x:
sub_list = [base for base in sl if base.upper() in "ATGC"]
conversions.append(len(list(combinations(set(sub_list), 2))))
print(conversions)
# bonus
squared_factor_sums = []
for counter in counters:
total = sum(counter.itervalues())
squared_factor_sums.append(sum([divide_and_square(val, total)
for val in counter.values()]))
print(squared_factor_sums)
</code></pre>
<p>prints:</p>
<pre><code>[4, 4, 4, 4, 3]
[0, 1, 3, 1, 3]
[1.0, 0.625, 0.375, 0.5, 0.25]
</code></pre>
<ul>
<li>first the character other that ATGC are removed.</li>
<li>then the duplications are avoided by casting the sub_list into a set</li>
<li><a href="https://docs.python.org/2/library/itertools.html?highlight=itertools.combinations#itertools.combinations" rel="nofollow">itertools.combinations</a> is used to get all the unique combinations of the elements in the set</li>
<li>combinations are finally counted</li>
</ul>
| 1 |
2016-09-12T14:07:18Z
|
[
"python",
"list",
"compare",
"counter"
] |
SQL query date range in python
| 39,451,583 |
<p>I was simply trying to query an <code>SQLServer</code> database in a specific date range. Somehow I just can't figure it out myself. Here is what I did:</p>
<pre><code> import pyodbc
import pandas as pd
con = pyodbc.connect("Driver={SQL Server}; Server=link")
tab = pd.read_sql_query("SELECT * FROM OPENQUERY(aaa, 'SELECT * FROM bbb.ccc WHERE number like (''K12345%'')')",con)
tab['DT']
0 2015-09-17 08:51:41
1 2015-09-17 09:14:09
2 2015-09-17 09:15:03
3 2015-09-24 15:20:55
4 2015-09-24 15:23:47
5 2015-10-02 08:49:59
6 2015-10-30 14:08:40
7 2015-10-30 14:13:38
8 2015-11-03 14:30:06
9 2015-11-03 14:30:22
10 2015-11-04 07:14:40
11 2015-11-04 10:43:51
Name: DT, dtype: datetime64[ns]
</code></pre>
<p>Now I thought I should be able to select the records on the dates between <code>2015-09-18</code> and <code>2015-10-02</code> by using the following query. Somehow it failed with error </p>
<blockquote>
<p>DatabaseError: Execution failed on sql: SELECT * FROM OPENQUERY(aaa, 'SELECT * FROM bbb.ccc WHERE DT between ''2015-09-18'' and ''2015-10-02''')".</p>
</blockquote>
<p>Can someone help explain what I did wrong?</p>
<pre><code> tab2 = pd.read_sql_query("SELECT * FROM OPENQUERY(aaa, 'SELECT * FROM bbb.ccc `WHERE DT between ''2015-09-18'' and ''2015-10-02''')",con)`
</code></pre>
| 1 |
2016-09-12T13:33:23Z
| 39,451,767 |
<p>You should use <a href="https://mkleehammer.github.io/pyodbc/#params" rel="nofollow">parameter binding</a>:</p>
<pre><code>tab2 = pd.read_sql_query("SELECT * FROM bbb.ccc WHERE DT between ? and ?", con, params=['2015-09-18', '2015-10-02'])
</code></pre>
<p>The <code>?</code> are placeholders for the values you are passing from the list. The number of <code>?</code>'s must match the number of items from your list.</p>
<p>And I'm not sure why you have a <code>SELECT *</code> wrapped in another <code>SELECT *</code> so I simplified with just the innermost select.</p>
| 0 |
2016-09-12T13:42:17Z
|
[
"python",
"sql-server",
"tsql"
] |
Posting Request Data
| 39,451,640 |
<p>I am trying to post requests with Python to register an account.</p>
<p>It is not creating the account.</p>
<p>Any help would be great! </p>
<p>It has to accept the user's email and password and confirmation of their password.</p>
<pre><code>import requests
with requests.Session() as c:
url="http://icebithosting.com/register.php"
EMAIL="charliep1551@gmail.com"
PASSWORD = "test"
c.get(url)
login_data= dict(username=EMAIL,password=PASSWORD,confirm=PASSWORD)
c.post(url, data=login_data,)
page=c.get("http://icebithosting.com/")
print (page.content)
</code></pre>
| 0 |
2016-09-12T13:36:07Z
| 39,453,786 |
<p>Your form field names are incorrect; they should be:</p>
<pre><code>email:'foo@bar.com'
password:'bar'
confirm_password:'bar' # confirm_password
</code></pre>
<p>Which you can see if you monitor the request in chrome tools:</p>
<p><a href="http://i.stack.imgur.com/ISTEe.png" rel="nofollow"><img src="http://i.stack.imgur.com/ISTEe.png" alt="enter image description here"></a></p>
| -1 |
2016-09-12T15:24:22Z
|
[
"python",
"python-3.x",
"module",
"python-requests"
] |
Pythonic way to find integer in list of strings
| 39,451,670 |
<p>I have an array as follows:</p>
<pre><code>hour = ['01','02','12']
</code></pre>
<p>and I want </p>
<pre><code>h = 1
str(h) in hour
</code></pre>
<p>to return <code>True</code>. What would be the most "Pythonic" way to do this? I could of course pad <code>h</code> with a zero, but is there a better way?</p>
| -1 |
2016-09-12T13:37:39Z
| 39,451,826 |
<p>A good rule of thumb is that the type and structure of data should reflect the model you have in mind. So, if your model is that hours are integers in the range 1..24, or whatever, you should model them that way:</p>
<pre><code>hours = [ int(hr) for hr in hour ]
</code></pre>
<p>then things like:</p>
<pre><code>h in hours
</code></pre>
<p>become clean and obvious.</p>
| 1 |
2016-09-12T13:45:30Z
|
[
"python",
"arrays"
] |
Pythonic way to find integer in list of strings
| 39,451,670 |
<p>I have an array as follows:</p>
<pre><code>hour = ['01','02','12']
</code></pre>
<p>and I want </p>
<pre><code>h = 1
str(h) in hour
</code></pre>
<p>to return <code>True</code>. What would be the most "Pythonic" way to do this? I could of course pad <code>h</code> with a zero, but is there a better way?</p>
| -1 |
2016-09-12T13:37:39Z
| 39,451,842 |
<pre><code>if [i for i in hour if h == int(i)]:
print("found it")
</code></pre>
<p><code>[i for i in hour if h == int(i)]</code> is a list comprehension, which generates a list from an iterable object. It iterates over <code>hour</code>, which is a list and therefore iterable, and checks one element at a time whether its value equals <code>h</code>. We cast each element with <code>int(i)</code> because we are comparing ints and <code>i</code> is a string.</p>
<p>In Python an empty list <code>[]</code> is falsy in an <code>if</code> statement. So a list comprehension lets us quickly check whether there is a value we care about: if the resulting list is non-empty, the comprehension found one.</p>
| 0 |
2016-09-12T13:46:21Z
|
[
"python",
"arrays"
] |
Export a MySQL database with contacts to a compatible CardDav system
| 39,451,708 |
<p>I have a standard MySQL database, with a table containing contacts (I'm adding contacts to the table using a webapp using Zend Framework), thus with my own fields.</p>
<p>Is it possible to create a server which would be compatible to be used with the Address Book system of OsX? I think I must be compatible with the CardDav system.</p>
<p>Has anyone already done that? If yes, how did you handle it? Created your own server? Is there a CardDav library for Python for example? I just want to be able to read my contacts using the Address Book of OsX.</p>
<p>Thanks a lot for your answers,</p>
<p>Best,</p>
<p>Jean</p>
| 0 |
2016-09-12T13:39:50Z
| 39,457,249 |
<blockquote>
<p>Is it possible to create a server which would be compatible to be used
with the Address Book system of OsX? I think I must be compatible with
the CardDav system.</p>
</blockquote>
<p>Yes you can create such a server and there are plenty already. You can choose between either CardDAV or LDAP, depending on your needs. When LDAP is good enough for your use case, you might get even get away with just configuring OpenLDAP to use your database.</p>
<p>LDAP is usually just read & query only (think big company address book / yellow pages). CardDAV is usually read/write and full sync.</p>
<blockquote>
<p>Has anyone already done that?</p>
</blockquote>
<p>Many people have, the <a href="http://carddav.calconnect.org/implementations/servers.html" rel="nofollow">CalConnect CardDAV Server Implementations</a> site alone lists 16, most of them FOSS. There are more.</p>
<blockquote>
<p>If yes, how did you handle it? Created your own server?</p>
</blockquote>
<p>I think this is the most common approach.</p>
<blockquote>
<p>Is there a CardDav library for Python for example?</p>
</blockquote>
<p>Please do your research, this is trivial to figure out ...</p>
<p>Many PHP servers (you mentioned Zend) are using <a href="http://sabre.io" rel="nofollow">SabreDAV</a> as a basis.</p>
<blockquote>
<p>I just want to be able to read my contacts using the Address Book of OsX.</p>
</blockquote>
<p>That makes it a lot easier. While you can use a library like SabreDAV, implementing CardDAV readonly is really not that hard. Authentication, a few XML requests for locating an addressbook and then some code to render your existing records as vCards.</p>
<p>If you want to add editing, things get more complicated.</p>
| 0 |
2016-09-12T19:17:20Z
|
[
"python",
"mysql",
"osx",
"carddav"
] |
python string formatting portion cut off
| 39,451,756 |
<p>I'm a total beginner and this is my first question so I know this is probably stupid. But in python when I try and format a string with this code:</p>
<pre><code>x = 7
print "% how do you do" % (x)
</code></pre>
<p>I get this as a result:</p>
<pre><code>7w do you do
</code></pre>
<p>Is there a reason why the "ho" in "how" is getting cut off?</p>
| 0 |
2016-09-12T13:41:52Z
| 39,451,848 |
<p>Probably using <code>%d</code> instead of <code>%</code> will solve the problem.</p>
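<p>Applied to the code in the question:</p>
<pre><code>x = 7
print "%d how do you do" % x   # prints: 7 how do you do
</code></pre>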
| 0 |
2016-09-12T13:46:36Z
|
[
"python",
"string-formatting"
] |
python string formatting portion cut off
| 39,451,756 |
<p>I'm a total beginner and this is my first question so I know this is probably stupid. But in python when I try and format a string with this code:</p>
<pre><code>x = 7
print "% how do you do" % (x)
</code></pre>
<p>I get this as a result:</p>
<pre><code>7w do you do
</code></pre>
<p>Is there a reason why the "ho" in "how" is getting cut off?</p>
| 0 |
2016-09-12T13:41:52Z
| 39,451,992 |
<p>The best solution here is to use the <code>format</code> function:</p>
<pre><code>>>> '{0} is bigger than {1}'.format('elephant', 'dog')
'elephant is bigger than dog'
</code></pre>
<p>You can read more about formatting in the <a href="https://docs.python.org/3.1/library/string.html#format-examples" rel="nofollow">python documentation</a></p>
| 0 |
2016-09-12T13:54:15Z
|
[
"python",
"string-formatting"
] |
Beautiful soup missing some html table tags
| 39,451,810 |
<p>I'm trying to extract data from a website using beautiful soup to parse the html. I'm currently trying to get the table data from the following webpage :</p>
<p><a href="http://www.installationsclassees.developpement-durable.gouv.fr/ficheEtablissement.php?selectRegion=-1&selectDept=-1&champcommune=&champNomEtabl=&selectRegSeveso=-1&selectRegEtab=-1&selectPrioriteNat=-1&selectIPPC=-1&champActivitePrinc=-1&champListeIC=&selectDeclaEmi=&champEtablBase=544&champEtablNumero=18&ordre=&champNoEnregTrouves=4480&champPremierEnregAffiche=0&champNoEnregAffiches=20" rel="nofollow">link to webpage</a></p>
<p>I want to get the data from the table. First I save the page as an html file on my computer (this part works fine, I checked that I got all the information) but when I try to parse with the following code :</p>
<pre><code>soup = BeautifulSoup(fh, 'html.parser')
table = soup.find_all('table')
cols = table[0].find_all('tr')
cells = cols[1].find_all('td')
</code></pre>
<p>I don't get any results (specifically it crashes, saying there's no element at index 1). Any idea of where it could come from?</p>
<p>Thanks</p>
| 0 |
2016-09-12T13:44:31Z
| 39,452,389 |
<p>OK, actually it was an issue in the HTML file: in the first row the cells were opened with <code>th</code> tags but closed with <code>td</code>. I don't know much about HTML, but replacing the <code>th</code> with <code>td</code> solved the issue.</p>
<pre><code><tr class="listeEtablenTete">
<th title="Rubrique IC">Rubri. IC</td>
<th title="Alin&eacute;a">Ali.&nbsp;</td>
<th title="Date d'autorisation">Date auto.</td>
<th >Etat d'activit&eacute;</td>
<th title="R&eacute;gime">R&eacute;g.</td>
<th >Activit&eacute;</td>
<th >Volume</td>
<th >Unit&eacute;</td>
</code></pre>
<p>Thanks !</p>
| 0 |
2016-09-12T14:11:37Z
|
[
"python",
"beautifulsoup"
] |
What's the closest I can get to calling a Python function using a different Python version?
| 39,451,822 |
<p>Say I have two files:</p>
<pre><code># spam.py
import library_Python3_only as l3
def spam(x, y):
return l3.bar(x).baz(y)
</code></pre>
<p>and</p>
<pre><code># beans.py
import library_Python2_only as l2
...
</code></pre>
<p>Now suppose I wish to call <code>spam</code> from within <code>beans</code>. It's not directly possible since both files depend on incompatible Python versions. Of course I can <code>Popen</code> a different python process, but how could I pass in the arguments and retrieve the results without too much stream-parsing pain?</p>
| 18 |
2016-09-12T13:45:20Z
| 39,451,894 |
<p>Assuming the caller is Python 3.5+, you have access to a nicer <a href="https://docs.python.org/3/library/subprocess.html">subprocess</a> module. Perhaps you could use <code>subprocess.run</code>, and communicate via pickled Python objects sent through stdin and stdout, respectively. There would be some setup to do, but no parsing on your side, or mucking with strings etc.</p>
<p>Here's an example of the Python 2 side via subprocess.Popen (variable names are illustrative):</p>
<pre><code>p = subprocess.Popen(python3_command, stdin=subprocess.PIPE, stdout=subprocess.PIPE)
stdout, stderr = p.communicate(pickle.dumps(function_args))
result = pickle.loads(stdout)
</code></pre>
| 10 |
2016-09-12T13:49:12Z
|
[
"python",
"compatibility",
"popen"
] |
What's the closest I can get to calling a Python function using a different Python version?
| 39,451,822 |
<p>Say I have two files:</p>
<pre><code># spam.py
import library_Python3_only as l3
def spam(x, y):
return l3.bar(x).baz(y)
</code></pre>
<p>and</p>
<pre><code># beans.py
import library_Python2_only as l2
...
</code></pre>
<p>Now suppose I wish to call <code>spam</code> from within <code>beans</code>. It's not directly possible since both files depend on incompatible Python versions. Of course I can <code>Popen</code> a different python process, but how could I pass in the arguments and retrieve the results without too much stream-parsing pain?</p>
| 18 |
2016-09-12T13:45:20Z
| 39,452,087 |
<p>You could create a simple script as such :</p>
<pre><code>import sys
import my_wrapped_module
import json
params = sys.argv
script = params.pop(0)
function = params.pop(0)
print(json.dumps(getattr(my_wrapped_module, function)(*params)))
</code></pre>
<p>You'll be able to call it like this:</p>
<pre><code>pythonx.x wrapper.py myfunction param1 param2
</code></pre>
<p>This is obviously a security hazard though, be careful.</p>
<p>Also note that if your params are anything other than strings or integers you'll have some issues, so maybe think about transmitting the params as a JSON string and converting them using <code>json.loads()</code> in the wrapper.</p>
| 1 |
2016-09-12T13:58:00Z
|
[
"python",
"compatibility",
"popen"
] |
What's the closest I can get to calling a Python function using a different Python version?
| 39,451,822 |
<p>Say I have two files:</p>
<pre><code># spam.py
import library_Python3_only as l3
def spam(x, y):
return l3.bar(x).baz(y)
</code></pre>
<p>and</p>
<pre><code># beans.py
import library_Python2_only as l2
...
</code></pre>
<p>Now suppose I wish to call <code>spam</code> from within <code>beans</code>. It's not directly possible since both files depend on incompatible Python versions. Of course I can <code>Popen</code> a different python process, but how could I pass in the arguments and retrieve the results without too much stream-parsing pain?</p>
| 18 |
2016-09-12T13:45:20Z
| 39,452,583 |
<p>Here is a complete example implementation using <code>subprocess</code> and <code>pickle</code> that I actually tested. Note that you need to use protocol version 2 explicitly for pickling on the Python 3 side (at least for the combo Python 3.5.2 and Python 2.7.3).</p>
<pre><code># py3bridge.py
import sys
import pickle
import importlib
import io
import traceback
import subprocess
class Py3Wrapper(object):
def __init__(self, mod_name, func_name):
self.mod_name = mod_name
self.func_name = func_name
def __call__(self, *args, **kwargs):
p = subprocess.Popen(['python3', '-m', 'py3bridge',
self.mod_name, self.func_name],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE)
stdout, _ = p.communicate(pickle.dumps((args, kwargs)))
data = pickle.loads(stdout)
if data['success']:
return data['result']
else:
raise Exception(data['stacktrace'])
def main():
try:
target_module = sys.argv[1]
target_function = sys.argv[2]
args, kwargs = pickle.load(sys.stdin.buffer)
mod = importlib.import_module(target_module)
func = getattr(mod, target_function)
result = func(*args, **kwargs)
data = dict(success=True, result=result)
except Exception:
st = io.StringIO()
traceback.print_exc(file=st)
data = dict(success=False, stacktrace=st.getvalue())
pickle.dump(data, sys.stdout.buffer, 2)
if __name__ == '__main__':
main()
</code></pre>
<p>The Python 3 module (using the <code>pathlib</code> module for the showcase)</p>
<pre><code># spam.py
import pathlib
def listdir(p):
return [str(c) for c in pathlib.Path(p).iterdir()]
</code></pre>
<p>The Python 2 module using <code>spam.listdir</code></p>
<pre><code># beans.py
import py3bridge
delegate = py3bridge.Py3Wrapper('spam', 'listdir')
py3result = delegate('.')
print py3result
</code></pre>
| 11 |
2016-09-12T14:22:21Z
|
[
"python",
"compatibility",
"popen"
] |
What's the closest I can get to calling a Python function using a different Python version?
| 39,451,822 |
<p>Say I have two files:</p>
<pre><code># spam.py
import library_Python3_only as l3
def spam(x, y):
return l3.bar(x).baz(y)
</code></pre>
<p>and</p>
<pre><code># beans.py
import library_Python2_only as l2
...
</code></pre>
<p>Now suppose I wish to call <code>spam</code> from within <code>beans</code>. It's not directly possible since both files depend on incompatible Python versions. Of course I can <code>Popen</code> a different python process, but how could I pass in the arguments and retrieve the results without too much stream-parsing pain?</p>
| 18 |
2016-09-12T13:45:20Z
| 39,454,074 |
<p>It's possible to use the <a href="https://docs.python.org/2/library/multiprocessing.html#managers" rel="nofollow"><code>multiprocessing.managers</code></a> module to achieve what you want. It does require a small amount of hacking though.</p>
<p>Given a module that has functions you want to expose then you need to create a <code>Manager</code> that can create proxies for those functions.</p>
<p>manager process that serves proxies to the py3 functions:</p>
<pre><code>from multiprocessing.managers import BaseManager
import spam
class SpamManager(BaseManager):
pass
# Register a way of getting the spam module.
# You can use the exposed arg to control what is exposed.
# By default only "public" functions (without a leading underscore) are exposed,
# but can only ever expose functions or methods.
SpamManager.register("get_spam", callable=(lambda: spam), exposed=["add", "sub"])
# specifying the address as localhost means the manager is only visible to
# processes on this machine
manager = SpamManager(address=('localhost', 50000), authkey=b'abc',
serializer='xmlrpclib')
server = manager.get_server()
server.serve_forever()
</code></pre>
<p>I've redefined <code>spam</code> to contain two function called <code>add</code> and <code>sub</code>.</p>
<pre><code># spam.py
def add(x, y):
return x + y
def sub(x, y):
return x - y
</code></pre>
<p>client process that uses the py3 functions exposed by the <code>SpamManager</code>.</p>
<pre><code>from __future__ import print_function
from multiprocessing.managers import BaseManager
class SpamManager(BaseManager):
pass
SpamManager.register("get_spam")
m = SpamManager(address=('localhost', 50000), authkey=b'abc',
serializer='xmlrpclib')
m.connect()
spam = m.get_spam()
print("1 + 2 = ", spam.add(1, 2)) # prints 1 + 2 = 3
print("1 - 2 = ", spam.sub(1, 2)) # prints 1 - 2 = -1
spam.__name__ # Attribute Error -- spam is a module, but its __name__ attribute
# is not exposed
</code></pre>
<p>Once set up, this form gives an easy way of accessing functions and values. It also allows these functions and values to be used in a similar way to how you might use them if they were not proxies. Finally, it allows you to set a password on the server process so that only authorised processes can access the manager. The manager being long-running also means that a new process doesn't have to be started for each function call you make.</p>
<p>One limitation is that I've used the <code>xmlrpclib</code> module rather than <code>pickle</code> to send data back and forth between the server and the client. This is because python2 and python3 use different protocols for <code>pickle</code>. You could fix this by adding your own client to <code>multiprocessing.managers.listener_client</code> that uses an agreed upon protocol for pickling objects.</p>
| 1 |
2016-09-12T15:40:30Z
|
[
"python",
"compatibility",
"popen"
] |
403 error "Not Authorized to access this resource/api" Google Admin SDK in web app even being admin
| 39,451,823 |
<p>I've been struggling to find the problem for two days, without any idea why I get this error now even though the app was fully functional a month ago.</p>
<p>Among the tasks done by the web app, it makes an Admin SDK API call to get the list of members of a group. The app has the scope <code>https://www.googleapis.com/auth/admin.directory.group.readonly</code>, but I get the 403 error "Not Authorized to access this resource/api" (I verified that the Admin SDK API was enabled in Google's Console).
By the way, the app had no problem making requests to the Google Classroom API before.</p>
<p>The most incredible thing here is that the app secrets have been generated by an admin account, and I get this error when I log into the app with this same admin account. However, when I run the same request from the documentation page (<a href="https://developers.google.com/admin-sdk/directory/v1/reference/members/list#response" rel="nofollow">https://developers.google.com/admin-sdk/directory/v1/reference/members/list#response</a>) I get a 200 response without a problem, with exactly the same authorized scope.</p>
<p>Getting the list of members worked without a problem before with this admin account. Since then, nothing has changed in source code or in configuration so far as I know. So I think the problem may be related to the client secrets in some way, but I have no idea how.</p>
<p><strong>This web app will only be used by this admin account</strong></p>
<p>In my research on StackOverflow, I found most things talking about "Google Apps Domain-Wide Delegation of Authority" (<a href="https://developers.google.com/admin-sdk/directory/v1/guides/delegation" rel="nofollow">https://developers.google.com/admin-sdk/directory/v1/guides/delegation</a>), but I never used this when it worked before. And I would like to avoid this.</p>
<p>Do you have an idea why I get this 403 error with the web app even though it works when just testing the request in the documentation and I'm using a Super-Admin account ?</p>
<p><strong>Edit: I've now tested a simple snippet with "Google Apps Domain-Wide Delegation of Authority" based on this gist <a href="https://gist.github.com/MeLight/1f4517560a9761317d13ebb2cdc670d3" rel="nofollow">https://gist.github.com/MeLight/1f4517560a9761317d13ebb2cdc670d3</a> and the snippet alone works.
However, when using it inside my app, I still get the 403 error. It's driving me insane: what could be the permission issue?</strong></p>
| -1 |
2016-09-12T13:45:25Z
| 39,470,559 |
<p>Finally, after knocking my head against the wall, I found what the problem was:
the <code>groupKey</code> used to get the members of a group <code>https://www.googleapis.com/admin/directory/v1/groups/groupKey/members</code> had a <strong>trailing whitespace</strong>.</p>
<p>Yes, that's all. I just had to strip the string in the code (one line in Python) and it all went well.</p>
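<p>A minimal sketch of that one-line fix, assuming the key arrives from user input or a config file (<code>group_key</code> is a hypothetical variable name):</p>
<pre><code># strip surrounding whitespace before building the request URL
group_key = group_key.strip()
url = "https://www.googleapis.com/admin/directory/v1/groups/%s/members" % group_key
</code></pre>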
<p>All the digging into permissions / client IDs / Delegation of Authority was for nothing, it seems.</p>
<p>Seriously, the 403 error "Not Authorized to access this resource/api" from Google's API could have been more explicit.</p>
| 0 |
2016-09-13T12:50:30Z
|
[
"python",
"web-applications",
"google-api",
"google-admin-sdk"
] |
OpenCV python: Show images in channel's color and not in grayscale
| 39,452,011 |
<p>I read an rgb image</p>
<pre><code>img = cv2.imread('image.jpg')
</code></pre>
<p>I split channels:</p>
<pre><code>b,g,r = cv2.split(img)
</code></pre>
<p>When I try to show the red channel, I get a grayscale image. Can I display it in red?</p>
<pre><code>cv2.imshow('Red image',r)
</code></pre>
| 2 |
2016-09-12T13:54:53Z
| 39,452,189 |
<p>Make blue and green channels of all zeroes, and merge these with your red channel.
Then you will see just the red channel displayed in red.</p>
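<p>A minimal sketch of the merge approach, reusing the <code>r</code> channel from the question (OpenCV stores images in BGR order, so the red channel goes last):</p>
<pre><code>import numpy as np
import cv2

zeros = np.zeros_like(r)                # same shape and dtype as the red channel
red_img = cv2.merge([zeros, zeros, r])  # B=0, G=0, R=r
cv2.imshow('Red image', red_img)
cv2.waitKey(0)
</code></pre>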
<p>Or you could just set the b and g channels of the image to 0. </p>
<pre><code>img[:,:,0] = 0
img[:,:,1] = 0
</code></pre>
| 3 |
2016-09-12T14:02:31Z
|
[
"python",
"python-2.7",
"opencv"
] |
Remove uuid4 string pattern
| 39,452,068 |
<p>I have the below string examples</p>
<pre><code>1# 00000 Gin-a19ea68e-64bf-4471-b4d1-44f6bd9c1708-62fa6ae2-599c-4ff1-8249-bf6411ce3be7-83930e63-2149-40f0-b6ff-0838596a9b89 Kin
2# 00000 Gin-a19ea68e-64bf-4471-b4d1-44f6bd9c1708 Kin
</code></pre>
<p>I am trying to remove the uuid4-generated string and any text to the right of the uuid4 pattern in Python.</p>
<p>The output should be <code>00000 Gin</code> in both the examples</p>
<p>I have checked here <a href="http://stackoverflow.com/questions/11384589/what-is-the-correct-regex-for-matching-values-generated-by-uuid-uuid4-hex">What is the correct regex for matching values generated by uuid.uuid4().hex?</a>, but it still doesn't help.</p>
| 1 |
2016-09-12T13:57:07Z
| 39,452,154 |
<p>You could use:</p>
<pre><code>import re
strings = ["00000 Gin-a19ea68e-64bf-4471-b4d1-44f6bd9c1708-62fa6ae2-599c-4ff1-8249-bf6411ce3be7-83930e63-2149-40f0-b6ff-0838596a9b89 Kin",
"00000 Gin-a19ea68e-64bf-4471-b4d1-44f6bd9c1708 Kin"]
rx = re.compile(r'^[^-]+')
# match the start and anything not - greedily
new_strings = [match.group(0)
for string in strings
for match in [rx.search(string)]
if match]
print(new_strings)
# ['00000 Gin', '00000 Gin']
</code></pre>
<p><hr>
See <a href="http://ideone.com/c6MxV1" rel="nofollow"><strong>a demo on ideone.com</strong></a>.
<hr>
To actually <strong><em>check</em></strong> if your string is of the desired format, you could use the following expression:</p>
<pre><code>^
(?P<interesting>.+?) # before
(?P<uid>\b\w{8}-(?:\w{4}-){3}\w{12}\b) # uid
(?P<junk>.+) # garbage
$
</code></pre>
<p>See a demo for this one on <a href="https://regex101.com/r/uM0cA0/2" rel="nofollow"><strong>regex101.com</strong></a> (mind the modifiers!).</p>
| 1 |
2016-09-12T14:01:07Z
|
[
"python",
"uuid"
] |
How to fillna() with value 0 after calling resample?
| 39,452,095 |
<p>Either I don't understand the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html">documentation</a> or it is outdated.</p>
<p>If I run </p>
<pre><code>user[["DOC_ACC_DT", "USER_SIGNON_ID"]].groupby("DOC_ACC_DT").agg(["count"]).resample("1D").fillna(value=0, method="ffill")
</code></pre>
<p>I get</p>
<pre><code>TypeError: fillna() got an unexpected keyword argument 'value'
</code></pre>
<p>If I just run</p>
<pre><code>.fillna(0)
</code></pre>
<p>I get </p>
<pre><code>ValueError: Invalid fill method. Expecting pad (ffill), backfill (bfill) or nearest. Got 0
</code></pre>
<p>If I then set</p>
<pre><code>.fillna(0, method="ffill")
</code></pre>
<p>I get </p>
<pre><code>TypeError: fillna() got multiple values for keyword argument 'method'
</code></pre>
<p>so the only thing that works is</p>
<pre><code>.fillna("ffill")
</code></pre>
<p>but of course that just performs a forward fill. However, I want to replace <code>NaN</code> with zeros. What am I doing wrong here?</p>
| 6 |
2016-09-12T13:58:15Z
| 39,452,261 |
<p>Well, I don't understand why the code above isn't working, and I'll wait for somebody to give a better answer than this, but I just found that</p>
<pre><code>.replace(np.nan, 0)
</code></pre>
<p>does what I would have expected from <code>.fillna(0)</code>.</p>
| 3 |
2016-09-12T14:05:39Z
|
[
"python",
"pandas"
] |
How to fillna() with value 0 after calling resample?
| 39,452,095 |
<p>Either I don't understand the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html">documentation</a> or it is outdated.</p>
<p>If I run </p>
<pre><code>user[["DOC_ACC_DT", "USER_SIGNON_ID"]].groupby("DOC_ACC_DT").agg(["count"]).resample("1D").fillna(value=0, method="ffill")
</code></pre>
<p>I get</p>
<pre><code>TypeError: fillna() got an unexpected keyword argument 'value'
</code></pre>
<p>If I just run</p>
<pre><code>.fillna(0)
</code></pre>
<p>I get </p>
<pre><code>ValueError: Invalid fill method. Expecting pad (ffill), backfill (bfill) or nearest. Got 0
</code></pre>
<p>If I then set</p>
<pre><code>.fillna(0, method="ffill")
</code></pre>
<p>I get </p>
<pre><code>TypeError: fillna() got multiple values for keyword argument 'method'
</code></pre>
<p>so the only thing that works is</p>
<pre><code>.fillna("ffill")
</code></pre>
<p>but of course that just performs a forward fill. However, I want to replace <code>NaN</code> with zeros. What am I doing wrong here?</p>
| 6 |
2016-09-12T13:58:15Z
| 39,453,317 |
<p>The only workaround close to using <code>fillna</code> directly would be to call it after performing <code>.head(len(df.index))</code>. </p>
<p>I'm presuming <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.head.html" rel="nofollow"><code>DF.head</code></a> to be useful in this case mainly because when resample function is applied to a groupby object, it will act as a filter on the input, returning a reduced shape of the original due to elimination of groups.</p>
<p>Calling <code>DF.head()</code> does not get affected by this transformation and returns the entire <code>DF</code>.</p>
<p><strong>Demo:</strong></p>
<pre><code>np.random.seed(42)
df = pd.DataFrame(np.random.randn(10, 2),
index=pd.date_range('1/1/2016', freq='10D', periods=10),
columns=['A', 'B']).reset_index()
df
index A B
0 2016-01-01 0.496714 -0.138264
1 2016-01-11 0.647689 1.523030
2 2016-01-21 -0.234153 -0.234137
3 2016-01-31 1.579213 0.767435
4 2016-02-10 -0.469474 0.542560
5 2016-02-20 -0.463418 -0.465730
6 2016-03-01 0.241962 -1.913280
7 2016-03-11 -1.724918 -0.562288
8 2016-03-21 -1.012831 0.314247
9 2016-03-31 -0.908024 -1.412304
</code></pre>
<p><strong>Operations:</strong></p>
<pre><code>resampled_group = df[['index', 'A']].groupby(['index'])['A'].agg('count').resample('2D')
resampled_group.head(len(resampled_group.index)).fillna(0).head(20)
index
2016-01-01 1.0
2016-01-03 0.0
2016-01-05 0.0
2016-01-07 0.0
2016-01-09 0.0
2016-01-11 1.0
2016-01-13 0.0
2016-01-15 0.0
2016-01-17 0.0
2016-01-19 0.0
2016-01-21 1.0
2016-01-23 0.0
2016-01-25 0.0
2016-01-27 0.0
2016-01-29 0.0
2016-01-31 1.0
2016-02-02 0.0
2016-02-04 0.0
2016-02-06 0.0
2016-02-08 0.0
Freq: 2D, Name: A, dtype: float64
</code></pre>
| 1 |
2016-09-12T15:00:32Z
|
[
"python",
"pandas"
] |
How to fillna() with value 0 after calling resample?
| 39,452,095 |
<p>Either I don't understand the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html">documentation</a> or it is outdated.</p>
<p>If I run </p>
<pre><code>user[["DOC_ACC_DT", "USER_SIGNON_ID"]].groupby("DOC_ACC_DT").agg(["count"]).resample("1D").fillna(value=0, method="ffill")
</code></pre>
<p>I get</p>
<pre><code>TypeError: fillna() got an unexpected keyword argument 'value'
</code></pre>
<p>If I just run</p>
<pre><code>.fillna(0)
</code></pre>
<p>I get </p>
<pre><code>ValueError: Invalid fill method. Expecting pad (ffill), backfill (bfill) or nearest. Got 0
</code></pre>
<p>If I then set</p>
<pre><code>.fillna(0, method="ffill")
</code></pre>
<p>I get </p>
<pre><code>TypeError: fillna() got multiple values for keyword argument 'method'
</code></pre>
<p>so the only thing that works is</p>
<pre><code>.fillna("ffill")
</code></pre>
<p>but of course that just performs a forward fill. However, I want to replace <code>NaN</code> with zeros. What am I doing wrong here?</p>
| 6 |
2016-09-12T13:58:15Z
| 39,465,991 |
<p>I ran some tests and the results are quite interesting.</p>
<p>Sample:</p>
<pre><code>import pandas as pd
import numpy as np
np.random.seed(1)
rng = pd.date_range('1/1/2012', periods=20, freq='S')
df = pd.DataFrame({'a':['a'] * 10 + ['b'] * 10,
'b':np.random.randint(0, 500, len(rng))}, index=rng)
df.b.iloc[3:8] = np.nan
print (df)
a b
2012-01-01 00:00:00 a 37.0
2012-01-01 00:00:01 a 235.0
2012-01-01 00:00:02 a 396.0
2012-01-01 00:00:03 a NaN
2012-01-01 00:00:04 a NaN
2012-01-01 00:00:05 a NaN
2012-01-01 00:00:06 a NaN
2012-01-01 00:00:07 a NaN
2012-01-01 00:00:08 a 335.0
2012-01-01 00:00:09 a 448.0
2012-01-01 00:00:10 b 144.0
2012-01-01 00:00:11 b 129.0
2012-01-01 00:00:12 b 460.0
2012-01-01 00:00:13 b 71.0
2012-01-01 00:00:14 b 237.0
2012-01-01 00:00:15 b 390.0
2012-01-01 00:00:16 b 281.0
2012-01-01 00:00:17 b 178.0
2012-01-01 00:00:18 b 276.0
2012-01-01 00:00:19 b 254.0
</code></pre>
<p><strong>Downsampling</strong>:</p>
<p>Possible solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.tseries.resample.Resampler.asfreq.html" rel="nofollow"><code>Resampler.asfreq</code></a>:</p>
<p>If you use <code>asfreq</code>, the behaviour is the same as aggregating by <code>first</code>:</p>
<pre><code>print (df.groupby('a').resample('2S').first())
a b
a
a 2012-01-01 00:00:00 a 37.0
2012-01-01 00:00:02 a 396.0
2012-01-01 00:00:04 a NaN
2012-01-01 00:00:06 a NaN
2012-01-01 00:00:08 a 335.0
b 2012-01-01 00:00:10 b 144.0
2012-01-01 00:00:12 b 460.0
2012-01-01 00:00:14 b 237.0
2012-01-01 00:00:16 b 281.0
2012-01-01 00:00:18 b 276.0
</code></pre>
<pre><code>print (df.groupby('a').resample('2S').first().fillna(0))
a b
a
a 2012-01-01 00:00:00 a 37.0
2012-01-01 00:00:02 a 396.0
2012-01-01 00:00:04 a 0.0
2012-01-01 00:00:06 a 0.0
2012-01-01 00:00:08 a 335.0
b 2012-01-01 00:00:10 b 144.0
2012-01-01 00:00:12 b 460.0
2012-01-01 00:00:14 b 237.0
2012-01-01 00:00:16 b 281.0
2012-01-01 00:00:18 b 276.0
print (df.groupby('a').resample('2S').asfreq().fillna(0))
a b
a
a 2012-01-01 00:00:00 a 37.0
2012-01-01 00:00:02 a 396.0
2012-01-01 00:00:04 a 0.0
2012-01-01 00:00:06 a 0.0
2012-01-01 00:00:08 a 335.0
b 2012-01-01 00:00:10 b 144.0
2012-01-01 00:00:12 b 460.0
2012-01-01 00:00:14 b 237.0
2012-01-01 00:00:16 b 281.0
2012-01-01 00:00:18 b 276.0
</code></pre>
<p>If you use <code>replace</code>, the other values are aggregated with <code>mean</code>:</p>
<pre><code>print (df.groupby('a').resample('2S').mean())
b
a
a 2012-01-01 00:00:00 136.0
2012-01-01 00:00:02 396.0
2012-01-01 00:00:04 NaN
2012-01-01 00:00:06 NaN
2012-01-01 00:00:08 391.5
b 2012-01-01 00:00:10 136.5
2012-01-01 00:00:12 265.5
2012-01-01 00:00:14 313.5
2012-01-01 00:00:16 229.5
2012-01-01 00:00:18 265.0
</code></pre>
<pre><code>print (df.groupby('a').resample('2S').mean().fillna(0))
b
a
a 2012-01-01 00:00:00 136.0
2012-01-01 00:00:02 396.0
2012-01-01 00:00:04 0.0
2012-01-01 00:00:06 0.0
2012-01-01 00:00:08 391.5
b 2012-01-01 00:00:10 136.5
2012-01-01 00:00:12 265.5
2012-01-01 00:00:14 313.5
2012-01-01 00:00:16 229.5
2012-01-01 00:00:18 265.0
print (df.groupby('a').resample('2S').replace(np.nan,0))
b
a
a 2012-01-01 00:00:00 136.0
2012-01-01 00:00:02 396.0
2012-01-01 00:00:04 0.0
2012-01-01 00:00:06 0.0
2012-01-01 00:00:08 391.5
b 2012-01-01 00:00:10 136.5
2012-01-01 00:00:12 265.5
2012-01-01 00:00:14 313.5
2012-01-01 00:00:16 229.5
2012-01-01 00:00:18 265.0
</code></pre>
<p><strong>Upsampling</strong>:</p>
<p>Use <code>asfreq</code>; it gives the same result as <code>replace</code>:</p>
<pre><code>print (df.groupby('a').resample('200L').asfreq().fillna(0))
a b
a
a 2012-01-01 00:00:00.000 a 37.0
2012-01-01 00:00:00.200 0 0.0
2012-01-01 00:00:00.400 0 0.0
2012-01-01 00:00:00.600 0 0.0
2012-01-01 00:00:00.800 0 0.0
2012-01-01 00:00:01.000 a 235.0
2012-01-01 00:00:01.200 0 0.0
2012-01-01 00:00:01.400 0 0.0
2012-01-01 00:00:01.600 0 0.0
2012-01-01 00:00:01.800 0 0.0
2012-01-01 00:00:02.000 a 396.0
2012-01-01 00:00:02.200 0 0.0
2012-01-01 00:00:02.400 0 0.0
...
print (df.groupby('a').resample('200L').replace(np.nan,0))
b
a
a 2012-01-01 00:00:00.000 37.0
2012-01-01 00:00:00.200 0.0
2012-01-01 00:00:00.400 0.0
2012-01-01 00:00:00.600 0.0
2012-01-01 00:00:00.800 0.0
2012-01-01 00:00:01.000 235.0
2012-01-01 00:00:01.200 0.0
2012-01-01 00:00:01.400 0.0
2012-01-01 00:00:01.600 0.0
2012-01-01 00:00:01.800 0.0
2012-01-01 00:00:02.000 396.0
2012-01-01 00:00:02.200 0.0
2012-01-01 00:00:02.400 0.0
...
</code></pre>
<pre><code>print ((df.groupby('a').resample('200L').replace(np.nan,0).b ==
df.groupby('a').resample('200L').asfreq().fillna(0).b).all())
True
</code></pre>
<p><strong>Conclusion</strong>:</p>
<p>For downsampling, use an aggregating function such as <code>sum</code>, <code>first</code> or <code>mean</code>; for upsampling, use <code>asfreq</code>.</p>
| 1 |
2016-09-13T08:57:26Z
|
[
"python",
"pandas"
] |
Faster For Loop to Manipulate Data in Pandas
| 39,452,097 |
<p>I am working with pandas dataframes that have shapes of ~<code>(100000, 50)</code> and although I can achieve the desired data formatting and manipulations, I find my code takes longer than desired to run (3-10mins) depending on the specific task, including:</p>
<ol>
<li>Combining strings in different columns</li>
<li>Applying a function to each instance within a data frame series</li>
<li>Checking if a value is contained within a separate list or numpy array</li>
</ol>
<p>I will have larger data frames in the future and want to ensure I'm using the appropriate coding methods to avoid very long processing times. I find my <code>for</code> loops take the longest. I try to avoid <code>for</code> loops with list comprehensions and series operators (e.g. <code>df.loc[:,'C'] = df.A + df.B</code>) but in some cases, I need to perform more complicated/involved manipulations with nested <code>for</code> loops. For example, the below iterates through a dataframe's series <code>history</code> (a series of lists), and subsequently iterates through each item within each <code>list</code>:</p>
<pre><code>for row in DF.iterrows():
removelist = []
for i in xrange(0, len(row[1]['history'])-1):
if ((row[1]['history'][i]['title'] == row[1]['history'][i+1]['title']) &
(row[1]['history'][i]['dept'] == row[1]['history'][i+1]['dept']) &
(row[1]['history'][i]['office'] == row[1]['history'][i+1]['office']) &
(row[1]['history'][i]['employment'] == row[1]['history'][i+1]['employment'])):
removelist.append(i)
newlist = [v for i, v in enumerate(row[1]['history']) if i not in removelist]
</code></pre>
<p>I know that list comprehensions can accommodate nested <code>for</code> loops but the above would seem really cumbersome within a list comprehension.</p>
<p><strong>My questions:</strong> what other techniques can I use to achieve the same functionality as a <code>for</code> loop with shorter processing time? And when iterating through a series containing lists, should I use a different technique other than a nested <code>for</code> loop?</p>
| 1 |
2016-09-12T13:58:27Z
| 39,452,439 |
<p>So what you seem to have here is a dataframe where the history entry of each row contains a list of dictionaries? Like:</p>
<pre><code>import pandas as pd
john_history = [{'title': 'a', 'dept': 'cs'}, {'title': 'cj', 'dept': 'sales'}]
jill_history = [{'title': 'boss', 'dept': 'cs'}, {'title': 'boss', 'dept': 'cs'}, {'title': 'junior', 'dept': 'cs'}]
df = pd.DataFrame({'history': [john_history, jill_history],
'firstname': ['john', 'jill']})
</code></pre>
<p>I would restructure your data so that you use pandas structures at the bottom level of your structure, e.g. a dict of DataFrames where each DataFrame is the history (I don't think Panel works here as the DataFrames may have different lengths):</p>
<pre><code>john_history = pd.DataFrame({'title': ['a', 'cj'], 'dept': ['cs', 'sales']})
john_history['name'] = 'john'
jill_history = pd.DataFrame({'title': ['boss', 'boss', 'junior'], 'dept': ['cs', 'cs', 'cs']})
jill_history['name'] = 'jill'
people = pd.concat([john_history, jill_history])
</code></pre>
<p>You can then process them using groupby like:</p>
<pre><code>people.groupby('name').apply(pd.DataFrame.drop_duplicates)
</code></pre>
<p>In general, if you cannot find the functionality you want within pandas/numpy, you should find that using the pandas primitives to create it rather than iterating over a dataframe will be faster. For example to recreate your logic above, first create a new dataframe that is the first one shifted:</p>
<pre><code>df2 = df.shift()
</code></pre>
<p>Now you can create a selection by comparing the contents of the dataframes and only keeping the ones that are different and use that to filter the dataframe:</p>
<pre><code># illustrated on a simple df with scalar 'history' and 'title' columns
selection_array = (df.history == df2.history) & (df.title == df2.title)
unduplicated_consecutive = df[~selection_array]
print(unduplicated_consecutive)
history id title
0 a 1 x
1 b 2 y
# or in one line:
df[~((df.history == df2.history) & (df.title == df2.title))]
# or:
df[(df.history != df2.history) | (df.title != df2.title)]
</code></pre>
<p>So putting this into the groupby:</p>
<pre><code>def drop_consecutive_duplicates(df):
df2 = df.shift()
return df.drop(df[(df.dept == df2.dept) & (df.title == df2.title)].index)
people.groupby('name').apply(drop_consecutive_duplicates)
</code></pre>
| 1 |
2016-09-12T14:14:40Z
|
[
"python",
"pandas",
"numpy"
] |
django ORM - prevent direct setting of model fields
| 39,452,165 |
<p>I have a Django class </p>
<pre><code>class Chat(models.Model):
primary_node = models.ForeignKey('nodes.Node', blank=True, null=True, related_name='chats_at_this_pri_node', on_delete=models.SET_NULL)
secondary_node = models.ForeignKey('nodes.Node', blank=True, null=True, related_name='chats_at_this_sec_node', on_delete=models.SET_NULL)
</code></pre>
<p>I want to forbid direct assigning of fields, such as</p>
<pre><code>chat.primary_node = some_node
</code></pre>
<p>and instead create a method <code>chat.assign_node(primary, secondary)</code> that updates nodes via Django <code>Chat.update()</code> model method. </p>
<p>The reason is that I want to log all changes to these nodes (count changes and update other model fields with the new count), but I don't want myself or other developers to forget that we cannot assign these fields directly, as that wouldn't trigger the custom <code>assign_node</code> logic. </p>
<p>How can I do that? </p>
| 0 |
2016-09-12T14:01:31Z
| 39,452,271 |
<p>This simply disables setting of 'primary_node':</p>
<pre><code>class Chat(models.Model):
    def __setattr__(self, attrname, val):
        if attrname == 'primary_node':
            # refuse direct assignment and point to the sanctioned method
            print('[!] You cannot assign primary_node like that! Use assign_node() method please.')
        else:
            super(Chat, self).__setattr__(attrname, val)
</code></pre>
| 0 |
2016-09-12T14:06:15Z
|
[
"python",
"django"
] |
django ORM - prevent direct setting of model fields
| 39,452,165 |
<p>I have a Django class </p>
<pre><code>class Chat(models.Model):
primary_node = models.ForeignKey('nodes.Node', blank=True, null=True, related_name='chats_at_this_pri_node', on_delete=models.SET_NULL)
secondary_node = models.ForeignKey('nodes.Node', blank=True, null=True, related_name='chats_at_this_sec_node', on_delete=models.SET_NULL)
</code></pre>
<p>I want to forbid direct assigning of fields, such as</p>
<pre><code>chat.primary_node = some_node
</code></pre>
<p>and instead create a method <code>chat.assign_node(primary, secondary)</code> that updates nodes via Django <code>Chat.update()</code> model method. </p>
<p>The reason is that I want to log all changes to these nodes (count changes and update other model fields with the new count), but I don't want myself or other developers to forget that we cannot assign these fields directly, as that wouldn't trigger the custom <code>assign_node</code> logic. </p>
<p>How can I do that? </p>
| 0 |
2016-09-12T14:01:31Z
| 39,454,176 |
<p>You could try to prevent assignment to these fields with <code>__setattr__</code>, but I recommend you don't do that, for two reasons:</p>
<ol>
<li>this will probably introduce all kinds of unexpected side-effects</li>
<li>you are performing a check at runtime, to prevent something introduced at the time the code was written. That's like running a (tiny) unit-test on every request.</li>
</ol>
<p>I would simply rename the fields to <code>_primary_node</code> and <code>_secondary_node</code> to indicate that these are private fields and not intended to be used directly.</p>
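<p>A minimal sketch of that layout, assuming the logging/counting happens inside <code>assign_node</code> (the method body here is illustrative only):</p>
<pre><code>class Chat(models.Model):
    _primary_node = models.ForeignKey('nodes.Node', blank=True, null=True,
                                      related_name='chats_at_this_pri_node',
                                      on_delete=models.SET_NULL)
    _secondary_node = models.ForeignKey('nodes.Node', blank=True, null=True,
                                        related_name='chats_at_this_sec_node',
                                        on_delete=models.SET_NULL)

    def assign_node(self, primary, secondary):
        # single entry point: do your logging / counter updates here
        self._primary_node = primary
        self._secondary_node = secondary
        self.save(update_fields=['_primary_node', '_secondary_node'])
</code></pre>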
<p>In addition, you could write a hook for your version control system that checks for assignments to these fields. This could be a simple grep for <code>_primary_node =</code> or something fancier like a plugin for a linter like flake8.</p>
| 0 |
2016-09-12T15:46:53Z
|
[
"python",
"django"
] |
At which moment and how often are executed the __init__.py files by python
| 39,452,319 |
<p>Can someone clarify: when using the <code>import</code> statement, at which moment are the <em>__init__.py</em> files in the various package directories executed?</p>
<ol>
<li>For each included module?</li>
<li>Only once at 1st <code>import</code> command?</li>
<li>For each <code>import</code> command?</li>
</ol>
| 0 |
2016-09-12T14:08:19Z
| 39,455,964 |
<p>It's executed on the first import of the module. On subsequent imports, the interpreter detects that the module was already loaded and simply returns a reference to it. There is no need to re-execute the code.</p>
<p>Quoting <a href="https://docs.python.org/3/reference/import.html" rel="nofollow">The import system</a>:</p>
<p>On caching modules:</p>
<blockquote>
<p>The first place checked during import search is sys.modules. This mapping serves as a cache of all modules that have been previously imported, including the intermediate paths. So if foo.bar.baz was previously imported, sys.modules will contain entries for foo, foo.bar, and foo.bar.baz. Each key will have as its value the corresponding module object.</p>
<p><strong>During import, the module name is looked up in sys.modules and if present, the associated value is the module satisfying the import, and the process completes.</strong> However, if the value is None, then an ImportError is raised. If the module name is missing, Python will continue searching for the module.</p>
</blockquote>
<p>On executing <code>__init__</code> when importing:</p>
<blockquote>
<p>Python defines two types of packages, regular packages and namespace packages. Regular packages are traditional packages as they existed in Python 3.2 and earlier. A regular package is typically implemented as a directory containing an <code>__init__.py</code> file. <strong>When a regular package is imported, this <code>__init__.py</code> file is implicitly executed, and the objects it defines are bound to names in the package's namespace. The <code>__init__.py</code> file can contain the same Python code that any other module can contain, and Python will add some additional attributes to the module when it is imported.</strong></p>
</blockquote>
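<p>A minimal sketch demonstrating the caching, assuming a package directory <code>pkg/</code> whose <code>__init__.py</code> contains a single <code>print</code>:</p>
<pre><code># pkg/__init__.py
print("initializing pkg")

# main.py
import pkg                 # prints "initializing pkg" (first import runs the file)
import pkg                 # no output: pkg is already cached in sys.modules

import importlib
importlib.reload(pkg)      # prints again: reload re-executes the module body
</code></pre>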
| 0 |
2016-09-12T17:49:47Z
|
[
"python",
"python-import"
] |
Insert elements in lists under array python
| 39,452,531 |
<p>I have a array consisting of tuples.</p>
<pre><code>Data = [('1234', 'abcd'), ('5678', 'efgh')]
</code></pre>
<p>I now have another set of variables in an array:</p>
<pre><code>add = ["#happy", "#excited"]
</code></pre>
<p>I'm trying to append 'add' to 'Data' in the same order such that the output should look like:</p>
<pre><code>data_new = [('1234', 'abcd', '#happy'), ('5678', 'efgh',"#excited")]
</code></pre>
<p>Is that possible? </p>
| 0 |
2016-09-12T14:19:54Z
| 39,452,833 |
<p>You can use list comprehension with <code>enumerate()</code>:</p>
<pre><code>>>> Data = [('1234', 'abcd'), ('5678', 'efgh')]
>>> add = ['#happy', '#excited']
>>> [x + (add[i],) for i,x in enumerate(Data)]
[('1234', 'abcd', '#happy'), ('5678', 'efgh', '#excited')]
</code></pre>
<p>Note that a common pythonic way to solve this type of problem is with <code>zip()</code>, but it doesn't immediately give the desired output for your example because you end up with nested tuples:</p>
<pre><code>>>> zip(Data,add) # or list(zip(Data,add)) in Python3
[(('1234', 'abcd'), '#happy'), (('5678', 'efgh'), '#excited')]
</code></pre>
| 2 |
2016-09-12T14:34:46Z
|
[
"python",
"arrays",
"list",
"python-2.7",
"tuples"
] |
Insert elements in lists under array python
| 39,452,531 |
<p>I have a array consisting of tuples.</p>
<pre><code>Data = [('1234', 'abcd'), ('5678', 'efgh')]
</code></pre>
<p>I now have another set of variables in an array:</p>
<pre><code>add = ["#happy", "#excited"]
</code></pre>
<p>I'm trying to append 'add' to 'Data' in the same order such that the output should look like:</p>
<pre><code>data_new = [('1234', 'abcd', '#happy'), ('5678', 'efgh',"#excited")]
</code></pre>
<p>Is that possible? </p>
| 0 |
2016-09-12T14:19:54Z
| 39,454,115 |
<p>You can add tuples in a list comprehension and use zip:</p>
<pre><code>>>> [t+(e,) for t, e in zip(Data, add)]
[('1234', 'abcd', '#happy'), ('5678', 'efgh', '#excited')]
</code></pre>
<p>(works in Python 2 and 3)</p>
| 1 |
2016-09-12T15:42:58Z
|
[
"python",
"arrays",
"list",
"python-2.7",
"tuples"
] |
Issues using json.load
| 39,452,587 |
<p>I am trying to print a list of commit messages from a git repo using python. The code I am using so far is:</p>
<pre><code>import requests, json, pprint
password = "password"
user = "user"
r = requests.get("https://api.github.com/repos/MyProduct/ios-app/commits", auth=(user, password))
j = json.load(r.json())
jsonData = j["data"]
for item in jsonData:
message = item.get("message")
print message
</code></pre>
<p>I'm not totally sure what I should be doing here. After making the HTTP request, is it correct that I need to take the JSON and convert it to a Python object? Currently I'm getting the error <code>TypeError: expected string or buffer</code>. What am I doing wrong here? Any pointers would be really appreciated. Thanks</p>
| -1 |
2016-09-12T14:22:32Z
| 39,452,668 |
<p>The <code>.json()</code> method on a requests response object already returns a parsed Python object (here, a <code>dict</code> or <code>list</code>). There is no need to parse it again, so just do <code>j = r.json()</code>.</p>
<p>Use <code>json.load</code> to get a dict from <a href="https://docs.python.org/3/glossary.html#term-file-like-object" rel="nofollow">file-like objects</a> and <code>json.loads</code> with strings.</p>
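<p>A minimal sketch of the corrected loop, assuming the GitHub v3 commits endpoint (it returns a JSON array, and each element nests the message under <code>commit.message</code> rather than a top-level <code>"data"</code> key):</p>
<pre><code>import requests

r = requests.get("https://api.github.com/repos/MyProduct/ios-app/commits",
                 auth=(user, password))
commits = r.json()                     # already a list of dicts
for item in commits:
    print item["commit"]["message"]
</code></pre>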
| 2 |
2016-09-12T14:25:52Z
|
[
"python",
"json"
] |
Getting an N-N relation in a template
| 39,452,708 |
<p>I have these models (an owner can have multiple cars, and a car can have multiple owners):</p>
<pre><code>class Car(models.Model):
name = models.CharField(max_length=20)
class Owner(models.Model):
name = models.CharField(max_length=20)
class CarOwner(models.Model):
    car = models.ForeignKey(Car, default=0, on_delete=models.CASCADE)
    owner = models.ForeignKey(Owner, default=0, on_delete=models.CASCADE)
</code></pre>
<p>and I want to create a template that shows a list of cars with their owners, so I did this:</p>
<pre><code>def cars(request):
    cars = models.Car.objects.all()
return render(request, 'template.html', { "cars": cars })
</code></pre>
<p>My <code>template.html</code> is as simple as</p>
<pre><code>{% for car in cars %}
{{ car.name }} <br>
{% endfor %}
</code></pre>
<p>How can I now refer to all owners per car in the <code>template.html</code> file?</p>
| 0 |
2016-09-12T14:27:31Z
| 39,452,780 |
<p>CarOwner is the through table of a many-to-many relationship. You should make that explicit:</p>
<pre><code>class Car(models.Model):
name = models.CharField(max_length=20)
owners = models.ManyToManyField('Owner', through='CarOwner')
</code></pre>
<p>Now you can do:</p>
<pre><code>{% for car in cars %}
{{ car.name }} <br>
{% for owner in car.owners.all %}
{{ owner.name }}
{% endfor %}
{% endfor %}
</code></pre>
| 0 |
2016-09-12T14:30:59Z
|
[
"python",
"django"
] |
Python dict.fromkeys return same id element
| 39,452,721 |
<p>When I do this in Python, I update all keys at once.</p>
<pre><code>>>> base = {}
>>> keys = ['a', 'b', 'c']
>>> base.update(dict.fromkeys(keys, {}))
>>> base.get('a')['d'] = {}
>>> base
{'a': {'d': {}}, 'c': {'d': {}}, 'b': {'d': {}}}
>>> map(id, base.values())
[140536040273352, 140536040273352, 140536040273352]
</code></pre>
<p>If I use the <code>[]</code> operator instead of <code>.get</code>, this does not happen:</p>
<pre><code>>>> base['a']['d'] = {}
>>> base
{'a': {'d': {}}, 'c': {}, 'b': {}}
</code></pre>
<p>Why?</p>
| -1 |
2016-09-12T14:28:04Z
| 39,452,781 |
<p>When you initialize the value for the new keys as <code>{}</code>, a single dictionary is created and the same reference to it becomes the value for every key. There is only one dictionary, so if you change it through one key, you change it for "all" of them.</p>
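<p>A minimal sketch of the fix: a dict comprehension creates a fresh dictionary per key:</p>
<pre><code>>>> base = {}
>>> keys = ['a', 'b', 'c']
>>> base.update({k: {} for k in keys})  # one new dict per key
>>> base.get('a')['d'] = {}
>>> base
{'a': {'d': {}}, 'c': {}, 'b': {}}
</code></pre>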
| 0 |
2016-09-12T14:31:02Z
|
[
"python",
"dictionary"
] |
Python dict.fromkeys return same id element
| 39,452,721 |
<p>When I do this in Python, I update all keys at once.</p>
<pre><code>>>> base = {}
>>> keys = ['a', 'b', 'c']
>>> base.update(dict.fromkeys(keys, {}))
>>> base.get('a')['d'] = {}
>>> base
{'a': {'d': {}}, 'c': {'d': {}}, 'b': {'d': {}}}
>>> map(id, base.values())
[140536040273352, 140536040273352, 140536040273352]
</code></pre>
<p>If I use the <code>[]</code> operator instead of <code>.get</code>, this does not happen:</p>
<pre><code>>>> base['a']['d'] = {}
>>> base
{'a': {'d': {}}, 'c': {}, 'b': {}}
</code></pre>
<p>Why?</p>
| -1 |
2016-09-12T14:28:04Z
| 39,453,198 |
<p>I tried it with both Python 2.7.6 and 3.4.3, and I get the same answer whether <code>get('a')</code> or <code>['a']</code> is used. I'd appreciate it if you could verify this at your end. Python reuses objects here: <code>dict.fromkeys()</code> reuses the single empty <code>dict</code> it is given as the value for every key. To make each one a separate object, you can do this:</p>
<pre><code>base.update(zip(keys, ({} for _ in keys)))
</code></pre>
| 0 |
2016-09-12T14:54:04Z
|
[
"python",
"dictionary"
] |
Coloring given words in a text with Python
| 39,452,744 |
<p>I have already read the questions about coloring text with Python and the Colorama package, but I didn't find what I was looking for.</p>
<p>I have some raw text:</p>
<blockquote>
<p>Impossible considered invitation him men instrument saw celebrated unpleasant. Put rest and must set kind next many near nay. He exquisite continued explained middleton am. Voice hours young woody has she think equal.</p>
</blockquote>
<p>And two lists of words:</p>
<pre><code>good = ["instrument", "kind", "exquisite", "young"]
bad = ["impossible", "unpleasant", "woody"]
</code></pre>
<p>I would like to print that text in a terminal so that words in <code>good</code> are displayed in green and words in <code>bad</code> are displayed in red.</p>
<p>I know I could use colorama, check each word sequentially and make a print statement for this word but it doesn't sound like a good solution. Is there an effective way to do that?</p>
| 0 |
2016-09-12T14:28:53Z
| 39,452,869 |
<p>This should work:</p>
<pre><code>from colorama import Fore, Style

# note: split(' ') keeps punctuation attached and membership tests are case-sensitive
for word in text.split(' '):
    if word in good:
        print Fore.GREEN + word,   # good words in green
    elif word in bad:
        print Fore.RED + word,     # bad words in red
    else:
        print Style.RESET_ALL + word,
</code></pre>
| 2 |
2016-09-12T14:36:18Z
|
[
"python"
] |
Coloring given words in a text with Python
| 39,452,744 |
<p>I have already read the questions about coloring text with Python and the Colorama package, but I didn't find what I was looking for.</p>
<p>I have some raw text:</p>
<blockquote>
<p>Impossible considered invitation him men instrument saw celebrated unpleasant. Put rest and must set kind next many near nay. He exquisite continued explained middleton am. Voice hours young woody has she think equal.</p>
</blockquote>
<p>And two lists of words:</p>
<pre><code>good = ["instrument", "kind", "exquisite", "young"]
bad = ["impossible", "unpleasant", "woody"]
</code></pre>
<p>I would like to print that text in a terminal so that words in <code>good</code> are displayed in green and words in <code>bad</code> are displayed in red.</p>
<p>I know I could use colorama, check each word sequentially and make a print statement for this word but it doesn't sound like a good solution. Is there an effective way to do that?</p>
| 0 |
2016-09-12T14:28:53Z
| 39,452,987 |
<p>You could always do this (though it's probably a little slow):</p>
<pre><code>from colorama import Fore

for word in good:
    text = text.replace(word, Fore.GREEN + word + Fore.RESET)
for word in bad:
    text = text.replace(word, Fore.RED + word + Fore.RESET)
print(text)
</code></pre>
<p><code>re.sub</code> might also be interesting here, especially since you probably don't want to color words that appear inside other words, so you could use <code>r'\bword\b'</code>.</p>
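<p>A minimal sketch of that <code>re.sub</code> variant, assuming whole-word, case-insensitive matching is wanted (the <code>colorize</code> helper is a hypothetical name):</p>
<pre><code>import re
from colorama import Fore

def colorize(text, words, color):
    # \b limits matches to whole words; the lambda preserves the original casing
    pattern = r'\b(' + '|'.join(re.escape(w) for w in words) + r')\b'
    return re.sub(pattern, lambda m: color + m.group(0) + Fore.RESET,
                  text, flags=re.IGNORECASE)

text = colorize(text, good, Fore.GREEN)
text = colorize(text, bad, Fore.RED)
print(text)
</code></pre>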
| 1 |
2016-09-12T14:43:42Z
|
[
"python"
] |