title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---
Separation of a String into Individual Words (Python) | 39,775,070 | <p>So I have this code here:</p>
<pre><code>#assign a string to variable
x = "example text"
#create set to store separated words
xset = []
#create base word
xword = ""
for letter in x:
    if letter == " ":
        #add word
        xset.append(xword)
        #add space
        xset.append(letter)
        #reset base word
    else:
        #add letter
        xword = xword + letter
        #back to line 9
#print set with separated words
print xset
</code></pre>
<p>So, pretty self-explanatory: the for loop looks at each letter in <code>x</code>, and if it isn't a space it adds it to <code>xword</code> (imagine it like <code>xword = "examp" + "l"</code>, which would make <code>xword = "exampl"</code>). If it is a space, it adds <code>xword</code> and a space to the set, and resets <code>xword</code>.</p>
<p>My issue is that the word <code>"text"</code> is not being included in the final set. When this code is run, and the set is printed, it gives this: <code>['example', ' ']</code></p>
<p>So why isn't the <code>"text"</code> appearing in the set?</p>
| 0 | 2016-09-29T16:02:17Z | 39,775,213 | <p>Because when your code reaches the space in <code>x</code> it appends <code>xword</code>. But this only happens when it reaches a space. As there are no spaces after "text", the final word is never appended to <code>xset</code>. Also, you were not resetting <code>xword</code>:</p>
<pre><code>#assign a string to variable
x = "example text"
#create set to store separated words
xset = []
#create base word
xword = ""
for letter in x:
    if letter == " ":
        #add word
        xset.append(xword)
        #add space
        xset.append(letter)
        #reset base word
        xword = ""
    else:
        #add letter
        xword = xword + letter
        #back to line 9
#print set with separated words
xset.append(xword)
print xset
</code></pre>
<p>Output:</p>
<pre><code>['example', ' ', 'text']
</code></pre>
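<p>As an aside, the whole loop can be replaced with a one-liner. This is a hedged sketch (Python 3 syntax) using the standard library's <code>re.split</code> with a capturing group, which keeps the space separators in the output just like the manual loop does:</p>

```python
import re

x = "example text"
# A capturing group in the pattern makes re.split keep the separators
parts = re.split(r'( )', x)
print(parts)  # ['example', ' ', 'text']
```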
| 1 | 2016-09-29T16:11:01Z | [
"python",
"string",
"python-2.7"
]
|
Separation of a String into Individual Words (Python) | 39,775,070 | <p>So I have this code here:</p>
<pre><code>#assign a string to variable
x = "example text"
#create set to store separated words
xset = []
#create base word
xword = ""
for letter in x:
    if letter == " ":
        #add word
        xset.append(xword)
        #add space
        xset.append(letter)
        #reset base word
    else:
        #add letter
        xword = xword + letter
        #back to line 9
#print set with separated words
print xset
</code></pre>
<p>So, pretty self-explanatory: the for loop looks at each letter in <code>x</code>, and if it isn't a space it adds it to <code>xword</code> (imagine it like <code>xword = "examp" + "l"</code>, which would make <code>xword = "exampl"</code>). If it is a space, it adds <code>xword</code> and a space to the set, and resets <code>xword</code>.</p>
<p>My issue is that the word <code>"text"</code> is not being included in the final set. When this code is run, and the set is printed, it gives this: <code>['example', ' ']</code></p>
<p>So why isn't the <code>"text"</code> appearing in the set?</p>
| 0 | 2016-09-29T16:02:17Z | 39,775,446 | <p>Just use <code>str.split()</code>. This method returns a list of the words in your string. Here is the documentation on it: <a href="https://docs.python.org/2/library/string.html" rel="nofollow">https://docs.python.org/2/library/string.html</a></p>
<p>If your values are space-separated, do something like this:</p>
<pre><code>r.split()
Out[26]: ['some', 'text', 'here']
</code></pre>
<p>If your separator is different from a space, you can specify it like this:</p>
<pre><code>s.split(',')
Out[21]: ['some', 'text', 'here']
</code></pre>
| 0 | 2016-09-29T16:23:02Z | [
"python",
"string",
"python-2.7"
]
|
Confused with elements sizing in Kivy | 39,775,117 | <p>I've just started to learn Kivy and confused with elements sizing. Let's say I want to create video player time bar with progress bar in middle and time labels on sides.</p>
<p>What I got so far:</p>
<pre><code><TimeLabel@Label>:
    width: 100
    padding: 10, 0
    font_size: '14sp'

<Player>:
    BoxLayout:
        orientation: 'horizontal'
        pos: 0, 0
        size: root.width, 40
        canvas:
            Color:
                rgba: 0, 0.5, 0, 0.8
            Rectangle:
                pos: self.pos
                size: self.size
        TimeLabel:
            size_hint: None, 1
            text: "0"
        ProgressBar:
            value_normalized: 0.5
        TimeLabel:
            size_hint: None, 1
            text: "10:00:00"
</code></pre>
<p>On Windows everything seems to be ok:</p>
<p><a href="http://i.stack.imgur.com/vahKN.png" rel="nofollow"><img src="http://i.stack.imgur.com/vahKN.png" alt="enter image description here"></a></p>
<p>While on my Galaxy S4 not:</p>
<p><a href="http://i.stack.imgur.com/vao2E.png" rel="nofollow"><img src="http://i.stack.imgur.com/vao2E.png" alt="enter image description here"></a></p>
<p>As you can see on the last screenshot, the bar is too low and the progress bar ran into the time label. Of course I could increase the bar height and label widths, but in that case all of this would be too big on Windows.</p>
<p>How can I fix the sizes on Android while keeping the current proportions on Windows?</p>
| 0 | 2016-09-29T16:05:32Z | 39,776,199 | <p>Never ever use default pixel sizing for your widgets, like you did in several places. Use either size hint, or dp unit. This way, your UI will look the same everywhere. More info <a href="https://kivy.org/docs/api-kivy.metrics.html" rel="nofollow">here</a>.</p>
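<p>For example, a hedged sketch of the kv snippet from the question with density-independent sizing (the exact values are only illustrative; <code>dp</code> is available automatically inside kv files):</p>

```kv
<TimeLabel@Label>:
    width: dp(100)
    padding: dp(10), 0
    font_size: '14sp'

<Player>:
    BoxLayout:
        orientation: 'horizontal'
        pos: 0, 0
        size: root.width, dp(40)
```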
| 4 | 2016-09-29T17:07:50Z | [
"python",
"kivy",
"kivy-language"
]
|
Web Scraping : Get graph Coordinates from Webpage | 39,775,220 | <p>I wanted some help with web scraping. I want to retrieve players ranking which are plotted on the graph in this <a href="http://www.icc-cricket.com/player-rankings/profile/sachin-tendulkar" rel="nofollow">link</a></p>
<p>Visit the link. Click on Rating, and then hover over the points in the plot. Its <code>y-coordinate</code> will be displayed along with other details.
I want to extract all those details. </p>
<p>Any help highly appreciated. </p>
<p>I am attaching the screenshot also.
<a href="http://i.stack.imgur.com/PglIR.png" rel="nofollow"><img src="http://i.stack.imgur.com/PglIR.png" alt="enter image description here"></a></p>
| -1 | 2016-09-29T16:11:39Z | 39,775,753 | <p>Each <code>circle</code> element has a <code>cy</code> attribute that we can find with the following:</p>
<pre><code>var circ = document.querySelector('circle')
console.log(circ.getAttribute('cy')) // cy = 123.586...
</code></pre>
<p>This will give you the y coordinate for the first <code>circle</code> element. You can use this idea to get all of them and find out the actual value.</p>
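<p>If you want to pull the values out in Python instead, here is a hedged sketch that parses a static SVG fragment with the standard library. Note the real page builds its <code>circle</code> elements with JavaScript, so you would first need the rendered markup (for example from a browser automation tool); the sample data below is made up:</p>

```python
import xml.etree.ElementTree as ET

# Hypothetical stand-in for the chart's rendered SVG markup
svg = """<svg xmlns="http://www.w3.org/2000/svg">
  <circle cx="10" cy="123.5"/>
  <circle cx="20" cy="98.2"/>
</svg>"""

root = ET.fromstring(svg)
ns = {'svg': 'http://www.w3.org/2000/svg'}
# Collect the cy attribute of every circle element
cys = [float(c.get('cy')) for c in root.findall('svg:circle', ns)]
print(cys)  # [123.5, 98.2]
```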
| 0 | 2016-09-29T16:40:38Z | [
"javascript",
"python",
"html",
"web-scraping"
]
|
Removing elements from the list when looping over it | 39,775,231 | <p>This part of my code does not scale if <strong>dimension</strong> gets bigger. </p>
<p>I loop over my data and accumulate it in every <strong>dt</strong> time window. To do this I compare the lower and upper time values. When I reach the upper bound, I break the <strong>for loop</strong> for efficiency. The next time I run the <strong>for loop</strong> I want to start not from its beginning but from the element where I previously stopped. <strong>How can I do that?</strong></p>
<p>I tried to remove/pop elements of the list but indexes get messed up. I read that I cannot modify the list I loop over, but my goal seems to be not uncommon so there has to be solution. I don't care about original <strong>data</strong> list later in my code, I only want optimization of my accumulation.</p>
<pre><code># Here I generate data for you to show my problem
from random import randint
import numpy as np

dimension = 200
times = [randint(0, 1000) for p in range(0, dimension)]
times.sort()
values = [randint(0, dimension - 1) for p in range(0, dimension)]
data = [(values[k], times[k]) for k in range(dimension)]

dt = 50.0
t = min(times)
pixels = []
timestamps = []

# this is my problem
while (t <= max(times)):
    accumulator = np.zeros(dimension)
    for idx, content in enumerate(data):
        # comparing lower bound of the 'time' window
        if content[1] >= t:
            # comparing upper bound of the 'time' window
            if (content[1] < t + dt):
                accumulator[content[0]] += 1
                # if I pop the first element from the list after accumulating, indexes are screwed when looping further
                # data.pop(0)
            else:
                # all further entries are bigger because they are sorted
                break
    pixels.append(accumulator)
    timestamps.append(t)
    t += dt
</code></pre>
| 0 | 2016-09-29T16:12:00Z | 39,777,695 | <p>In a simpler form, I think you are trying to do:</p>
<pre><code>In [158]: times=[0, 4, 6, 10]

In [159]: data=np.arange(12)

In [160]: cnt=[0 for _ in times]

In [161]: for i in range(len(times)-1):
     ...:     for d in data:
     ...:         if d>=times[i] and d<times[i+1]:
     ...:             cnt[i]+=1
     ...:

In [162]: cnt
Out[162]: [4, 2, 4, 0]
</code></pre>
<p>And you are trying to make this <code>data</code> loop more efficient by breaking from the loop when <code>d</code> gets too large, and by starting the next loop after items which have already been counted.</p>
<p>Adding the break is easy as you've done:</p>
<pre><code>In [163]: cnt=[0 for _ in times]

In [164]: for i in range(len(times)-1):
     ...:     for d in data:
     ...:         if d>=times[i]:
     ...:             if d<times[i+1]:
     ...:                 cnt[i]+=1
     ...:             else:
     ...:                 break

In [165]: cnt
Out[165]: [4, 2, 4, 0]
</code></pre>
<p>One way to skip the counted stuff is to replace the <code>for d in data</code> with an index loop; and keep track of where we stopped last time around:</p>
<pre><code>In [166]: cnt=[0 for _ in times]

In [167]: start=0
     ...: for i in range(len(times)-1):
     ...:     for j in range(start,len(data)):
     ...:         d = data[j]
     ...:         if d>=times[i]:
     ...:             if d<times[i+1]:
     ...:                 cnt[i]+=1
     ...:             else:
     ...:                 start = j
     ...:                 break
     ...:

In [168]: cnt
Out[168]: [4, 2, 4, 0]
</code></pre>
<p>A <code>pop</code> based version requires that I work with a list (my <code>data</code> is an array), and requires inserting the value back at the break:</p>
<pre><code>In [186]: datal=data.tolist()

In [187]: cnt=[0 for _ in times]

In [188]: for i in range(len(times)-1):
     ...:     while True:
     ...:         d = datal.pop(0)
     ...:         if d>=times[i]:
     ...:             if d<times[i+1]:
     ...:                 cnt[i]+=1
     ...:             else:
     ...:                 datal.insert(0,d)
     ...:                 break
     ...:

In [189]: cnt
Out[189]: [4, 2, 4, 0]

In [190]: datal
Out[190]: [10, 11]
</code></pre>
<p>This isn't perfect, since I still have items on the list at the end (my <code>times</code> don't cover the whole <code>data</code> range). But it tests the idea.</p>
<p>Here's something closer to your attempt:</p>
<pre><code>In [203]: for i in range(len(times)-1):
     ...:     for d in datal[:]:
     ...:         if d>=times[i]:
     ...:             if d<times[i+1]:
     ...:                 cnt[i]+=1
     ...:                 datal.pop(0)
     ...:             else:
     ...:                 break
     ...:
</code></pre>
<p>The key difference is that I iterate on a copy of <code>datal</code>. That way the <code>pop</code> affects <code>datal</code>, but doesn't affect the current iteration. Admittedly there's a cost to the copy, so the speed up might not be significant.</p>
<p>A different approach would be to loop on <code>data</code>, and step <code>time</code> as the <code>t</code> and <code>t+dt</code> boundaries are crossed.</p>
<pre><code>In [222]: times=[0, 4, 6, 10,100]

In [223]: cnt=[0 for _ in times]; i=0

In [224]: for d in data:
     ...:     if d>=times[i]:
     ...:         if d<times[i+1]:
     ...:             cnt[i]+=1
     ...:         else:
     ...:             i += 1
     ...:             cnt[i]+=1
     ...:

In [225]: cnt
Out[225]: [4, 2, 4, 2, 0]
</code></pre>
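<p>Since the data is sorted, the whole accumulation can also be vectorized. This is a hedged sketch with NumPy (using the small example values from above) that reproduces the half-open <code>[t, t+dt)</code> windows via <code>searchsorted</code> and <code>bincount</code>:</p>

```python
import numpy as np

data = np.arange(12)
times = [0, 4, 6, 10]

# Index of the window each value falls into; side='right' makes the
# windows half-open on the right, matching the explicit loops above
idx = np.searchsorted(times, data, side='right') - 1
# Discard values at or beyond the last window boundary
valid = (idx >= 0) & (idx < len(times) - 1)
cnt = np.bincount(idx[valid], minlength=len(times) - 1)
print(cnt)  # [4 2 4]
```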
| 0 | 2016-09-29T18:39:26Z | [
"python",
"list",
"numpy",
"iteration",
"pop"
]
|
Python input never equals an integer | 39,775,273 | <p>I want to insert a number and if I put any number other than 4 it will tell me it's wrong, but if it's false it will tell me "gg you win, noob.". However when I insert 4, it tells me it's incorrect.</p>
<pre><code>x = input("Insert a number: ")
while x != 4:
    print("incorrect")
    x = input("Insert another number: ")
if x == 4:
    print("gg you win, noob")
</code></pre>
| 1 | 2016-09-29T16:14:19Z | 39,775,336 | <p>In Python 3+, <code>input</code> returns a string, and <code>4</code> does not equal <code>'4'</code>. You will have to amend to:</p>
<pre><code>while x != '4':
</code></pre>
<p>or alternatively use <code>int</code>, being careful to check for a <code>ValueError</code> if the input is not an int.</p>
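<p>A hedged sketch of that second approach (the helper name and prompt handling are just illustrative; <code>reader</code> defaults to the built-in <code>input</code> and is a parameter only so the loop is easy to test):</p>

```python
def read_int(prompt, reader=input):
    """Keep asking until the reader supplies a valid integer."""
    while True:
        raw = reader(prompt)
        try:
            return int(raw)
        except ValueError:
            print("Please enter a whole number")

# Example: the game loop from the question, using the helper
# x = read_int("Insert a number: ")
# while x != 4:
#     print("incorrect")
#     x = read_int("Insert another number: ")
# print("gg you win, noob")
```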
| 3 | 2016-09-29T16:17:06Z | [
"python"
]
|
Python input never equals an integer | 39,775,273 | <p>I want to insert a number and if I put any number other than 4 it will tell me it's wrong, but if it's false it will tell me "gg you win, noob.". However when I insert 4, it tells me it's incorrect.</p>
<pre><code>x = input("Insert a number: ")
while x != 4:
    print("incorrect")
    x = input("Insert another number: ")
if x == 4:
    print("gg you win, noob")
</code></pre>
| 1 | 2016-09-29T16:14:19Z | 39,775,339 | <p>The result from <code>input()</code> will be a string, which you'll need to convert to an integer before comparing it:</p>
<pre><code>x = int(input("Insert another number: "))
</code></pre>
<p>This will raise a <code>ValueError</code> if your input is not a number.</p>
| 1 | 2016-09-29T16:17:14Z | [
"python"
]
|
Python input never equals an integer | 39,775,273 | <p>I want to insert a number and if I put any number other than 4 it will tell me it's wrong, but if it's false it will tell me "gg you win, noob.". However when I insert 4, it tells me it's incorrect.</p>
<pre><code>x = input("Insert a number: ")
while x != 4:
    print("incorrect")
    x = input("Insert another number: ")
if x == 4:
    print("gg you win, noob")
</code></pre>
| 1 | 2016-09-29T16:14:19Z | 39,775,471 | <p>Here, <code>if x == 4</code> is not necessary. Because until <code>x</code> is equal to <code>4</code> the <code>while</code> loop won't be passed. You can try like this:</p>
<pre><code>x = int(input("Insert a number: "))
while x != 4:
    print("incorrect")
    x = int(input("Insert another number: "))
print("gg you win, noob")
| 0 | 2016-09-29T16:24:43Z | [
"python"
]
|
Python input never equals an integer | 39,775,273 | <p>I want to insert a number and if I put any number other than 4 it will tell me it's wrong, but if it's false it will tell me "gg you win, noob.". However when I insert 4, it tells me it's incorrect.</p>
<pre><code>x = input("Insert a number: ")
while x != 4:
    print("incorrect")
    x = input("Insert another number: ")
if x == 4:
    print("gg you win, noob")
</code></pre>
| 1 | 2016-09-29T16:14:19Z | 39,776,181 | <p>Python 2 and 3 differ in the function <code>input()</code>.</p>
<ul>
<li>In Python 2, <code>input()</code> is equivalent to <code>eval(raw_input())</code>.</li>
<li>In Python 3, there is no <code>raw_input()</code>, but <code>input()</code> works like Python 2's <code>raw_input()</code>.</li>
</ul>
<p>In your case:</p>
<ul>
<li>In Python 2, <code>input()</code> gives you <code>4</code> with type <code>int</code>, so your program works.</li>
<li>In Python 3, <code>input()</code> gives you <code>'4'</code> with type <code>str</code>, so your program is buggy.</li>
</ul>
<p>In Python 3, one way to fix this is to use <code>eval(input())</code>. But using <code>eval</code> on an untrusted string is very dangerous (yes your program works dangerously in Python 2). So you should validate the input first.</p>
| 0 | 2016-09-29T17:07:23Z | [
"python"
]
|
Python input never equals an integer | 39,775,273 | <p>I want to insert a number and if I put any number other than 4 it will tell me it's wrong, but if it's false it will tell me "gg you win, noob.". However when I insert 4, it tells me it's incorrect.</p>
<pre><code>x = input("Insert a number: ")
while x != 4:
    print("incorrect")
    x = input("Insert another number: ")
if x == 4:
    print("gg you win, noob")
</code></pre>
| 1 | 2016-09-29T16:14:19Z | 39,780,497 | <p>Try this:</p>
<pre><code>z = 0
while z != "gg you win, noob":
    try:
        x = int(input("Insert a number: "))
        while x != 4:
            print("incorrect")
            x = int(input("Insert another number: "))
        if x == 4:
            z = "gg you win, noob"
            print(z)
    except ValueError:
        print('Only input numbers')
</code></pre>
<p>This will convert all your input values into integers. If you do not input an integer, the <code>except</code> statement will prompt you to only input numbers, and the outer <code>while</code> loop will repeat your script from the beginning instead of raising an error.</p>
| 0 | 2016-09-29T21:43:03Z | [
"python"
]
|
Pandas: remove duplicate record while keeping its old value in dataframe for reference | 39,775,337 | <p>I am rewriting a piece of the old code using pandas. My data frame looks like this:</p>
<pre><code>index stop_id stop_name stop_lat stop_lon stop_id2
0 A12 Some St 40.889248 -73.898583 None
1 A14 Some St 40.889758 -73.908573 None
2 B09 Some St 40.788924 -74.846576 None
3 A22 Some St 40.889248 -73.898583 None
</code></pre>
<p>Note that stop_lat and stop_lon are duplicated for stop_ids 'A12' and 'A22'.</p>
<p>I want to remove the duplicate stop (stop_id='A22') while updating stop_id2 with the removed record's stop_id. So the data frame would look like this:</p>
<pre><code>index stop_id stop_name stop_lat stop_lon stop_id2
0 A12 Some St 40.889248 -73.898583 A22
1 A14 Some St 40.889758 -73.908573 None
2 B09 Some St 40.788924 -74.846576 None
</code></pre>
<p>Previously I have worked this task with keeping my data in dictionary:</p>
<pre><code>d={'A12':['Some St', 40.889248, -73.898583, None],'A14': ['Some St', 40.889758, -73.908573, None],'B09':['Some St, 40.788924,-74.846576, None], 'A22':['Some St', 40.889248, -73.898583, None]}
if d['A12'][1]+d['A12'][2]==d['A22'][1]+d['A22'][2]:
del d['A22']
d['A12'][-1]='A22'
</code></pre>
<p>I want to do a similar task in pandas. I know if I just use:
df=df.drop_duplicates(['stop_lat','stop_lon'])</p>
<p>I will lose the duplicate record and won't retain its id. I need to keep the id of the removed stop for proper metadata.</p>
| 1 | 2016-09-29T16:17:07Z | 39,775,530 | <pre><code>new_df = df[df.duplicated(subset = ['stop_lat', 'stop_lon'], keep='first')]
duplicates_df = df[df.duplicated(subset = ['stop_lat', 'stop_lon'], keep = 'last')][['stop_lat', 'stop_lon', 'stop_id']]
new_df.merge(duplicates_df, how='left', on=['stop_lat, 'stop_lon'])
</code></pre>
| 1 | 2016-09-29T16:28:00Z | [
"python",
"pandas"
]
|
Pandas: remove duplicate record while keeping its old value in dataframe for reference | 39,775,337 | <p>I am rewriting a piece of the old code using pandas. My data frame looks like this:</p>
<pre><code>index stop_id stop_name stop_lat stop_lon stop_id2
0 A12 Some St 40.889248 -73.898583 None
1 A14 Some St 40.889758 -73.908573 None
2 B09 Some St 40.788924 -74.846576 None
3 A22 Some St 40.889248 -73.898583 None
</code></pre>
<p>Note that stop_lat and stop_lon are duplicated for stop_ids 'A12' and 'A22'.</p>
<p>I want to remove the duplicate stop (stop_id='A22') while updating stop_id2 with the removed record's stop_id. So the data frame would look like this:</p>
<pre><code>index stop_id stop_name stop_lat stop_lon stop_id2
0 A12 Some St 40.889248 -73.898583 A22
1 A14 Some St 40.889758 -73.908573 None
2 B09 Some St 40.788924 -74.846576 None
</code></pre>
<p>Previously I have worked this task with keeping my data in dictionary:</p>
<pre><code>d={'A12':['Some St', 40.889248, -73.898583, None],'A14': ['Some St', 40.889758, -73.908573, None],'B09':['Some St', 40.788924, -74.846576, None], 'A22':['Some St', 40.889248, -73.898583, None]}

if d['A12'][1]+d['A12'][2]==d['A22'][1]+d['A22'][2]:
    del d['A22']
    d['A12'][-1]='A22'
</code></pre>
<p>I want to do a similar task in pandas. I know if I just use:
df=df.drop_duplicates(['stop_lat','stop_lon'])</p>
<p>I will lose the duplicate record and won't retain its id. I need to keep the id of the removed stop for proper metadata.</p>
| 1 | 2016-09-29T16:17:07Z | 39,776,026 | <p><strong><em>get duplicate mask</em></strong></p>
<pre><code>cols = ['stop_lat', 'stop_lon']
dups = df.duplicated(subset=cols)
</code></pre>
<p><strong><em>subset df with mask</em></strong></p>
<pre><code>nodups = df[~dups].set_index(cols)
</code></pre>
<p><strong><em>dups may be duplicated themselves</em></strong></p>
<pre><code>first_dup = df[dups].drop_duplicates(subset=cols)
first_dup = first_dup.set_index(cols).stop_id
</code></pre>
<p><strong><em>assign accordingly</em></strong></p>
<pre><code>nodups.loc[first_dup.index, 'stop_id2'] = first_dup
nodups
</code></pre>
<p><a href="http://i.stack.imgur.com/3gta6.png" rel="nofollow"><img src="http://i.stack.imgur.com/3gta6.png" alt="enter image description here"></a></p>
| 2 | 2016-09-29T16:57:38Z | [
"python",
"pandas"
]
|
Python fromtimestamp() method inconsistent | 39,775,408 | <p>I am getting data from a WSO2 DAS, which contains a Unix time stamp. I have been developing this website using PyCharm. I primarily develop on a Windows 10 machine, occasionally on a MAC, and deploy on a Linux box. </p>
<p>I have a fairly simple use case, where I want to take the date from WSO2, convert it to a human readable local time, and display it to the user. <strong>The problem</strong> that I am seeing is different results using Python to convert the time stamp on a Windows and MAC machine. </p>
<pre><code># Convert the millisecond time stamp to seconds
ts = int(hit['timestamp']) / 1000
# Convert the timestamp to a human readable format
ts = datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S')
</code></pre>
<p>The python documentation for the fromtimestamp() method says the following:</p>
<blockquote>
<p>date.fromtimestamp(timestamp)
Return the local date corresponding to the POSIX timestamp, such as is returned by time.time(). This may raise ValueError, if the timestamp is out of the range of values supported by the platform C localtime() function. It's common for this to be restricted to years from 1970 through 2038. Note that on non-POSIX systems that include leap seconds in their notion of a timestamp, leap seconds are ignored by fromtimestamp().</p>
</blockquote>
<p>That documentation can be found here:<a href="https://docs.python.org/2/library/datetime.html" rel="nofollow">https://docs.python.org/2/library/datetime.html</a> </p>
<p>Now this works perfectly on my windows machine. </p>
<p><strong>Line 1 Results on Windows</strong>: 1474560434.73</p>
<p>Plugging this into a timestamp converter results in:</p>
<p>GMT: Thu, 22 Sep 2016 16:07:14 GMT</p>
<p>Your time zone: 9/22/2016, 11:07:14 AM GMT-5:00 DST</p>
<p>Perfect! So I expect the result of line two to be 2016-09-22 11:07:14. This can be seen copied directly from my code:</p>
<p><strong>Line 2 Results on Windows</strong>: '2016-09-22 11:07:14'</p>
<p><strong>Now is where the fun begins.</strong> I have the exact same code on my MAC, also being run from PyCharm. Running the same code, line one results in:</p>
<p><strong>Line 1 Results on MAC</strong>: 1474560434.73</p>
<p>Same as above, we expect thsi to be 11:07:14 AM on 9/22. </p>
<p><strong>Line 2 Results on MAC</strong>: '2016-09-22 16:07:14'</p>
<p>No good. This is the result, still in UTC.</p>
<p>I have confirmed that both computers are set to my local time zone. What is <strong>even more weird</strong> is that while trying to debug this issue I found the following results:</p>
<pre><code>t = time.localtime()
u = time.gmtime(time.mktime(t))
offset = timegm(t) - timegm(u)
</code></pre>
<p>Running the above code in a terminal on my MAC gives the correct local time (UTC-5), the correct UTC time, and the offset of -18000 (which is correct). But! Running that same code in PyCharm on my MAC gives an offset of zero, and shows the local time and UTC time as the same. </p>
<p>I was originally confused by why the results are different on my Windows machine vs. MAC, but could assume that it was an OS difference. But now I am seeing differences on the MAC terminal vs. MAC PyCharm. All I can assume is that my MAC machine is referencing the correct time zone (which is why it works in the terminal) but PyCharm is referencing some other location for timezone and thinks that it is UTC. </p>
| 2 | 2016-09-29T16:21:16Z | 39,776,385 | <p>Python probably uses the TZ from OS/shell environment.</p>
<p>PyCharm seems to use TIME_ZONE found in settings.py file. <a href="https://en.wikipedia.org/wiki/List_of_tz_database_time_zones" rel="nofollow">Here</a> is the complete list of timezones. Try it and see.</p>
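<p>You can reproduce the discrepancy by changing the <code>TZ</code> environment variable before converting. This is a hedged sketch (POSIX-only, since Windows has no <code>time.tzset()</code>); <code>America/Chicago</code> is assumed here because the question's offset of -18000 corresponds to UTC-5 in late September:</p>

```python
import os
import time
from datetime import datetime

ts = 1474560434.73

def convert(tz):
    # fromtimestamp() uses the process-local timezone, driven by TZ here
    os.environ['TZ'] = tz
    time.tzset()  # POSIX only
    return datetime.fromtimestamp(ts).strftime('%Y-%m-%d %H:%M:%S')

print(convert('UTC'))              # 2016-09-22 16:07:14
print(convert('America/Chicago'))  # 2016-09-22 11:07:14
```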
| 2 | 2016-09-29T17:18:50Z | [
"python",
"osx",
"datetime",
"pycharm"
]
|
gitpython returns 'git push --porcelain origin' returned with exit code 128 | 39,775,489 | <p>I'm trying to push a new git repo upstream using gitpython module. Below are the steps that I'm doing and get an error 128.</p>
<pre><code># Initialize a local git repo
init_repo = Repo.init(gitlocalrepodir+"%s" %(gitinitrepo))
# Add a file to this new local git repo
init_repo.index.add([filename])
# Initial commit
init_repo.index.commit('Initial Commit - %s' %(timestr))
# Create remote
init_repo.create_remote('origin', giturl+gitinitrepo+'.git')
# Push upstream (Origin)
init_repo.remotes.origin.push()
</code></pre>
<p>While executing the push(), gitpython throws an exception:</p>
<pre><code>'git push --porcelain origin' returned with exit code 128
</code></pre>
<p>Access to github is via SSH. </p>
<p>Do you see anything wrong that I'm doing?</p>
| 1 | 2016-09-29T16:25:43Z | 39,792,455 | <p>You need to capture the output from the git command.</p>
<p>Given this Progress class:</p>
<pre><code>class Progress(git.RemoteProgress):
    def __init__( self, progress_call_back=None ):
        self.progress_call_back = progress_call_back
        super().__init__()
        self.__all_dropped_lines = []

    def update( self, op_code, cur_count, max_count=None, message='' ):
        pass

    def line_dropped( self, line ):
        if line.startswith( 'POST git-upload-pack' ):
            return
        self.__all_dropped_lines.append( line )

    def allErrorLines( self ):
        return self.error_lines() + self.__all_dropped_lines

    def allDroppedLines( self ):
        return self.__all_dropped_lines
<p>You can write code like this:</p>
<pre><code>progress = Progress()
try:
    for info in remote.push( progress=progress ):
        info_callback( info )
    for line in progress.allDroppedLines():
        log.info( line )

except GitCommandError:
    for line in progress.allErrorLines():
        log.error( line )
    raise
</code></pre>
<p>When you run with this you will still get the 128 error, but you will also have the output of git to explain the problem.</p>
| -1 | 2016-09-30T13:15:29Z | [
"python",
"gitpython"
]
|
Pandas - applying groupings and counts to multiple columns in order to generate/change a dataframe | 39,775,506 | <p>I'm sure what I'm trying to do is fairly simple for those with better knowledge of PD, but I'm simply stuck at transforming:</p>
<pre><code>+---------+------------+-------+
| Trigger | Date | Value |
+---------+------------+-------+
| 1 | 01/01/2016 | a |
+---------+------------+-------+
| 2 | 01/01/2016 | b |
+---------+------------+-------+
| 3 | 01/01/2016 | c |
+---------+------------+-------+
...etc, into:
+------------+---------------------+---------+---------+---------+
| Date | #of triggers | count a | count b | count c |
+------------+---------------------+---------+---------+---------+
| 01/01/2016 | 3 | 1 | 1 | 1 |
+------------+---------------------+---------+---------+---------+
| 02/01/2016 | 5 | 2 | 1 | 2 |
+------------+---------------------+---------+---------+---------+
... and so on
</code></pre>
<p>The issue is, I've got no bloody idea of how to achieve this..
I've scoured SO, but I can't seem to find anything that applies to my specific case.</p>
<p>I presume I'd have to group it all by date, but then once that is done, what do I need to do to get the remaining columns?</p>
<p>The initial DF is loaded from an SQL Alchemy query object, and then I want to manipulate it to get the result I described above. How would one do this?</p>
<p>Thanks</p>
| 1 | 2016-09-29T16:26:39Z | 39,775,612 | <pre><code>df.groupby(['Date','Value']).count().unstack(level=-1)
</code></pre>
| 2 | 2016-09-29T16:32:51Z | [
"python",
"pandas",
"dataframe"
]
|
Pandas - applying groupings and counts to multiple columns in order to generate/change a dataframe | 39,775,506 | <p>I'm sure what I'm trying to do is fairly simple for those with better knowledge of PD, but I'm simply stuck at transforming:</p>
<pre><code>+---------+------------+-------+
| Trigger | Date | Value |
+---------+------------+-------+
| 1 | 01/01/2016 | a |
+---------+------------+-------+
| 2 | 01/01/2016 | b |
+---------+------------+-------+
| 3 | 01/01/2016 | c |
+---------+------------+-------+
...etc, into:
+------------+---------------------+---------+---------+---------+
| Date | #of triggers | count a | count b | count c |
+------------+---------------------+---------+---------+---------+
| 01/01/2016 | 3 | 1 | 1 | 1 |
+------------+---------------------+---------+---------+---------+
| 02/01/2016 | 5 | 2 | 1 | 2 |
+------------+---------------------+---------+---------+---------+
... and so on
</code></pre>
<p>The issue is, I've got no bloody idea of how to achieve this..
I've scoured SO, but I can't seem to find anything that applies to my specific case.</p>
<p>I presume I'd have to group it all by date, but then once that is done, what do I need to do to get the remaining columns?</p>
<p>The initial DF is loaded from an SQL Alchemy query object, and then I want to manipulate it to get the result I described above. How would one do this?</p>
<p>Thanks</p>
| 1 | 2016-09-29T16:26:39Z | 39,776,108 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow"><code>GroupBy.size</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a>, also parameter <code>sort=False</code> is helpful:</p>
<pre><code>df1 = df.groupby(['Date','Value'])['Value'].size().unstack(fill_value=0)
df1['Total'] = df1.sum(axis=1)
cols = df1.columns[-1:].union(df1.columns[:-1])
df1 = df1[cols]
print (df1)
Value       Total  a  b  c
Date
01/01/2016      3  1  1  1
</code></pre>
<p>Also you can check <a class='doc-link' href="http://stackoverflow.com/documentation/pandas/1822/grouping-data/6874/aggregating-by-size-and-count#t=201609291707051483739">SO documentation - differences between size and count</a>.</p>
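<p>A hedged alternative sketch of the same reshaping with <code>pd.crosstab</code> (column order and names are illustrative):</p>

```python
import pandas as pd

df = pd.DataFrame({'Trigger': [1, 2, 3],
                   'Date': ['01/01/2016', '01/01/2016', '01/01/2016'],
                   'Value': ['a', 'b', 'c']})

# Cross-tabulate dates against values, then prepend the row totals
out = pd.crosstab(df['Date'], df['Value'])
out.insert(0, 'Total', out.sum(axis=1))
print(out)
```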
| 1 | 2016-09-29T17:03:07Z | [
"python",
"pandas",
"dataframe"
]
|
Python switch case | 39,775,536 | <p>I am trying to use dictionary as switch case on python, however, the parameter does not seem to be pass to the called function, please help:</p>
<pre><code>def switchcase(num,cc):
    def fa(num):
        out= num*1.1;
    def fb(num):
        out= num*2.2;
    def fc(num):
        out= num*3.3;
    def fd(num):
        out= num*4.4;
    options = {
        "a":fa(num),
        "b":fb(num),
        "c":fc(num),
        "d":fd(num)
    }
    out=0
    options[cc];
    return out

print switchcase(10,"a")
</code></pre>
<p>the output is 0, I could not figure out the problem</p>
| 0 | 2016-09-29T16:28:22Z | 39,775,560 | <p>The problem is:</p>
<pre><code>out=0
options[cc];
return out
</code></pre>
<p>Basically -- no matter what <code>options[cc]</code> gives you, you're going to return <code>0</code> because that's the value of <code>out</code>. Note that setting <code>out</code> in the various <code>fa</code>, <code>fb</code>, ... functions does <em>not</em> change the value of <code>out</code> in the caller.</p>
<p>You probably want:</p>
<pre><code>def switchcase(num,cc):
    def fa(num):
        return num*1.1;
    def fb(num):
        return num*2.2;
    def fc(num):
        return num*3.3;
    def fd(num):
        return num*4.4;
    options = {
        "a":fa(num),
        "b":fb(num),
        "c":fc(num),
        "d":fd(num)
    }
    return options[cc];
</code></pre>
<p>Also note that this will be horribly inefficient in practice. You're creating 4 functions (and calling each) every time you call <code>switchcase</code>.</p>
<p>I'm guessing that you actually want to create a pre-made map of functions. Then you can pick up the function that you actually want from the map and call it with the given number:</p>
<pre><code>def fa(num):
    return num*1.1
def fb(num):
    return num*2.2
def fc(num):
    return num*3.3
def fd(num):
    return num*4.4

OPTIONS = {
    "a":fa,
    "b":fb,
    "c":fc,
    "d":fd
}

def switchcase(num,cc):
    return OPTIONS[cc](num)
</code></pre>
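<p>A related refinement: a real <code>switch</code> has a <code>default:</code> branch, which the dictionary version can emulate with <code>dict.get</code> and a fallback callable. A minimal sketch (the function name follows the question; the fallback returning 0 is an assumption, not part of the original code):</p>

```python
# Dispatch table built once; every value is a callable, so nothing is
# evaluated until the chosen entry is actually called.
OPTIONS = {
    "a": lambda num: num * 1.1,
    "b": lambda num: num * 2.2,
    "c": lambda num: num * 3.3,
    "d": lambda num: num * 4.4,
}

def switchcase(num, cc):
    # dict.get supplies a fallback callable for unknown keys,
    # mirroring the default: branch of a switch statement.
    return OPTIONS.get(cc, lambda num: 0)(num)
```

<p>With this, <code>switchcase(10, "a")</code> evaluates only the one matching lambda, and an unknown key returns 0 instead of raising <code>KeyError</code>.</p>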
| 6 | 2016-09-29T16:30:02Z | [
"python",
"switch-statement"
]
|
Python - Check if part of element is in set | 39,775,614 | <p>I have a set with this output:</p>
<pre><code> set( [Rule(chain='OUTPUT', num='3', pkts='0', bytes='0', target='ACCEPT', prot='tcp', opt='--', inp='*', out='*', source='0.0.0.0/0', destination='10.10.7.84', extra='tcp spt:7390'),
Rule(chain='INPUT', num='1', pkts='0', bytes='0', target='ACCEPT', prot='tcp', opt='--', inp='*', out='*', source='148.100.0.0/16', destination='0.0.0.0/0', extra='tcp dpt:7390'),
Rule(chain='INPUT', num='3', pkts='0', bytes='0', target='ACCEPT', prot='tcp', opt='--', inp='*', out='*', source='10.10.7.84', destination='0.0.0.0/0', extra='tcp dpt:7390')])
</code></pre>
<p>I want to check if an element matches an item in this set, but disregarding</p>
<pre><code>num='', pkts='', bytes=''
</code></pre>
<p>Is this possible?</p>
| 1 | 2016-09-29T16:32:55Z | 39,775,733 | <p>Untested code, but this is one way of going about it. If this rule should extend to any kind of equality check, I advise modifying the magic <code>__eq__(self, other)</code> method of the <code>Rule</code> class with the <code>rules_match</code> logic.</p>
<pre><code>def rules_match(rule1, rule2):
attributes_to_check = ['chain', 'target', 'prot', 'opt', ...]
return all(getattr(rule1, attribute) == getattr(rule2, attribute) for attribute in attributes_to_check)
def rule_in_set(rule, rule_set):
return any(rules_match(rule, i) for i in rule_set)
</code></pre>
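<p>Since the repr in the question suggests <code>Rule</code> is a namedtuple, its <code>_fields</code> attribute can drive the comparison, so the list of ignored attributes lives in one place. A runnable sketch (the field set is shortened here for illustration; the real <code>Rule</code> has more fields):</p>

```python
from collections import namedtuple

# Shortened field list for illustration only.
Rule = namedtuple('Rule', ['chain', 'num', 'pkts', 'bytes', 'target', 'source'])

IGNORED = {'num', 'pkts', 'bytes'}

def rules_match(rule1, rule2):
    # Compare every namedtuple field except the ignored counters.
    return all(getattr(rule1, f) == getattr(rule2, f)
               for f in rule1._fields if f not in IGNORED)

def rule_in_set(rule, rule_set):
    return any(rules_match(rule, r) for r in rule_set)

rules = {
    Rule('INPUT', '1', '0', '0', 'ACCEPT', '10.10.7.84'),
    Rule('OUTPUT', '3', '0', '0', 'ACCEPT', '0.0.0.0/0'),
}
# Same chain/target/source as the first rule, different counters.
candidate = Rule('INPUT', '9', '42', '512', 'ACCEPT', '10.10.7.84')
```

<p>Here <code>rule_in_set(candidate, rules)</code> is true even though <code>num</code>, <code>pkts</code> and <code>bytes</code> differ.</p>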
| 0 | 2016-09-29T16:39:43Z | [
"python"
]
|
Get checkbox value in django | 39,775,676 | <p>I have passed a Python list (list_exp) to my HTML template and now I would like to get the result of my multiple checkboxes in view.py as a dictionary:
{list_exp[0] : True/False, list_exp[1] : True/False.....} </p>
<pre><code><form action="" method="post">
{% for name in list_exp%}
<input type="checkbox" name="{{name}}"><label> Experiment : {{name}}</label>
<br>
{% endfor %}
<input type="submit" value="Submit">
</div>
</form>
</code></pre>
| 0 | 2016-09-29T16:36:41Z | 39,777,259 | <p>From what I know, if you give every input checkbox the same name, you can refer to them as a list in your views.py. Give each one a <code>value</code> attribute as well, so the submitted list contains the experiment names rather than the browser default "on". </p>
<p>So in the template:</p>
<pre><code>{% for name in list_exp %}
<input type="checkbox" name="list_exp" value="{{ name }}"><label> Experiment : {{name}}</label>
<br>
{% endfor %}
</code></pre>
<p>And then in the views.py:</p>
<pre><code>request.POST.getlist('list_exp')
</code></pre>
<p>will return a list as expected</p>
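<p>To build the <code>{name: True/False}</code> dictionary the question asks for, compare the full list against what came back in the POST. The membership test itself is plain Python — the two lists below stand in for <code>list_exp</code> and the result of <code>request.POST.getlist('list_exp')</code> (the experiment names are made up):</p>

```python
list_exp = ['exp1', 'exp2', 'exp3']   # the list passed to the template
checked = ['exp1', 'exp3']            # what getlist() would return

# True for every experiment whose checkbox was ticked, False otherwise.
result = {name: name in checked for name in list_exp}
```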
| 0 | 2016-09-29T18:13:21Z | [
"python",
"django",
"input",
"submit"
]
|
how to change r,g,b color to 2 coordinates in python? | 39,775,699 | <pre><code>from PIL import Image
picture = Image.open("C:/Lab/photos/frog2.png")
r,g,b = picture.getpixel( (0,0) )
print("Red: {0}, Green: {1}, Blue: {2}".format(r,g,b))
</code></pre>
<p>the result is Red: 57, Green: 66, Blue: 19,
but I want to convert this r,g,b to 2 coordinates like (x,y).</p>
<p>What should I type?</p>
| 0 | 2016-09-29T16:37:45Z | 39,776,411 | <p><a href="https://github.com/gtaylor/python-colormath" rel="nofollow"><code>colormath</code></a> can be used to convert a RGB color to a xyY color, and from there you can simply ignore the Y coordinate in order to get a xy chromacity.</p>
| 1 | 2016-09-29T17:20:10Z | [
"python",
"python-2.7",
"coordinates",
"rgb",
"dimensions"
]
|
How to enforce both xlim and ylim while using ax.axis('equal')? | 39,775,709 | <p>I want to use <code>ax.axis('equal')</code> to force even spacing on X & Y, but I also want to prescribe specific ranges for the X and Y axes. If the margins are also fixed, the problem is over constrained and the result is shown on the left side of the Figure <a href="http://i.stack.imgur.com/Lv0mX.png" rel="nofollow">1</a>. If instead, the margins were allowed to automatically increase themselves to take up the slack, then <code>xlim</code> and <code>ylim</code> could stay as I set them while still satisfying <code>axis('equal')</code>. An example of what I'm after is shown on the right side of Figure <a href="http://i.stack.imgur.com/Lv0mX.png" rel="nofollow">1</a>. <strong>How can I allow the plot margins to "float"?</strong></p>
<pre><code>f,ax=plt.subplots(1) #open a figure
ax.axis('equal') #make the axes have equal spacing
ax.plot([0,20],[0,20]) #test data set
#change the plot axis limits
ax.set_xlim([2,18])
ax.set_ylim([5,15])
#read the plot axis limits
xlim2=array(ax.get_xlim())
ylim2=array(ax.get_ylim())
#define indices for drawing a rectangle with xlim2, ylim2
sqx=array([0,1,1,0,0])
sqy=array([0,0,1,1,0])
#plot a thick rectangle marking the xlim2, ylim2
ax.plot(xlim2[sqx],ylim2[sqy],lw=3) #this does not go all the way around the edge
</code></pre>
<p><a href="http://i.stack.imgur.com/Lv0mX.png" rel="nofollow"><img src="http://i.stack.imgur.com/Lv0mX.png" alt="actual and desired output"></a>
Figure 1: output from the above code snippet.</p>
| 1 | 2016-09-29T16:38:08Z | 39,795,483 | <pre><code>ax.set_aspect('equal',adjustable='box')
</code></pre>
<p><a href="http://i.stack.imgur.com/hjGkL.png" rel="nofollow"><img src="http://i.stack.imgur.com/hjGkL.png" alt="enter image description here"></a></p>
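<p>Applied to the snippet from the question, the call goes after the limits are set. A sketch (the headless 'Agg' backend is used here only so the example runs without a display; any backend works, since nothing below depends on rendering):</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for the sketch
import matplotlib.pyplot as plt

f, ax = plt.subplots(1)
ax.plot([0, 20], [0, 20])
ax.set_xlim([2, 18])
ax.set_ylim([5, 15])

# adjustable='box' keeps the requested data limits and reshapes the
# axes box instead, so equal spacing and the chosen xlim/ylim coexist.
ax.set_aspect('equal', adjustable='box')
```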
| 2 | 2016-09-30T15:53:54Z | [
"python",
"python-2.7",
"matplotlib",
"plot"
]
|
Decoding Actionscript ByteArray using python | 39,775,727 | <p>I am using actionscript to send an array to the server with this code(I am only writing that part of the code here):-</p>
<pre><code>var a:ByteArray=new ByteArray;
a.writeObject({'a':'b','c':'d'});
socket.writeBytes(a);
socket.flush();
</code></pre>
<p>here I already opened a socket to a port on my server and I have a python code listenting on that port. How do I decode the bytes I receive using python when I run the swf after compiling it?
I received the following in the server side:- <code>b'\n\x0b\x01\x03c\x06\x03d\x03a\x06\x03b\x01'</code></p>
| 1 | 2016-09-29T16:39:26Z | 39,776,338 | <p>ActionScript uses <a href="https://en.wikipedia.org/wiki/Action_Message_Format" rel="nofollow">AMF</a> format. There is an AMF library for Python which you can use: <a href="https://pypi.python.org/pypi/PyAMF" rel="nofollow">PyAMF</a>.</p>
<p>I got this when I tested it (with <em>Python 2.7</em>):</p>
<pre><code>>>> import pyamf
>>> for item in pyamf.decode('\n\x0b\x01\x03c\x06\x03d\x03a\x06\x03b\x01'):
... print item
...
{'a': u'b', 'c': u'd'}
</code></pre>
| 2 | 2016-09-29T17:15:37Z | [
"python",
"arrays",
"actionscript-3",
"flex",
"decode"
]
|
xlabels do not show up with seaborn and tight despined layout in python | 39,775,747 | <p>I would like to plot the following dataframe with xtick labels rotated while also having a tight, despined layout (using <code>tight_layout()</code> from matplotlib and <code>despine</code> from seaborn). the following does not work because the labels do not show up:</p>
<pre><code>import matplotlib.pylab as plt
import seaborn as sns
import pandas
df = pandas.DataFrame({"x": ["XYZ1", "XYZ2", "XYZ3", "XYZ4"],
"y": [0, 1, 0, 1]})
plt.figure(figsize=(5,5))
sns.set_style("ticks")
g = sns.pointplot(x="x", y="y", data=df)
sns.despine(trim=True, offset=2)
g.set_xticklabels(g.get_xticklabels(), rotation=55, ha="center")
plt.tight_layout()
</code></pre>
<p>it produces:</p>
<p><a href="http://i.stack.imgur.com/2aWKs.png" rel="nofollow"><img src="http://i.stack.imgur.com/2aWKs.png" alt="enter image description here"></a></p>
<p>the xtick labels ("XYZ1", "XYZ2", ...) do not appear. if i remove despine the ticks appear but then not despined. if i change ticks before despine/tight_layout, they appear but are not rotated. how can this be done? </p>
| 0 | 2016-09-29T16:40:13Z | 39,777,362 | <p>on my machine the following works</p>
<pre><code>import matplotlib.pylab as plt
import seaborn as sns
import pandas
df = pandas.DataFrame({"x": ["XYZ1", "XYZ2", "XYZ3", "XYZ4"],
"y": [0, 1, 0, 1]})
plt.figure(figsize=(5,5))
sns.set_style("ticks")
g = sns.pointplot(x="x", y="y", data=df)
sns.despine(trim=True, offset=2)
g.set_xticklabels(df['x'], rotation=55, ha="center")
plt.tight_layout()
plt.show()
</code></pre>
<p>and produces</p>
<p><a href="http://i.stack.imgur.com/e2avf.png" rel="nofollow"><img src="http://i.stack.imgur.com/e2avf.png" alt="enter image description here"></a></p>
| 2 | 2016-09-29T18:19:07Z | [
"python",
"matplotlib",
"seaborn"
]
|
Extract series objects from Pandas DataFrame | 39,775,828 | <p>I have a dataframe with the columns</p>
<pre><code>['CPL4', 'Part Number', 'Calendar Year/Month', 'Sales', 'Inventory']
</code></pre>
<p>For each 'Part Number', 'Calendar Year/Month' will be unique on each Part Number.</p>
<p>I want to convert each part number to a univariate Series with 'Calendar Year/Month' as the index and either 'Sales' or 'Inventory' as the value.</p>
<p>How can I accomplish this using pandas built-in functions and not iterating through the dataframe manually?</p>
| 0 | 2016-09-29T16:44:51Z | 39,776,167 | <p>you can use the groupby method such has:</p>
<pre><code>grouped_df = df.groupby('Part Number')
</code></pre>
<p>and then you can access the df of a certain part number and set the index easily such as:</p>
<pre><code>new_df = grouped_df.get_group('THEPARTNUMBERYOUWANT').set_index('Calendar Year/Month')
</code></pre>
<p>if you only want the 2 columns you can do:</p>
<pre><code>print new_df[['Sales', 'Inventory']]
</code></pre>
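<p>Put together on a toy frame with the question's columns (the data itself is made up for illustration), the two steps give one univariate series per part number:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Part Number':         ['P1', 'P1', 'P2'],
    'Calendar Year/Month': ['1.2016', '2.2016', '1.2016'],
    'Sales':               [10, 20, 30],
})

grouped_df = df.groupby('Part Number')
# Sub-frame for one part, re-indexed by month, reduced to one column:
series = grouped_df.get_group('P1').set_index('Calendar Year/Month')['Sales']
```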
| 1 | 2016-09-29T17:06:34Z | [
"python",
"pandas",
"dataframe",
"time-series"
]
|
Extract series objects from Pandas DataFrame | 39,775,828 | <p>I have a dataframe with the columns</p>
<pre><code>['CPL4', 'Part Number', 'Calendar Year/Month', 'Sales', 'Inventory']
</code></pre>
<p>For each 'Part Number', 'Calendar Year/Month' will be unique on each Part Number.</p>
<p>I want to convert each part number to a univariate Series with 'Calendar Year/Month' as the index and either 'Sales' or 'Inventory' as the value.</p>
<p>How can I accomplish this using pandas built-in functions and not iterating through the dataframe manually?</p>
| 0 | 2016-09-29T16:44:51Z | 39,776,221 | <p>In pandas this is called a MultiIndex. Try setting both columns as the index and keeping the value columns:</p>
<pre><code>import pandas as pd
df = df.set_index(['Part Number', 'Calendar Year/Month'])[['Sales', 'Inventory']]
</code></pre>
| 1 | 2016-09-29T17:08:47Z | [
"python",
"pandas",
"dataframe",
"time-series"
]
|
Extract series objects from Pandas DataFrame | 39,775,828 | <p>I have a dataframe with the columns</p>
<pre><code>['CPL4', 'Part Number', 'Calendar Year/Month', 'Sales', 'Inventory']
</code></pre>
<p>For each 'Part Number', 'Calendar Year/Month' will be unique on each Part Number.</p>
<p>I want to convert each part number to a univariate Series with 'Calendar Year/Month' as the index and either 'Sales' or 'Inventory' as the value.</p>
<p>How can I accomplish this using pandas built-in functions and not iterating through the dataframe manually?</p>
| 0 | 2016-09-29T16:44:51Z | 39,776,947 | <p>From the answers and comments here, along with a little more research, I ended with the following solution.</p>
<pre><code>temp_series = df[df["Part Number"] == sku].pivot(columns="Calendar Year/Month", values="Sales").iloc[0]
</code></pre>
<p>Where sku is a specific part number from df["Part Number"].unique()</p>
<p>This will give you a univariate time series (temp_series) indexed by "Calendar Year/Month" with values of "Sales", e.g.:</p>
<pre><code>1.2015 NaN
1.2016 NaN
2.2015 NaN
2.2016 NaN
3.2015 NaN
3.2016 NaN
4.2015 NaN
4.2016 NaN
5.2015 NaN
5.2016 NaN
6.2015 NaN
6.2016 NaN
7.2015 NaN
7.2016 NaN
8.2015 NaN
8.2016 NaN
9.2015 NaN
10.2015 NaN
11.2015 NaN
12.2015 NaN
Name: 161, dtype: float64
<class 'pandas.core.series.Series'>])
</code></pre>
<p>from the columns</p>
<pre><code>['CPL4', 'Part Number', 'Calendar Year/Month', 'Sales', 'Inventory']
</code></pre>
| 0 | 2016-09-29T17:53:24Z | [
"python",
"pandas",
"dataframe",
"time-series"
]
|
Include two different sized data frames together in a pandas panel | 39,776,122 | <p>I'm crafting a panel of multiple data frames. Each is rather long. </p>
<p>I create the dfs, combine in a dictionary, then combine into a panel;</p>
<pre><code>for name in names: # large list of paths
# Do some code to get data info (dI), dataframe (df) and nameID
# Create a dictionary out of dfs by nameID
dictDFs[nameID] = df
# Collect all dataframes into one from dictionary dictDFs
pn = pd.Panel(dictDFs)
</code></pre>
<p>I then create a pickle file, <code>pn.to_pickle(path)</code></p>
<p>I would like to attach other info to the data frame not in the array. I don't want to change the size or shape of the data, keeping the array uniquely integers. </p>
<p>I cannot pack them as a tuple; disliked by Panel. However, this is what I thought it should look like:</p>
<pre><code># Create a dictionary out of df and dI by nameID
dictDFs[nameID] = (df,dI)
</code></pre>
<p>Thanks</p>
| 0 | 2016-09-29T17:03:37Z | 39,966,207 | <p>I was able to solve this. The key to it was converting the dataframe to a tuple and using the tuple as the dictionary key such that the panel key was immutable:</p>
<pre><code>for name in names: # List of names
nm = base(name)[:-4]
# Uses each name to extract, trim, cure, and make meaningful
dfInfo,df = some_function(name)
dfInfo = dfInfo.rename(index=str, columns={0: nm})
</code></pre>
<p>Transform the tuples into a tuple of <code>pandas.core.frame.Pandas</code>:</p>
<pre><code> tups = tuple(dfInfo.itertuples(index=False))
</code></pre>
<p><code>_fields</code> is the same for every tuple of the one-column dataframe:</p>
<pre><code> nmT = tups[0]._fields[0]
</code></pre>
<p>Create a tuple from the tupled dataframe info:</p>
<pre><code> dfInfo = (nmT, tuple(pd.Series(tup).loc[0] for tup in tups))
</code></pre>
<p>Now we can use the immutable tuple to create a dictionary with the key from the data info:</p>
<pre><code> dictDFs[dfInfo] = df
# Collect all dataframes into one from dictionary dictDFs
pn = pd.Panel(dictDFs)
pn.to_pickle(path)
</code></pre>
| 0 | 2016-10-10T20:26:58Z | [
"python",
"pandas",
"panel",
"pickle"
]
|
How do i include the csrf_token to dropzone Post request (Django) | 39,776,136 | <p>Allright this is resolved. Just editing in case anyone runs into the same problem.</p>
<p>Add the Code posted in Comment marked as answer in the same javascript file.
When defining</p>
<pre><code>var myDropzone = new Dropzone(...
...//More stuff here
headers:{
'X-CSRFToken' : csrftoken
}
</code></pre>
<p>And thats it.</p>
<p>So I'm getting a 403 Forbidden when submitting the POST request through dropzone.js to Django. Django displayed a message saying that I didn't include the CSRF token, but I don't know how to actually include it if I'm not using a form in the HTML.</p>
<h1>document_form.html</h1>
<pre><code>{% extends 'base.html' %}
{% load staticfiles %}
{% block title %}Add files{% endblock %}
{% block files %}
<div class="container-fluid" id="container-dropzone">
<div id="actions" class="row">
<div class="col-lg-7">
<span class="btn btn-success file-input-button">
<i class="glyphicon glyphicon-plus"></i>
<span>Add files...</span>
</span>
<button type="submit" class="btn btn-primary start">
<i class="glyphicon glyphicon-upload"></i>
<span>Start upload</span>
</button>
<button type="reset" class="btn btn-warning cancel">
<i class="glyphicon glyphicon-ban-circle"></i>
<span>Cancel upload</span>
</button>
</div>
<div class="col-lg-5">
<!-- file processing state -->
<span class="fileupload-process">
<div id="total-progress" class="progress progress-striped active" role="progressbar" aria-valuemin="0" aria-valuemax="100" aria-valuenow="0">
<div class="progress-bar progress-bar-success" style="width:0%;" data-dz-uploadprogress></div>
</div>
</span>
</div>
</div>
<div class="table table-striped files" id="previews">
<div id="template" class="file-row">
<div>
<span class="preview"><img data-dz-thumbnail></span>
</div>
<div>
<p class="name" data-dz-name></p>
<strong class="error text-danger" data-dz-errormessage></strong>
</div>
<div>
<p class="size" data-dz-size></p>
<div class="progress progress-striped active" role="progressbar" aria-valuemin="0"
aria-valuemax="100" aria-valuenow="0">
<div class="progress-bar progress-bar-success" style="width:0%"
data-dz-uploadprogress>
</div>
</div>
</div>
<div>
<button class="btn btn-primary start">
<i class="glyphicon glyphicon-upload"></i>
<span>Start</span>
</button>
<button data-dz-remove class="btn btn-warning cancel">
<i class="glyphicon glyphicon-ban-circle"></i>
<span>Cancel</span>
</button>
<button data-dz-remove class="btn btn-danger delete">
<i class="glyphicon glyphicon-trash"></i>
<span>Delete</span>
</button>
</div>
</div> <!-- /table-striped -->
</div> <!-- /container-fluid -->
</div>
{% endblock %}
{% block dz-add %}
<script src="{% static 'js/dropzone-bootstrap.js' %}"></script>
{% endblock %}
</code></pre>
<h1>dropzone-bootstrap.js</h1>
<pre><code>$(function() {
var previewNode = document.querySelector("#template");
previewNode.id = "";
var previewTemplate = previewNode.parentNode.innerHTML;
previewNode.parentNode.removeChild(previewNode);
var myDropzone = new Dropzone(document.querySelector("#container-dropzone") , {
url: "/dashby/files/add/", //url to make the request to.
thumbnailWidth: 80,
thumbnailHeight: 80,
parallelUploads: 20,
previewTemplate: previewTemplate,
autoQueue: false,
previewsContainer: "#previews",
clickable: ".file-input-button",
headers: { // Tried to apply the token this way but no success.
'X-CSRFToken': $('meta[name="token"]').attr('content')
}
});
myDropzone.on("addedfile", function(file){
file.previewElement.querySelector(".start").onclick = function(){
myDropzone.enqueueFile(file);
};
});
myDropzone.on("totaluploadprogress", function(progress){
document.querySelector("#total-progress .progress-bar").style.width = progress + "%";
});
myDropzone.on("sending", function(file){
// Show total progress on start and disable START button.
document.querySelector("#total-progress").style.opacity = "1";
file.previewElement.querySelector(".start").setAttribute("disabled", "disabled");
});
// Hide progress bar when complete.
myDropzone.on("queuecomplete", function(progress){
document.querySelector("#total-progress").style.opacity = "0";
});
// Setup buttons for every file.
document.querySelector("#actions .start").onclick = function(){
myDropzone.enqueueFiles(myDropzone.getFilesWithStatus(Dropzone.ADDED));
};
document.querySelector("#actions .cancel").onclick = function(){
myDropzone.removeAllFiles(true);
};
});
</code></pre>
<p>In my <strong>base.html</strong> I'm adding all the required files (dropzone, jquery, bootstrap and my custom javascript file)</p>
<p>For the django form handling:</p>
<h1>views.py</h1>
<pre><code>class DocumentCreate(CreateView):
model = Document
fields = ['file']
def form_valid(self, form):
self.object = form.save()
data = {'status': 'success'}
response = JSONResponse(data, mimetype =
response_mimetype(self.request))
return response
</code></pre>
<h1>My "Document" model</h1>
<pre><code>class Document(models.Model):
file = models.FileField(upload_to = 'files/',
validators=[validate_file_type])
uploaded_at = models.DateTimeField(auto_now_add = True)
extension = models.CharField(max_length = 30, blank = True)
thumbnail = models.ImageField(blank = True, null = True)
def clean(self):
self.file.seek(0)
self.extension = self.file.name.split('/')[-1].split('.')[-1]
if self.extension == 'xlsx' or self.extension == 'xls':
self.thumbnail = 'xlsx.png'
elif self.extension == 'pptx' or self.extension == 'ppt':
self.thumbnail = 'pptx.png'
elif self.extension == 'docx' or self.extension == 'doc':
self.thumbnail = 'docx.png'
def delete(self, *args, **kwargs):
#delete file from /media/files
self.file.delete(save = False)
#call parent delete method.
super().delete(*args, **kwargs)
#Redirect to file list page.
def get_absolute_url(self):
return reverse('dashby-files:files')
def __str__(self):
#cut the 'files/'
return self.file.name.split('/')[-1]
class Meta():
#order by upload_date descending
#for bootstrap grid system. (start left side)
ordering = ['-uploaded_at']
</code></pre>
<p>I created a Json response to handle the dropzone.</p>
<h1>response.py</h1>
<pre><code>from django.http import HttpResponse
import json
MIMEANY = '*/*'
MIMEJSON = 'application/json'
MIMETEXT = 'text/plain'
# Integrating Dropzone.js with Django.
def response_mimetype(request):
can_json = MIMEJSON in request.META['HTTP_ACCEPT']
can_json |= MIMEANY in request.META['HTTP_ACCEPT']
return MIMEJSON if can_json else MIMETEXT
# Custom HttpResponse
class JSONResponse(HttpResponse):
def __init__(self, obj='', json_opts=None, mimetype=MIMEJSON,
*args, **kwargs):
json_opts = json_opts if isinstance(json_opts, dict) else {}
content = json.dumps(obj, **json_opts)
super(JSONResponse, self).__init__(content, mimetype,
*args, **kwargs)
</code></pre>
<p>I've been stuck with this problem for a day now, so decided to ask for help here as I haven't been able to find one.</p>
<p>Thanks to anyone that takes the time to read and for any help/tips that I can get.</p>
| 1 | 2016-09-29T17:04:43Z | 39,776,233 | <p>The <a href="https://docs.djangoproject.com/en/1.10/ref/csrf/#ajax" rel="nofollow">docs</a> recommend getting the CSRF token from the cookie, not the DOM. Try that.</p>
| 0 | 2016-09-29T17:09:20Z | [
"javascript",
"jquery",
"python",
"django",
"dropzone.js"
]
|
How do i include the csrf_token to dropzone Post request (Django) | 39,776,136 | <p>Allright this is resolved. Just editing in case anyone runs into the same problem.</p>
<p>Add the Code posted in Comment marked as answer in the same javascript file.
When defining</p>
<pre><code>var myDropzone = new Dropzone(...
...//More stuff here
headers:{
'X-CSRFToken' : csrftoken
}
</code></pre>
<p>And thats it.</p>
<p>So I'm getting a 403 Forbidden when submitting the POST request through dropzone.js to Django. Django displayed a message saying that I didn't include the CSRF token, but I don't know how to actually include it if I'm not using a form in the HTML.</p>
<h1>document_form.html</h1>
<pre><code>{% extends 'base.html' %}
{% load staticfiles %}
{% block title %}Add files{% endblock %}
{% block files %}
<div class="container-fluid" id="container-dropzone">
<div id="actions" class="row">
<div class="col-lg-7">
<span class="btn btn-success file-input-button">
<i class="glyphicon glyphicon-plus"></i>
<span>Add files...</span>
</span>
<button type="submit" class="btn btn-primary start">
<i class="glyphicon glyphicon-upload"></i>
<span>Start upload</span>
</button>
<button type="reset" class="btn btn-warning cancel">
<i class="glyphicon glyphicon-ban-circle"></i>
<span>Cancel upload</span>
</button>
</div>
<div class="col-lg-5">
<!-- file processing state -->
<span class="fileupload-process">
<div id="total-progress" class="progress progress-striped active" role="progressbar" aria-valuemin="0" aria-valuemax="100" aria-valuenow="0">
<div class="progress-bar progress-bar-success" style="width:0%;" data-dz-uploadprogress></div>
</div>
</span>
</div>
</div>
<div class="table table-striped files" id="previews">
<div id="template" class="file-row">
<div>
<span class="preview"><img data-dz-thumbnail></span>
</div>
<div>
<p class="name" data-dz-name></p>
<strong class="error text-danger" data-dz-errormessage></strong>
</div>
<div>
<p class="size" data-dz-size></p>
<div class="progress progress-striped active" role="progressbar" aria-valuemin="0"
aria-valuemax="100" aria-valuenow="0">
<div class="progress-bar progress-bar-success" style="width:0%"
data-dz-uploadprogress>
</div>
</div>
</div>
<div>
<button class="btn btn-primary start">
<i class="glyphicon glyphicon-upload"></i>
<span>Start</span>
</button>
<button data-dz-remove class="btn btn-warning cancel">
<i class="glyphicon glyphicon-ban-circle"></i>
<span>Cancel</span>
</button>
<button data-dz-remove class="btn btn-danger delete">
<i class="glyphicon glyphicon-trash"></i>
<span>Delete</span>
</button>
</div>
</div> <!-- /table-striped -->
</div> <!-- /container-fluid -->
</div>
{% endblock %}
{% block dz-add %}
<script src="{% static 'js/dropzone-bootstrap.js' %}"></script>
{% endblock %}
</code></pre>
<h1>dropzone-bootstrap.js</h1>
<pre><code>$(function() {
var previewNode = document.querySelector("#template");
previewNode.id = "";
var previewTemplate = previewNode.parentNode.innerHTML;
previewNode.parentNode.removeChild(previewNode);
var myDropzone = new Dropzone(document.querySelector("#container-dropzone") , {
url: "/dashby/files/add/", //url to make the request to.
thumbnailWidth: 80,
thumbnailHeight: 80,
parallelUploads: 20,
previewTemplate: previewTemplate,
autoQueue: false,
previewsContainer: "#previews",
clickable: ".file-input-button",
headers: { // Tried to apply the token this way but no success.
'X-CSRFToken': $('meta[name="token"]').attr('content')
}
});
myDropzone.on("addedfile", function(file){
file.previewElement.querySelector(".start").onclick = function(){
myDropzone.enqueueFile(file);
};
});
myDropzone.on("totaluploadprogress", function(progress){
document.querySelector("#total-progress .progress-bar").style.width = progress + "%";
});
myDropzone.on("sending", function(file){
// Show total progress on start and disable START button.
document.querySelector("#total-progress").style.opacity = "1";
file.previewElement.querySelector(".start").setAttribute("disabled", "disabled");
});
// Hide progress bar when complete.
myDropzone.on("queuecomplete", function(progress){
document.querySelector("#total-progress").style.opacity = "0";
});
// Setup buttons for every file.
document.querySelector("#actions .start").onclick = function(){
myDropzone.enqueueFiles(myDropzone.getFilesWithStatus(Dropzone.ADDED));
};
document.querySelector("#actions .cancel").onclick = function(){
myDropzone.removeAllFiles(true);
};
});
</code></pre>
<p>In my <strong>base.html</strong> I'm adding all the required files (dropzone, jquery, bootstrap and my custom javascript file)</p>
<p>For the django form handling:</p>
<h1>views.py</h1>
<pre><code>class DocumentCreate(CreateView):
model = Document
fields = ['file']
def form_valid(self, form):
self.object = form.save()
data = {'status': 'success'}
response = JSONResponse(data, mimetype =
response_mimetype(self.request))
return response
</code></pre>
<h1>My "Document" model</h1>
<pre><code>class Document(models.Model):
file = models.FileField(upload_to = 'files/',
validators=[validate_file_type])
uploaded_at = models.DateTimeField(auto_now_add = True)
extension = models.CharField(max_length = 30, blank = True)
thumbnail = models.ImageField(blank = True, null = True)
def clean(self):
self.file.seek(0)
self.extension = self.file.name.split('/')[-1].split('.')[-1]
if self.extension == 'xlsx' or self.extension == 'xls':
self.thumbnail = 'xlsx.png'
elif self.extension == 'pptx' or self.extension == 'ppt':
self.thumbnail = 'pptx.png'
elif self.extension == 'docx' or self.extension == 'doc':
self.thumbnail = 'docx.png'
def delete(self, *args, **kwargs):
#delete file from /media/files
self.file.delete(save = False)
#call parent delete method.
super().delete(*args, **kwargs)
#Redirect to file list page.
def get_absolute_url(self):
return reverse('dashby-files:files')
def __str__(self):
#cut the 'files/'
return self.file.name.split('/')[-1]
class Meta():
#order by upload_date descending
#for bootstrap grid system. (start left side)
ordering = ['-uploaded_at']
</code></pre>
<p>I created a Json response to handle the dropzone.</p>
<h1>response.py</h1>
<pre><code>from django.http import HttpResponse
import json
MIMEANY = '*/*'
MIMEJSON = 'application/json'
MIMETEXT = 'text/plain'
# Integrating Dropzone.js with Django.
def response_mimetype(request):
can_json = MIMEJSON in request.META['HTTP_ACCEPT']
can_json |= MIMEANY in request.META['HTTP_ACCEPT']
return MIMEJSON if can_json else MIMETEXT
# Custom HttpResponse
class JSONResponse(HttpResponse):
def __init__(self, obj='', json_opts=None, mimetype=MIMEJSON,
*args, **kwargs):
json_opts = json_opts if isinstance(json_opts, dict) else {}
content = json.dumps(obj, **json_opts)
super(JSONResponse, self).__init__(content, mimetype,
*args, **kwargs)
</code></pre>
<p>I've been stuck with this problem for a day now, so decided to ask for help here as I haven't been able to find one.</p>
<p>Thanks to anyone that takes the time to read and for any help/tips that I can get.</p>
| 1 | 2016-09-29T17:04:43Z | 39,776,325 | <p>Djangoâs docs have a <a href="https://docs.djangoproject.com/en/stable/ref/csrf/#ajax" rel="nofollow">reference for this</a>:</p>
<blockquote>
<p>While [a special parameter] can be used for AJAX POST requests, it has some inconveniences: you have to remember to pass the CSRF token in as POST data with every POST request. For this reason, there is an alternative method: on each <code>XMLHttpRequest</code>, set a custom <code>X-CSRFToken</code> header to the value of the CSRF token. This is often easier, because many JavaScript frameworks provide hooks that allow headers to be set on every request.<br>
[â¦] </p>
<p>Acquiring the token is straightforward:</p>
<pre><code>// using jQuery
function getCookie(name) {
var cookieValue = null;
if (document.cookie && document.cookie !== '') {
var cookies = document.cookie.split(';');
for (var i = 0; i < cookies.length; i++) {
var cookie = jQuery.trim(cookies[i]);
// Does this cookie string begin with the name we want?
if (cookie.substring(0, name.length + 1) === (name + '=')) {
cookieValue = decodeURIComponent(cookie.substring(name.length + 1));
break;
}
}
}
return cookieValue;
}
var csrftoken = getCookie('csrftoken');
</code></pre>
<p>[â¦]
Finally, youâll have to actually set the header on your AJAX request, while protecting the CSRF token from being sent to other domains using <a href="https://api.jquery.com/jQuery.ajax" rel="nofollow">settings.crossDomain</a> in jQuery 1.5.1 and newer:</p>
<pre><code>function csrfSafeMethod(method) {
// these HTTP methods do not require CSRF protection
return (/^(GET|HEAD|OPTIONS|TRACE)$/.test(method));
}
$.ajaxSetup({
beforeSend: function(xhr, settings) {
if (!csrfSafeMethod(settings.type) && !this.crossDomain) {
xhr.setRequestHeader("X-CSRFToken", csrftoken);
}
}
});
</code></pre>
</blockquote>
<p>If you run these two code blocks before you start to make requests, it should Just Workâ¢.</p>
<p>In summary, just use this code block:</p>
<pre><code>// from https://docs.djangoproject.com/en/1.10/ref/csrf/ via http://stackoverflow.com/a/39776325/5244995.
function getCookie(name) {
var cookieValue = null;
if (document.cookie && document.cookie !== '') {
var cookies = document.cookie.split(';');
for (var i = 0; i < cookies.length; i++) {
var cookie = jQuery.trim(cookies[i]);
// Does this cookie string begin with the name we want?
if (cookie.substring(0, name.length + 1) === (name + '=')) {
cookieValue = decodeURIComponent(cookie.substring(name.length + 1));
break;
}
}
}
return cookieValue;
}
var csrftoken = getCookie('csrftoken');
function csrfSafeMethod(method) {
// these HTTP methods do not require CSRF protection
return (/^(GET|HEAD|OPTIONS|TRACE)$/.test(method));
}
$.ajaxSetup({
beforeSend: function(xhr, settings) {
if (!csrfSafeMethod(settings.type) && !this.crossDomain) {
xhr.setRequestHeader("X-CSRFToken", csrftoken);
}
}
});
</code></pre>
| 0 | 2016-09-29T17:14:52Z | [
"javascript",
"jquery",
"python",
"django",
"dropzone.js"
]
|
How do i include the csrf_token to dropzone Post request (Django) | 39,776,136 | <p>Allright this is resolved. Just editing in case anyone runs into the same problem.</p>
<p>Add the Code posted in Comment marked as answer in the same javascript file.
When defining</p>
<pre><code>var myDropzone = new Dropzone(...
...//More stuff here
headers:{
'X-CSRFToken' : csrftoken
}
</code></pre>
<p>And thats it.</p>
<p>So I'm getting a 403 Forbidden when submitting the POST request through dropzone.js to Django. Django displayed a message saying that I didn't include the CSRF token, but I don't know how to actually include it if I'm not using a form in the HTML.</p>
<h1>document_form.html</h1>
<pre><code>{% extends 'base.html' %}
{% load staticfiles %}
{% block title %}Add files{% endblock %}
{% block files %}
<div class="container-fluid" id="container-dropzone">
<div id="actions" class="row">
<div class="col-lg-7">
<span class="btn btn-success file-input-button">
<i class="glyphicon glyphicon-plus"></i>
<span>Add files...</span>
</span>
<button type="submit" class="btn btn-primary start">
<i class="glyphicon glyphicon-upload"></i>
<span>Start upload</span>
</button>
<button type="reset" class="btn btn-warning cancel">
<i class="glyphicon glyphicon-ban-circle"></i>
<span>Cancel upload</span>
</button>
</div>
<div class="col-lg-5">
<!-- file processing state -->
<span class="fileupload-process">
<div id="total-progress" class="progress progress-striped active" role="progressbar" aria-valuemin="0" aria-valuemax="100" aria-valuenow="0">
<div class="progress-bar progress-bar-success" style="width:0%;" data-dz-uploadprogress></div>
</div>
</span>
</div>
</div>
<div class="table table-striped files" id="previews">
<div id="template" class="file-row">
<div>
<span class="preview"><img data-dz-thumbnail></span>
</div>
<div>
<p class="name" data-dz-name></p>
<strong class="error text-danger" data-dz-errormessage></strong>
</div>
<div>
<p class="size" data-dz-size></p>
<div class="progress progress-striped active" role="progressbar" aria-valuemin="0"
aria-valuemax="100" aria-valuenow="0">
<div class="progress-bar progress-bar-success" style="width:0%"
data-dz-uploadprogress>
</div>
</div>
</div>
<div>
<button class="btn btn-primary start">
<i class="glyphicon glyphicon-upload"></i>
<span>Start</span>
</button>
<button data-dz-remove class="btn btn-warning cancel">
<i class="glyphicon glyphicon-ban-circle"></i>
<span>Cancel</span>
</button>
<button data-dz-remove class="btn btn-danger delete">
<i class="glyphicon glyphicon-trash"></i>
<span>Delete</span>
</button>
</div>
</div> <!-- /table-striped -->
</div> <!-- /container-fluid -->
</div>
{% endblock %}
{% block dz-add %}
<script src="{% static 'js/dropzone-bootstrap.js' %}"></script>
{% endblock %}
</code></pre>
<h1>dropzone-bootstrap.js</h1>
<pre><code>$(function() {
var previewNode = document.querySelector("#template");
previewNode.id = "";
var previewTemplate = previewNode.parentNode.innerHTML;
previewNode.parentNode.removeChild(previewNode);
var myDropzone = new Dropzone(document.querySelector("#container-dropzone") , {
url: "/dashby/files/add/", //url to make the request to.
thumbnailWidth: 80,
thumbnailHeight: 80,
parallelUploads: 20,
previewTemplate: previewTemplate,
autoQueue: false,
previewsContainer: "#previews",
clickable: ".file-input-button",
headers: { // Tried to apply the token this way but no success.
'X-CSRFToken': $('meta[name="token"]').attr('content')
}
});
myDropzone.on("addedfile", function(file){
file.previewElement.querySelector(".start").onclick = function(){
myDropzone.enqueueFile(file);
};
});
myDropzone.on("totaluploadprogress", function(progress){
document.querySelector("#total-progress .progress-bar").style.width = progress + "%";
});
myDropzone.on("sending", function(file){
// Show total progress on start and disable START button.
document.querySelector("#total-progress").style.opacity = "1";
file.previewElement.querySelector(".start").setAttribute("disabled", "disabled");
});
// Hide progress bar when complete.
myDropzone.on("queuecomplete", function(progress){
document.querySelector("#total-progress").style.opacity = "0";
});
// Setup buttons for every file.
document.querySelector("#actions .start").onclick = function(){
myDropzone.enqueueFiles(myDropzone.getFilesWithStatus(Dropzone.ADDED));
};
document.querySelector("#actions .cancel").onclick = function(){
myDropzone.removeAllFiles(true);
};
});
</code></pre>
<p>In my <strong>base.html</strong> I'm adding all the required files (dropzone, jquery, bootstrap and my custom javascript file)</p>
<p>For the django form handling:</p>
<h1>views.py</h1>
<pre><code>class DocumentCreate(CreateView):
model = Document
fields = ['file']
def form_valid(self, form):
self.object = form.save()
data = {'status': 'success'}
response = JSONResponse(data, mimetype =
response_mimetype(self.request))
return response
</code></pre>
<h1>My "Document" model</h1>
<pre><code>class Document(models.Model):
file = models.FileField(upload_to = 'files/',
validators=[validate_file_type])
uploaded_at = models.DateTimeField(auto_now_add = True)
extension = models.CharField(max_length = 30, blank = True)
thumbnail = models.ImageField(blank = True, null = True)
def clean(self):
self.file.seek(0)
self.extension = self.file.name.split('/')[-1].split('.')[-1]
if self.extension == 'xlsx' or self.extension == 'xls':
self.thumbnail = 'xlsx.png'
elif self.extension == 'pptx' or self.extension == 'ppt':
self.thumbnail = 'pptx.png'
elif self.extension == 'docx' or self.extension == 'doc':
self.thumbnail = 'docx.png'
def delete(self, *args, **kwargs):
#delete file from /media/files
self.file.delete(save = False)
#call parent delete method.
super().delete(*args, **kwargs)
#Redirect to file list page.
def get_absolute_url(self):
return reverse('dashby-files:files')
def __str__(self):
#cut the 'files/'
return self.file.name.split('/')[-1]
class Meta():
#order by upload_date descending
#for bootstrap grid system. (start left side)
ordering = ['-uploaded_at']
</code></pre>
<p>I created a Json response to handle the dropzone.</p>
<h1>response.py</h1>
<pre><code>from django.http import HttpResponse
import json
MIMEANY = '*/*'
MIMEJSON = 'application/json'
MIMETEXT = 'text/plain'
# Integrating Dropzone.js with Django.
def response_mimetype(request):
can_json = MIMEJSON in request.META['HTTP_ACCEPT']
can_json |= MIMEANY in request.META['HTTP_ACCEPT']
return MIMEJSON if can_json else MIMETEXT
# Custom HttpResponse
class JSONResponse(HttpResponse):
def __init__(self, obj='', json_opts=None, mimetype=MIMEJSON,
*args, **kwargs):
json_opts = json_opts if isinstance(json_opts, dict) else {}
content = json.dumps(obj, **json_opts)
super(JSONResponse, self).__init__(content, mimetype,
*args, **kwargs)
</code></pre>
<p>I've been stuck on this problem for a day now, so I decided to ask for help here, as I haven't been able to find a solution.</p>
<p>Thanks to anyone that takes the time to read and for any help/tips that I can get.</p>
 | 1 | 2016-09-29T17:04:43Z | 39,776,634 | <p>Just place <code>{% csrf_token %}</code> anywhere in your HTML file. It automatically adds</p>
<pre><code> <input type="hidden" name="csrfmiddlewaretoken" value="**************" />
</code></pre>
<p>Before sending data to the server, just add an extra field named <code>csrfmiddlewaretoken</code> whose value is <code>$("input[name='csrfmiddlewaretoken']").val();</code></p>
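<p>A minimal, self-contained sketch of that idea (the helper name <code>withCsrfToken</code> is mine, not part of Dropzone or Django):</p>

```javascript
// Hypothetical helper: merge Django's CSRF token into the fields that will
// be POSTed alongside the upload. In the browser, the token would come from
// $("input[name='csrfmiddlewaretoken']").val().
function withCsrfToken(fields, token) {
  var out = {};
  for (var key in fields) {
    if (Object.prototype.hasOwnProperty.call(fields, key)) {
      out[key] = fields[key];
    }
  }
  out.csrfmiddlewaretoken = token;
  return out;
}
```

<p>With Dropzone specifically, the same effect can be had by appending the token to the outgoing <code>FormData</code> inside a <code>sending</code> event handler.</p>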
| 0 | 2016-09-29T17:34:04Z | [
"javascript",
"jquery",
"python",
"django",
"dropzone.js"
]
|
Python print on same line after delay | 39,776,182 | <p>This is my code:</p>
<pre><code>def get(url):
print 'GET: ' + url,
r = requests.get(url)
print 'DONE'
return r
get('https://www.###.com/getfoo')
</code></pre>
<p>The output:</p>
<blockquote>
<p>GET: <a href="https://www.###.com/getfoo">https://www.###.com/getfoo</a> DONE</p>
</blockquote>
<p><br/>
<code>requests.get</code> needs about 5 seconds to get a result. <em>However</em>, the whole output displays after those 5 seconds, not the first part immediately and the <strong>DONE</strong> after 5 seconds.</p>
<p>Why?</p>
| 0 | 2016-09-29T17:07:24Z | 39,776,252 | <p>Because the console is line-buffering the output. Call <code>sys.stdout.flush()</code> in order to flush it.</p>
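<p>A minimal, self-contained illustration of the fix (written in Python 3 syntax; the URL and the <code>sleep</code> are stand-ins for the slow request):</p>

```python
import sys
import time

def get(url):
    # write the partial line and force it out before the slow call
    sys.stdout.write('GET: ' + url + ' ')
    sys.stdout.flush()
    time.sleep(0.1)  # stands in for the slow requests.get(url)
    print('DONE')

get('https://example.com/')
```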
| 2 | 2016-09-29T17:10:36Z | [
"python"
]
|
Python print on same line after delay | 39,776,182 | <p>This is my code:</p>
<pre><code>def get(url):
print 'GET: ' + url,
r = requests.get(url)
print 'DONE'
return r
get('https://www.###.com/getfoo')
</code></pre>
<p>The output:</p>
<blockquote>
<p>GET: <a href="https://www.###.com/getfoo">https://www.###.com/getfoo</a> DONE</p>
</blockquote>
<p><br/>
<code>requests.get</code> needs about 5 seconds to get a result. <em>However</em>, the whole output displays after those 5 seconds, not the first part immediately and the <strong>DONE</strong> after 5 seconds.</p>
<p>Why?</p>
 | 0 | 2016-09-29T17:07:24Z | 39,776,326 | <p>It's because <code>print</code> does not flush to stdout.</p>
<p>You can do it like this </p>
<pre><code>import requests
from sys import stdout
def get(url):
    stdout.write('GET: ' + url + "\n")
    stdout.flush()  # push the buffered text out to stdout immediately
    r = requests.get(url)
    print("DONE")
    return r
get("http://programmersforum.ru/")
</code></pre>
| 1 | 2016-09-29T17:14:53Z | [
"python"
]
|
Google Sheets API - Formatting inserted values | 39,776,188 | <p>Through this code I've updated a bunch of rows in a Google Spreadsheet.
The request goes well and returns the <code>updatedRange</code> below.</p>
<pre><code>result = service.spreadsheets().values().append(
spreadsheetId=spreadsheetId,
range=rangeName,
valueInputOption="RAW",
insertDataOption="INSERT_ROWS",
body=body
).execute()
print(result)
print("Range updated")
updateRange = result['updates']['updatedRange']
</code></pre>
<p>Now I would like to make a <code>batchUpdate</code> request to set the formatting or set a protected range, but those APIs require a range specified as <code>startRowIndex</code>, <code>endRowIndex</code> and so on.</p>
<p>How could I retrieve the rows index from the <code>updatedRange</code>?</p>
| 0 | 2016-09-29T17:07:29Z | 39,838,566 | <p>Waiting for a native or better answer, I'll post a function I've created to translate a <code>namedRange</code> into a <code>gridRange</code>.
The function is far from perfect and does not translate the sheet name to a sheet id (I left that task to another specific function), but it accepts named ranges in the following forms:</p>
<ul>
<li><code>sheet!A:B</code></li>
<li><code>sheet!A1:B</code></li>
<li><code>sheet!A:B5</code></li>
<li><code>sheet!A1:B5</code></li>
</ul>
<p>Here is the code</p>
<pre><code>import re
def namedRange2Grid(self, rangeName):
ascii_uppercase = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
match = re.match(".*?\!([A-Z0-9]+)\:([A-Z0-9]+)", rangeName)
if match:
start = match.group(1)
end = match.group(2)
matchStart = re.match("([A-Z]{1,})([1-9]+){0,}", start)
matchEnd = re.match("([A-Z]{1,})([1-9]+){0,}", end)
if matchStart and matchEnd:
GridRange = {}
letterStart = matchStart.group(1)
letterEnd = matchEnd.group(1)
if matchStart.group(2):
numberStart = int(matchStart.group(2))
GridRange['startRowIndex'] = numberStart - 1
if matchEnd.group(2):
numberEnd = int(matchEnd.group(2))
GridRange['endRowIndex'] = numberEnd
i = 0
for l in range(0, len(letterStart)):
i = i + (l * len(ascii_uppercase))
i = i + ascii_uppercase.index(letterStart[l])
GridRange['startColumnIndex'] = i
i = 0
for l in range(0, len(letterEnd)):
i = i + (l * len(ascii_uppercase))
i = i + ascii_uppercase.index(letterEnd[l])
GridRange['endColumnIndex'] = i + 1
return GridRange
</code></pre>
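<p>As a side note, the column-letter loop above under-counts multi-letter columns beyond the 'A?' range (for example 'BA'); the standard conversion from an A1-style column letter to the zero-based index used by <code>startColumnIndex</code>/<code>endColumnIndex</code> is bijective base-26. A minimal standalone sketch (the helper name is mine):</p>

```python
def col_to_index(col):
    """Convert an A1-style column letter ('A', 'Z', 'AA', 'BA', ...) to the
    zero-based column index expected by gridRange."""
    i = 0
    for ch in col:
        i = i * 26 + (ord(ch) - ord('A') + 1)  # bijective base-26
    return i - 1
```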
| 0 | 2016-10-03T18:57:49Z | [
"python",
"google-spreadsheet-api"
]
|
Using .replace() method on a string read from HTML file containing Unicode | 39,776,201 | <p>I want to read a .html file as raw text and replace instances of a substring that contains unicode characters with another substring. Assume that the file <code>mm03.html</code> contains only one line of text:</p>
<pre><code><span style='font-size:14.0pt'>«test»</span>
</code></pre>
<p>I would like to read <code>mm03.html</code>, parse its raw text as a string, and call replace so that the output will look like this:</p>
<pre><code><span style='font-size:14.0pt'>TEST</span>
</code></pre>
<p>The first time I tried to do this, I wrote the following code...</p>
<pre><code># -*- coding: utf-8 -*-
import codecs
htmlBase = codecs.open("mm03.html",'r')
htmlFill = htmlBase.read()
print htmlFill
htmlFill = htmlFill.replace("«test»","TEST")
print htmlFill
htmlBase.close()
</code></pre>
<p>...with the expectation that it would first print the original line listed above, and then the the second. Instead, it listed the first line twice.</p>
<p>Okay. So it's probably a Unicode decoding problem, right? Maybe, but when I modify the code according to Unicode-related advice found all over this site, problems of varying shades persist. Moreover, the desired functionality can be achieved by defining htmlBase explicitly as...</p>
<pre><code>htmlBase = """<span style='font-size:14.0pt'>«test»</span>"""
</code></pre>
<p>...which leads me to believe there's something I don't know about reading HTML files in Python. I've tried opening mm03.html in 'w' mode, but that doesn't seem to work and tends to nuke the original file. It doesn't make much sense that a string read from a read-only file should itself be read-only, but I might be wrong.</p>
<p>Following are several script/output pairs I've chewed through.</p>
<ol>
<li><p>Adding the unquoted character 'u' before the string I wish to replace</p>
<pre><code># -*- coding: utf-8 -*-
import codecs
htmlBase = codecs.open("mm03.html",'r')
htmlFill = htmlBase.read()
print htmlFill
htmlFill = htmlFill.replace(u"«test»","TEST")
print htmlFill
htmlBase.close()
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code><span style='font-size:14.0pt'>½testâ</span>
Traceback (most recent call last):
File "test2.py", line 6, in <module>
htmlFill = htmlFill.replace(u"â¬Â½testâ¬â","TEST")
UnicodeDecodeError: 'ascii' codec can't decode byte 0xab in position 31: ordinal not in range(128)
</code></pre></li>
<li><p>Applying .decode('utf-8') to the string passed from .read()</p>
<pre><code># -*- coding: utf-8 -*-
import codecs
htmlBase = codecs.open("mm03.html",'r')
htmlFill = htmlBase.read().decode('utf-8')
print htmlFill
htmlFill = htmlFill.replace(u"«test»","TEST")
print htmlFill
htmlBase.close()
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "test2.py", line 4, in <module>
htmlFill = htmlBase.read().decode('utf-8')
File "C:\Python27\lib\encodings\utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xab in position 31: invalid start byte
</code></pre></li>
<li><p>Applying .encode('utf-8') to the string passed from .read()</p>
<pre><code># -*- coding: utf-8 -*-
import codecs
htmlBase = codecs.open("mm03.html",'r')
htmlFill = htmlBase.read().encode('utf-8')
print htmlFill
htmlFill = htmlFill.replace(u"«test»","TEST")
print htmlFill
htmlBase.close()
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "test2.py", line 4, in <module>
htmlFill = htmlBase.read().encode('utf-8')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xab in position 31: ordinal not in range(128)
</code></pre></li>
<li><p>Applying .decode('utf-8') to the string passed from .read(), without the 'u' suffix on the target substring</p>
<pre><code># -*- coding: utf-8 -*-
import codecs
htmlBase = codecs.open("mm03.html",'r')
htmlFill = htmlBase.read().decode('utf-8')
print htmlFill
htmlFill = htmlFill.replace("«test»","TEST")
print htmlFill
htmlBase.close()
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "test2.py", line 4, in <module>
htmlFill = htmlBase.read().decode('utf-8')
File "C:\Python27\lib\encodings\utf_8.py", line 16, in decode
return codecs.utf_8_decode(input, errors, True)
UnicodeDecodeError: 'utf8' codec can't decode byte 0xab in position 31: invalid start byte
</code></pre></li>
<li><p>Applying .encode('utf-8') to the string passed from .read(), without the 'u' suffix on the target substring</p>
<pre><code># -*- coding: utf-8 -*-
import codecs
htmlBase = codecs.open("mm03.html",'r')
htmlFill = htmlBase.read().encode('utf-8')
print htmlFill
htmlFill = htmlFill.replace("«test»","TEST")
print htmlFill
htmlBase.close()
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "test2.py", line 4, in <module>
htmlFill = htmlBase.read().encode('utf-8')
UnicodeDecodeError: 'ascii' codec can't decode byte 0xab in position 31: ordinal not in range(128)
</code></pre></li>
</ol>
| 0 | 2016-09-29T17:07:51Z | 39,777,256 | <p>You need to decode the string that you want to replace before passing it to <code>str.replace()</code>. This works for me:</p>
<pre><code># -*- coding: utf-8 -*-
import codecs
htmlBase = codecs.open("mm03.html",'r')
htmlFill = htmlBase.read()
htmlFill = codecs.decode(htmlFill,'utf-8')
substr = codecs.decode("«test»",'utf-8')
htmlFill = htmlFill.replace(substr,"TEST")
print htmlFill
htmlBase.close()
</code></pre>
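<p>A note on why the tracebacks in the question show byte <code>0xab</code> failing to decode: « is 0xAB in Latin-1/cp1252, so the file on disk is most likely not UTF-8. A minimal, self-contained sketch of decoding with that codec before replacing (the byte string below is made up to stand in for the file contents):</p>

```python
raw = b"<span style='font-size:14.0pt'>\xabtest\xbb</span>"  # Latin-1 bytes
text = raw.decode('latin-1')                    # now a proper unicode string
fixed = text.replace(u'\xabtest\xbb', u'TEST')  # u'\xab' is «, u'\xbb' is »
```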
| 0 | 2016-09-29T18:13:07Z | [
"python",
"html",
"string",
"unicode",
"io"
]
|
Setting rules with scrapy crawlspider | 39,776,236 | <p>I'm trying out the scrapy CrawlSpider subclass for the first time. I've created the following spider strongly based on the docs example at <a href="https://doc.scrapy.org/en/latest/topics/spiders.html#crawlspider-example" rel="nofollow">https://doc.scrapy.org/en/latest/topics/spiders.html#crawlspider-example</a>:</p>
<pre><code>class Test_Spider(CrawlSpider):
name = "test"
allowed_domains = ['http://www.dragonflieswellness.com']
start_urls = ['http://www.dragonflieswellness.com/wp-content/uploads/2015/09/']
rules = (
# Extract links matching 'category.php' (but not matching 'subsection.php')
# and follow links from them (since no callback means follow=True by default).
# Rule(LinkExtractor(allow=('category\.php', ), deny=('subsection\.php', ))),
# Extract links matching 'item.php' and parse them with the spider's method parse_item
Rule(LinkExtractor(allow='.jpg'), callback='parse_item'),
)
def parse_item(self, response):
self.logger.info('Hi, this is an item page! %s', response.url)
print(response.url)
</code></pre>
<p>I'm trying to get the spider to loop start at the prescibed directory and then extract all the '.jpg' links in the directory, but I see :</p>
<pre><code>2016-09-29 13:07:35 [scrapy] INFO: Spider opened
2016-09-29 13:07:35 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-09-29 13:07:35 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-09-29 13:07:36 [scrapy] DEBUG: Crawled (200) <GET http://www.dragonflieswellness.com/wp-content/uploads/2015/09/> (referer: None)
2016-09-29 13:07:36 [scrapy] INFO: Closing spider (finished)
</code></pre>
<p>How can I get this working?</p>
| 0 | 2016-09-29T17:09:28Z | 39,810,686 | <p>First of all, the purpose of using rules is to not only extract links, but, above all, follow them. If you just want to extract links (and, say, save them for later), you don't have to specify spider rules. If you, on the other hand, want to download the images, use a <a href="https://doc.scrapy.org/en/latest/topics/media-pipeline.html" rel="nofollow">pipeline</a>.</p>
<p>That said, the reason the spider does not follow the links is hidden in the implementation of LinkExtractor:</p>
<pre><code># common file extensions that are not followed if they occur in links
IGNORED_EXTENSIONS = [
# images
'mng', 'pct', 'bmp', 'gif', 'jpg', 'jpeg', 'png', 'pst', 'psp', 'tif',
'tiff', 'ai', 'drw', 'dxf', 'eps', 'ps', 'svg',
# audio
'mp3', 'wma', 'ogg', 'wav', 'ra', 'aac', 'mid', 'au', 'aiff',
# video
'3gp', 'asf', 'asx', 'avi', 'mov', 'mp4', 'mpg', 'qt', 'rm', 'swf', 'wmv',
'm4a',
# office suites
'xls', 'xlsx', 'ppt', 'pptx', 'pps', 'doc', 'docx', 'odt', 'ods', 'odg',
'odp',
# other
'css', 'pdf', 'exe', 'bin', 'rss', 'zip', 'rar',
]
</code></pre>
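<p>A small standalone sketch of the effect of that list (the helper name is mine, not Scrapy's): any extracted URL whose extension appears in it is dropped before the spider ever follows it.</p>

```python
IGNORED_EXTENSIONS = ['jpg', 'jpeg', 'png', 'gif']  # excerpt of the list above

def is_ignored(url):
    """Mimic the filtering: drop URLs ending in an ignored extension."""
    ext = url.rsplit('.', 1)[-1].lower()
    return ext in IGNORED_EXTENSIONS
```

<p>Scrapy's <code>LinkExtractor</code> also accepts a <code>deny_extensions</code> argument; passing an empty list there should bypass this filter if you really do want a rule to match <code>.jpg</code> links.</p>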
<p><strong>EDIT:</strong></p>
<p>In order to download images using ImagesPipeline in this example:</p>
<p>Add this to settings:</p>
<pre><code>ITEM_PIPELINES = {'scrapy.pipelines.images.ImagesPipeline': 1}
IMAGES_STORE = '/home/user/some_directory' # use a correct path
</code></pre>
<p>Create a new item:</p>
<pre><code>class MyImageItem(Item):
images = Field()
image_urls = Field()
</code></pre>
<p>Modify your spider (add a parse method):</p>
<pre><code> def parse(self, response):
loader = ItemLoader(item=MyImageItem(), response=response)
img_paths = response.xpath('//a[substring(@href, string-length(@href)-3)=".jpg"]/@href').extract()
loader.add_value('image_urls', [self.start_urls[0] + img_path for img_path in img_paths])
return loader.load_item()
</code></pre>
<p>The xpath searches for all hrefs that end with ".jpg" and the extract() method creates a list thereof.</p>
<p>A loader is an additional feature that simplifies creating object, but you could do without it.</p>
<p>Note that I'm no expert and there might be a better, more elegant solution. This one, however, works fine.</p>
| 1 | 2016-10-01T19:31:33Z | [
"python",
"scrapy"
]
|
How to determine function parameter type in python? | 39,776,321 | <p>For example, I have a poorly documented library. I have an object from it and I want to know the types of the arguments a certain method accepts.</p>
<p>In IPython I can run</p>
<pre><code>In [28]: tdb.getData?
Signature: tdb.getData(time, point_coords, sinterp=0, tinterp=0, data_set='isotropic1024coarse', getFunction='getVelocity', make_modulo=False)
Docstring: <no docstring>
File: ~/.local/lib/python3.5/site-packages/pyJHTDB/libJHTDB.py
Type: method
</code></pre>
<p>but it does not give me the types of the arguments. I don't know exactly what is the type of <code>point_coords</code>.</p>
 | 0 | 2016-09-29T17:14:47Z | 39,776,651 | <p>Usually, functions in Python accept arguments of any type, so you cannot simply look up what types they expect.</p>
<p>Still, the function probably does make some implicit assumptions about the received object.</p>
<p>Take this function for example:</p>
<pre><code>def is_long(x):
return len(x) > 1000
</code></pre>
<p>What type of argument <code>x</code> does this function accept? Any type, as long as it has length defined.</p>
<p>So, it can take a string, or a list, or a dict, or any custom object you create, as long as it implements <code>__len__</code>. But it won't take an integer.</p>
<pre><code>is_long('abcd') # ok
is_long([1, 2, 3, 4]) # ok
is_long(11) # not ok
</code></pre>
<p><strong>To answer the question:</strong> How can you tell what assumptions the function makes?</p>
<ul>
<li>read the documentation</li>
<li>read the doc string (try <code>help(funcname)</code>)</li>
<li>guess: Pass any argument to it and see how it fails. If it fails with <em>AttributeError: X instance has no attribute 'get_value'</em>, it expects something with <code>get_value</code>.</li>
</ul>
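<p>In addition, when you do have the function object (as with <code>tdb.getData</code> in the question), its signature can be inspected programmatically; parameter names and default values often hint at the expected types. A sketch using the standard library, with a stand-in function modelled on the question's signature:</p>

```python
import inspect

def getData(time, point_coords, sinterp=0, tinterp=0,
            data_set='isotropic1024coarse'):
    pass  # stand-in for the real, undocumented method

sig = inspect.signature(getData)
names = list(sig.parameters)
defaults = {n: p.default for n, p in sig.parameters.items()
            if p.default is not inspect.Parameter.empty}
# sinterp/tinterp default to ints and data_set to a str -- a strong hint
```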
| 1 | 2016-09-29T17:35:11Z | [
"python",
"types"
]
|
Can't get Scrapy to parse and follow 301, 302 redirects | 39,776,377 | <p>I'm trying to write a very simple website crawler to list URLs along with referrer and status codes for 200, 301, 302 and 404 http status codes.</p>
<p>Turns out that Scrapy works great and my script uses it correctly to crawl the website and can list urls with 200 and 404 status codes without problems.</p>
<p><strong>The problem is:</strong> I can't find how to have scrapy follow redirects AND parse/output them. I can get one to work but not both.</p>
<p>What I've tried so far:</p>
<ul>
<li><p>setting <code>meta={'dont_redirect':True}</code> and setting <code>REDIRECTS_ENABLED = False</code></p></li>
<li><p>adding 301, 302 to handle_httpstatus_list</p></li>
<li><p>changing settings specified in the redirect middleware doc</p></li>
<li><p>reading the redirect middleware code for insight</p></li>
<li><p>various combo of all of the above</p></li>
<li><p>other random stuff</p></li>
</ul>
<p>Here's the <a href="https://gitlab.savoirfairelinux.com/fpotvin/easy-scrapy/tree/master" rel="nofollow">public repo</a> if you want to take a look at the code.</p>
| 0 | 2016-09-29T17:18:03Z | 39,788,550 | <p>If you want to parse 301 and 302 responses, and follow them at the same time, ask for 301 and 302 to be processed by your callback and mimick the behavior of RedirectMiddleware.</p>
<h2>Test 1 (not working)</h2>
<p>Let's illustrate with a simple spider to start with (not working as you intend yet):</p>
<pre><code>import scrapy
class HandleSpider(scrapy.Spider):
name = "handle"
start_urls = (
'https://httpbin.org/get',
'https://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F',
)
def parse(self, response):
self.logger.info("got response for %r" % response.url)
</code></pre>
<p>Right now, the spider asks for 2 pages, and the 2nd one should redirect to <a href="http://www.example.com" rel="nofollow">http://www.example.com</a></p>
<pre><code>$ scrapy runspider test.py
2016-09-30 11:28:17 [scrapy] INFO: Scrapy 1.1.3 started (bot: scrapybot)
2016-09-30 11:28:18 [scrapy] DEBUG: Crawled (200) <GET https://httpbin.org/get> (referer: None)
2016-09-30 11:28:18 [scrapy] DEBUG: Redirecting (302) to <GET http://example.com/> from <GET https://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F>
2016-09-30 11:28:18 [handle] INFO: got response for 'https://httpbin.org/get'
2016-09-30 11:28:18 [scrapy] DEBUG: Crawled (200) <GET http://example.com/> (referer: None)
2016-09-30 11:28:18 [handle] INFO: got response for 'http://example.com/'
2016-09-30 11:28:18 [scrapy] INFO: Spider closed (finished)
</code></pre>
<p>The 302 is handled by <code>RedirectMiddleware</code> automatically and it does not get passed to your callback.</p>
<h2>Test 2 (still not quite right)</h2>
<p>Let's configure the spider to handle 301 and 302s in the callback, <a href="https://doc.scrapy.org/en/latest/topics/downloader-middleware.html?#module-scrapy.downloadermiddlewares.redirect" rel="nofollow">using <code>handle_httpstatus_list</code></a>:</p>
<pre><code>import scrapy
class HandleSpider(scrapy.Spider):
name = "handle"
start_urls = (
'https://httpbin.org/get',
'https://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F',
)
handle_httpstatus_list = [301, 302]
def parse(self, response):
self.logger.info("got response %d for %r" % (response.status, response.url))
</code></pre>
<p>Let's run it:</p>
<pre><code>$ scrapy runspider test.py
2016-09-30 11:33:32 [scrapy] INFO: Scrapy 1.1.3 started (bot: scrapybot)
2016-09-30 11:33:32 [scrapy] DEBUG: Crawled (200) <GET https://httpbin.org/get> (referer: None)
2016-09-30 11:33:32 [scrapy] DEBUG: Crawled (302) <GET https://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F> (referer: None)
2016-09-30 11:33:33 [handle] INFO: got response 200 for 'https://httpbin.org/get'
2016-09-30 11:33:33 [handle] INFO: got response 302 for 'https://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F'
2016-09-30 11:33:33 [scrapy] INFO: Spider closed (finished)
</code></pre>
<p>Here, we're missing the redirection.</p>
<h2>Test 3 (working)</h2>
<p>Do the <a href="https://github.com/scrapy/scrapy/blob/ebef6d7c6dd8922210db8a4a44f48fe27ee0cd16/scrapy/downloadermiddlewares/redirect.py#L54" rel="nofollow">same as RedirectMiddleware</a> but in the spider callback:</p>
<pre><code>from six.moves.urllib.parse import urljoin
import scrapy
from scrapy.utils.python import to_native_str
class HandleSpider(scrapy.Spider):
name = "handle"
start_urls = (
'https://httpbin.org/get',
'https://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F',
)
handle_httpstatus_list = [301, 302]
def parse(self, response):
self.logger.info("got response %d for %r" % (response.status, response.url))
# do something with the response here...
# handle redirection
# this is copied/adapted from RedirectMiddleware
if response.status >= 300 and response.status < 400:
# HTTP header is ascii or latin1, redirected url will be percent-encoded utf-8
location = to_native_str(response.headers['location'].decode('latin1'))
# get the original request
request = response.request
# and the URL we got redirected to
redirected_url = urljoin(request.url, location)
if response.status in (301, 307) or request.method == 'HEAD':
redirected = request.replace(url=redirected_url)
yield redirected
else:
redirected = request.replace(url=redirected_url, method='GET', body='')
redirected.headers.pop('Content-Type', None)
redirected.headers.pop('Content-Length', None)
yield redirected
</code></pre>
<p>And run the spider again:</p>
<pre><code>$ scrapy runspider test.py
2016-09-30 11:45:20 [scrapy] INFO: Scrapy 1.1.3 started (bot: scrapybot)
2016-09-30 11:45:21 [scrapy] DEBUG: Crawled (302) <GET https://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F> (referer: None)
2016-09-30 11:45:21 [scrapy] DEBUG: Crawled (200) <GET https://httpbin.org/get> (referer: None)
2016-09-30 11:45:21 [handle] INFO: got response 302 for 'https://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F'
2016-09-30 11:45:21 [handle] INFO: got response 200 for 'https://httpbin.org/get'
2016-09-30 11:45:21 [scrapy] DEBUG: Crawled (200) <GET http://example.com/> (referer: https://httpbin.org/redirect-to?url=http%3A%2F%2Fexample.com%2F)
2016-09-30 11:45:21 [handle] INFO: got response 200 for 'http://example.com/'
2016-09-30 11:45:21 [scrapy] INFO: Spider closed (finished)
</code></pre>
<p>We got redirected to <a href="http://www.example.com" rel="nofollow">http://www.example.com</a> and we also got the response through our callback.</p>
| 0 | 2016-09-30T09:48:40Z | [
"python",
"scrapy"
]
|
SPARQL Parser for INSERT and DELETE queries | 39,776,519 | <p>I need to test whether certain <code>SPARQL</code> queries are indeed <code>INSERT</code> or <code>DELETE</code> statements. Therefore, I have been using different parsers (mainly <code>fyzz</code> and <code>SPARQLWrapper</code>) as follows:</p>
<pre><code>try:
sparql = parse(Query_String)
except Exception as e:
raise ContentsUnacceptable(
" The query attribute does not conform to the SPARQL query syntax !!")
</code></pre>
<p>and :</p>
<pre><code>try:
sparql = SPARQLWrapper("local/graph/file")
sparql.setQuery(Query_String)
results = sparql.query().convert()
except Exception as e:
raise ContentsUnacceptable(
" The query does not conform to the SPARQL query syntax !!")
</code></pre>
<p>Both of my code snippets work fine with <code>SELECT</code> statements, but as soon as I try an <code>INSERT</code> or <code>DELETE</code> statement, I get the exception raised in my code (<code>"The query does not conform to the SPARQL query syntax !!"</code>), which may mean that the INSERT and DELETE statements are not recognized by the parser. According to my research, these are the two main parsers for SPARQL queries. Is there a way to make those statements recognizable to the fyzz and SPARQLWrapper parsers?</p>
| 0 | 2016-09-29T17:27:08Z | 39,908,261 | <p>SPARQLWrapper is not actually intended as a SPARQL parser. All it does is act as a client for a SPARQL endpoint service - AFAICT the actual parsing of the query string is done by the endpoint itself.</p>
<p>In your specific example, you're pointing SPARQLWrapper to a local file, which simply won't work, because a local file is not a SPARQL endpoint. In the attempt where you <em>are</em> pointing to a SPARQL endpoint, that endpoint is not accepting SPARQL updates (because, understandably, they don't allow remote clients to update their data). </p>
<p>If you wish to stick with SPARQLWrapper, a (<em>very</em> clunky) solution would be to set up your own SPARQL endpoint service, configure it to allow remote updates, and then use that endpoint to validate your queries/updates (there's plenty of free SPARQL endpoint implementations available including RDF4J Server and Jena Fuseki, to name but two popular ones).</p>
<p>But as said, this is a clunky solution: if your goal is simply to parse the string to validate its syntax, you should use an actual SPARQL parser in Python. <a href="http://rdflib3.readthedocs.io/en/latest/apidocs/rdflib.plugins.sparql.html#module-rdflib.plugins.sparql.parser" rel="nofollow">RDFLib's SPARQL Parser</a> is probably a good choice here. I'm no Python expert but I imagine something along these lines:</p>
<pre><code>from rdflib.plugins.sparql import parser
parser.parseQuery("SELECT ...")
</code></pre>
<p>and </p>
<pre><code>parser.parseUpdate("INSERT ...")
</code></pre>
<p>can help you out.</p>
| 0 | 2016-10-07T02:06:09Z | [
"python",
"parsing",
"sparql",
"rdf"
]
|
Python WMI Network Adapter Configuration parameter issue [Windows 8.1] [Python 2.7] [WMI 1.4.9] | 39,776,521 | <p>When I try to do this</p>
<pre><code>SetDynamicDNSRegistration(True)
</code></pre>
<p>It returns '68', which I have looked up on the <a href="https://msdn.microsoft.com/en-us/library/aa393298(v=vs.85).aspx" rel="nofollow">MSDN WMI page</a>; it means "Invalid Input Parameter".</p>
<p><strong>Full Script</strong></p>
<pre><code>import wmi
nic_configs = wmi.WMI('').Win32_NetworkAdapterConfiguration(IPEnabled=True)
# First network adaptor
nic = nic_configs[0]
# IP address, subnetmask and gateway values should be unicode objects
ip = u'192.168.0.151'
subnetmask = u'255.255.255.0'
gateway = u'192.168.0.1'
dns = u'192.168.0.1'
# Set IP address, subnetmask and default gateway
# Note: EnableStatic() and SetGateways() methods require *lists* of values to be passed
a = nic.EnableStatic(IPAddress=[ip],SubnetMask=[subnetmask])
b = nic.SetGateways(DefaultIPGateway=[gateway])
c = nic.SetDNSServerSearchOrder([dns])
d = nic.SetDynamicDNSRegistration(True)
print(a)
print(b)
print(c)
print(d)
</code></pre>
<p>What is wrong? I'm sure "True" is the right Python syntax for the boolean TRUE... I don't even know anymore...</p>
| 1 | 2016-09-29T17:27:11Z | 39,797,654 | <p>Rather than a Python boolean, pass the corresponding boolean integer as a keyword argument. So instead of</p>
<pre><code>nic.SetDynamicDNSRegistration(True)
</code></pre>
<p>use</p>
<pre><code>nic.SetDynamicDNSRegistration(FullDNSRegistrationEnabled=1)
</code></pre>
| 0 | 2016-09-30T18:15:52Z | [
"python",
"wmi"
]
|
How to go up one or more levels using os.chdir with '..'s as arguments | 39,776,616 | <p>How do I change the path in Python using '..' to go up one or more levels as an argument to os.chdir()? For example, if I am in /home/usr/one and I want to go to the 'home' directory, a '../..' argument to chdir should do it. Do I wrap the argument in some other function? </p>
| 0 | 2016-09-29T17:33:08Z | 39,776,734 | <p>As you say in your question, if you are in the directory <code>/home/usr/one</code> <code>os.chdir('../../')</code> will bring you to <code>/home/</code>. </p>
<p>You can confirm this by calling:</p>
<pre><code>os.getcwd()
</code></pre>
<p>Before and after changing directories. This function will show you the current working directory. Also, there is no need to wrap the argument to <code>chdir()</code> in another function.</p>
<h1>Edit:</h1>
<p>Note that <code>os.chdir()</code> in a script will not change the directory you are in when you run the script from the terminal. In other words, if you are in <code>/home/usr/one</code> and run a script with <code>python myscript.py</code>, any directory changes made with <code>os.chdir()</code> within that script will not be reflected when the script finishes; you will still be in <code>/home/usr/one</code>.</p>
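<p>A small self-contained sketch of both points, using a throwaway directory tree (so the paths are placeholders):</p>

```python
import os
import tempfile

base = tempfile.mkdtemp()                  # stands in for /home
nested = os.path.join(base, "usr", "one")  # stands in for /home/usr/one
os.makedirs(nested)

start = os.getcwd()
os.chdir(nested)                           # now in .../usr/one
os.chdir(os.path.join("..", ".."))         # '../..' climbs two levels
reached = os.getcwd()
print(os.path.realpath(reached) == os.path.realpath(base))  # → True

os.chdir(start)  # the change only lasted for this process anyway
```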
| 0 | 2016-09-29T17:39:40Z | [
"python",
"python-2.7",
"python-3.x"
]
|
How can I write my inventory program so that it reads and writes to a text file? | 39,776,705 | <p>Basically, I'm trying to write a fairly simple, user-friendly program for the repair shop I'm working at. We've been trying to figure out a good way to organize and track LCD usage, so I've been working on this thing as a possible option.</p>
<p>I need it to do a few other things, but my main concern right now is getting the dictionary to read from a text file, as well as write to it. For example, if someone updates the stock on an iPhone 4s, we should be able to close the program and have the new quantity show up in "check stock". As it stands now, as soon as you close the program, everything returns to default.</p>
<p>I've placed the code below in Gist - what can I do for this one? I'm still learning python and this is the first "real" thing I've really started working on, so there's still some concepts I'm having trouble with.</p>
<p><a href="https://gist.github.com/anonymous/8815a95b2431dbfcea41fdfa05381893" rel="nofollow">https://gist.github.com/anonymous/8815a95b2431dbfcea41fdfa05381893</a></p>
| -1 | 2016-09-29T17:37:59Z | 39,777,226 | <p>To put you on the right track, here is a minimal example that reads a JSON text file in the same directory, modifies it, and then writes it back.</p>
<p>First, here is the JSON file:</p>
<p>data.txt</p>
<pre><code>{"milk": 5, "orange juice": 3, "cookies": 1}
</code></pre>
<p>Here is a program that reads this json file using the <a href="https://docs.python.org/2/library/json.html" rel="nofollow"><code>json</code></a> library to an <code>inventory</code> Python dictionary, increments "milk," and saves it back:</p>
<pre><code>import json
with open('data.txt', 'r') as f:
inventory = json.loads(f.read())
inventory['milk'] += 1
with open('data.txt', 'w') as f:
    f.write(json.dumps(inventory))
</code></pre>
<p>You can add all the logic you want between the "reading" and "writing" steps.</p>
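<p>The same cycle against a throwaway file, to show that the update really persists between "runs" of the program (the path here is a placeholder):</p>

```python
import json
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "data.txt")
with open(path, "w") as f:
    f.write(json.dumps({"milk": 5, "orange juice": 3, "cookies": 1}))

# --- one "run" of the program: read, modify, write back ---
with open(path, "r") as f:
    inventory = json.loads(f.read())
inventory["milk"] += 1
with open(path, "w") as f:
    f.write(json.dumps(inventory))

# --- a later "run" sees the updated stock ---
with open(path, "r") as f:
    print(json.loads(f.read())["milk"])  # → 6
```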
| 0 | 2016-09-29T18:10:51Z | [
"python",
"dictionary",
"inventory"
]
|
Python remove dictionaries from list | 39,776,786 | <p>Python listed dictionaries:</p>
<pre><code>_list_ = [{'key1': 'value1', 'key2': 'value2'}, {'key1': 'value3', 'key2': 'value4'}]
example_search_str1 = 'e1' # for value1 of key1
example_search_str2 = 'e3' # for value3 of key1
</code></pre>
<p>I want to delete the listed dictionaries whose values contain any of multiple example search strings. How can I achieve this? Existing answers didn't help much. Python newbie here.</p>
| 0 | 2016-09-29T17:42:50Z | 39,776,822 | <p>Your question is a little unclear, but as I understand it, you want to remove dictionaries from a list if their 'key1' value is one of some number of strings.</p>
<pre><code>bad_strings = ['e1', 'e3']
new_list = [d for d in old_list if d['key1'] not in bad_strings]
</code></pre>
<p>EDIT:</p>
<p>Oh, I get it. I was close. You want to see if the 'key1' value contains the forbidden strings. Also doable.</p>
<pre><code>new_list = [d for d in old_list if not any(bad in d['key1'] for bad in bad_strings)]
</code></pre>
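<p>Running both variants on the question's own data makes the difference concrete — exact matching keeps everything here, while substring matching drops both dicts because 'value1' contains 'e1' and 'value3' contains 'e3':</p>

```python
old_list = [{'key1': 'value1', 'key2': 'value2'},
            {'key1': 'value3', 'key2': 'value4'}]
bad_strings = ['e1', 'e3']

exact = [d for d in old_list if d['key1'] not in bad_strings]
substr = [d for d in old_list
          if not any(bad in d['key1'] for bad in bad_strings)]

print(len(exact))   # → 2 ('value1' is not literally 'e1' or 'e3')
print(substr)       # → [] (every 'key1' value contains a bad string)
```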
| 0 | 2016-09-29T17:45:15Z | [
"python",
"list",
"dictionary"
]
|
Python remove dictionaries from list | 39,776,786 | <p>Python listed dictionaries:</p>
<pre><code>_list_ = [{'key1': 'value1', 'key2': 'value2'}, {'key1': 'value3', 'key2': 'value4'}]
example_search_str1 = 'e1' # for value1 of key1
example_search_str2 = 'e3' # for value3 of key1
</code></pre>
<p>I want to delete the listed dictionaries whose values contain any of multiple example search strings. How can I achieve this? Existing answers didn't help much. Python newbie here.</p>
| 0 | 2016-09-29T17:42:50Z | 39,777,082 | <p>Also, for matching (key, unwanted_value) pairs I would maybe try something like:</p>
<pre><code>list_of_dicts = [{'key1': 'value1', 'key2': 'value2'}, {'key1': 'value3', 'key2': 'value4'}]
bad_keys_vals = [('key1', 'value1'), ('key2', 'value2')]
def filter_dict_list(list_of_dicts, bad_keys_vals):
return list(filter(lambda d: any((d[key] != bad_val for key, bad_val in bad_keys_vals)), list_of_dicts))
print(filter_dict_list(list_of_dicts, bad_keys_vals))
>> [{'key2': 'value4', 'key1': 'value3'}]
</code></pre>
<p>But yes, this results in creating a new list, so you would probably need to overwrite the old one.</p>
| 0 | 2016-09-29T18:01:06Z | [
"python",
"list",
"dictionary"
]
|
AttributeError: '__proxy__' object has no attribute 'regex' | 39,776,854 | <p>My urls.py looks like this; can anyone explain where the error (AttributeError: '<strong>proxy</strong>' object has no attribute 'regex') is coming from? The error message doesn't tell me where the error originates, so I'm really confused. Thanks! </p>
<pre><code>from django.conf import settings
from django.conf.urls import patterns, include, url
from django.conf.urls.static import static
from django.core.urlresolvers import reverse_lazy
#from django.views.generic.simple import direct_to_template
from django.views.generic import TemplateView
from django.contrib.staticfiles.urls import staticfiles_urlpatterns
from django.contrib import admin
admin.autodiscover()
urlpatterns = patterns('',
url(r"^$", TemplateView.as_view(template_name = "homepage.html")),
reverse_lazy("homepage.html"),
url(r'^grappelli/', include('grappelli.urls')), # grappelli URLS
url(r"^admin/", include(admin.site.urls)),
url(r"^account/", include("account.urls")),
# url(r"^search/", include("haystack.urls")),
# WIBO URLs
url(r'^cards/', include('cards.urls')),
url(r'^contacts/', include('contacts.urls')),
url(r'^invoice/', include('invoice.urls')),
url(r'^employee/',include('employee.urls')),
url(r'^sapub/request/$', 'wibo.views.sapub_request', name='jobrequeseturl'),
url(r'^wibo/logout-all-users/$', 'wibo.views.logout_all_users', name='logoutallurl'),
url(r'^wibo/cardmigrationextra00091/$', 'wibo.views.cards_migration_extras_0009_1', name='cardsmigrationextra0009url'),
url(r'^wibo/cardmigrationextra00092/$', 'wibo.views.cards_migration_extras_0009_2', name='cardsmigrationextra0009url'),
url(r'^wibo/cardmigrationextra00093/$', 'wibo.views.cards_migration_extras_0009_3', name='cardsmigrationextra0009url'),
url(r"^reports/", include('reports.urls')),
#url(r"^printsmart/$",direct_to_template,{"template":"printsmart_request.html"}, name="printsmarturl"),
url(r"^printsmart/$", TemplateView.as_view(template_name="printsmart_request.html")),
url(r'^select2/', include('django_select2.urls')),
)
urlpatterns += staticfiles_urlpatterns()
#reverse(urlpatterns)
</code></pre>
<hr>
<pre><code>Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/wsgiref/handlers.py", line 85, in run
self.result = application(self.environ, self.start_response)
File "/Library/Python/2.7/site-packages/django/core/handlers/wsgi.py", line 189, in __call__
response = self.get_response(request)
File "/Library/Python/2.7/site-packages/django/core/handlers/base.py", line 218, in get_response
response = self.handle_uncaught_exception(request, resolver, sys.exc_info())
File "/Library/Python/2.7/site-packages/django/core/handlers/base.py", line 268, in handle_uncaught_exception
return callback(request, **param_dict)
File "/Library/Python/2.7/site-packages/django/utils/decorators.py", line 110, in _wrapped_view
response = view_func(request, *args, **kwargs)
File "/Library/Python/2.7/site-packages/django/views/defaults.py", line 45, in server_error
return http.HttpResponseServerError(template.render())
File "/Library/Python/2.7/site-packages/django/template/backends/django.py", line 74, in render
return self.template.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py", line 209, in render
return self._render(context)
File "/Library/Python/2.7/site-packages/django/test/utils.py", line 96, in instrumented_test_render
return self.nodelist.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/Library/Python/2.7/site-packages/django/template/debug.py", line 79, in render_node
return node.render(context)
File "/Library/Python/2.7/site-packages/django/template/loader_tags.py", line 135, in render
return compiled_parent._render(context)
File "/Library/Python/2.7/site-packages/django/test/utils.py", line 96, in instrumented_test_render
return self.nodelist.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/Library/Python/2.7/site-packages/django/template/debug.py", line 79, in render_node
return node.render(context)
File "/Library/Python/2.7/site-packages/django/template/loader_tags.py", line 135, in render
return compiled_parent._render(context)
File "/Library/Python/2.7/site-packages/django/test/utils.py", line 96, in instrumented_test_render
return self.nodelist.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/Library/Python/2.7/site-packages/django/template/debug.py", line 79, in render_node
return node.render(context)
File "/Library/Python/2.7/site-packages/django/template/loader_tags.py", line 65, in render
result = block.nodelist.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/Library/Python/2.7/site-packages/django/template/debug.py", line 79, in render_node
return node.render(context)
File "/Library/Python/2.7/site-packages/django/template/loader_tags.py", line 65, in render
result = block.nodelist.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/Library/Python/2.7/site-packages/django/template/debug.py", line 79, in render_node
return node.render(context)
File "/Library/Python/2.7/site-packages/django/template/loader_tags.py", line 65, in render
result = block.nodelist.render(context)
File "/Library/Python/2.7/site-packages/django/template/base.py", line 903, in render
bit = self.render_node(node, context)
File "/Library/Python/2.7/site-packages/django/template/debug.py", line 79, in render_node
return node.render(context)
File "/Library/Python/2.7/site-packages/django/template/defaulttags.py", line 493, in render
url = reverse(view_name, args=args, kwargs=kwargs, current_app=current_app)
File "/Library/Python/2.7/site-packages/django/core/urlresolvers.py", line 579, in reverse
return force_text(iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs)))
File "/Library/Python/2.7/site-packages/django/core/urlresolvers.py", line 433, in _reverse_with_prefix
self._populate()
File "/Library/Python/2.7/site-packages/django/core/urlresolvers.py", line 298, in _populate
p_pattern = pattern.regex.pattern
AttributeError: '__proxy__' object has no attribute 'regex'
[29/Sep/2016 13:23:13]"GET / HTTP/1.1" 500 59
</code></pre>
| 0 | 2016-09-29T17:47:34Z | 39,777,086 | <p>You have a stray <code>reverse_lazy()</code> in your urlpatterns:</p>
<pre><code>urlpatterns = patterns('',
url(r"^$", TemplateView.as_view(template_name = "homepage.html")),
reverse_lazy("homepage.html"),
</code></pre>
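<p>For reference, a sketch of how the head of the file should look once that line is deleted (everything else from the question stays the same — this fragment is not runnable on its own):</p>

```python
urlpatterns = patterns('',
    url(r"^$", TemplateView.as_view(template_name="homepage.html")),
    # the bare reverse_lazy("homepage.html") entry is removed --
    # only url(...) entries belong in urlpatterns
    url(r'^grappelli/', include('grappelli.urls')),
    # ... rest unchanged ...
)
```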
| 1 | 2016-09-29T18:01:10Z | [
"python",
"regex",
"django"
]
|
Make the Python Turtle screen self adjust to show what's been drawn? | 39,776,865 | <p>I've been trying to find an answer to this all over the internet, and just can't find anything that works. </p>
<p>Essentially, I'm building my very first program which is a <code>digital spirograph</code>. One of the features of this is that you can have the turtle randomly draw a shape using fairly chaotic variables. </p>
<p>My issue is that when I do this, the turtle almost always draws out of the borders of the turtle window, therefore not allowing the user to see the full completed drawing at the end.</p>
<ul>
<li>Is there an easy way to have the screen resize to the bounds of what has been drawn after the turtle finishes? </li>
</ul>
<p>I'm not sure if I should list my code as it's a few hundred lines long at this point, but if that is needed let me know. </p>
<p>Edit: Here is my code as it currently is written -</p>
<pre><code>import turtle
import random
print("Random Mode? y/n")
crazy = raw_input()
crazy = str(crazy)
if crazy == 'y' :
print("Would you like your random selection to be chaotic? y/n")
chaos = raw_input()
chaos = str(chaos)
if chaos == 'y' :
passes_r = random.randint(4,15)
angle_r1 = random.randint(1,180)
angle_r2 = random.randint(1,180)
angle_r3 = random.randint(1,180)
angle_r4 = random.randint(1,180)
angle_r5 = random.randint(1,180)
angle_r6 = random.randint(1,180)
angle_r7 = random.randint(1,180)
angle_r8 = random.randint(1,180)
chaos_r1 = random.randint(0,360)
chaos_r2 = random.randint(0,360)
chaos_r3 = random.randint(0,360)
chaos_r4 = random.randint(0,360)
chaos_r5 = random.randint(0,360)
chaos_r6 = random.randint(0,360)
chaos_r7 = random.randint(0,360)
chaos_r8 = random.randint(0,360)
shape_r = ['arrow', 'turtle', 'circle', 'square', 'triangle', 'classic']
turtle.shape(random.choice(shape_r))
turtle.speed(0)
print("Calculating shape based on random input: ") + str(angle_r1) + ", " + str(angle_r2) + ", " + str(angle_r3) + ", " + str(angle_r4) + ", " + str(angle_r5) + ", " + str(angle_r6) + ", " + str(angle_r7) + ", " + str(angle_r8)
for _ in range(passes_r):
turtle.color('red')
turtle.left(angle_r1)
for _ in range(4):
turtle.forward(100)
turtle.left(chaos_r1)
turtle.color('blue')
turtle.left(angle_r2)
for _ in range(4):
turtle.forward(100)
turtle.left(chaos_r2)
turtle.color('green')
turtle.left(angle_r3)
for _ in range(4):
turtle.forward(100)
turtle.left(chaos_r3)
turtle.color('yellow')
turtle.left(angle_r4)
for _ in range(4):
turtle.forward(100)
turtle.left(chaos_r4)
turtle.left(angle_r5)
for _ in range(4):
turtle.forward(100)
turtle.left(chaos_r5)
turtle.color('green')
turtle.left(angle_r6)
for _ in range(4):
turtle.forward(100)
turtle.left(chaos_r6)
turtle.color('blue')
turtle.left(angle_r7)
for _ in range(4):
turtle.forward(100)
turtle.left(chaos_r7)
turtle.color('red')
turtle.left(angle_r8)
for _ in range(4):
turtle.forward(100)
turtle.left(chaos_r8)
elif chaos == 'n' :
passes_r = random.randint(4,15)
angle_r1 = random.randint(1,180)
angle_r2 = random.randint(1,180)
angle_r3 = random.randint(1,180)
angle_r4 = random.randint(1,180)
angle_r5 = random.randint(1,180)
angle_r6 = random.randint(1,180)
angle_r7 = random.randint(1,180)
angle_r8 = random.randint(1,180)
shape_r = ['arrow', 'turtle', 'circle', 'square', 'triangle', 'classic']
turtle.shape(random.choice(shape_r))
turtle.speed(0)
print("Calculating shape based on random input: ") + str(angle_r1) + ", " + str(angle_r2) + ", " + str(angle_r3) + ", " + str(angle_r4) + ", " + str(angle_r5) + ", " + str(angle_r6) + ", " + str(angle_r7) + ", " + str(angle_r8)
for _ in range(passes_r):
turtle.color('red')
turtle.left(angle_r1)
for _ in range(4):
turtle.forward(100)
turtle.left(90)
turtle.color('blue')
turtle.left(angle_r2)
for _ in range(4):
turtle.forward(100)
turtle.left(90)
turtle.color('green')
turtle.left(angle_r3)
for _ in range(4):
turtle.forward(100)
turtle.left(90)
turtle.color('yellow')
turtle.left(angle_r4)
for _ in range(4):
turtle.forward(100)
turtle.left(90)
turtle.left(angle_r5)
for _ in range(4):
turtle.forward(100)
turtle.left(90)
turtle.color('green')
turtle.left(angle_r6)
for _ in range(4):
turtle.forward(100)
turtle.left(90)
turtle.color('blue')
turtle.left(angle_r7)
for _ in range(4):
turtle.forward(100)
turtle.left(90)
turtle.color('red')
turtle.left(angle_r8)
for _ in range(4):
turtle.forward(100)
turtle.left(90)
elif crazy == 'n' :
print("Enter number of repeats -")
passes = raw_input() # passes is called in line 23 for range
passes = int(passes)
print("Enter Shape: Arrow, Turtle, Circle, Square, Triangle, or Classic")
shape = raw_input().lower() # Selects the shape of the turtle
print("Enter Speed: (# 0 through 10: 0 is the fastest)")
user_speed = raw_input()
user_speed = int(user_speed)
print("Enter Angle 1 (# between 1 and 180)")
angle_1 = raw_input()
angle_1 = int(angle_1)
print("Enter Angle 2 (# between 1 and 180)")
angle_2 = raw_input()
angle_2 = int(angle_2)
print("Enter Angle 3 (# between 1 and 180)")
angle_3 = raw_input()
angle_3 = int(angle_3)
print("Enter Angle 4 (# between 1 and 180)")
angle_4 = raw_input()
angle_4 = int(angle_4)
print("Enter Angle 5 (# between 1 and 180)")
angle_5 = raw_input()
angle_5 = int(angle_5)
print("Enter Angle 6 (# between 1 and 180)")
angle_6 = raw_input()
angle_6 = int(angle_6)
print("Enter Angle 7 (# between 1 and 180)")
angle_7 = raw_input()
angle_7 = int(angle_7)
print("Enter Angle 8 (# between 1 and 180)")
angle_8 = raw_input()
angle_8 = int(angle_8)
print("Auto size y or n?")
auto_size = raw_input().lower()
auto_size = str(auto_size)
if auto_size == 'y' :
auto_size = str('auto')
turtle.resizemode(auto_size)
elif auto_size == 'n' :
auto_size = str('noresize')
print("what size? Enter a number from 1 to 10.")
user_size = raw_input()
user_size = int(user_size)
turtle.pensize(user_size)
turtle.shape(shape)
turtle.speed(user_speed)
#turtle.resizemode(auto_size)
print("Calculating shape based on user input: ") + str(angle_1) + ", " + str(angle_2) + ", " + str(angle_3) + ", " + str(angle_4) + ", " + str(angle_5) + ", " + str(angle_6) + ", " + str(angle_7) + ", " + str(angle_8)
for _ in range(passes):
turtle.color('red')
turtle.left(angle_1)
for _ in range(4):
turtle.forward(100)
turtle.left(90)
turtle.color('blue')
turtle.left(angle_2)
for _ in range(4):
turtle.forward(100)
turtle.left(90)
turtle.color('green')
turtle.left(angle_3)
for _ in range(4):
turtle.forward(100)
turtle.left(90)
turtle.color('yellow')
turtle.left(angle_4)
for _ in range(4):
turtle.forward(100)
turtle.left(90)
turtle.left(angle_5)
for _ in range(4):
turtle.forward(100)
turtle.left(90)
turtle.color('green')
turtle.left(angle_6)
for _ in range(4):
turtle.forward(100)
turtle.left(90)
turtle.color('blue')
turtle.left(angle_7)
for _ in range(4):
turtle.forward(100)
turtle.left(90)
turtle.color('red')
turtle.left(angle_8)
for _ in range(4):
turtle.forward(100)
turtle.left(90)
turtle.exitonclick()
</code></pre>
| 0 | 2016-09-29T17:48:23Z | 39,777,581 | <p>Resize the window using the following code in Python.</p>
<p>By the way, I think your question is a duplicate:</p>
<p><a href="http://stackoverflow.com/questions/831894/python-imaging-resize-turtle-graphics-window">Python imaging, resize Turtle Graphics window</a></p>
<pre><code>turtle.setup(width=200, height=200, startx=None, starty=None)
</code></pre>
| 0 | 2016-09-29T18:33:04Z | [
"python"
]
|
Skew a diagonal gradient to be vertical | 39,776,890 | <p>I have a not-quite linear gradient at some angle to the horizontal as an image. Here's some toy data:</p>
<pre><code>g = np.ones((5,20))
for x in range(g.shape[0]):
for y in range(g.shape[1]):
g[x,y] += (x+y)*0.1+(y*0.01)
</code></pre>
<p><a href="http://i.stack.imgur.com/ytcp6.png" rel="nofollow"><img src="http://i.stack.imgur.com/ytcp6.png" alt="diagonal gradient"></a></p>
<p>I want to essentially correct the skew in the gradient so that it is horizontal, i.e. the gradient increases to the right and all vertical slices are constant.</p>
<p>This will of course produce a parallelogram with a larger x-axis than the input image. Returning a masked Numpy array would be ideal. Here's a (terrible) cartoon to quickly illustrate.</p>
<p><a href="http://i.stack.imgur.com/QVtG7.png" rel="nofollow"><img src="http://i.stack.imgur.com/QVtG7.png" alt="enter image description here"></a></p>
<p>Any idea how to achieve this? Thanks!</p>
| 3 | 2016-09-29T17:49:27Z | 39,806,897 | <p>You can interpolate to determine the skewness and interpolate again to correct it. </p>
<pre><code>import numpy as np
from scipy.ndimage.interpolation import map_coordinates
m, n = g.shape
j_shift = np.interp(g[:,0], g[0,:], np.arange(n))
pad = int(np.max(j_shift))
i, j = np.indices((m, n + pad))
z = map_coordinates(g, [i, j - j_shift[:,None]], cval=np.nan)
</code></pre>
<p>This works on the example image, but you have to do some additional checks to make it function on other gradients. It does not work on gradients that are nonlinear in the x-direction though. Demo:</p>
<p><a href="http://i.stack.imgur.com/hYyAb.png" rel="nofollow"><img src="http://i.stack.imgur.com/hYyAb.png" alt="demo"></a></p>
<p>Full script:</p>
<pre><code>import numpy as np
from scipy.ndimage.interpolation import map_coordinates
def fix(g):
x = 1 if g[0,0] < g[0,-1] else -1
y = 1 if g[0,0] < g[-1,0] else -1
g = g[::y,::x]
m, n = g.shape
j_shift = np.interp(g[:,0], g[0,:], np.arange(n))
pad = int(np.max(j_shift))
i, j = np.indices((m, n + pad))
z = map_coordinates(g, [i, j - j_shift[:,None]], cval=np.nan)
return z[::y,::x]
import matplotlib.pyplot as plt
i, j = np.indices((50,100))
g = 0.01*i**2 + j
plt.figure(figsize=(6,5))
plt.subplot(211)
plt.imshow(g[::-1], interpolation='none')
plt.title('original')
plt.subplot(212)
plt.imshow(fix(g[::-1]), interpolation='none')
plt.title('fixed')
plt.tight_layout()
</code></pre>
| 1 | 2016-10-01T12:59:39Z | [
"python",
"arrays",
"numpy",
"transform"
]
|
How to compile cython module for multiple platforms on windows? | 39,776,911 | <p>Is there a way to compile specified cython files for multiple platforms? If so, how do I do it?</p>
<p>I need to compile the module for both Linux and Windows with one tool (it would also be nice if it could be compiled for other popular platforms).</p>
<p>Is this a distutils setting?</p>
| 1 | 2016-09-29T17:51:04Z | 39,798,610 | <p>I'm not sure that it will completely answer your question, but if you want to distribute a python package (containing c/cython code) to various platforms, you can use some continuous integration services to build <code>wheels</code> on specific platforms and then distribute them.</p>
<p>For example, you can use <a href="https://travis-ci.org/" rel="nofollow">Travis CI</a> (for Linux and OS X) and <a href="https://www.appveyor.com/" rel="nofollow">Appveyor</a> (for Windows) to build your project (on a set of chosen python versions), then upload the created wheels alongside the code of your package to PyPI.</p>
<p>After that, a user doing <code>pip install your_package</code> will fetch the wheel of your project and thus avoid compilation.</p>
<p><a href="https://packaging.python.org/distributing/#wheels" rel="nofollow">Python documentation about wheels</a><br>
<a href="https://packaging.python.org/appveyor/" rel="nofollow">Python documentation about supporting windows thanks to appveyor</a></p>
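<p>For completeness, the piece each CI job would actually build is the usual Cython <code>setup.py</code> — a minimal sketch, with package and file names as placeholders (not runnable without real <code>.pyx</code> sources):</p>

```python
from setuptools import setup
from Cython.Build import cythonize

setup(
    name="your_package",
    ext_modules=cythonize("your_package/*.pyx"),
)
```

<p>Each CI platform then runs something like <code>python setup.py bdist_wheel</code> with its own native compiler, which is what produces the per-platform wheels.</p>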
| 0 | 2016-09-30T19:15:58Z | [
"python",
"compilation",
"cross-platform",
"cross-compiling",
"cython"
]
|
How to get files from Remote windows server to local Windows machine directory? | 39,776,926 | <p>I am using a tool that creates files on remote Windows 2012 servers continuously. I need to get those files and place them in a local directory.</p>
<pre><code>import os
import time
def copy_logs():
os.system(".\pscp.exe -pw test123 C:/Users/Administrator/Desktop/tr* Administrator@1.1.1.1:/")
time.sleep(7200)
while True:
copy_logs()
</code></pre>
<p>I have used 'pscp' inside a python script to copy files, but I am unable to specify the destination directory to copy the files into.</p>
<p>Is there any way to achieve this with Python?</p>
| 0 | 2016-09-29T17:52:15Z | 39,777,004 | <p>The best solution I know, is to use <a href="http://www.fabfile.org/" rel="nofollow">Fabric</a>. See: <a href="http://stackoverflow.com/questions/5314711">How do I copy a directory to a remote machine using Fabric?</a>.</p>
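<p>If you'd rather keep the pscp approach from the question, the destination directory is simply the final argument; a hedged sketch with host, password, and paths as placeholders:</p>

```python
import subprocess  # used only if you uncomment the call below

# pscp syntax: pscp [options] source target -- the final argument is
# where the files end up. Host, password and paths are placeholders.
cmd = [
    "pscp.exe", "-pw", "test123",
    "Administrator@1.1.1.1:/logs/tr*",  # remote source (wildcards may need -unsafe)
    "C:/local_logs/",                   # local destination directory
]
print(" ".join(cmd))
# subprocess.call(cmd)  # run on a machine with pscp.exe on the PATH
```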
| 0 | 2016-09-29T17:56:57Z | [
"python",
"python-2.7"
]
|
Create or modify DataFrame using another DataFrame | 39,777,018 | <p>I currently have a Pandas DataFrame looking like this:</p>
<pre><code> DATESTAMP price name pct_chg
0 2006-01-02 62.987301 a 0.000000
1 2006-01-03 61.990700 a -0.015822
2 2006-01-04 62.987301 a 0.016077
3 2006-01-05 62.987301 a 0.000000
4 2006-01-06 61.990700 a -0.015822
6 2006-01-04 100.1 b 0.000000
7 2006-01-05 100.5 b -0.015822
8 2006-01-06 100.7 b 0.016077
9 2006-01-07 100.8 b 0.016090
</code></pre>
<p>The problem is that different items (specified with the unique column <code>name</code>) have different times of origination as well as being alive for different amounts of time:</p>
<ul>
<li>Above item <code>a</code> starts at <code>2006-01-02</code> and ends at <code>2006-01-06</code>.</li>
<li>Above item <code>b</code> starts at <code>2006-01-04</code> and ends at <code>2006-01-07</code>.</li>
</ul>
<p>I would like to summarize the column <code>pct_chg</code> in a new DataFrame, having <code>DATESTAMP</code> as index and columns taken from <code>name</code>. I would also like the new DataFrame to have the index in such a manner that it starts with the "oldest" existing date-record (in this case <code>2006-01-02</code>) and ends at the "newest" (in this case <code>2006-01-07</code>).</p>
<p>The result would look like</p>
<pre><code> a b
2006-01-02 0.000000 NaN
2006-01-03 -0.015822 NaN
2006-01-04 0.016077 0.000000
2006-01-05 0.000000 -0.015822
2006-01-06 -0.015822 0.016077
2006-01-07 NaN 0.016090
</code></pre>
| 2 | 2016-09-29T17:57:49Z | 39,777,083 | <p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow"><code>set_index</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a>:</p>
<pre><code>print (df.set_index(['DATESTAMP','name'])['pct_chg'].unstack())
name a b
DATESTAMP
2006-01-02 0.000000 NaN
2006-01-03 -0.015822 NaN
2006-01-04 0.016077 0.000000
2006-01-05 0.000000 -0.015822
2006-01-06 -0.015822 0.016077
2006-01-07 NaN 0.016090
</code></pre>
<p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot.html" rel="nofollow"><code>pivot</code></a>:</p>
<pre><code>print (df.pivot(index='DATESTAMP', columns='name', values='pct_chg'))
name a b
DATESTAMP
2006-01-02 0.000000 NaN
2006-01-03 -0.015822 NaN
2006-01-04 0.016077 0.000000
2006-01-05 0.000000 -0.015822
2006-01-06 -0.015822 0.016077
2006-01-07 NaN 0.016090
</code></pre>
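<p>A self-contained cut-down version of the same reshape (only a few of the question's rows, to keep it short):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'DATESTAMP': ['2006-01-02', '2006-01-04', '2006-01-04', '2006-01-05'],
    'name':      ['a', 'a', 'b', 'b'],
    'pct_chg':   [0.000000, 0.016077, 0.000000, -0.015822],
})

wide = df.pivot(index='DATESTAMP', columns='name', values='pct_chg')
print(np.isnan(wide.loc['2006-01-02', 'b']))  # → True: 'b' not alive yet
print(wide.loc['2006-01-04', 'a'])            # → 0.016077
```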
| 2 | 2016-09-29T18:01:07Z | [
"python",
"pandas",
"dataframe",
"pivot",
"reshape"
]
|
subprocess.check_output(), zgrep, and match limit | 39,777,039 | <p>Context: I'm trying to find the github repository of a python package. To do that, I'm zgrep'ping the package archive for github urls. It works fine until I limit the output to 1 result:</p>
<pre><code># works, returns a lot of results
subprocess.check_output(["zgrep", "-oha", "github", 'Django-1.10.1.tgz']) # works, a lot of results
# add -m1 to limit output, returns status 2 (doesn't work)
subprocess.check_output(["zgrep", "-m1", "-oha", "github", 'Django-1.10.1.tgz']) # raises CalledProcessError
# same command, different file - works
subprocess.check_output(["zgrep", "-m1", "-oha", "github", 'grabber.py'])
</code></pre>
<p>From the command line, all three commands work fine. Any ideas?</p>
<p>Traceback:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/subprocess.py", line 574, in check_output
raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['zgrep', '-m1', '-oha', 'github', 'pkgs/Django-1.10.1.tar.gz']' returned non-zero exit status 2
</code></pre>
<p>Command line:</p>
<pre><code>$ zgrep -m1 -oha "github.com/[^/]\+/django" pkgs/Django-1.10.1.tar.gz
github.com/django/django
</code></pre>
| 0 | 2016-09-29T17:58:51Z | 39,778,977 | <p>So, the reason is: zgrep is a shell script, which simply pipes the archive through gzip and egrep. If we limit the number of results, egrep terminates the pipe, so gzip exits and complains. In a console we never see it, but subprocess sees the resulting non-zero exit status and raises an exception.</p>
<p>Solution: write a mini-version of zgrep that doesn't complain:</p>
<pre><code>gunzip < $FILE 2> /dev/null | egrep -m1 -ohia $PATTERN
</code></pre>
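<p>Wrapped back into Python, the tolerant pipeline has to go through a shell; a sketch using the question's file and pattern (the actual call is left commented out since it needs the archive present):</p>

```python
import subprocess

fname = "pkgs/Django-1.10.1.tar.gz"
pattern = r"github.com/[^/]\+/django"
# In a shell pipeline only the last command's exit status is reported,
# so gzip's SIGPIPE death no longer trips check_output.
cmd = "gunzip < {} 2> /dev/null | egrep -m1 -ohia '{}'".format(fname, pattern)
print(cmd)
# out = subprocess.check_output(cmd, shell=True)  # only egrep's status is seen
```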
| 0 | 2016-09-29T19:59:21Z | [
"python",
"bash",
"subprocess",
"zgrep"
]
|
Writing to a mac csv not working using python file writer | 39,777,207 | <p>I have python code that compares two files, returns the common lines, and writes them to a result file. I am using a Mac machine.</p>
<p>script.py</p>
<pre><code>with open('temp1.csv', 'r') as file1:
with open('serialnumbers.txt', 'r') as file2:
same = set(file1).intersection(file2)
print same
with open('results.csv', 'w') as file_out:
for line in same:
file_out.write(line)
print line
</code></pre>
<p>temp1.csv</p>
<pre><code>M11435TDS144
M11543TH4292
SN005
M11509TD9937
M11543TH4258
SN005
SN006
SN007
</code></pre>
<p>serialnumbers.txt</p>
<pre><code>G1A114042400571
M11251TH1230
M11543TH4258
M11435TDS144
M11543TH4292
M11509TD9937
</code></pre>
<p>The output of the above script on mac is </p>
<blockquote>
<p>set([])</p>
</blockquote>
<p>If I run the same script on Windows it works fine. I found out that this is a csv problem on the Mac. How can I resolve this issue?</p>
 | 0 | 2016-09-29T18:09:53Z | 39,777,453 | <p>The line endings of the two files are different.</p>
<ul>
<li>The .csv file probably has Windows end of line: "\r\n",</li>
<li>The .txt file probably has Posix end of line: "\n".</li>
</ul>
<p>So, in binary mode, lines always differ.</p>
<p>You ought to read the two files in text mode, like this:</p>
<pre><code>import io
with io.open('temp1.csv', 'r') as file1:
with io.open('serialnumbers.txt', 'r') as file2:
same = set(file1).intersection(file2)
print(same)
</code></pre>
<p>You'll get:</p>
<pre><code>set([u'M11543TH4258\n', u'M11509TD9937\n', u'M11543TH4292\n', u'M11435TDS144\n'])
</code></pre>
<p>Also notice that CSV files are usually encoded using ISO-8859-1 or cp1252 encoding (legacy encoding from Windows).</p>
<h3>To drop the newlines</h3>
<pre><code>with io.open('temp1.csv', 'r') as file1:
with io.open('serialnumbers.txt', 'r') as file2:
same = set(line.strip() for line in file1).intersection(line.strip() for line in file2)
print(same)
</code></pre>
| 2 | 2016-09-29T18:25:02Z | [
"python",
"csv",
"file-writing",
"macbook"
]
|
Python: Break down large file, filter based on criteria, and put all data into new csv file | 39,777,240 | <p>I have a super large csv.gzip file that has 59 mill rows. I want to filter that file for certain rows based on certain criteria and put all those rows in a new master csv file. As of now, I broke the gzip file into 118 smaller csv files and saved them on my computer. I did that with the following code:</p>
<pre><code>import pandas as pd
num = 0
df = pd.read_csv('google-us-data.csv.gz', header = None,
compression = 'gzip', chunksize = 500000,
names = ['a','b','c','d','e','f','g','h','i','j','k','l','m'],
error_bad_lines = False, warn_bad_lines = False)
for chunk in df:
num = num + 1
    chunk.to_csv('%ggoogle us' % num, sep='\t', encoding='utf-8')
</code></pre>
<p>The code above worked perfectly and I now have a folder with my 118 small files. I then wrote code to go through the 118 files one by one, extract rows that matched certain conditions, and append them all to a new csv file that I've created and named 'google final us'. Here is the code: </p>
<pre><code>import pandas as pd
import numpy
for i in range(1, 118):
file = open('google final us.csv','a')
df = pd.read_csv('%ggoogle us'%i, error_bad_lines = False,
warn_bad_lines = False)
    df_f = df.loc[(df['a']==7) & (df['b'] == 2016) & (df['c'] =='D') &
                  (df['d'] =='US')]
file.write(df_f)
</code></pre>
<p>Unfortunately, the code above is giving me the below error: </p>
<pre><code>KeyError Traceback (most recent call last)
C:\Users\...\Anaconda3\lib\site-packages\pandas\indexes\base.py in
get_loc(self, key, method, tolerance)
1875 try:
-> 1876 return self._engine.get_loc(key)
1877 except KeyError:
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:4027)()
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:3891)()
pandas\hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item
(pandas\hashtable.c:12408)()
pandas\hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item
(pandas\hashtable.c:12359)()
KeyError: 'a'
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
<ipython-input-9-0ace0da2fbc7> in <module>()
3 file = open('google final us.csv','a')
4 df = pd.read_csv('1google us')
----> 5 df_f = df.loc[(df['a']==7) & (df['b'] == 2016) &
(df['c'] =='D') & (df['d'] =='US')]
6 file.write(df_f)
C:\Users\...\Anaconda3\lib\site-packages\pandas\core\frame.py in
__getitem__(self, key)
1990 return self._getitem_multilevel(key)
1991 else:
-> 1992 return self._getitem_column(key)
1993
1994 def _getitem_column(self, key):
C:\Users\...\Anaconda3\lib\site-packages\pandas\core\frame.py in
_getitem_column(self, key)
1997 # get column
1998 if self.columns.is_unique:
-> 1999 return self._get_item_cache(key)
2000
2001 # duplicate columns & possible reduce dimensionality
C:\Users\...\Anaconda3\lib\site-packages\pandas\core\generic.py in
_get_item_cache(self, item)
1343 res = cache.get(item)
1344 if res is None:
-> 1345 values = self._data.get(item)
1346 res = self._box_item_values(item, values)
1347 cache[item] = res
C:\Users\...\Anaconda3\lib\site-packages\pandas\core\internals.py in
get(self, item, fastpath)
3223
3224 if not isnull(item):
-> 3225 loc = self.items.get_loc(item)
3226 else:
3227 indexer = np.arange(len(self.items))
[isnull(self.items)]
C:\Users\...\Anaconda3\lib\site-packages\pandas\indexes\base.py in
get_loc(self, key, method, tolerance)
1876 return self._engine.get_loc(key)
1877 except KeyError:
-> 1878 return
self._engine.get_loc(self._maybe_cast_indexer(key))
1879
1880 indexer = self.get_indexer([key], method=method,
tolerance=tolerance)
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:4027)()
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:3891)()
pandas\hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item
(pandas\hashtable.c:12408)()
pandas\hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item
(pandas\hashtable.c:12359)()
KeyError: 'a'
</code></pre>
<p>Any ideas what's going wrong? I've read numerous other stackoverflow posts (eg. <a href="http://stackoverflow.com/questions/38215009/create-dataframes-from-unique-value-pairs-by-filtering-across-multiple-columns">Create dataframes from unique value pairs by filtering across multiple columns</a> or <a href="http://stackoverflow.com/questions/33839540/how-can-i-break-down-a-large-csv-file-into-small-files-based-on-common-records-b">How can I break down a large csv file into small files based on common records by python</a>), but still not sure how to do this. Also, if you have a better way to extract data than this method - please let me know!</p>
| 1 | 2016-09-29T18:12:10Z | 39,778,035 | <p>When you use file.write(df_f) you are effectively saving a string representation of the DataFrame, which is meant for humans to look at. By default that representation will truncate rows and columns so that large frames can be displayed on the screen in a sensible manner. As a result column "a" may get chopped.</p>
<pre><code>with open('google final us.csv','a') as file:
for i in range(1, 118):
        header = (i == 1)
        ...
        df_f.to_csv(file, header=header)
</code></pre>
<p>I did not test the above snippet, but you should get an idea how to get going now.</p>
<p>There are other issues with this code, which you may want to correct:</p>
<ol>
<li><p>Open the file to write before the loop, close it after. Best to use context manager.</p></li>
<li><p>If the entire data fits in memory, why go through the trouble of splitting it into 118 files? Simply filter it and save the resulting DataFrame using the df.to_csv() method.</p></li>
<li>Instead of pandas consider using csv.DictReader and filter the lines on the fly.</li>
</ol>
<p>Lastly, if this is a one-time job, why even write code for something that you could accomplish with a grep command (on Unix-like systems)?</p>
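<p>Putting points 1 and 2 together, and noting that the chunks in the question were written with <code>sep='\t'</code>, so the same separator must be passed back to <code>read_csv</code> (otherwise the tab-separated header is read as one comma-column and <code>df['a']</code> raises exactly this <code>KeyError</code>), a fuller sketch might be:</p>

```python
import pandas as pd

def filter_chunks(chunk_paths, out_path):
    """Append the matching rows of every chunk file to one output CSV.

    Assumes the chunks were saved with sep='\t' (as in the question);
    the header row is written only for the first chunk.
    """
    with open(out_path, 'w') as out:
        for n, path in enumerate(chunk_paths):
            df = pd.read_csv(path, sep='\t', index_col=0)
            matches = df[(df['a'] == 7) & (df['b'] == 2016) &
                         (df['c'] == 'D') & (df['d'] == 'US')]
            matches.to_csv(out, header=(n == 0), index=False)
```

<p>Called as e.g. <code>filter_chunks(['%dgoogle us' % i for i in range(1, 119)], 'google final us.csv')</code>; note <code>range(1, 119)</code>, since the question's <code>range(1, 118)</code> skips the 118th file.</p>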
| 0 | 2016-09-29T18:58:44Z | [
"python",
"csv",
"pandas"
]
|
Python: Break down large file, filter based on criteria, and put all data into new csv file | 39,777,240 | <p>I have a super large csv.gzip file that has 59 mill rows. I want to filter that file for certain rows based on certain criteria and put all those rows in a new master csv file. As of now, I broke the gzip file into 118 smaller csv files and saved them on my computer. I did that with the following code:</p>
<pre><code>import pandas as pd
num = 0
df = pd.read_csv('google-us-data.csv.gz', header = None,
compression = 'gzip', chunksize = 500000,
names = ['a','b','c','d','e','f','g','h','i','j','k','l','m'],
error_bad_lines = False, warn_bad_lines = False)
for chunk in df:
num = num + 1
    chunk.to_csv('%ggoogle us' % num, sep='\t', encoding='utf-8')
</code></pre>
<p>The code above worked perfectly and I now have a folder with my 118 small files. I then wrote code to go through the 118 files one by one, extract rows that matched certain conditions, and append them all to a new csv file that I've created and named 'google final us'. Here is the code: </p>
<pre><code>import pandas as pd
import numpy
for i in range(1, 118):
file = open('google final us.csv','a')
df = pd.read_csv('%ggoogle us'%i, error_bad_lines = False,
warn_bad_lines = False)
    df_f = df.loc[(df['a']==7) & (df['b'] == 2016) & (df['c'] =='D') &
                  (df['d'] =='US')]
file.write(df_f)
</code></pre>
<p>Unfortunately, the code above is giving me the below error: </p>
<pre><code>KeyError Traceback (most recent call last)
C:\Users\...\Anaconda3\lib\site-packages\pandas\indexes\base.py in
get_loc(self, key, method, tolerance)
1875 try:
-> 1876 return self._engine.get_loc(key)
1877 except KeyError:
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:4027)()
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:3891)()
pandas\hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item
(pandas\hashtable.c:12408)()
pandas\hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item
(pandas\hashtable.c:12359)()
KeyError: 'a'
During handling of the above exception, another exception occurred:
KeyError Traceback (most recent call last)
<ipython-input-9-0ace0da2fbc7> in <module>()
3 file = open('google final us.csv','a')
4 df = pd.read_csv('1google us')
----> 5 df_f = df.loc[(df['a']==7) & (df['b'] == 2016) &
(df['c'] =='D') & (df['d'] =='US')]
6 file.write(df_f)
C:\Users\...\Anaconda3\lib\site-packages\pandas\core\frame.py in
__getitem__(self, key)
1990 return self._getitem_multilevel(key)
1991 else:
-> 1992 return self._getitem_column(key)
1993
1994 def _getitem_column(self, key):
C:\Users\...\Anaconda3\lib\site-packages\pandas\core\frame.py in
_getitem_column(self, key)
1997 # get column
1998 if self.columns.is_unique:
-> 1999 return self._get_item_cache(key)
2000
2001 # duplicate columns & possible reduce dimensionality
C:\Users\...\Anaconda3\lib\site-packages\pandas\core\generic.py in
_get_item_cache(self, item)
1343 res = cache.get(item)
1344 if res is None:
-> 1345 values = self._data.get(item)
1346 res = self._box_item_values(item, values)
1347 cache[item] = res
C:\Users\...\Anaconda3\lib\site-packages\pandas\core\internals.py in
get(self, item, fastpath)
3223
3224 if not isnull(item):
-> 3225 loc = self.items.get_loc(item)
3226 else:
3227 indexer = np.arange(len(self.items))
[isnull(self.items)]
C:\Users\...\Anaconda3\lib\site-packages\pandas\indexes\base.py in
get_loc(self, key, method, tolerance)
1876 return self._engine.get_loc(key)
1877 except KeyError:
-> 1878 return
self._engine.get_loc(self._maybe_cast_indexer(key))
1879
1880 indexer = self.get_indexer([key], method=method,
tolerance=tolerance)
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:4027)()
pandas\index.pyx in pandas.index.IndexEngine.get_loc (pandas\index.c:3891)()
pandas\hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item
(pandas\hashtable.c:12408)()
pandas\hashtable.pyx in pandas.hashtable.PyObjectHashTable.get_item
(pandas\hashtable.c:12359)()
KeyError: 'a'
</code></pre>
<p>Any ideas what's going wrong? I've read numerous other stackoverflow posts (eg. <a href="http://stackoverflow.com/questions/38215009/create-dataframes-from-unique-value-pairs-by-filtering-across-multiple-columns">Create dataframes from unique value pairs by filtering across multiple columns</a> or <a href="http://stackoverflow.com/questions/33839540/how-can-i-break-down-a-large-csv-file-into-small-files-based-on-common-records-b">How can I break down a large csv file into small files based on common records by python</a>), but still not sure how to do this. Also, if you have a better way to extract data than this method - please let me know!</p>
| 1 | 2016-09-29T18:12:10Z | 39,807,939 | <pre><code>import pandas
import glob
path = "."  # directory that contains the "split files" folder
csvFiles = glob.glob(path + "/split files/*.csv")
list_ = []
for files in csvFiles:
df = pandas.read_csv(files, index_col=None)
    df_f = df[(df['a']==7) & (df['b'] == 2016) & (df['c'] =='D') & (df['d']=='US')]
list_.append(df_f)
frame = pandas.concat(list_, ignore_index=True)
frame.to_csv("Filtered Appended File")
</code></pre>
<p>Keep all the files in the "split files" folder inside your working directory.</p>
<p>This should work, since it reads all the required files from that directory.</p>
<p>Reading a whole CSV takes a lot of memory, so breaking the file up and working on the pieces is a reasonable approach; you are on the right track with that.</p>
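<p>The 118 intermediate files can also be skipped altogether by filtering each chunk as it streams out of the gzip file. A sketch reusing the question's chunk settings (the <code>error_bad_lines</code>/<code>warn_bad_lines</code> flags are left out, as they are deprecated in recent pandas):</p>

```python
import pandas as pd

def filter_gzip(in_path, out_path, chunksize=500000):
    """Stream the gzip CSV once, appending matching rows to out_path."""
    cols = list('abcdefghijklm')
    reader = pd.read_csv(in_path, header=None, compression='gzip',
                         names=cols, chunksize=chunksize)
    first = True
    for chunk in reader:
        m = chunk[(chunk['a'] == 7) & (chunk['b'] == 2016) &
                  (chunk['c'] == 'D') & (chunk['d'] == 'US')]
        # write the header only once, then append
        m.to_csv(out_path, mode='w' if first else 'a',
                 header=first, index=False)
        first = False
```

<p>This keeps memory bounded by <code>chunksize</code> and never touches the disk twice.</p>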
| 0 | 2016-10-01T14:46:24Z | [
"python",
"csv",
"pandas"
]
|
weird behavior of multiprocessing scipy optimization inside of a function | 39,777,299 | <p>Here is a simple piece of code that runs fine. Even though the function <code>minimize</code> wraps <code>scipy.optimize.minimize</code>, it does not complain about pickling:</p>
<pre><code>import numpy as np
from scipy import optimize
from multiprocessing import Pool
def square(x):
return np.sum(x**2+ 2*x)
def minimize(args):
f,x = args
res = optimize.minimize(f, x, method = 'L-BFGS-B')
return res.x
x = np.random.rand(8,10)
args = [(square,x[i]) for i in range(8)]
p = Pool(8)
p.map(minimize,args)
</code></pre>
<p>However, if I try the following, it fails with the pickling error</p>
<p><code>PicklingError: Can't pickle <type 'function'>: attribute lookup __builtin__.function failed</code></p>
<pre class="lang-py prettyprint-override"><code>def run():
def square(x):
return np.sum(x**2+ 2*x)
def minimize(args):
f,x = args
res = optimize.minimize(f, x, method = 'L-BFGS-B')
return res.x
x = np.random.rand(8,10)
args = [(square,x[i]) for i in range(8)]
p = Pool(8)
p.map(minimize,args)
run()
</code></pre>
<p>I want to make a module that uses scipy minimize in parallel with many populations of initial guesses. However, as shown in the example, when I make it a module, it fails.</p>
| 1 | 2016-09-29T18:15:03Z | 39,780,889 | <p>The problem is that Python cannot pickle nested functions, and in your second example, you're trying to pass the nested <code>minimize</code> and <code>square</code> functions to your child process, which requires pickling. </p>
<p>If there's no reason that you must nest those two functions, moving them to the top-level of the module will fix the issue. You can also see <a href="http://stackoverflow.com/questions/12019961/python-pickling-nested-functions">this question</a> for techniques to pickle nested functions.</p>
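<p>A minimal rearrangement of the failing example, with the two worker functions hoisted to module level so they can be pickled (everything else unchanged):</p>

```python
import numpy as np
from scipy import optimize
from multiprocessing import Pool

def square(x):                 # top level: picklable
    return np.sum(x**2 + 2*x)

def minimize(args):            # top level: picklable
    f, x = args
    res = optimize.minimize(f, x, method='L-BFGS-B')
    return res.x

def run():
    x = np.random.rand(8, 10)
    args = [(square, x[i]) for i in range(8)]
    p = Pool(8)
    try:
        return p.map(minimize, args)
    finally:
        p.close()
        p.join()

if __name__ == '__main__':
    print(run())
```

<p>Only the definitions moved; <code>run()</code> itself can stay a function, because it is the <em>pickled arguments</em> (the worker functions), not the caller, that must live at module level.</p>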
| 0 | 2016-09-29T22:18:35Z | [
"python",
"optimization",
"parallel-processing",
"scipy",
"multiprocessing"
]
|
subprocess.Popen: 'OSError: [Errno 13] Permission denied' only on Linux | 39,777,345 | <blockquote>
<p>Code and logs have changed a lot (due to a major rewrite) since the question was asked.</p>
</blockquote>
<p>When my code (given below) executes on Windows (both my laptop and AppVeyor CI), it does what it's supposed to do. But on Linux (VM on TravisCI), it throws me a permission denied error.</p>
<hr>
<p><strong>Error:</strong></p>
<pre><code>$ sudo python3 test.py
Testing espeak4py
Testing wait4prev
Traceback (most recent call last):
File "test.py", line 10, in <module>
mySpeaker.say('Hello, World!')
File "/home/travis/build/sayak-brm/espeak4py/espeak4py/__init__.py", line 35, in say
self.prevproc = subprocess.Popen(cmd, executable=self.executable, cwd=os.path.dirname(os.path.abspath(__file__)))
File "/usr/lib/python3.2/subprocess.py", line 745, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.2/subprocess.py", line 1361, in _execute_child
raise child_exception_type(errno_num, err_msg)
OSError: [Errno 13] Permission denied
The command "sudo python3 test.py" exited with 1.
</code></pre>
<hr>
<p><strong>Code:</strong></p>
<p>espeak4py/<strong>init</strong>.py:</p>
<pre><code>#! python3
import subprocess
import os
import platform
class Speaker:
def __init__(self, voice="en", wpm=120, pitch=80):
self.prevproc = None
self.voice = voice
self.wpm = wpm
self.pitch = pitch
if platform.system() == 'Windows': self.executable = os.path.dirname(os.path.abspath(__file__)) + "/espeak.exe"
else: self.executable = os.path.dirname(os.path.abspath(__file__)) + "/espeak"
def generateCmd(self, phrase):
cmd = [
self.executable,
"--path=.",
"-v", self.voice,
"-p", self.pitch,
"-s", self.wpm,
phrase
]
cmd = [str(x) for x in cmd]
return cmd
def say(self, phrase, wait4prev=False):
cmd=self.generateCmd(phrase)
if wait4prev:
try: self.prevproc.wait()
except AttributeError: pass
else:
try: self.prevproc.terminate()
except AttributeError: pass
self.prevproc = subprocess.Popen(cmd, executable=self.executable, cwd=os.path.dirname(os.path.abspath(__file__)))
</code></pre>
<p>test.py:</p>
<pre><code>#! python3
import espeak4py
import time
print('Testing espeak4py\n')
print('Testing wait4prev')
mySpeaker = espeak4py.Speaker()
mySpeaker.say('Hello, World!')
time.sleep(1)
mySpeaker.say('Interrupted!')
time.sleep(3)
mySpeaker.say('Hello, World!')
time.sleep(1)
mySpeaker.say('Not Interrupted.', wait4prev=True)
time.sleep(5)
print('Testing pitch')
myHighPitchedSpeaker = espeak4py.Speaker(pitch=120)
myHighPitchedSpeaker.say('I am a demo of the say function')
time.sleep(5)
print('Testing wpm')
myFastSpeaker = espeak4py.Speaker(wpm=140)
myFastSpeaker.say('I am a demo of the say function')
time.sleep(5)
print('Testing voice')
mySpanishSpeaker = espeak4py.Speaker(voice='es')
mySpanishSpeaker.say('Hola. Como estas?')
print('Testing Completed.')
</code></pre>
<hr>
<p>I don't understand why it works only on one platform and not the other.</p>
<p>Travis CI Logs: <a href="https://travis-ci.org/sayak-brm/espeak4py" rel="nofollow">https://travis-ci.org/sayak-brm/espeak4py</a></p>
<p>AppVeyor Logs: <a href="https://ci.appveyor.com/project/sayak-brm/espeak4py" rel="nofollow">https://ci.appveyor.com/project/sayak-brm/espeak4py</a></p>
<p>GitHub: <a href="https://sayak-brm.github.io/espeak4py" rel="nofollow">https://sayak-brm.github.io/espeak4py</a></p>
<hr>
<p>I got the outputs of <code>ls -l</code> as @zvone recommended:</p>
<pre><code>$ ls -l
total 48
-rw-rw-r-- 1 travis travis 500 Sep 29 20:14 appveyor.yml
drwxrwxr-x 3 travis travis 4096 Sep 29 20:14 espeak4py
-rw-rw-r-- 1 travis travis 32400 Sep 29 20:14 LICENSE.md
-rw-rw-r-- 1 travis travis 2298 Sep 29 20:14 README.md
-rw-rw-r-- 1 travis travis 0 Sep 29 20:14 requirements.txt
-rw-rw-r-- 1 travis travis 759 Sep 29 20:14 test.py
$ ls -l espeak4py
total 592
-rw-rw-r-- 1 travis travis 276306 Sep 29 20:14 espeak
drwxrwxr-x 5 travis travis 4096 Sep 29 20:14 espeak-data
-rw-rw-r-- 1 travis travis 319488 Sep 29 20:14 espeak.exe
-rw-rw-r-- 1 travis travis 1125 Sep 29 20:14 __init__.py
</code></pre>
| 0 | 2016-09-29T18:18:09Z | 39,779,631 | <p>This is pretty much off topic on SO, but here's your problem...</p>
<p>This is the executable you are trying to run:</p>
<pre><code>-rw-rw-r-- 1 travis travis 276306 Sep 29 20:14 espeak
</code></pre>
<p>Its permissions are <code>rw-</code> read+write for owner (travis), <code>rw-</code> read+write for group (travis), and <code>r--</code> read for others. There is no permission to execute for anyone.</p>
<p>You have to give <code>x</code> (execute) permission to the user under which the script is running. Or give it to everyone:</p>
<pre><code>chmod 775 espeak
</code></pre>
<p>After that, <code>ls -l</code> should say:</p>
<pre><code>-rwxrwxr-x 1 travis travis 276306 Sep 29 20:14 espeak
</code></pre>
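<p>If the executable bit keeps getting lost, it can also be restored from Python at startup; a sketch (the helper name is made up, not part of the question's code):</p>

```python
import os
import stat

def ensure_executable(path):
    """Add execute permission for user, group and others if it is missing."""
    mode = os.stat(path).st_mode
    if not mode & stat.S_IXUSR:
        os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
```

<p>Calling e.g. <code>ensure_executable(self.executable)</code> at the end of <code>Speaker.__init__</code> on non-Windows platforms would make the package self-healing; committing the bit with <code>git update-index --chmod=+x espeak</code> fixes it at the source.</p>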
| 1 | 2016-09-29T20:42:10Z | [
"python",
"linux",
"python-3.x",
"cross-platform",
"travis-ci"
]
|
Possible to change directory and have change persist when script finishes? | 39,777,348 | <p>In trying to answer <a href="http://stackoverflow.com/q/39776616/3642398">a question for another user</a>, I came across something that piqued my curiosity:</p>
<pre><code>import os
os.chdir('..')
</code></pre>
<p>Will change the working directory as far as Python is concerned, so if I am in <code>/home/username/</code>, and I run <code>os.chdir('..')</code>, any subsequent code will work as though I am in <code>/home/</code>. For example, if I then do:</p>
<pre><code>import glob
files = glob.glob('*.py')
</code></pre>
<p><code>files</code> will be a list of <code>.py</code> files in <code>/home/</code> rather than in <code>/home/username/</code>. However, as soon as the script exits, I will be back in <code>/home/username/</code>, or whichever directory I ran the script from originally.</p>
<p>I have found the same thing happens with shell scripts. If I have the following script:</p>
<pre><code>#!/bin/bash
cd /tmp
touch foo.txt
</code></pre>
<p>Running the script from <code>/home/username/</code> will create a file <code>foo.txt</code> in <code>/tmp/</code>, but when the script finishes, I will still be in <code>/home/username/</code> not <code>/tmp/</code>. </p>
<p>I am curious if there is some fundamental reason why the working directory is not changed "permanently" in these cases, and if there <em>is</em> a way to change it permanently, e.g., to run a script with <code>~$ python myscript.py</code>, and have the terminal that script was run from end up in a different directory when the script finishes executing. </p>
| 4 | 2016-09-29T18:18:20Z | 39,777,583 | <p>There is no way to do that because calling Python or bash will run everything within their own context (that ends when the script ends).</p>
<p>You could achieve those results by using <code>source</code>, since that will actually execute your (shell) script in the current shell. That is, call your example script with <code>source foomaker.bash</code> instead of <code>bash foomaker.bash</code>.</p>
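<p>For a Python script the usual workaround is the same idea in two parts: the script prints the directory it wants to end up in, and a tiny shell wrapper performs the <code>cd</code>. A sketch (the script name is made up):</p>

```python
# pickdir.py -- print the directory the *calling shell* should cd into
import os

def target_dir():
    """Directory we want to end up in; here, the parent of the current one."""
    return os.path.abspath(os.path.join(os.getcwd(), os.pardir))

if __name__ == '__main__':
    print(target_dir())
```

<p>Invoked from the shell as <code>cd "$(python pickdir.py)"</code>, or wrapped in a shell function or alias so it feels like a single command; the script only computes the destination, while the shell itself changes directory, so the change persists.</p>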
| 2 | 2016-09-29T18:33:24Z | [
"python"
]
|
convert code from boto to boto3 | 39,777,353 | <p>I have code which uploads a base64 image to Amazon S3 using boto version 2.38.0:</p>
<pre><code>conn = boto.connect_s3(AWS_ACCESS_KEYXXX, AWS_SECRET_KEYXXX)
bucket = conn.get_bucket(AWS_BUCKET_NAMEXXX)
k = Key(bucket)
k.key = s3_file_name
k.set_metadata('Content-Type', 'image/jpeg')
k.set_contents_from_file(<binary data of image base 64>)
</code></pre>
<p>I want to convert this code so that it works for boto3 version as well.</p>
<pre><code>s3 = boto3.resource('s3')
res = s3.Object('<bucket name>', key_name).put(Body=binary_data, Metadata={'Content-Type': 'image/png'})
print "res : ", res
</code></pre>
<p>How can I generate image public url from this ? </p>
<p>Can any one please help me out with this ?</p>
 | 0 | 2016-09-29T18:18:44Z | 39,797,492 | <p>In order to generate a public URL you can use the following code (this is boto 2 syntax, matching the question's first snippet):</p>
<pre><code>k.set_acl('public-read') # make file public
url = k.generate_url(expires_in=0, query_auth=False)
</code></pre>
<p>If you want to generate a temporary link instead, change the <code>expires_in</code> parameter.</p>
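<p>For the boto3 half of the question, the equivalent steps would look like the sketch below (untested against a live bucket; bucket and key names are placeholders). Note that boto3's <code>put_object</code> takes <code>ContentType</code> as a first-class argument; passing it inside <code>Metadata</code> as in the question stores it as custom <code>x-amz-meta-*</code> metadata instead:</p>

```python
def upload_public_png(bucket_name, key_name, binary_data):
    """Upload image bytes with a public-read ACL; returns the public URL."""
    import boto3  # imported here so the URL helper below has no dependency
    s3 = boto3.client('s3')
    s3.put_object(Bucket=bucket_name, Key=key_name, Body=binary_data,
                  ContentType='image/png', ACL='public-read')
    return public_url(bucket_name, key_name)

def temporary_url(bucket_name, key_name, seconds=3600):
    """Signed, expiring link: the boto3 analogue of generate_url."""
    import boto3
    s3 = boto3.client('s3')
    return s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': bucket_name, 'Key': key_name},
        ExpiresIn=seconds)

def public_url(bucket_name, key_name):
    # Virtual-hosted-style URL; valid once the object is public-read
    return "https://{0}.s3.amazonaws.com/{1}".format(bucket_name, key_name)
```
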
<p><strong>Hope it helps !!</strong></p>
| 1 | 2016-09-30T18:04:05Z | [
"python",
"boto",
"boto3"
]
|
Selecting checkboxes based on the parameters | 39,777,403 | <p>Let's say I have a bunch of check boxes listed as below:</p>
<ol>
<li>All</li>
<li>Test1</li>
<li>Test3</li>
<li>Test 4</li>
</ol>
<p>When the user goes to the application sometimes all are selected and sometimes only b and c are selected. I want to write a script which can do two things:</p>
<ol>
<li>Select All fields(this has be to default)</li>
</ol>
<p>If the All field is not selected, it will select it.</p>
<ol start="2">
<li>Select only one field</li>
</ol>
<p>First it checks whether this overwrites the default, which is the All checkbox field. If every field is unchecked, it selects the specific field.</p>
<p>This is what I tried, please bear with me on this one:</p>
<pre><code>def checkbox(checkbox):
# checkbox variable could be a single item or multiple items user wants to check
    if self.driver.find_element_by_id('type').is_selected() == False and checkbox == False:
self.execute_script_click("id", "type")
    elif self.driver.find_element_by_id('type').is_selected() == False and checkbox == True:
# click twice to make sure nothing is selected
self.execute_script_click("id", "type")
self.execute_script_click("id", "type")
for i in checkbox:
# click is a function that finds an element and clicks the button
self.click(10, "id", i)
</code></pre>
| -2 | 2016-09-29T18:21:51Z | 39,790,971 | <p>This is what I understand from your problem description:</p>
<p>If <code>All</code> is selected do not do anything.
If <code>All</code> is not selected, there is a flag which determines what operation to do. (In this case I will use an "is the list of elements empty?" flag.)</p>
<ol>
<li>If false --> select <code>All</code> checkbox</li>
<li><p>If true --> select the field provided by the variable <code>checkbox</code> and uncheck <code>All</code></p>
<pre><code>public void checkboxSelect(List<WebElement> checkboxList):
// If All is not selected by default
if (driver.findElement(By.id('type')).isSelected() == false) {
// if check box list is empty, Select All checkbox
if (checkboxList.isEmpty()) {
driver.findElement(By.id('type')).click();
// else (check box list is not empty)
} else {
// Check all the elements in the check box list
for (WebElement checkbox : checkboxList) {
if (!checkbox.isSelected()) {
checkbox.click();
}
}
}
}
</code></pre></li>
</ol>
<p>The snippet above is written in Java. It wouldn't be difficult to convert to Python.</p>
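<p>A direct Python translation of that snippet might look like this (a sketch; it assumes the same <code>driver</code> object and the question's <code>'type'</code> id for the <em>All</em> box):</p>

```python
def checkbox_select(driver, checkbox_list):
    """If 'All' is unselected: select it when the list is empty,
    otherwise tick every box in the list instead."""
    all_box = driver.find_element_by_id('type')
    if not all_box.is_selected():
        if not checkbox_list:
            all_box.click()
        else:
            for checkbox in checkbox_list:
                if not checkbox.is_selected():
                    checkbox.click()
```
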
| 0 | 2016-09-30T11:56:23Z | [
"python",
"selenium"
]
|
In MongoDB, how do I find distinct values of a large, sharded collection? | 39,777,491 | <p>I have a large mongodb collection:</p>
<ul>
<li>with 3 shards, </li>
<li>Totalling 300M records (at least)</li>
<li>Shard key is (field1:1,field2:1)</li>
<li>There are other non-indexed fields. </li>
<li>Field1 is a ~200-characters string </li>
<li>Field2 is an int. </li>
<li>There are about 10M distinct values of Field1, with more added all the time.</li>
</ul>
<p><strong>PART 1: DISTINCT VALUES</strong></p>
<p>I need to find all the distinct values of field1. </p>
<p>Calling db.myCollection.distinct("field1") fails because there are more than 16MB of data in the result set.</p>
<p>Since the shard key is an index, this should be a covered query.</p>
<p><strong>PART 2: RETURN SORTED RESULTS</strong></p>
<p>Presuming there's an answer for the above, I'd like to make this recover from failures, that is, return results in sorted order. It's not vital.</p>
<p>The goal of sorted output: if the query fails halfway thru, I can resume from where I left off by adding the query specifier of field1:{$gt:lastGoodValue}. </p>
<p>So: Is this possible? Easy? Do I have to aggregate or map-reduce? Currently, I'm iterating over all 300M records and it shoves a lot of data around unnecessarily.</p>
 | 0 | 2016-09-29T18:27:14Z | 39,784,773 | <p>A collection distinct command (<a href="https://docs.mongodb.com/manual/reference/command/distinct/" rel="nofollow">doc link</a>) returns a single variable, an array. This variable is sent as a BSON document, which has a 16MB max size limit in MongoDB.</p>
<p>Having the result set in an array is convenient for some cases, but if it isn't guaranteed to fit in 16MB then you won't be able to make use of it, as you found.</p>
<h2>Part 1 answer</h2>
<p>Instead you can <a href="https://docs.mongodb.com/manual/reference/operator/aggregation/group/#retrieve-distinct-values" rel="nofollow" title="retrieve distinct values">retrieve distinct values</a> using a $group stage in an aggregation command. You can also use MapReduce, but aggregation has better performance so I'll focus on that.</p>
<pre><code>db.myCollection.aggregate( [ { $group : { _id : "$myField" } } ] )
</code></pre>
<p>This will change the result from being a single array variable to being a cursor, the same as normal query / find command. So the way you iterate the distinct values client-side will be different, but you can keep on fetching more and more values until the cursor is finished.</p>
<p>Use the same command whether you have a cluster, replica set, or a standalone mongod. An important performance consideration is whether or not the field(s) being distinctly grouped are indexed, but as you mention this field is the leading field in a shard key, we know that it is.</p>
<h2>Part 2 answer</h2>
<p>Yes, you can sort it. Add a $sort stage after the $group.</p>
<pre><code>db.myCollection.aggregate( [
{ $group : { _id : "$myField" } },
{ $sort: { "_id": 1 } }
] )
</code></pre>
<p>If you have to restart the query again from a certain point you would add a <a href="https://docs.mongodb.com/manual/reference/operator/aggregation/match/" rel="nofollow">$match stage</a> as the first operation in the aggregation pipeline. E.g. <code>{ $match: {"myField": { "$gt": "AbCdEf...."} } },</code></p>
<p><em><strong>Note for new users of aggregation</strong></em>: the second "_id" in the $sort stage above is the "_id" field output by the $group stage, i.e. the distinct "myField" values. It is not a sort by the "_id" values in underlying collection.</p>
<p>Using a $project stage can rename that middle-stage "_id" key name if you'd prefer to.</p>
<pre><code>db.myCollection.aggregate( [
{ $group : { _id : "$myField" } },
{ $project : {
"_id" : false, /*stop it appearing as "_id" */
"myField" : "$_id" /*put original field name "myField" back on*/
} },
{ $sort: { "myField": 1 } }
] )
</code></pre>
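<p>Since the question is tagged python, the same pipeline can be expressed as PyMongo-style dicts; the optional <code>$match</code> makes it resumable after a failure, and with a real collection you would typically run <code>collection.aggregate(pipeline, allowDiskUse=True)</code>:</p>

```python
def distinct_pipeline(field="field1", resume_after=None):
    """Aggregation pipeline returning sorted distinct values of `field`."""
    pipeline = []
    if resume_after is not None:
        # Resume past the last distinct value we already processed
        pipeline.append({"$match": {field: {"$gt": resume_after}}})
    pipeline.append({"$group": {"_id": "$" + field}})
    pipeline.append({"$sort": {"_id": 1}})
    return pipeline
```
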
| 1 | 2016-09-30T06:08:50Z | [
"python",
"mongodb-query"
]
|
How to speed up three dimensional sum for Madelung Constant? | 39,777,534 | <p>For my Computational Physics class we have to compute the <a href="https://en.wikipedia.org/wiki/Madelung_constant" rel="nofollow">Madelung Constant</a> for NaCl. My code to do this uses three nested for loops and therefore runs very slowly. I was wondering if there was a way to use arrays or some other method to increase the speed of computation. Thanks </p>
<pre><code>from math import sqrt
L = int(input("The lattice size is :"))
M = 0.0
for i in range(-L,L+1):
for j in range(-L,L+1):
for k in range(-L,L+1):
if not (i==j==k==0):
M += ((-1)**(i+j+k+1))/sqrt(i*i +j*j +k*k)
print("The Madelung constant for NaCl with lattice size",L,"is",M)
</code></pre>
| 2 | 2016-09-29T18:30:02Z | 39,779,327 | <p>Since you've noted in a comment that you may use <code>numpy</code>, I suggest doing that. You can construct a 3d grid for your integers, and compute each term simultaneously, thereby vectorizing your calculation. You only need to watch out for the singular case where every integer is 0, for instance using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow"><code>numpy.where</code></a>:</p>
<pre><code>import numpy as np
ran = np.arange(-L,L+1)
i,j,k = np.meshgrid(ran,ran,ran)
M = np.where((i!=0) | (j!=0) | (k!=0),
(-1)**(i+j+k+1)/np.sqrt(i**2+j**2+k**2),
0).sum()
</code></pre>
<p><code>ran</code> is a numpy array with the same elements as in <code>range()</code> (if cast to a list in python 3). <code>meshgrid</code> then constructs three 3d arrays, which together span the 3d space where you need to perform your sum.</p>
<p>Note that for large domains this approach has a much higher memory need. This is usual with vectorization: you can spare CPU time at the cost of increased memory need.</p>
| 1 | 2016-09-29T20:22:27Z | [
"python",
"performance",
"numpy",
"vectorization"
]
|
How to speed up three dimensional sum for Madelung Constant? | 39,777,534 | <p>For my Computational Physics class we have to compute the <a href="https://en.wikipedia.org/wiki/Madelung_constant" rel="nofollow">Madelung Constant</a> for NaCl. My code to do this uses three nested for loops and therefore runs very slowly. I was wondering if there was a way to use arrays or some other method to increase the speed of computation. Thanks </p>
<pre><code>from math import sqrt
L = int(input("The lattice size is :"))
M = 0.0
for i in range(-L,L+1):
for j in range(-L,L+1):
for k in range(-L,L+1):
if not (i==j==k==0):
M += ((-1)**(i+j+k+1))/sqrt(i*i +j*j +k*k)
print("The Madelung constant for NaCl with lattice size",L,"is",M)
</code></pre>
| 2 | 2016-09-29T18:30:02Z | 39,786,927 | <p>Here's an approach using open meshes with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.ogrid.html" rel="nofollow"><code>np.ogrid</code></a> instead of creating actual meshes, which based on <a href="http://stackoverflow.com/a/39667342/3293881"><code>this other post</code></a> must be pretty efficient -</p>
<pre><code># Create open meshes for efficiency purposes
I,J,K = np.ogrid[-L:L+1,-L:L+1,-L:L+1]
# Perform all computations using those open meshes in a vectorized manner
all_vals = ((-1)**(I+J+K+1))/np.sqrt(I**2 +J**2+ K**2)
# Corresponding to "not (i==j==k==0)", which would be satisfied by
# just one combination and that will occur at location (L,L,L),
# let's set it as zero as a means to sum later on without adding for that elem
all_vals[L,L,L] = 0
M_out = all_vals.sum()
</code></pre>
| 2 | 2016-09-30T08:25:56Z | [
"python",
"performance",
"numpy",
"vectorization"
]
|
Pressing Enter to skip print? | 39,777,553 | <p>I'm making a sort of text based game in python and I was wondering if there was a way to skip print's while playing by just pressing enter. </p>
<p>Let's say I have this code.</p>
<pre><code> print("BORING")
time.sleep(5)
print("ANNOYED")
time.sleep(5)
</code></pre>
<p>and I just want to skip the wait by pressing the Enter key.
The main reason I'm asking is because I use</p>
<pre><code>def print_slow(str):
for letter in str:
stdout.write(letter)
stdout.flush()
time.sleep(0.1)
</code></pre>
<p>a lot to have the prints type the text, but I want players to be able to skip to the next line of code by pressing Enter. Or a really cool feature if anyone can help me is when I press enter it speeds up the text. I know python isn't the best for creating games but I felt it would be a fun challenge to just attempt a text based game with normal functions.</p>
<p>Thanks!</p>
| 1 | 2016-09-29T18:31:00Z | 39,777,687 | <p>You'll need to handle this asynchronously, because you want to be handling two things at the same time. You want to both listen to some kind of input and print out something. </p>
<p>Create an asynchronous task -> let it print.</p>
<p>AND at the same time listen for input() (or another way of registering keypresses). </p>
<p>If input is invoked, cancel the asynchronous task and let the player continue in the game.</p>
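<p>One way to sketch this with only the standard library is a daemon thread that waits for Enter and sets a flag which the slow-printer checks. The names below (a rewritten print_slow, wait_for_enter, skip) are illustrative, not from the question:</p>

```python
import sys
import threading
import time

def print_slow(text, skip_event, delay=0.05):
    # Type text one character at a time; once skip_event is set
    # (e.g. by the Enter-listener thread), dump the rest at once.
    for i, ch in enumerate(text):
        if skip_event.is_set():
            sys.stdout.write(text[i:])
            break
        sys.stdout.write(ch)
        sys.stdout.flush()
        time.sleep(delay)
    sys.stdout.write("\n")

def wait_for_enter(skip_event):
    try:
        input()          # blocks until the player presses Enter
    except EOFError:     # stdin closed (e.g. piped input ran out)
        return
    skip_event.set()

skip = threading.Event()
# Daemon thread, so the program can still exit if Enter is never pressed.
threading.Thread(target=wait_for_enter, args=(skip,), daemon=True).start()
print_slow("BORING", skip, delay=0.02)
```

<p>For the "speed up instead of skip" variant, the same flag could switch delay to a smaller value instead of breaking out of the loop.</p>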
| 0 | 2016-09-29T18:39:01Z | [
"python"
]
|
Pressing Enter to skip print? | 39,777,553 | <p>I'm making a sort of text based game in python and I was wondering if there was a way to skip print's while playing by just pressing enter. </p>
<p>Let's say I have this code.</p>
<pre><code> print("BORING")
time.sleep(5)
print("ANNOYED")
time.sleep(5)
</code></pre>
<p>and I just want to skip the wait by pressing the Enter key.
The main reason I'm asking is because I use</p>
<pre><code>def print_slow(text):
    for letter in text:
stdout.write(letter)
stdout.flush()
time.sleep(0.1)
</code></pre>
<p>a lot to have the prints type the text, but I want players to be able to skip to the next line of code by pressing Enter. Or a really cool feature if anyone can help me is when I press enter it speeds up the text. I know python isn't the best for creating games but I felt it would be a fun challenge to just attempt a text based game with normal functions.</p>
<p>Thanks!</p>
| 1 | 2016-09-29T18:31:00Z | 39,778,557 | <p>For speeding up your text, you could map the text to a function that prints it all (or iterates through it and prints one letter/word at a time... depending on what it is you're trying to accomplish). This function could be called on keypress using the tkinter library:</p>
<pre><code>from tkinter import Tk, Button

def speed_up(event=None):
    print('Speeding up!')

main = Tk()
opt = Button(main, text='Press Enter', command=speed_up)
opt.pack()
# '<Return>' is the Enter key; '<Enter>' would fire when the mouse
# pointer enters the widget, which is not what we want here.
main.bind('<Return>', speed_up)
main.mainloop()
</code></pre>
<p>This could also be used to skip text.</p>
| 0 | 2016-09-29T19:31:51Z | [
"python"
]
|
Is there any way to load a template dynamically inside of a static navbar template based on the current url? | 39,777,664 | <p>Hi I'm new to django and I'm trying to make a site with it. My project structure is currently like so: </p>
<pre><code>project/
manage.py
project/
__init__.py
settings.py
urls.py
myapps/
__init__.py
main/
__init__,models,views,urls,etc...
static/
main/
templates/
main/
</code></pre>
<p>I'm using html5boilerplate with all the css, javascript and images are located in </p>
<pre><code>myapps/main/static/main/
</code></pre>
<p>And all of my templates reside in </p>
<pre><code>myapps/main/templates/main/
</code></pre>
<p>With unique names. When the user hits the main page they load up a slightly modified index.html (renamed to boilerplate.html) that came with html5bp:</p>
<pre><code><!-- boilerplate.html -->
{% load static %}
...
<body>
<!--[if lt IE 8]>
...
<![endif]-->
<!-- Add your site or application content here -->
{% include "main/index.html" %}
<script src="https://code.jquery.com/jquery-1.12.0.min.js"></script>
<script src="{% static 'main/js/plugins.js' %}"></script>
<script src="{% static 'main/js/main.js' %}"></script>
<!-- Google Analytics: change UA-XXXXX-X to be your site's ID. -->
<script>
...
</script>
</body>
...
</code></pre>
<p>I'm using index.html as my static sidebar but I have it in a separate template in case I want to change it down the line and it separates the boring stuff from the fun stuff (who likes looking at bp code?). </p>
<pre><code><!-- index.html -->
<div id="sidenav" class="sidenav">
...
</div>
<div id="main">
{% include /*SOMETHING HERE*/ %}
</div>
</code></pre>
<p>Now whenever a user is redirected to a new page I just want to resolve the destination url and change the include tag to include a completely different template. So I figured I'd have to pass the requested url to a view function and return a template from that, but I am unsure how to do this or if it would work. Sorry if this is a repeat question from somewhere, I did look and couldn't find anything relevant.</p>
| 0 | 2016-09-29T18:37:14Z | 39,777,911 | <p>You're thinking about this the wrong way round: you should use template inheritance. Your view should render the specific template for that URL, but each template should in turn extend index.html which provides the sidebar and the rest of the structure.</p>
| 1 | 2016-09-29T18:50:53Z | [
"python",
"django",
"redirect",
"django-templates"
]
|
How to update nltk package so that it does not break email into 3 different tokens? | 39,777,806 | <p>When I type the following code:
<code>tokens = word_tokenize("a@b.com")</code></p>
<p>It gets broken into these 3 tokens: 'a' , '@' , 'b.com'</p>
<p>What I want to do, is to keep it as a single token 'a@b.com'.</p>
| 1 | 2016-09-29T18:44:33Z | 39,780,323 | <p><em>DISCLAIMER: There are a lot of email regexps out there. I am not trying to match all email formats in this question, just showing an example</em>.</p>
<p>A regex approach with <code>RegexpTokenizer</code> (<a href="http://stackoverflow.com/questions/39777806/how-to-update-nltk-package-so-that-it-does-not-break-email-into-3-different-toke#comment66852179_39777806">mentioned above</a> by <a href="http://stackoverflow.com/users/1698431/lenz">lenz</a>) can work:</p>
<pre><code>from nltk.tokenize.regexp import RegexpTokenizer
line="My email: a@bc.com is not accessible."
pattern = r'\S+@[^\s.]+\.[a-zA-Z]+|\w+|[^\w\s]'
tokeniser=RegexpTokenizer(pattern)
tokeniser.tokenize(line)
# => ['My', 'email', ':', 'a@bc.com', 'is', 'not', 'accessible', '.']
</code></pre>
<p>The regex matches:</p>
<ul>
<li><code>\S+@[^\s.]+\.[a-zA-Z]+</code> - text looking like email:
<ul>
<li><code>\S+</code> - 1 or more non-whitespace chars</li>
<li><code>@</code> - a <code>@</code> symbol</li>
<li><code>[^\s.]+</code> - 1 or more chars other than whitespaces and <code>.</code></li>
<li><code>\.</code> - a literal dot</li>
<li><code>[a-zA-Z]+</code> - 1 or more ASCII letters</li>
</ul></li>
<li><code>|</code> - or </li>
<li><code>\w+</code> - 1 or more word chars (letters, digits, or underscores)</li>
<li><code>|</code> - or</li>
<li><code>[^\w\s]</code> - a single (add <code>+</code> after it to match a sequence of 1 or more) occurrence of a char other than a word and whitespace char.</li>
</ul>
<p>See the <a href="https://regex101.com/r/bDfeYV/1" rel="nofollow">online regex demo</a>.</p>
| 1 | 2016-09-29T21:28:35Z | [
"python",
"regex",
"nlp",
"nltk"
]
|
django-auth-ldap not finding groups | 39,777,854 | <p>I am using a fork of django-auth-ldap (django-auth-ldap-ng==1.7.6) with pyldap==2.4.25.1 to connect to ldap. I am able to successfully log in but I am running into errors. I am getting a DN_SYNTAX error even though I am able to still log in with ldap credentials and map attributes. Here is the main error code:</p>
<pre><code>INFO "GET /login HTTP/1.1" 200 3141
DEBUG (0.000) SELECT "auth_user"."id", "auth_user"."password","auth_user"."last_login", "auth_user"."is_superuser", "auth_user"."username", "auth_user"."first_name", "auth_user"."last_name", "auth_user"."email", "auth_user"."is_staff", "auth_user"."is_active", "auth_user"."date_joined" FROM "auth_user" WHERE "auth_user"."username" LIKE 'zorpho' ESCAPE '\'; args=('zorpho',)
DEBUG Populating Django user zorpho
ERROR search_s('zorpho@MyCompany.local', 0, '(objectClass=*)') raised INVALID_DN_SYNTAX({'desc': 'Invalid DN syntax', 'info': "0000208F: NameErr: DSID-03100225, problem 2006 (BAD_NAME), data 8350, best match of:\n\t'zorpho@MyCompany.local'\n"},)
DEBUG search_s('zorpho@MyCompany.local', 0, '(objectClass=*)') returned 0 objects:
WARNING zorpho@MyCompany.local does not have a value for the attribute mail
ERROR search_s('zorpho@MyCompany.local', 0, '(objectClass=*)') raised INVALID_DN_SYNTAX({'desc': 'Invalid DN syntax', 'info': "0000208F: NameErr: DSID-03100225, problem 2006 (BAD_NAME), data 8350, best match of:\n\t'zorpho@MyCompany.local'\n"},)
DEBUG search_s('zorpho@MyCompany.local', 0, '(objectClass=*)') returned 0 objects:
WARNING zorpho@MyCompany.local does not have a value for the attribute sn
ERROR search_s('zorpho@MyCompany.local', 0, '(objectClass=*)') raised INVALID_DN_SYNTAX({'desc': 'Invalid DN syntax', 'info': "0000208F: NameErr: DSID-03100225, problem 2006 (BAD_NAME), data 8350, best match of:\n\t'zorpho@MyCompany.local'\n"},)
DEBUG search_s('zorpho@MyCompany.local', 0, '(objectClass=*)') returned 0 objects:
WARNING zorpho@MyCompany.local does not have a value for the attribute givenName
DEBUG zorpho@MyCompany.local is not a member of cn=administrators,ou=security groups,ou=mybusiness,dc=mycompany,dc=local
DEBUG search_s('OU=MyBusiness,DC=MyCompany,DC=local', 2, '(&(objectClass=groupOfNames)(member=zorpho@MyCompany.local))') returned 0 objects:
</code></pre>
<p>Here is my settings.py setup:</p>
<pre><code># Set up LDAP Authentication
import ldap
from django_auth_ldap.config import LDAPSearch, ActiveDirectoryGroupType, GroupOfNamesType
AUTHENTICATION_BACKENDS = (
'django_auth_ldap.backend.LDAPBackend',
'django.contrib.auth.backends.ModelBackend',
)
# Connect to the LDAP server
AUTH_LDAP_SERVER_URI = "ldap://IP Address"
AUTH_LDAP_BIND_AS_AUTHENTICATING_USER = True
AUTH_LDAP_BIND_DN = "Administrator"
AUTH_LDAP_BIND_PASSWORD = "Password"
# How do we find users?
AUTH_LDAP_USER_DN_TEMPLATE = "%(user)s@MyCompany.local"
AUTH_LDAP_USER_SEARCH = LDAPSearch("OU=MyBusiness,DC=MyCompany,DC=local",
ldap.SCOPE_SUBTREE, "(sAMAccountName=%(user)s)")
# new code 3/3
AUTH_LDAP_FIND_GROUP_PERMS = True
# User groups refresh after login/logout new code 3/3
AUTH_LDAP_ALWAYS_UPDATE_USER = True
# How do we find groups?
AUTH_LDAP_GROUP_SEARCH = LDAPSearch("OU=MyBusiness,DC=MyCompany,DC=local",
ldap.SCOPE_SUBTREE, "(objectClass=groupOfNames)")
AUTH_LDAP_GROUP_TYPE = GroupOfNamesType(name_attr="cn")
#AUTH_LDAP_GROUP_TYPE = ActiveDirectoryGroupType()
# Map attributes to the user
AUTH_LDAP_USER_ATTR_MAP = {
"first_name": "givenName",
"last_name": "sn",
"email": "mail",
}
AUTH_LDAP_USER_FLAGS_BY_GROUP = {
"is_staff": "CN=Administrators,OU=Security Groups,OU=MyBusiness,DC=MyCompany,DC=local",
}
# Use LDAP groups as Django groups
AUTH_LDAP_MIRROR_GROUPS = True
# Misc options
AUTH_LDAP_CONNECTION_OPTIONS = {
ldap.OPT_REFERRALS: 0
}
</code></pre>
<p>I have tried using the "ActiveDirectoryGroupType" but that didn't seem to make a difference. Any suggestions?</p>
| 0 | 2016-09-29T18:47:27Z | 39,861,232 | <p>I was able to solve this with the solution provided here:</p>
<p><a href="https://bitbucket.org/psagers/django-auth-ldap/issues/21/cant-bind-and-search-on-activedirectory" rel="nofollow">https://bitbucket.org/psagers/django-auth-ldap/issues/21/cant-bind-and-search-on-activedirectory</a></p>
<p>It is a workaround, an Active Directory-specific hack.</p>
| 0 | 2016-10-04T20:31:06Z | [
"python",
"django",
"ldap",
"django-auth-ldap"
]
|
How can I sort a nested list according to each element in nested lists | 39,777,961 | <p>I have a model Page which stores an optional parent page (instance of itself). In my views I made a function which will return a nested list of all pages and their parents.</p>
<p>For example, my page architecture is</p>
<pre><code>CCC
AAA
DDD
KKK
EEE
ZZZ
BBB
</code></pre>
<p>So DDD has the parent page AAA, which has it's own parent page CCC. The CCC is top page and has no parent page.</p>
<p>The function will first get a queryset of all instances of Pages and sort them alphabetically. Then it will proceed to recursively generate a "full parent architecture" list, where each element on that list is another list of all parent pages, including the page itself. From the example above if we take a slice of list for page DDD, it would return [CCC, AAA, DDD].</p>
<p>My function currently returns a list like this for the above stated example:</p>
<pre><code>[
[CCC, AAA],
[ZZZ, BBB],
[CCC],
[CCC, AAA, DDD],
[CCC, AAA, KKK, EEE],
[CCC, AAA, KKK],
[ZZZ],
]
</code></pre>
<p>As you can see from the that list, all elements are sorted alphabetically according to the last element on that list. Now I want to display all those parent pages on my front end to basically look like a sitemap and show the proper parent architecture of all Pages on my site that is sorted alphabetically according to each element in each nested list. The end result would be:</p>
<pre><code>[
[CCC],
[CCC, AAA],
[CCC, AAA, DDD],
[CCC, AAA, KKK],
[CCC, AAA, KKK, EEE],
[ZZZ],
[ZZZ, BBB],
]
</code></pre>
<p>To put it simply, I want to go through each first element on each list and sort them alphabetically, then each second element and sort them as well, then third, then fourth, and so on. Is there a way to do this?</p>
<p>EDIT:
To avoid confusion here's my view:</p>
<pre><code>def page_list(request):
# Fetch all pages and sort them alphabetically
queryset = Page.objects.all().order_by("title")
output = []
# Generate list of pages and their parents
for page in queryset:
output.append(get_list_of_parents(page))
context = {
"title": "Page List",
"page_list": output,
}
return render(request, template + '/page_list.html', context)
# Get an array of all parent instances of a Page model
def get_list_of_parents(page, parents=None):
    current_page = page
    parent_list = []
    if parents is not None:
        parent_list = parents
parent_list.append(current_page)
if current_page.parent is not None:
parent_list = get_list_of_parents(current_page.parent, parent_list)
else:
# if this is the last parent page, reverse the order of list to display list in form of parent path to child
parent_list.reverse()
return parent_list
</code></pre>
| 0 | 2016-09-29T18:54:14Z | 39,778,336 | <p>Using a tree of Pages:</p>
<p>I define this simple tree structure:</p>
<pre><code>class Page(object):
def __init__(self, name, children=None):
self.name = name
self.children = children or []
def display(self, indent=""):
child_repr = [child.display(indent=indent + " ") for child in self.children]
return indent + self.name + "\n" + "".join(child_repr)
def __str__(self):
return self.display()
</code></pre>
<p>The <code>display</code> and <code>__str__</code> methods are used for printing.</p>
<p>I can build your tree as a list of Pages:</p>
<pre><code>tree = [
Page("CCC", [
Page("AAA", [
Page("DDD"),
Page("KKK", [
Page("EEE")])])]),
Page("ZZZ", [
Page("BBB")])]
</code></pre>
<p>I can display the structure like this:</p>
<pre><code>for item in tree:
print(item)
</code></pre>
<p>I get:</p>
<pre><code>CCC
AAA
DDD
KKK
EEE
ZZZ
BBB
</code></pre>
<p>To traverse the tree, I define the following method:</p>
<pre><code> def traverse(self, result, stack=None):
stack = stack or []
stack.append(self.name)
result.append(list(stack))
for child in self.children:
child.traverse(result, stack)
stack.pop()
</code></pre>
<p>Where <code>result</code> is the list to fill.</p>
<p>Usage:</p>
<pre><code>result = []
for item in tree:
item.traverse(result)
import pprint
pprint.pprint(result)
</code></pre>
<p>I'll get:</p>
<pre><code>[['CCC'],
['CCC', 'AAA'],
['CCC', 'AAA', 'DDD'],
['CCC', 'AAA', 'KKK'],
['CCC', 'AAA', 'KKK', 'EEE'],
['ZZZ'],
['ZZZ', 'BBB']]
</code></pre>
<p>Great!</p>
| 0 | 2016-09-29T19:18:27Z | [
"python",
"django",
"sorting",
"order",
"nested-lists"
]
|
How can I sort a nested list according to each element in nested lists | 39,777,961 | <p>I have a model Page which stores an optional parent page (instance of itself). In my views I made a function which will return a nested list of all pages and their parents.</p>
<p>For example, my page architecture is</p>
<pre><code>CCC
AAA
DDD
KKK
EEE
ZZZ
BBB
</code></pre>
<p>So DDD has the parent page AAA, which has it's own parent page CCC. The CCC is top page and has no parent page.</p>
<p>The function will first get a queryset of all instances of Pages and sort them alphabetically. Then it will proceed to recursively generate a "full parent architecture" list, where each element on that list is another list of all parent pages, including the page itself. From the example above if we take a slice of list for page DDD, it would return [CCC, AAA, DDD].</p>
<p>My function currently returns a list like this for the above stated example:</p>
<pre><code>[
[CCC, AAA],
[ZZZ, BBB],
[CCC],
[CCC, AAA, DDD],
[CCC, AAA, KKK, EEE],
[CCC, AAA, KKK],
[ZZZ],
]
</code></pre>
<p>As you can see from the that list, all elements are sorted alphabetically according to the last element on that list. Now I want to display all those parent pages on my front end to basically look like a sitemap and show the proper parent architecture of all Pages on my site that is sorted alphabetically according to each element in each nested list. The end result would be:</p>
<pre><code>[
[CCC],
[CCC, AAA],
[CCC, AAA, DDD],
[CCC, AAA, KKK],
[CCC, AAA, KKK, EEE],
[ZZZ],
[ZZZ, BBB],
]
</code></pre>
<p>To put it simply, I want to go through each first element on each list and sort them alphabetically, then each second element and sort them as well, then third, then fourth, and so on. Is there a way to do this?</p>
<p>EDIT:
To avoid confusion here's my view:</p>
<pre><code>def page_list(request):
# Fetch all pages and sort them alphabetically
queryset = Page.objects.all().order_by("title")
output = []
# Generate list of pages and their parents
for page in queryset:
output.append(get_list_of_parents(page))
context = {
"title": "Page List",
"page_list": output,
}
return render(request, template + '/page_list.html', context)
# Get an array of all parent instances of a Page model
def get_list_of_parents(page, parents=None):
    current_page = page
    parent_list = []
    if parents is not None:
        parent_list = parents
parent_list.append(current_page)
if current_page.parent is not None:
parent_list = get_list_of_parents(current_page.parent, parent_list)
else:
# if this is the last parent page, reverse the order of list to display list in form of parent path to child
parent_list.reverse()
return parent_list
</code></pre>
| 0 | 2016-09-29T18:54:14Z | 39,778,668 | <p>Take advantage of helper functions in itertools and do your sorting with a simple and highly efficient one-line comparison function.</p>
<pre><code>>>> import pprint
>>> from itertools import dropwhile, izip_longest
>>> pages = [['CCC', 'AAA'],
... ['ZZZ', 'BBB'],
... ['CCC'],
... ['CCC', 'AAA', 'DDD'],
... ['CCC', 'AAA', 'DDD', 'EEE'],
... ['CCC', 'AAA', 'KKK'],
... ['ZZZ']]
>>> pprint.pprint(sorted(pages, cmp=lambda a, b: cmp(*next(dropwhile(lambda x: not cmp(x[0], x[1]), izip_longest(a, b))))))
[['CCC'],
['CCC', 'AAA'],
['CCC', 'AAA', 'DDD'],
['CCC', 'AAA', 'DDD', 'EEE'],
['CCC', 'AAA', 'KKK'],
['ZZZ'],
['ZZZ', 'BBB']]
>>>
</code></pre>
<p>This works not only for strings, but also for Page instances, as long as they are comparable, i.e. as long as you have defined the <code>__cmp__</code> method in your Page class.</p>
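<p>Worth noting: in Python 3 the <code>cmp</code> argument to <code>sorted</code> is gone, but for this data no comparison function is needed at all, because Python already compares lists lexicographically, element by element:</p>

```python
pages = [['CCC', 'AAA'],
         ['ZZZ', 'BBB'],
         ['CCC'],
         ['CCC', 'AAA', 'DDD'],
         ['CCC', 'AAA', 'DDD', 'EEE'],
         ['CCC', 'AAA', 'KKK'],
         ['ZZZ']]

# Lists compare element-wise, and ['CCC'] < ['CCC', 'AAA'] because a
# shorter list that is a prefix of a longer one sorts first.
ordered = sorted(pages)
```

<p>This yields exactly the parent-first ordering from the question. For Page instances the same holds as long as they define rich comparison methods (<code>__lt__</code> and <code>__eq__</code>) in Python 3.</p>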
| 0 | 2016-09-29T19:39:25Z | [
"python",
"django",
"sorting",
"order",
"nested-lists"
]
|
Django 1.9 doesnt filter query results | 39,777,995 | <p>I have a form where the options of one of my select input are related to another table.</p>
<p>Everything works perfectly, but know i want to exclude some options so i have the following kode in <em>forms.py</em>:</p>
<pre><code>class employeesForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
super(employeesForm, self).__init__(*args, **kwargs)
self.fields['num_employee'].required = False
self.fields['id_ignition'].required = False
self.fields['picture'].required = False
self.fields['source'].required = False
class Meta:
model = Employees
ccOptions = CostCenter.objects.all().values_list('id_costCenter', 'area')
jcOptions = JobCode.objects.all().values_list('id_jobCode', 'gmpName')
ntOptions = Nations.objects.all().values_list('nationality')
esOptions = EmpSources.objects.all().values_list('source')
from django.db.models import Q
# in this line
stOptions = Status.objects.exclude(Q(id_status=2) | Q(id_status=3)).values_list('status')
fields = [
'num_employee',
'fullName',
'shortName',
'gender',
'birthday',
'initday',
'id_status',
'nationality',
'picture',
'source',
]
widgets = {
'num_employee': forms.NumberInput(attrs={'class': 'form-control', 'name': 'num_employee'}),
'fullName': forms.TextInput(attrs={'class': 'form-control', 'name': 'fullName', 'placeholder': 'Angel Rafael Ortega Vazquez'}),
'shortName': forms.TextInput(attrs={'class': 'form-control', 'name': 'shortName', 'placeholder': 'Rafael Ortega'}),
'gender': forms.CheckboxInput(attrs={'class': 'form-control', 'name': 'gender'}),
'birthday': forms.DateInput(attrs={'class': 'form-control', 'name': 'birthday'}),
'initday': forms.DateInput(attrs={'class': 'form-control', 'name': 'initday'}),
# also here
'id_status': forms.Select(choices=stOptions, attrs={'class': 'form-control', 'name': 'id_status'}),
'nationality': forms.Select(choices=ntOptions, attrs={'class': 'form-control', 'name': 'nationality'}),
'picture': forms.ClearableFileInput(attrs={'class': 'form-control', 'name': 'picture'}),
'source': forms.Select(choices=esOptions, attrs={'class': 'form-control', 'name': 'source'}),
}
</code></pre>
<p>The problem is that even with the exclude option, my select still renders the complete table. </p>
<p>I even tried filtering on only one parameter but the result is the same.
I also deleted my browser cache just to rule this possibility out, but it didn't change anything.</p>
| 0 | 2016-09-29T18:55:52Z | 39,780,494 | <p>Related objects in ModelForms are <code>ModelChoiceField</code>s. You need to specify the <code>queryset=...</code> attribute.</p>
<p>Example:</p>
<pre><code>class Department(models.Model):
name = models.CharField()
active = models.BooleanField()
class Employee(models.Model):
name = models.CharField()
department = models.ForeignKey(Department)
class EmployeeForm(forms.ModelForm):
class Meta:
model = Employee
exclude = []
def __init__(self, *args, **kwargs):
super(EmployeeForm, self).__init__(*args, **kwargs)
self.fields['department'].queryset = Department.objects.filter(active=True)
</code></pre>
| 1 | 2016-09-29T21:42:49Z | [
"python",
"django-forms",
"filtering",
"django-1.9"
]
|
Concatenate nested python lists column-wise (similar to numpy.hstack() ) | 39,778,025 | <p>I am trying to concatenate two nested lists column-wise in Python. This is similar to numpy.hstack() but is in base Python with Python lists. I am able to do this in the following way. Is there a better, potentially faster way of concatenating nested lists column-wise?</p>
<pre><code>list_a = [[2, 2], [4, 4], [6, 6]]
list_b = [[1, 1], [3, 3], [5, 5]]
# list_b as array-like
#[2, 2]
#[4, 4]
#[6, 6]
# list_c as array_like
#[1, 1]
#[3, 3]
#[5, 5]
for x, y in zip(list_a, list_b):
    x += y  # in-place extend; a plain x = x + y would only rebind the loop variable
# list_a after concatenation
#[2, 2, 1, 1]
#[4, 4, 3, 3]
#[6, 6, 5, 5]
</code></pre>
| 0 | 2016-09-29T18:58:18Z | 39,778,026 | <p>List comprehension might not be faster, but it is cleaner.</p>
<pre><code>[a + b for a, b in zip(list_a, list_b)]
</code></pre>

<p>or, to mutate <code>list_a</code> in place (note that <code>list.extend</code> returns <code>None</code>, so a plain loop is clearer than a comprehension here):</p>

<pre><code>for a, b in zip(list_a, list_b):
    a.extend(b)
</code></pre>
| 1 | 2016-09-29T18:58:18Z | [
"python",
"list"
]
|
retrieve data sqlite3 into list python | 39,778,032 | <p>I want to retrieve data from sqlite3 table, this is my code, but I only get an empty list. I checked my query on the sqlite3 and it works fine there.</p>
<pre><code>import sqlite3
conn = sqlite3.connect("testee.db")
c = conn.cursor()
myquery = ("SELECT stock FROM testproduct WHERE store=3;")
c.execute(myquery)
templist=list(c.fetchall())
</code></pre>
<p>But templist is empty.</p>
| 0 | 2016-09-29T18:58:25Z | 39,778,140 | <p>I found out the error just now. The database file in that directory was empty. I copied the filled in file to the directory python is running and it works fine.</p>
| 0 | 2016-09-29T19:05:45Z | [
"python",
"sqlite",
"sqlite3",
"cursor",
"fetchall"
]
|
Scrapy - processing items with pipeline | 39,778,086 | <p>I'm running <code>scrapy</code>from a <code>python script</code>.</p>
<p>I was told that in <code>scrapy</code>, <code>responses</code> are built in <code>parse()</code>and further processed in <code>pipeline.py</code>. </p>
<p>this is how my <code>framework</code> is set so far:</p>
<p><strong>python script</strong></p>
<pre><code>def script(self):
process = CrawlerProcess(get_project_settings())
response = process.crawl('pitchfork_albums', domain='pitchfork.com')
process.start() # the script will block here until the crawling is finished
</code></pre>
<p><strong>spiders</strong></p>
<pre><code>class PitchforkAlbums(scrapy.Spider):
name = "pitchfork_albums"
allowed_domains = ["pitchfork.com"]
#creates objects for each URL listed here
start_urls = [
"http://pitchfork.com/reviews/best/albums/?page=1",
"http://pitchfork.com/reviews/best/albums/?page=2",
"http://pitchfork.com/reviews/best/albums/?page=3"
]
def parse(self, response):
for sel in response.xpath('//div[@class="album-artist"]'):
item = PitchforkItem()
item['artist'] = sel.xpath('//ul[@class="artist-list"]/li/text()').extract()
item['album'] = sel.xpath('//h2[@class="title"]/text()').extract()
yield item
</code></pre>
<p><strong>items.py</strong></p>
<pre><code>class PitchforkItem(scrapy.Item):
artist = scrapy.Field()
album = scrapy.Field()
</code></pre>
<p><strong>settings.py</strong></p>
<pre><code>ITEM_PIPELINES = {
'blogs.pipelines.PitchforkPipeline': 300,
}
</code></pre>
<p><strong>pipelines.py</strong> </p>
<pre><code>class PitchforkPipeline(object):
def __init__(self):
self.file = open('tracks.jl', 'wb')
def process_item(self, item, spider):
line = json.dumps(dict(item)) + "\n"
self.file.write(line)
for i in item:
return i['album'][0]
</code></pre>
<p>if I just <code>return item</code> in <code>pipelines.py</code>, I get data like so (one <code>response</code> for each <code>html</code>page):</p>
<pre><code>{'album': [u'Sirens',
u'I Had a Dream That You Were Mine',
u'Sunergy',
u'Skeleton Tree',
u'My Woman',
u'JEFFERY',
u'Blonde / Endless',
u' A Mulher do Fim do Mundo (The Woman at the End of the World) ',
u'HEAVN',
u'Blank Face LP',
u'blackSUMMERS\u2019night',
u'Wildflower',
u'Freetown Sound',
u'Trans Day of Revenge',
u'Puberty 2',
u'Light Upon the Lake',
u'iiiDrops',
u'Teens of Denial',
u'Coloring Book',
u'A Moon Shaped Pool',
u'The Colour in Anything',
u'Paradise',
u'HOPELESSNESS',
u'Lemonade'],
'artist': [u'Nicolas Jaar',
u'Hamilton Leithauser',
u'Rostam',
u'Kaitlyn Aurelia Smith',
u'Suzanne Ciani',
u'Nick Cave & the Bad Seeds',
u'Angel Olsen',
u'Young Thug',
u'Frank Ocean',
u'Elza Soares',
u'Jamila Woods',
u'Schoolboy Q',
u'Maxwell',
u'The Avalanches',
u'Blood Orange',
u'G.L.O.S.S.',
u'Mitski',
u'Whitney',
u'Joey Purp',
u'Car Seat Headrest',
u'Chance the Rapper',
u'Radiohead',
u'James Blake',
u'White Lung',
u'ANOHNI',
u'Beyonc\xe9']}
</code></pre>
<p>what I would like to do in <code>pipelines.py</code>is to be able to fetch individual <code>songs</code> for each <code>item</code>, like so:</p>
<pre><code>[u'Sirens']
</code></pre>
<p>please help?</p>
| 0 | 2016-09-29T19:01:33Z | 39,778,588 | <p>I suggest that you build well structured <code>item</code> in spider. In Scrapy Framework work flow, spider is used to built well-formed item, e.g., parse html, populate item instances and pipeline is used to do operations on item, e.g., filter item, store item.</p>
<p>For your application, if I understand correctly, each item should be an entry to describe an album. So when paring html, you'd better build such kind of item, instead of massing everything into item.</p>
<p>So in your <code>spider.py</code>, <code>parse</code> function, you should </p>
<ol>
<li><strong>Put <code>yield item</code> statement in the <code>for</code> loop, NOT OUTSIDE</strong>. In this way, each album will generate an item.</li>
<li><strong>Be careful about relative xpath selector in Scrapy</strong>. If you want to use relative xpath selector to specify self-and-descendant, use <code>.//</code> instead of <code>//</code>, and to specify self, use <code>./</code> instead of <code>/</code>.</li>
<li><p>Ideally the album title should be a scalar and the album artist should be a list, so use <code>extract_first</code> on the title to make it a scalar.</p>
<pre><code>def parse(self, response):
for sel in response.xpath('//div[@class="album-artist"]'):
item = PitchforkItem()
        item['artist'] = sel.xpath('./ul[@class="artist-list"]/li/text()').extract()
        item['album'] = sel.xpath('./h2[@class="title"]/text()').extract_first()
yield item
</code></pre></li>
</ol>
<p>Hope this would be helpful.</p>
| 0 | 2016-09-29T19:33:43Z | [
"python",
"scrapy"
]
|
Modbus device halts communication without a reason | 39,778,116 | <p>My company has bought several room thermostats with Modbus capability. They are from China. When I communicate with them over Modbus RTU, interesting things happen. The devices sometimes don't respond to me at all. Then in following polls, they sometimes respond. But in the end, after some time, be it one hour or two days, they totally stop responding and I get errors from the Modbus master. I used a serial port analyzer to see what's going on. I did all the hardware checks (termination resistor etc.). And I tried many Modbus masters. But even with only one thermostat connected, the issue happens. Specifically for this issue, I developed my own Modbus master in Python. The good part with that is that I use the minimalmodbus library and I have the chance to close the port after each poll. But still, even if my Modbus master program keeps polling, the thermostats totally stop responding. Somehow I have to solve that issue in software. I can't always restart the thermostats electrically to fix the communication issue. </p>
<p>Here is my code :</p>
<pre><code>import os
import sys
import sqlite3
import time
import datetime
import minimalmodbus
minimalmodbus.CLOSE_PORT_AFTER_EACH_CALL=True
PORT_NAME = '/com5'
SLAVE1_ADDRESS = 1
SLAVE2_ADDRESS = 2
SLAVE3_ADDRESS = 3
BAUDRATE = 9600 # baud (pretty much bits/s). Use 2400 or 38400 bits/s.
TIMEOUT = 0.4 # seconds. At least 0.2 seconds required for 2400 bits/s.
MODE = minimalmodbus.MODE_RTU
sqlite_file = 'ModbusTable.db'
instrument1 = minimalmodbus.Instrument(PORT_NAME, SLAVE1_ADDRESS, MODE)
instrument1.serial.baudrate = BAUDRATE
instrument1.serial.timeout = TIMEOUT
instrument1.debug = False
instrument1.precalculate_read_size = True
instrument2 = minimalmodbus.Instrument(PORT_NAME, SLAVE2_ADDRESS, MODE)
instrument2.serial.baudrate = BAUDRATE
instrument2.serial.timeout = TIMEOUT
instrument2.debug = False
instrument2.precalculate_read_size = True
instrument3 = minimalmodbus.Instrument(PORT_NAME, SLAVE3_ADDRESS, MODE)
instrument3.serial.baudrate = BAUDRATE
instrument3.serial.timeout = TIMEOUT
instrument3.debug = False
instrument3.precalculate_read_size = True
print ('Okuyor')
while 1:
    # Connecting to the database file
    conn = sqlite3.connect(sqlite_file)
    c = conn.cursor()

    try:
        values1 = instrument1.read_registers(40005, 2)
    except IOError:
        print ('Timestamp: {:%Y-%m-%d %H:%M:%S}'.format(datetime.datetime.now())+"Failed to read from instrument1")
    # C) Updates the pre-existing entry
    c.execute("""UPDATE ModbusData SET SetPoint = ? ,ActualTemp = ? WHERE ID= ? """,
              (values1[0],values1[1],1))
    time.sleep(0.5)

    try:
        values2 = instrument2.read_registers(40005, 2)
    except IOError:
        print ('Timestamp: {:%Y-%m-%d %H:%M:%S}'.format(datetime.datetime.now())+"Failed to read from instrument2")
    # C) Updates the pre-existing entry
    c.execute("""UPDATE ModbusData SET SetPoint = ? ,ActualTemp = ? WHERE ID= ? """,
              (values2[0],values2[1],2))
    time.sleep(0.5)

    try:
        values3 = instrument3.read_registers(40005, 2)
    except IOError:
        print ('Timestamp: {:%Y-%m-%d %H:%M:%S}'.format(datetime.datetime.now())+"Failed to read from instrument3")
    # C) Updates the pre-existing entry
    c.execute("""UPDATE ModbusData SET SetPoint = ? ,ActualTemp = ? WHERE ID= ? """,
              (values3[0],values3[1],3))
    time.sleep(0.5)

    # Committing changes and closing the connection to the database file
    conn.commit()
    conn.close()
</code></pre>
<p>And here is how thermostats perform :</p>
<p><a href="http://i.stack.imgur.com/GRt17.png" rel="nofollow"><img src="http://i.stack.imgur.com/GRt17.png" alt="enter image description here"></a></p>
<p>So I will appreciate so much if you will come up with your ideas.</p>
| -2 | 2016-09-29T19:04:37Z | 39,821,010 | <p>By the look of your screenshot, I think that you are not waiting long enough for the thermostat's reply.
First request: the reply from "instrument1" is not received within the time frame you have configured. So your program sends a request to "instrument2", but receives the late reply meant for "instrument1". The rest of your log is all about that: each request receiving the reply to the previous one.</p>
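<p>In other words, each reply arrives one request late. Two things that usually help here are raising the timeout and discarding any stale bytes from the receive buffer before every poll (with minimalmodbus that would be <code>instrument.serial.reset_input_buffer()</code>), plus a retry on timeout. A minimal sketch of the flush-then-retry idea — the <code>flush</code>/<code>read</code> callables below stand in for the real serial calls, which cannot be exercised without the hardware:</p>

```python
def read_with_retry(flush, read, retries=2):
    """Discard stale bytes, then attempt a read; retry on timeout.

    Flushing first makes sure the next reply on the wire belongs to the
    next request, instead of being the late answer to the previous one.
    """
    last_error = None
    for _ in range(retries):
        flush()                      # e.g. instrument.serial.reset_input_buffer()
        try:
            return read()            # e.g. instrument.read_registers(40005, 2)
        except IOError as exc:
            last_error = exc
    raise last_error

# Fake instrument for demonstration: the first poll times out, the second succeeds.
attempts = []

def fake_read():
    attempts.append(1)
    if len(attempts) == 1:
        raise IOError("no reply within timeout")
    return [215, 230]

values = read_with_retry(lambda: None, fake_read)
print(values)  # [215, 230]
```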
| 1 | 2016-10-02T19:48:56Z | [
"python",
"modbus"
]
|
Understanding behaviour of self with static variables | 39,778,235 | <p>I am confused by the behaviour of <code>self</code> when it comes to dealing with static variables in Python. From what I understand, static variables can be accessed using either <code>classname.variablename</code> or <code>self.variablename</code>. However, changing the value of that variable differs. I realized that if I change the static variable's value via <code>classname.variablename=SomeValue</code>, the instance variable reflects that value. However, if I change the value of the static variable using <code>self.variablename=SomeValue</code>, the static variable does not change when accessed as <code>classname.variablename</code>. From what I understand, when I assign a value like <code>self.variablename=SomeValue</code>, an instance variable is created. Can somebody please shed a little light on this behaviour?</p>
<p>Example 1:</p>
<pre><code>class bean:
    mycar="SomeCar"

    def test(self):
        bean.mycar = "yup"
        print(self.mycar)  #prints yup
</code></pre>
<p>Example 2:</p>
<pre><code>class bean:
    mycar="SomeCar"

    def test(self):
        self.mycar = "pup"
        print(bean.mycar)  #SomeCar
</code></pre>
| -1 | 2016-09-29T19:12:28Z | 39,778,326 | <p><code>self</code> in Python is an ordinary name that is bound to the <em>instance</em> on which a method is called. It is passed as the first argument to a method and, by convention, it is bound to the name 'self'. When <code>self.variable = value</code> is called, you are setting the value of an instance variable; a variable unique to that <em>particular</em> <code>bean</code>. </p>
<p>For example, <code>self.name = "Fred"</code> might name my mother's bean, but I named my own bean "George" when I called <code>self.name</code> from <em>my</em> bean. </p>
<p>On the other hand, <code>bean.name = "Yousef"</code> names <em>all</em> beans. My mother's bean is now named "Yousef", and so is mine. </p>
<p>If my Dad has a bean as well, he'll be surprised to find out that it too, is named "Yousef" when he calls <code>bean.name</code>. But he can still use <code>self.name</code> to give his bean its own (possibly unique) name.</p>
<h2>Example Code:</h2>
<pre><code>class bean:
    name = "Yousef"  # All beans have this name with `bean.name`

moms = bean()
mine = bean()
dads = bean()
beans = [moms, mine, dads]

# Primitive tabular output function
def output(bean_list):
    print("-bean-", "\t", "-self-")
    for b in bean_list:
        print(bean.name, "\t", b.name)
    print("")  # Separate output sets with a newline

# Print the names with only the class attribute set
output(beans)

# Python magic using zip to apply names simultaneously
# Mom's bean is "Fred", mine is "George"
# My dad is weird and named his "Ziggaloo"
for b, n in zip(beans, ["Fred", "George", "Ziggaloo"]):
    b.name = n

# Print the names after applying `self.name`
output(beans)
</code></pre>
<h2>Python 3.4 Output:</h2>
<pre><code>-bean- -self-
Yousef Yousef
Yousef Yousef
Yousef Yousef
-bean- -self-
Yousef Fred
Yousef George
Yousef Ziggaloo
</code></pre>
| 1 | 2016-09-29T19:17:59Z | [
"python",
"static-variables"
]
|
Understanding behaviour of self with static variables | 39,778,235 | <p>I am confused by the behaviour of <code>self</code> when it comes to dealing with static variables in Python. From what I understand, static variables can be accessed using either <code>classname.variablename</code> or <code>self.variablename</code>. However, changing the value of that variable differs. I realized that if I change the static variable's value via <code>classname.variablename=SomeValue</code>, the instance variable reflects that value. However, if I change the value of the static variable using <code>self.variablename=SomeValue</code>, the static variable does not change when accessed as <code>classname.variablename</code>. From what I understand, when I assign a value like <code>self.variablename=SomeValue</code>, an instance variable is created. Can somebody please shed a little light on this behaviour?</p>
<p>Example 1:</p>
<pre><code>class bean:
    mycar="SomeCar"

    def test(self):
        bean.mycar = "yup"
        print(self.mycar)  #prints yup
</code></pre>
<p>Example 2:</p>
<pre><code>class bean:
    mycar="SomeCar"

    def test(self):
        self.mycar = "pup"
        print(bean.mycar)  #SomeCar
</code></pre>
| -1 | 2016-09-29T19:12:28Z | 39,778,776 | <p>Both classes and instances can have attributes.</p>
<p>A class attribute is assigned to a class object. People sometimes call this a <em>"static variable"</em>.</p>
<p>An instance attribute is assigned to an instance (<em>"instance variable"</em>).</p>
<p>When an attribute of an object is <strong>read</strong>, a number of things happen (see <a href="https://docs.python.org/3/howto/descriptor.html" rel="nofollow">Descriptor HowTo Guide</a>), but the short version is:</p>
<ol>
<li>Try to read the attribute from the instance</li>
<li>If that fails, try to read it from the class</li>
</ol>
<p>When it is <strong>written</strong>, then there is no such mechanism. It is written where it is written ;)</p>
<p>See in example:</p>
<pre><code>class A(object):
    pass
a = A()
print A.value # fails - there is no "value" attribute
print a.value # fails - there is no "value" attribute
A.value = 7
print A.value # prints 7
print a.value # also prints 7 - there is no attribute on instance, but there is on class
a.value = 11
print A.value # prints 7
print a.value # prints 11 - now there is an attribute on the instance
a2 = A()
print a2.value # prints 7 - this instance has no "value", but the class has
</code></pre>
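<p>The same lookup order can be seen directly in <code>__dict__</code>: a class attribute lives on the class, and an assignment through an instance creates an entry in that instance's own <code>__dict__</code> without touching the class. A small sketch mirroring the example above (Python 3 syntax):</p>

```python
class A(object):
    pass

A.value = 7              # class attribute, stored on the class
a = A()
print('value' in a.__dict__)   # False - the instance has nothing of its own yet
print(a.value)                 # 7, found on the class during lookup

a.value = 11             # written to the instance only
print('value' in a.__dict__)   # True
print(A.value, a.value)        # 7 11 - the class attribute is untouched
```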
<p><strong>self?</strong></p>
<p>BTW, the <code>self</code> argument (in the question) is an instance, just like <code>a</code> is here.</p>
| 1 | 2016-09-29T19:46:37Z | [
"python",
"static-variables"
]
|
How to change value inside function | 39,778,404 | <p>Very new to coding. I'm trying to teach myself Python by doing a little project I came up with. I realize this will be a slow process, but I just had a question about loops.</p>
<p>This is what I'm trying to do:</p>
<p>-The user inputs a list of numbers, and if a number in the list is more than 360, the function will subtract 360 from it until it is below 360.</p>
<p>-Once it's below 360:</p>
<ul>
<li>If it's above 270, it will subtract the number from 360.</li>
<li>If it's above 180, it will subtract 180 from the number.</li>
<li>If it's above 90, it will subtract the number from 180.</li>
</ul>
<p>-It should then print the values. If this process sounds familiar, I'm trying to convert an azimuth to a bearing. Right now I'm just focused on getting the numerical value, then I'll add the direction.</p>
<p>This is my code:</p>
<pre><code>def bearing(x):
    for i in range(len(x)):
        while x[i]>=360:
            x[i]-=360
        if x[i]>270:
            x[i]==360-x[i]
        elif x[i]>180:
            x[i]-=180
        elif x[i]>90:
            x[i]==180-x[i]
    print (x)
</code></pre>
<p>The while loop works fine, but it stops there. I'm not sure if my indenting is wrong, or I'm using the wrong commands, but any help would be greatly appreciated.</p>
| 2 | 2016-09-29T19:23:11Z | 39,778,627 | <p>You don't have to use <code>range(len(x))</code>. Instead, you can iterate over the elements in <code>x</code> explicitly.</p>
<pre><code>def bearing(x):
    for item in x:
        item = item % 360  # This gets the remainder of dividing item by 360
        if item > 270:
            print(360 - item)
        elif item > 180:
            print(item - 180)
        elif item > 90:
            print(180 - item)
        else:
            print(item)
</code></pre>
| 0 | 2016-09-29T19:36:19Z | [
"python",
"loops"
]
|
How to change value inside function | 39,778,404 | <p>Very new to coding. I'm trying to teach myself Python by doing a little project I came up with. I realize this will be a slow process, but I just had a question about loops.</p>
<p>This is what I'm trying to do:</p>
<p>-The user inputs a list of numbers, and if a number in the list is more than 360, the function will subtract 360 from it until it is below 360.</p>
<p>-Once it's below 360:</p>
<ul>
<li>If it's above 270, it will subtract the number from 360.</li>
<li>If it's above 180, it will subtract 180 from the number.</li>
<li>If it's above 90, it will subtract the number from 180.</li>
</ul>
<p>-It should then print the values. If this process sounds familiar, I'm trying to convert an azimuth to a bearing. Right now I'm just focused on getting the numerical value, then I'll add the direction.</p>
<p>This is my code:</p>
<pre><code>def bearing(x):
    for i in range(len(x)):
        while x[i]>=360:
            x[i]-=360
        if x[i]>270:
            x[i]==360-x[i]
        elif x[i]>180:
            x[i]-=180
        elif x[i]>90:
            x[i]==180-x[i]
    print (x)
</code></pre>
<p>The while loop works fine, but it stops there. I'm not sure if my indenting is wrong, or I'm using the wrong commands, but any help would be greatly appreciated.</p>
| 2 | 2016-09-29T19:23:11Z | 39,778,777 | <p>Instead of using assignment (<code>=</code>) you're using comparison (<code>==</code>). Try this:</p>
<pre><code>def bearing(x):
    for i in range(len(x)):
        while x[i]>=360:
            x[i]-=360
        if x[i]>270:
            x[i]=360-x[i]
        elif x[i]>180:
            x[i]-=180
        elif x[i]>90:
            x[i]=180-x[i]
    print (x)
</code></pre>
<p>You may also shorten the while loop into a single <code>% 360</code>.</p>
<p>This is a cleaner function that does the same:</p>
<pre><code>def bearing(x):
    for i, v in enumerate(x):
        v = v % 360
        if v > 270:
            v = 360-v
        elif v > 180:
            v -= 180
        elif v > 90:
            v = 180-v
        x[i] = v
    print (x)
</code></pre>
| 0 | 2016-09-29T19:46:41Z | [
"python",
"loops"
]
|
How to change value inside function | 39,778,404 | <p>Very new to coding. I'm trying to teach myself Python by doing a little project I came up with. I realize this will be a slow process, but I just had a question about loops.</p>
<p>This is what I'm trying to do:</p>
<p>-The user inputs a list of numbers, and if a number in the list is more than 360, the function will subtract 360 from it until it is below 360.</p>
<p>-Once it's below 360:</p>
<ul>
<li>If it's above 270, it will subtract the number from 360.</li>
<li>If it's above 180, it will subtract 180 from the number.</li>
<li>If it's above 90, it will subtract the number from 180.</li>
</ul>
<p>-It should then print the values. If this process sounds familiar, I'm trying to convert an azimuth to a bearing. Right now I'm just focused on getting the numerical value, then I'll add the direction.</p>
<p>This is my code:</p>
<pre><code>def bearing(x):
    for i in range(len(x)):
        while x[i]>=360:
            x[i]-=360
        if x[i]>270:
            x[i]==360-x[i]
        elif x[i]>180:
            x[i]-=180
        elif x[i]>90:
            x[i]==180-x[i]
    print (x)
</code></pre>
<p>The while loop works fine, but it stops there. I'm not sure if my indenting is wrong, or I'm using the wrong commands, but any help would be greatly appreciated.</p>
| 2 | 2016-09-29T19:23:11Z | 39,778,941 | <p>You may or may not be familiar with the modulus operator (<code>%</code>). What that does is return the remainder from division, which is basically caffeinated subtraction anyway. For example, <code>5 % 3</code> would return <code>2</code> since <code>5 / 3 == 1 remainder 2</code>. Hopefully you can see how <code>user_input % 360</code> would take care of your initial subtraction to <code>< 360</code>.</p>
<p>Translating this to a list of numbers: <code>list_below_360 = [x % 360 for x in user_input_list]</code>.</p>
<p>So now, let's assume we have properly formatted user input that is a list of integers all below 360.</p>
<pre><code>for i in range(len(list_below_360)):
    if list_below_360[i] > 270:
        list_below_360[i] = 360 - list_below_360[i]
        # notice the use of a single equals sign here,
        # a double equals sign does comparison, not assignment
    elif list_below_360[i] > 180:
        list_below_360[i] -= 180
    elif list_below_360[i] > 90:
        list_below_360[i] = 180 - list_below_360[i]
print(list_below_360)
</code></pre>
<p>Hope this helps.</p>
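<p>If you later need the converted values (for example, to attach the N/S/E/W direction), a variant of the same logic that returns a new list instead of printing may be handier — a sketch:</p>

```python
def to_bearing(angles):
    result = []
    for angle in angles:
        angle = angle % 360        # wrap into the range [0, 360)
        if angle > 270:
            angle = 360 - angle
        elif angle > 180:
            angle -= 180
        elif angle > 90:
            angle = 180 - angle
        result.append(angle)
    return result

print(to_bearing([370, 280, 200, 100, 45]))  # [10, 80, 20, 80, 45]
```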
| 0 | 2016-09-29T19:56:59Z | [
"python",
"loops"
]
|
Cache file handle to netCDF files in python | 39,778,435 | <p>Is there a way to cache python file handles? I have a function which takes a netCDF file path as input, opens it, extracts some data from the netCDF file and closes it. It gets called a lot of times, and the overhead of opening the file each time is high.</p>
<p>How can I make it faster, maybe by caching the file handle? Perhaps there is a Python library to do this.</p>
| 6 | 2016-09-29T19:25:04Z | 39,821,062 | <p>What about this?</p>
<pre><code>filehandle = None

def get_filehandle(filename):
    global filehandle  # needed, since we assign to the module-level name
    if filehandle is None or filehandle.closed:  # `closed` is an attribute, not a method
        filehandle = open(filename, "r")
    return filehandle
</code></pre>
<p>You may want to encapsulate this into a class to prevent other code from messing with the <code>filehandle</code> variable.</p>
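<p>A sketch of such an encapsulation, keyed by filename so several files can stay open at once. The default <code>opener</code> is the built-in <code>open</code> for the demo; for netCDF you would presumably pass the opener of whatever library you use (e.g. <code>netCDF4.Dataset</code> — an assumption, since the question doesn't say which one):</p>

```python
import os
import tempfile

class FileHandleCache(object):
    """Keep file handles open between calls, reopening only when needed."""

    def __init__(self, opener=open):
        self._opener = opener
        self._handles = {}

    def get(self, path):
        handle = self._handles.get(path)
        if handle is None or handle.closed:  # `closed` is an attribute, not a method
            handle = self._opener(path)
            self._handles[path] = handle
        return handle

    def close_all(self):
        for handle in self._handles.values():
            handle.close()
        self._handles.clear()

# Demo with an ordinary text file:
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as tmp:
    tmp.write('demo')
path = tmp.name

cache = FileHandleCache()
h1 = cache.get(path)
h2 = cache.get(path)
print(h1 is h2)  # True - the second call reuses the cached handle
cache.close_all()
os.unlink(path)
```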
| -1 | 2016-10-02T19:54:59Z | [
"python",
"netcdf"
]
|
Cache file handle to netCDF files in python | 39,778,435 | <p>Is there a way to cache python file handles? I have a function which takes a netCDF file path as input, opens it, extracts some data from the netCDF file and closes it. It gets called a lot of times, and the overhead of opening the file each time is high.</p>
<p>How can I make it faster, maybe by caching the file handle? Perhaps there is a Python library to do this.</p>
| 6 | 2016-09-29T19:25:04Z | 39,917,248 | <p>Yes, you can use the following Python libraries:</p>
<ul>
<li><a href="https://pypi.python.org/pypi/dill" rel="nofollow">dill</a> (required)</li>
<li><a href="https://pypi.python.org/pypi/python-memcached" rel="nofollow">python-memcached</a> (optional)</li>
</ul>
<p>Let's follow the example. You have two files:</p>
<pre><code># save.py - serializes the file handler object and puts it in memcached
import dill
import memcache
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
file_handler = open('data.txt', 'r')
mc.set("file_handler", dill.dumps(file_handler))
print 'saved!'
</code></pre>
<p>and </p>
<pre><code># read_from_file.py - gets the serialized file handler object from memcached,
# then deserializes it and reads lines from it
import dill
import memcache
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
file_handler = dill.loads(mc.get("file_handler"))
print file_handler.readlines()
</code></pre>
<p>Now if you run:</p>
<pre><code>python save.py
python read_from_file.py
</code></pre>
<p>you can get what you want. </p>
<p><strong>Why it works?</strong></p>
<p>Because you didn't close the file (<code>file_handler.close()</code>), the object still exists in memory (it has not been garbage collected; see <a href="https://docs.python.org/2/library/weakref.html" rel="nofollow">weakref</a>), and you can use it. Even in a different process.</p>
<p><strong>Solution</strong></p>
<pre><code>import dill
import memcache
mc = memcache.Client(['127.0.0.1:11211'], debug=0)
serialized = mc.get("file_handler")
if serialized:
    file_handler = dill.loads(serialized)
else:
    file_handler = open('data.txt', 'r')
    mc.set("file_handler", dill.dumps(file_handler))
print file_handler.readlines()
</code></pre>
| 2 | 2016-10-07T12:24:15Z | [
"python",
"netcdf"
]
|
DRY way to declare several similar form fields | 39,778,443 | <p>Let's say I'm trying to declare a (django) Form class with several FileFields:</p>
<pre><code>class MyForm(forms.Form):
    file_0 = forms.FileField()
    file_1 = forms.FileField()
    ...
</code></pre>
<p>I have about 20 sequential inputs to declare - what's the best way to avoid typing this all out like a chump?</p>
| 3 | 2016-09-29T19:25:26Z | 39,778,489 | <p>Use a loop:</p>
<pre><code>files = [forms.FileField() for i in range(20)]
</code></pre>
| 1 | 2016-09-29T19:27:40Z | [
"python",
"django"
]
|
DRY way to declare several similar form fields | 39,778,443 | <p>Let's say I'm trying to declare a (django) Form class with several FileFields:</p>
<pre><code>class MyForm(forms.Form):
    file_0 = forms.FileField()
    file_1 = forms.FileField()
    ...
</code></pre>
<p>I have about 20 sequential inputs to declare - what's the best way to avoid typing this all out like a chump?</p>
| 3 | 2016-09-29T19:25:26Z | 39,778,711 | <p>You can use Django dynamic Form generation</p>
<pre><code>from django import forms

class MyForm(forms.Form):
    def __init__(self, *args, **kwargs):
        super(MyForm, self).__init__(*args, **kwargs)
        for i in range(20):
            self.fields["file_%d" % i] = forms.FileField()
</code></pre>
<p>See Docs <a href="https://code.djangoproject.com/wiki/CookBookNewFormsDynamicFields" rel="nofollow">here</a>.</p>
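<p>Django aside, the pattern is just generating numbered keys in a loop. A framework-free sketch of the same idea (using <code>dict</code> as a stand-in for the <code>forms.FileField</code> factory, so it runs without Django):</p>

```python
class FieldBag(object):
    """Plain-Python analogue of the dynamic-form pattern above."""

    def __init__(self, n_fields, make_field):
        self.fields = {}
        for i in range(n_fields):
            self.fields["file_%d" % i] = make_field()

bag = FieldBag(20, dict)  # `dict` stands in for forms.FileField
print(len(bag.fields))    # 20
```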
| 3 | 2016-09-29T19:42:23Z | [
"python",
"django"
]
|
Fancy time series grouping operations with Pandas dataframe | 39,778,471 | <p>I'm updating an implementation I have to use <em>pandas</em> and take advantage of its functionality and I would appreciate some help. I have a pandas dataframe of events that looks like this:</p>
<pre><code> ID Start End
0 243552 2010-12-12 23:00:53 2010-12-12 23:37:14
1 243621 2010-12-12 23:25:58 2010-12-13 02:20:40
2 243580 2010-12-12 23:39:19 2010-12-13 07:22:39
3 243579 2010-12-12 23:42:53 2010-12-13 05:40:14
4 243491 2010-12-12 23:43:53 2010-12-13 07:48:14
...
...
</code></pre>
<p>Dtypes are <code>int64</code> for <em>ID</em>, and <code>datetime64[ns]</code> for <em>Start</em> and <em>End</em>. Note that the dataframe is sorted in the <em>Start</em> column but it won't necessarily be sorted in the <em>End</em> column.</p>
<p>I want to analyze this data for some time range between input timestamps <em>t1</em> and <em>t2</em> for periods of equal timespan input by the user, and produce a new dataframe indexed by the timestamps of these periods.</p>
<p>What I would like to do is to group the data for each period producing 5 columns: <em>Start_count</em>, <em>End_count</em>, <em>Span_avg</em>, <em>Start_inter_avg</em> and <em>End_inter_avg</em>. Considering, for example, a 10 min periods grouping I want to get this:</p>
<pre><code> Start_count End_count Span_avg Start_inter_avg End_inter_avg
Period
2010-12-12 23:10:00 1 0 00:36:21 00:00:00 00:00:00
2010-12-12 23:20:00 0 0 0 00:00:00 00:00:00
2010-12-12 23:30:00 1 0 02:54:42 00:00:00 00:00:00
2010-12-12 23:40:00 1 1 07:43:20 00:00:00 00:00:00
2010-12-12 23:50:00 2 0 07:00:51 00:01:00 00:00:00
...
...
</code></pre>
<p>Where dtypes would be: <code>int64</code> for <em>Start_count</em> and <em>End_count</em>, and <code>timedelta64[ns]</code> for <em>Span_avg</em>, <em>Start_inter_avg</em> and <em>End_inter_avg</em>. The columns of the dataframe I want to produce are:</p>
<ul>
<li><em>Start_count</em>: the number of timestamps from the <em>Start</em> column of the original dataframe that fall under the period of the timespan <code>]Period - 10 min, Period]</code>;</li>
<li><em>End_count</em>: same as <em>Start_count</em> but considering the <em>End</em> column;</li>
<li><em>Span_avg</em>: computed as follows: 1st) look at the entries from the dataframe and select those which have the <em>Start</em> values contained inside <code>]Period - 10 min, Period]</code>, 2nd) in each of these entries compute the difference <em>End</em>-<em>Start</em>, 3rd) average these values.</li>
<li><em>Start_inter_avg</em>: computed like this: 1st) look at the entries from the dataframe and select those which have the <em>Start</em> values contained inside <code>]Period - 10 min, Period]</code>, and sort them (well, they're already sorted), 2nd) compute the timedelta difference between consecutive timestamps, 3rd) average these differences. (so if in a certain period there are 3 <em>Start</em> timestamps, [a,b,c], there would be 2 timedelta differences, [b-a, c-b] and the final value would be equal to ((b-a)+(c-b))/2).</li>
<li><em>End_inter_avg</em>: should be computed in the same way as <em>Start_inter_avg</em> but using the data from the <em>End</em> column. (note that pre-sorting is mandatory now).</li>
</ul>
<p>For example, the resulting table when grouping by 30 minutes' periods should be:</p>
<pre><code> Start_count End_count Span_avg Start_inter_avg End_inter_avg
Period
2010-12-12 23:30:00 2 0 01:45:31.500 00:25:05 00:00:00
2010-12-13 00:00:00 3 1 07:15:00.666 00:02:17 00:00:00
...
...
</code></pre>
<p>You can experiment with this <em>test.csv</em> file:</p>
<pre><code>ID,Start,End
243552,2010-12-12 23:00:53,2010-12-12 23:37:14
243621,2010-12-12 23:25:58,2010-12-13 02:20:40
243580,2010-12-12 23:39:19,2010-12-13 07:22:39
243579,2010-12-12 23:42:53,2010-12-13 05:40:14
243491,2010-12-12 23:43:53,2010-12-13 07:48:14
243490,2010-12-12 23:43:58,2010-12-13 01:18:40
243465,2010-12-13 00:07:53,2010-12-13 07:26:14
243515,2010-12-13 00:35:58,2010-12-13 03:41:40
243572,2010-12-13 00:46:58,2010-12-13 03:47:40
243520,2010-12-13 01:15:53,2010-12-13 05:14:14
243609,2010-12-13 01:29:53,2010-12-13 08:10:14
243482,2010-12-13 01:44:19,2010-12-13 05:57:39
243563,2010-12-13 01:49:53,2010-12-13 06:04:14
243414,2010-12-13 02:06:16,2010-12-13 02:46:48
243441,2010-12-13 02:15:16,2010-12-13 03:11:48
243548,2010-12-13 02:33:58,2010-12-13 02:49:40
243447,2010-12-13 05:01:42,2010-12-13 21:55:21
243531,2010-12-13 05:53:25,2010-12-13 07:49:59
243583,2010-12-13 05:53:25,2010-12-13 09:00:59
243593,2010-12-13 06:06:25,2010-12-13 09:50:59
243460,2010-12-13 06:14:42,2010-12-13 18:14:44
243596,2010-12-13 06:15:10,2010-12-13 21:47:25
243575,2010-12-13 06:22:42,2010-12-13 20:51:21
243514,2010-12-13 06:24:14,2010-12-13 08:34:07
243421,2010-12-13 06:31:14,2010-12-13 10:57:07
243471,2010-12-13 06:35:23,2010-12-13 14:11:13
243518,2010-12-13 06:36:48,2010-12-13 17:35:39
243565,2010-12-13 06:37:43,2010-12-13 17:16:22
243564,2010-12-13 06:48:16,2010-12-13 16:18:15
243424,2010-12-13 06:48:48,2010-12-13 16:19:39
243437,2010-12-13 06:58:46,2010-12-13 17:11:30
243573,2010-12-13 07:00:14,2010-12-13 09:46:07
243585,2010-12-13 07:01:35,2010-12-13 09:01:38
243483,2010-12-13 07:02:16,2010-12-13 16:36:15
243425,2010-12-13 07:04:21,2010-12-13 16:03:50
243570,2010-12-13 07:07:48,2010-12-13 08:51:04
243507,2010-12-13 07:10:03,2010-12-13 15:58:48
243535,2010-12-13 07:10:23,2010-12-13 11:31:13
243502,2010-12-13 07:13:21,2010-12-13 19:06:50
243525,2010-12-13 07:13:21,2010-12-13 19:34:50
243486,2010-12-13 07:13:56,2010-12-13 17:49:38
243451,2010-12-13 07:15:58,2010-12-13 17:34:03
243485,2010-12-13 07:17:35,2010-12-13 09:40:38
243487,2010-12-13 07:19:01,2010-12-13 10:39:35
243522,2010-12-13 07:19:25,2010-12-13 18:03:02
243481,2010-12-13 07:19:48,2010-12-13 11:08:04
243545,2010-12-13 07:20:42,2010-12-13 20:38:44
243492,2010-12-13 07:23:07,2010-12-13 17:38:42
243611,2010-12-13 07:23:23,2010-12-13 12:58:13
243508,2010-12-13 07:25:25,2010-12-13 18:29:02
243620,2010-12-13 07:25:46,2010-12-13 17:51:30
243466,2010-12-13 07:27:40,2010-12-13 19:05:58
243582,2010-12-13 07:29:29,2010-12-13 20:08:10
243568,2010-12-13 07:31:17,2010-12-13 15:30:37
243461,2010-12-13 07:32:24,2010-12-13 20:47:52
243623,2010-12-13 07:33:10,2010-12-13 10:34:20
243498,2010-12-13 07:33:25,2010-12-13 16:22:02
243427,2010-12-13 07:33:48,2010-12-13 20:00:39
243526,2010-12-13 07:34:10,2010-12-13 09:46:20
243472,2010-12-13 07:36:10,2010-12-13 20:36:25
243479,2010-12-13 07:36:48,2010-12-13 19:30:39
243494,2010-12-13 07:39:07,2010-12-13 17:03:42
243433,2010-12-13 07:39:35,2010-12-13 09:19:38
243503,2010-12-13 07:40:06,2010-12-13 13:53:08
243429,2010-12-13 07:40:35,2010-12-13 10:54:38
243422,2010-12-13 07:43:23,2010-12-13 10:35:10
243618,2010-12-13 07:46:19,2010-12-13 11:56:40
243445,2010-12-13 07:48:14,2010-12-13 10:15:07
243554,2010-12-13 07:49:14,2010-12-13 09:11:57
243542,2010-12-13 07:49:17,2010-12-13 18:53:37
243501,2010-12-13 07:50:40,2010-12-13 19:29:58
243529,2010-12-13 07:51:18,2010-12-13 17:14:15
243457,2010-12-13 07:53:55,2010-12-13 15:33:27
243613,2010-12-13 07:53:58,2010-12-13 17:00:03
243562,2010-12-13 07:54:01,2010-12-13 14:17:09
243571,2010-12-13 07:54:48,2010-12-13 18:39:39
243541,2010-12-13 07:58:53,2010-12-13 16:02:23
243510,2010-12-13 07:59:10,2010-12-13 19:04:51
243470,2010-12-13 07:59:46,2010-12-13 17:06:30
243448,2010-12-13 07:59:48,2010-12-13 18:38:39
243606,2010-12-13 08:03:21,2010-12-13 18:07:50
243430,2010-12-13 08:04:08,2010-12-13 17:49:41
243495,2010-12-13 08:04:25,2010-12-13 18:15:02
243591,2010-12-13 08:07:08,2010-12-13 17:33:54
243551,2010-12-13 08:07:10,2010-12-13 18:18:25
243459,2010-12-13 08:10:14,2010-12-13 10:53:07
243558,2010-12-13 08:11:00,2010-12-13 11:56:01
243605,2010-12-13 08:13:20,2010-12-13 16:38:14
243452,2010-12-13 08:15:23,2010-12-13 13:50:13
243446,2010-12-13 08:17:06,2010-12-13 14:00:08
243516,2010-12-13 08:17:20,2010-12-13 15:03:14
243450,2010-12-13 08:18:17,2010-12-13 16:21:37
243473,2010-12-13 08:19:22,2010-12-13 12:07:49
243438,2010-12-13 08:20:10,2010-12-13 19:34:25
243464,2010-12-13 08:21:03,2010-12-13 14:44:48
243536,2010-12-13 08:21:29,2010-12-13 17:32:15
243476,2010-12-13 08:21:58,2010-12-13 17:34:03
243595,2010-12-13 08:24:19,2010-12-13 11:38:40
243532,2010-12-13 08:27:10,2010-12-13 20:28:25
243497,2010-12-13 08:27:20,2010-12-13 14:12:14
</code></pre>
<p><strong>Attempt at a solution (answers part of the question)</strong></p>
<p>This is my attempt at a solution. I only do the first 3 columns, I get <em>Start_count</em> and <em>End_count</em> with <code>float64</code> dtype, I index data by the first boundary of the period timestamp (differently from what I ask, but ok), and overall I wonder if it could be done in a simpler, shorter and more elegant way.</p>
<pre><code># Loading and parsing
data = pd.read_csv('test.csv')
data.Start = pd.to_datetime(data.Start, format='%Y-%m-%d %H:%M:%S')
data.End = pd.to_datetime(data.End, format='%Y-%m-%d %H:%M:%S')
interval = 10 # minutes
Start_count = pd.Series(1, index=data.Start)
Start_count = Start_count.resample(str(interval)+'t').count()
# End_count series doesn't have the same length as Start_count
End_count = pd.Series(1, index=data.End)
End_count = End_count.resample(str(interval)+'t').count()
# This is an ugly way of going around encountered issues and doing what I wanted
Span = pd.Series(np.float64( (data.End - data.Start) / np.timedelta64(1,'s') ), index=data.Start)
Span_mean = Span.resample(str(interval)+'t').mean()
Span_mean = pd.to_timedelta(Span_mean, unit='s')
# When merging all series in a dataframe it seems that alignment is properly done
new_dataframe = pd.DataFrame(({'Start_count' : Start_count, 'End_count' : End_count, 'Span_avg' : Span_mean}))
new_dataframe.fillna(0,inplace=True)
new_dataframe.index.rename('Periods',inplace=True)
new_dataframe.head() # Shows:
End_count Span_avg Start_count
Periods
2010-12-12 23:00:00 0.0 00:36:21 1.0
2010-12-12 23:10:00 0.0 00:00:00 0.0
2010-12-12 23:20:00 0.0 02:54:42 1.0
2010-12-12 23:30:00 1.0 07:43:20 1.0
2010-12-12 23:40:00 0.0 05:12:08 3.0
</code></pre>
| 0 | 2016-09-29T18:21:29Z | 39,790,370 | <p>It's a difficult problem, but here is the solution:</p>
<pre><code>import pandas as pd
period = "10min"
df = pd.read_csv("test.csv", parse_dates=[1, 2])
span = df.End - df.Start
start_period = df.Start.dt.floor(period)
end_period = df.End.dt.floor(period)
start_count = start_period.value_counts(sort=False)
end_count = end_period.value_counts(sort=False)
span_average = pd.to_timedelta(
    span.dt.total_seconds().groupby(start_period).mean().round(),
    unit="s").rename("Span_average")

def average_span(s):
    if len(s) > 1:
        return (s.max() - s.min()).total_seconds() / (len(s) - 1)
    else:
        return 0

start_inter_avg = pd.to_timedelta(
    df.Start.groupby(start_period).agg(average_span).round(),
    unit="s").rename("Start_inter_avg")
end_inter_avg = pd.to_timedelta(
    df.End.groupby(end_period).agg(average_span).round(),
    unit="s").rename("End_inter_avg")

res = pd.concat([start_count, end_count, span_average, start_inter_avg, end_inter_avg],
                axis=1).resample(period).asfreq().fillna(0)
</code></pre>
<p>the output:</p>
<pre><code> Start End Span_average Start_inter_avg End_inter_avg
2010-12-12 23:00:00 1.0 0.0 00:36:21 00:00:00 00:00:00
2010-12-12 23:10:00 0.0 0.0 00:00:00 00:00:00 00:00:00
2010-12-12 23:20:00 1.0 0.0 02:54:42 00:00:00 00:00:00
2010-12-12 23:30:00 1.0 1.0 07:43:20 00:00:00 00:00:00
2010-12-12 23:40:00 3.0 0.0 05:12:08 00:00:32 00:00:00
2010-12-12 23:50:00 0.0 0.0 00:00:00 00:00:00 00:00:00
2010-12-13 00:00:00 1.0 0.0 07:18:21 00:00:00 00:00:00
2010-12-13 00:10:00 0.0 0.0 00:00:00 00:00:00 00:00:00
</code></pre>
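<p>A note on <code>average_span</code> above: for sorted timestamps, the mean of the consecutive differences telescopes to <code>(max - min) / (n - 1)</code>, which is why the helper never needs to compute the individual gaps. A quick check in plain Python with the three <em>Start</em> values of the 23:40 bucket:</p>

```python
from datetime import datetime

stamps = [datetime(2010, 12, 12, 23, 42, 53),
          datetime(2010, 12, 12, 23, 43, 53),
          datetime(2010, 12, 12, 23, 43, 58)]

# Mean of the consecutive gaps...
gaps = [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]
mean_gap = sum(gaps) / len(gaps)

# ...equals the telescoped form used by average_span:
telescoped = (max(stamps) - min(stamps)).total_seconds() / (len(stamps) - 1)
print(mean_gap, telescoped)  # 32.5 32.5
```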
| 1 | 2016-09-30T11:22:03Z | [
"python",
"pandas"
]
|
How to make auto increment field in django which starts from 10000 | 39,778,600 | <p>I want to make an auto increment field in Django which starts from a bigger value. I found that I can use <strong>AutoField</strong> in models, but it starts from 1.</p>
<pre><code>class University(models.Model):
id = models.AutoField(primary_key=True)
university_name = models.CharField(max_length=250, unique=True)
university_slug = models.SlugField(max_length=250, unique=True)
def __unicode__(self):
return self.university_name
</code></pre>
<p>How can I do this? Any helpful suggestion will be appreciated.</p>
| 0 | 2016-09-29T19:34:49Z | 39,779,380 | <p>Rather than trying this in the front end, I suggest you create a sequence in the backend (in the database) with a higher starting value.</p>
<p>Hope this will solve your purpose.</p>
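<p>For MySQL (which this question is tagged with), the closest thing to a sequence is the table's <code>AUTO_INCREMENT</code> counter, which can be raised once after the table is created. The table name below assumes Django's default <code>appname_modelname</code> naming:</p>

```sql
ALTER TABLE myapp_university AUTO_INCREMENT = 10000;
```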
| 0 | 2016-09-29T20:26:10Z | [
"python",
"mysql",
"django",
"django-models"
]
|
How to make auto increment field in django which starts from 10000 | 39,778,600 | <p>I want to make an auto increment field in Django which starts from a bigger value. I found that I can use <strong>AutoField</strong> in models, but it starts from 1.</p>
<pre><code>class University(models.Model):
id = models.AutoField(primary_key=True)
university_name = models.CharField(max_length=250, unique=True)
university_slug = models.SlugField(max_length=250, unique=True)
def __unicode__(self):
return self.university_name
</code></pre>
<p>How can I do this? Any helpful suggestion will be appreciated.</p>
| 0 | 2016-09-29T19:34:49Z | 39,781,679 | <p>The simplest is to catch the <a href="https://docs.djangoproject.com/en/1.10/ref/signals/#post-migrate" rel="nofollow">post migrate signal</a></p>
<pre><code>from django.apps import AppConfig
from django.db.models.signals import post_migrate

def my_callback(sender, **kwargs):
    if sender.name == 'myapp':
        try:
            obj = University.objects.create(pk=999, ...)
            obj.delete()
        except IntegrityError:
            pass

class MyAppConfig(AppConfig):
    ...
    def ready(self):
        post_migrate.connect(my_callback, sender=self)
</code></pre>
<p>What we are doing here is creating a record and deleting it immediately. On mysql that changes the next value in the auto increment. It doesn't matter that the record has been deleted. The next asigned number will be 1000. </p>
| 1 | 2016-09-29T23:48:06Z | [
"python",
"mysql",
"django",
"django-models"
]
|
Caesar Cipher shift by two letters | 39,778,606 | <pre><code>def main():
cc = (input("Enter Message to Encrypt\n"))#user input
shift = int(2) #shift length
a=["a","b","c","d","e","f","g","h","i","j","k","l",
"m","n","o","p","q","r","s","t","u","v","w","x","y","z"] #reference list
newa={} #new shifted reference list
for i in range (0,len(a)):
newa [a[i]]=a[(i+shift)%len(a)]
#adds shifted 2 alaphabet into newalaphabet
#% moodulus used to wrap
for i in cc: #iterates through cc
if i in a:
a[i]=cc[i]
a[i]=newa[i]
main()
</code></pre>
<p>So I need input from the user #cc</p>
<p>the shift needs to be two</p>
<p>I used an alphabet list</p>
<p>then shift the alphabet by two to create newa</p>
<p>but I do not know how to apply the new alphabet to my user's input</p>
| 1 | 2016-09-29T19:35:06Z | 39,778,694 | <p>Iterate through the string <code>cc</code> and replace all the alphabets using the <code>get</code> method of <code>newa</code>. Characters that are not in the dictionary are left as is, by passing them as the default to <code>newa.get</code> when the key is missing: </p>
<pre><code>newa = {}
for i, x in enumerate(a):
    newa[x] = a[(i+shift) % len(a)]

encrypted_text = ''.join(newa.get(i, i) for i in cc)
</code></pre>
<p>Python's builtin <a href="https://docs.python.org/2/library/functions.html#enumerate" rel="nofollow"><code>enumerate</code></a> can be used in place of <code>range(len(a))</code> in this case where you need the items in <code>a</code> and their respective indices.</p>
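<p>Putting the pieces together as a runnable sketch (the alphabet and the sample input are mine, for illustration):</p>

```python
a = [chr(c) for c in range(ord('a'), ord('z') + 1)]  # same reference list as the question
shift = 2

# shifted lookup table: 'a' -> 'c', ..., 'y' -> 'a', 'z' -> 'b'
newa = {x: a[(i + shift) % len(a)] for i, x in enumerate(a)}

cc = "hello world"
encrypted_text = ''.join(newa.get(ch, ch) for ch in cc)
print(encrypted_text)  # jgnnq yqtnf
```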
| 0 | 2016-09-29T19:41:34Z | [
"python",
"encryption",
"caesar-cipher"
]
|
Caesar Cipher shift by two letters | 39,778,606 | <pre><code>def main():
cc = (input("Enter Message to Encrypt\n"))#user input
shift = int(2) #shift length
a=["a","b","c","d","e","f","g","h","i","j","k","l",
"m","n","o","p","q","r","s","t","u","v","w","x","y","z"] #reference list
newa={} #new shifted reference list
for i in range (0,len(a)):
newa [a[i]]=a[(i+shift)%len(a)]
#adds shifted 2 alaphabet into newalaphabet
#% moodulus used to wrap
for i in cc: #iterates through cc
if i in a:
a[i]=cc[i]
a[i]=newa[i]
main()
</code></pre>
<p>So I need input from the user #cc</p>
<p>the shift needs to be two</p>
<p>I used an alphabet list</p>
<p>then shift the alphabet by two to create newa</p>
<p>but I do not know how to apply the new alphabet to my user's input</p>
| 1 | 2016-09-29T19:35:06Z | 39,778,704 | <p>Use mapping for every char, then join them back to create the encrypted message:</p>
<pre><code>''.join(map(lambda x: chr((ord(x) - 97 + shift) % 26 + 97) if x in alphabet else x, cc.lower()))
</code></pre>
<p>Integrate it like that:</p>
<pre><code>import string
alphabet = string.ascii_lowercase
cc = input('Enter string to encode: ')
shift = 2 # could be any number
encrypted = ''.join(map(lambda x: chr((ord(x) - 97 + shift) % 26 + 97) if x in alphabet else x, cc.lower()))
</code></pre>
<p><code>cc.lower()</code> for the letters to be all same case (to map using constant <code>ord</code>)</p>
<p><code>chr((ord(x) - 97 + shift) % 26 + 97)</code> :</p>
<ul>
<li>get the value of the number minus 97 (0 for <code>a</code>, 1 for <code>b</code>, etc.).</li>
<li>apply the shift (<code>a</code> turns to <code>c</code>, etc.).</li>
<li>modulate by 26 to prevent letters like <code>z</code> from exceeding (25 + 2 = 27, 27 % 26 = 1 = <code>b</code>).</li>
<li>add 97 to bring the letter back to ascii standard (97 for <code>a</code>, 98 for <code>b</code>, etc.)</li>
</ul>
<p><code>if x in alphabet else x</code> covers characters that are not letters (if you want to drop spaces and punctuation, use <code>if x in alphabet else ''</code> instead).</p>
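<p>A quick check of the bullet points above for the wrap-around case (a hedged sketch, using <code>'z'</code> as the sample letter):</p>

```python
shift = 2
x = 'z'
value = ord(x) - 97              # 25  (position of 'z' in the alphabet)
wrapped = (value + shift) % 26   # 27 % 26 == 1
print(chr(wrapped + 97))         # b
```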
| 0 | 2016-09-29T19:42:12Z | [
"python",
"encryption",
"caesar-cipher"
]
|
Caesar Cipher shift by two letters | 39,778,606 | <pre><code>def main():
cc = (input("Enter Message to Encrypt\n"))#user input
shift = int(2) #shift length
a=["a","b","c","d","e","f","g","h","i","j","k","l",
"m","n","o","p","q","r","s","t","u","v","w","x","y","z"] #reference list
newa={} #new shifted reference list
for i in range (0,len(a)):
newa [a[i]]=a[(i+shift)%len(a)]
#adds shifted 2 alaphabet into newalaphabet
#% moodulus used to wrap
for i in cc: #iterates through cc
if i in a:
a[i]=cc[i]
a[i]=newa[i]
main()
</code></pre>
<p>So I need input from the user #cc</p>
<p>the shift needs to be two</p>
<p>I used an alphabet list</p>
<p>then shift the alphabet by two to create newa</p>
<p>but I do not know how to apply the new alphabet to my user's input</p>
| 1 | 2016-09-29T19:35:06Z | 39,778,719 | <p>Use a dictionary to map inputs to outputs</p>
<pre><code>shifted_a = a[shift:] + a[:shift]  # forward shift: 'a' -> 'c'

cipher = {a[i]: shifted_a[i] for i in range(len(a))}

output = ''.join(cipher.get(char, char) for char in cc)  # .get leaves non-letters unchanged
</code></pre>
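<p>Note the slice direction: for a forward shift (the OP maps <code>a</code> to <code>c</code>) the shifted alphabet should start at index <code>shift</code>, and <code>cipher.get(char, char)</code> passes spaces through instead of raising <code>KeyError</code>. A quick self-contained check:</p>

```python
a = [chr(c) for c in range(ord('a'), ord('z') + 1)]
shift = 2

shifted_a = a[shift:] + a[:shift]               # starts at 'c', wraps back to 'a', 'b'
cipher = {a[i]: shifted_a[i] for i in range(len(a))}

output = ''.join(cipher.get(char, char) for char in 'abc xyz')
print(output)  # cde zab
```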
| 0 | 2016-09-29T19:42:51Z | [
"python",
"encryption",
"caesar-cipher"
]
|
Caesar Cipher shift by two letters | 39,778,606 | <pre><code>def main():
cc = (input("Enter Message to Encrypt\n"))#user input
shift = int(2) #shift length
a=["a","b","c","d","e","f","g","h","i","j","k","l",
"m","n","o","p","q","r","s","t","u","v","w","x","y","z"] #reference list
newa={} #new shifted reference list
for i in range (0,len(a)):
newa [a[i]]=a[(i+shift)%len(a)]
#adds shifted 2 alaphabet into newalaphabet
#% moodulus used to wrap
for i in cc: #iterates through cc
if i in a:
a[i]=cc[i]
a[i]=newa[i]
main()
</code></pre>
<p>So I need input from the user #cc</p>
<p>the shift needs to be two</p>
<p>I used an alphabet list</p>
<p>then shift the alphabet by two to create newa</p>
<p>but I do not know how to apply the new alphabet to my user's input</p>
| 1 | 2016-09-29T19:35:06Z | 39,779,010 | <p>I would just build a translation table and use it to encode the string.</p>
<pre><code>import string
shift = 2
lower, upper = string.ascii_lowercase, string.ascii_uppercase
# shift within each case separately so 'y' -> 'a' and 'Y' -> 'A' (no cross-case wrap)
transtable = str.maketrans(lower + upper,
                           lower[shift:] + lower[:shift] + upper[shift:] + upper[:shift])
cc = input('Enter string to encode: ')
print(cc.translate(transtable))
</code></pre>
| 0 | 2016-09-29T20:01:38Z | [
"python",
"encryption",
"caesar-cipher"
]
|
Caesar Cipher shift by two letters | 39,778,606 | <pre><code>def main():
cc = (input("Enter Message to Encrypt\n"))#user input
shift = int(2) #shift length
a=["a","b","c","d","e","f","g","h","i","j","k","l",
"m","n","o","p","q","r","s","t","u","v","w","x","y","z"] #reference list
newa={} #new shifted reference list
for i in range (0,len(a)):
newa [a[i]]=a[(i+shift)%len(a)]
#adds shifted 2 alaphabet into newalaphabet
#% moodulus used to wrap
for i in cc: #iterates through cc
if i in a:
a[i]=cc[i]
a[i]=newa[i]
main()
</code></pre>
<p>So I need input from the user #cc</p>
<p>the shift needs to be two</p>
<p>I used an alphabet list</p>
<p>then shift the alphabet by two to create newa</p>
<p>but I do not know how to apply the new alphabet to my user's input</p>
| 1 | 2016-09-29T19:35:06Z | 39,779,355 | <p>I'll throw my solution in there. It should be pretty clear how it works...</p>
<pre><code>import string
index_lookup = {letter: index for index, letter in enumerate(string.ascii_lowercase)}

def caesar_letter(l, shift=2):
    new_index = index_lookup[l] + shift
    return string.ascii_lowercase[new_index % len(index_lookup)]

def caesar_word(s):
    return ''.join([caesar_letter(letter) for letter in s])
</code></pre>
<p>I think the above is better for readability but if you're opposed to imports...</p>
<pre><code>index_lookup = {chr(idx): idx - ord('a') for idx in range(ord('a'), ord('z')+1)}
</code></pre>
<p>...</p>
<pre><code>In [5]: caesar_word('abcdefghijklmnopqrstuvwxyz')
Out[5]: 'cdefghijklmnopqrstuvwxyzab'
</code></pre>
| 0 | 2016-09-29T20:23:32Z | [
"python",
"encryption",
"caesar-cipher"
]
|
Template View - kwargs and **kwargs | 39,778,657 | <p>I am reading about Template views through a tutorial and some of the code kind of confused me. The author used this code sample</p>
<pre><code>from django.utils.timezone import now

class AboutUsView(TemplateView):
    template_name = 'about_us.html'

    def get_context_data(self, **kwargs):
        context = super(AboutUsView, self).get_context_data(**kwargs)
        if now().weekday() < 5 and 8 < now().hour < 18:
            context['open'] = True
        else:
            context['open'] = False
        return context
</code></pre>
<p>The thing that confused me syntactically was this statement</p>
<pre><code> context = super(AboutUsView, self).get_context_data(**kwargs)
</code></pre>
<p>if we already are receiving <code>**kwargs</code>, then why are we passing it to the super function with ** (double star)? I think we should pass it as </p>
<pre><code> context = super(AboutUsView, self).get_context_data(kwargs)
</code></pre>
<p>this is the contextMixin which is receiving this call.</p>
<pre><code>class ContextMixin(object):
"""
A default context mixin that passes the keyword arguments received by
get_context_data as the template context.
"""
def get_context_data(self, **kwargs):
if 'view' not in kwargs:
kwargs['view'] = self
return kwargs
</code></pre>
<p>From what I have read, the use of <code>**kwargs</code> pretty much means that kwargs is currently a dictionary and needs to be converted to named values. If that is correct, then how can kwargs be a dictionary when its parameter is actually **kwargs? I hope my question makes sense. Please let me know if you would want me to rephrase this.</p>
| 3 | 2016-09-29T19:38:24Z | 39,778,758 | <p>In a function declaration, <code>**kwargs</code> will take all unspecified keyword arguments and convert them into a dictionary.</p>
<pre><code>>>> test_dict = {'a':1, 'b':2}
>>> def test(**kwargs):
...     print (kwargs)
...
>>> test(**test_dict)
{'b': 2, 'a': 1}
</code></pre>
<p>Note that the dictionary object has to be converted using <code>**</code> when it is passed to the function (<code>test(**test_dict)</code>) and when it is received by the function. It is impossible to do the following:</p>
<pre><code>>>> test(test_dict)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: test() takes 0 positional arguments but 1 was given
</code></pre>
<p>So, in your example, the first <code>**kwargs</code> unpacks the keyword arguments into a dictionary, and then the second packs them back up to be sent to the parent.</p>
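<p>A minimal illustration of that unpack/re-pack chain (the <code>parent</code>/<code>child</code> names are hypothetical, not from the question):</p>

```python
def parent(**kwargs):
    # receives the re-packed keywords as a dict again
    return kwargs

def child(**kwargs):
    # ** in the signature packs the incoming keywords into the dict `kwargs`
    kwargs['view'] = 'added by child'
    # ** at the call site unpacks the dict back into keyword arguments
    return parent(**kwargs)

print(child(a=1))  # {'a': 1, 'view': 'added by child'}
```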
<p>A function with <code>**kwargs</code> in the signature can receive either an unpacked dictionary or unspecified keyword arguments. Here's an example of the second case:</p>
<pre><code>>>> def test(arg1, **kwargs):
...     print (kwargs)
...
>>> test('first', a=1, b=2)
{'b': 2, 'a': 1}
</code></pre>
| 3 | 2016-09-29T19:45:42Z | [
"python",
"django"
]
|
Template View - kwargs and **kwargs | 39,778,657 | <p>I am reading about Template views through a tutorial and some of the code kind of confused me. The author used this code sample</p>
<pre><code>from django.utils.timezone import now

class AboutUsView(TemplateView):
    template_name = 'about_us.html'

    def get_context_data(self, **kwargs):
        context = super(AboutUsView, self).get_context_data(**kwargs)
        if now().weekday() < 5 and 8 < now().hour < 18:
            context['open'] = True
        else:
            context['open'] = False
        return context
</code></pre>
<p>The thing that confused me syntactically was this statement</p>
<pre><code> context = super(AboutUsView, self).get_context_data(**kwargs)
</code></pre>
<p>if we already are receiving <code>**kwargs</code>, then why are we passing it to the super function with ** (double star)? I think we should pass it as </p>
<pre><code> context = super(AboutUsView, self).get_context_data(kwargs)
</code></pre>
<p>this is the contextMixin which is receiving this call.</p>
<pre><code>class ContextMixin(object):
"""
A default context mixin that passes the keyword arguments received by
get_context_data as the template context.
"""
def get_context_data(self, **kwargs):
if 'view' not in kwargs:
kwargs['view'] = self
return kwargs
</code></pre>
<p>From what I have read, the use of <code>**kwargs</code> pretty much means that kwargs is currently a dictionary and needs to be converted to named values. If that is correct, then how can kwargs be a dictionary when its parameter is actually **kwargs? I hope my question makes sense. Please let me know if you would want me to rephrase this.</p>
| 3 | 2016-09-29T19:38:24Z | 39,778,764 | <p>Here at your function definition, it is accepting multiple arguments, and parsing them into a dict. <code>def get_context_data(self, **kwargs):</code> </p>
<p>So now, kwargs is a dictionary object. So if you pass it to <code>.get_context_data(kwargs)</code> it would have to expect only a single incoming argument, and treat it as a dictionary. </p>
<p>So when you do <code>**kwargs</code> a second time, you are blowing the dictionary back up into keyword arguments that will expand into that function's call.</p>
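<p>A small illustration of the difference (the function name below is hypothetical):</p>

```python
def get_data(**kwargs):
    return kwargs

d = {'view': 'something'}

print(get_data(**d))   # the dict is unpacked back into keyword arguments

try:
    get_data(d)        # passing the dict positionally: no parameter accepts it
except TypeError as exc:
    print('TypeError:', exc)
```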
| 2 | 2016-09-29T19:45:53Z | [
"python",
"django"
]
|
pandas reset_index after groupby.value_counts() | 39,778,686 | <p>I am trying to groupby a column and compute value counts on another column.</p>
<pre><code>import pandas as pd
dftest = pd.DataFrame({'A':[1,1,1,1,1,1,1,1,1,2,2,2,2,2],
'Amt':[20,20,20,30,30,30,30,40, 40,10, 10, 40,40,40]})
print(dftest)
</code></pre>
<p>dftest looks like</p>
<pre><code>    A  Amt
0   1   20
1   1   20
2   1   20
3   1   30
4   1   30
5   1   30
6   1   30
7   1   40
8   1   40
9   2   10
10  2   10
11  2   40
12  2   40
13  2   40
</code></pre>
<p>perform grouping</p>
<pre><code>grouper = dftest.groupby('A')
df_grouped = grouper['Amt'].value_counts()
</code></pre>
<p>which gives</p>
<pre><code>A  Amt
1  30     4
   20     3
   40     2
2  40     3
   10     2
Name: Amt, dtype: int64
</code></pre>
<p>what I want is to keep top two rows of each group</p>
<p>Also, I was perplexed by an error when I tried to <code>reset_index</code></p>
<pre><code>df_grouped.reset_index()
</code></pre>
<p>which gives following error</p>
<blockquote>
<p>df_grouped.reset_index()
ValueError: cannot insert Amt, already exists</p>
</blockquote>
| 2 | 2016-09-29T19:41:03Z | 39,778,707 | <p>You need the parameter <code>name</code> in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow"><code>reset_index</code></a>, because the <code>Series</code> name is the same as the name of one of the levels of the <code>MultiIndex</code>:</p>
<pre><code>df_grouped.reset_index(name='count')
</code></pre>
<p>Another solution is to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.rename.html" rel="nofollow"><code>rename</code></a> the <code>Series</code>:</p>
<pre><code>print (df_grouped.rename('count').reset_index())
   A  Amt  count
0  1   30      4
1  1   20      3
2  1   40      2
3  2   40      3
4  2   10      2
</code></pre>
<hr>
<p>A more common solution than <code>value_counts</code> is to aggregate with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow"><code>size</code></a>:</p>
<pre><code>df_grouped1 = dftest.groupby(['A','Amt']).size().rename('count').reset_index()
print (df_grouped1)
   A  Amt  count
0  1   20      3
1  1   30      4
2  1   40      2
3  2   10      2
4  2   40      3
</code></pre>
| 3 | 2016-09-29T19:42:16Z | [
"python",
"pandas",
"dataframe",
"data-manipulation",
"data-science"
]
|
Optional Synchronous Interface to Asynchronous Functions | 39,778,901 | <p>I'm writing a library which is using Tornado Web's <code>tornado.httpclient.AsyncHTTPClient</code> to make requests which gives my code a <code>async</code> interface of:</p>
<pre><code>async def my_library_function():
    return await ...
</code></pre>
<p>I want to make this interface optionally serial if the user provides a kwarg - something like: <code>serial=True</code>. Though you can't obviously call a function defined with the <code>async</code> keyword from a normal function without <code>await</code>. This would be ideal - though almost certain imposible in the language at the moment:</p>
<pre><code>async def here_we_go():
    result = await my_library_function()
    result = my_library_function(serial=True)
</code></pre>
<p>I haven't been able to find anything online where someone has come up with a nice solution to this. I don't want to have to reimplement basically the same code without the <code>awaits</code> splattered throughout.</p>
<p>Is this something that can be solved or would it need support from the language?</p>
<hr>
<h1>Solution (though use Jesse's instead - explained below)</h1>
<p>Jesse's solution below is pretty much what I'm going to go with. I did end up getting the interface I originally wanted by using a decorator. Something like this:</p>
<pre><code>import asyncio
from functools import wraps


def serializable(f):
    @wraps(f)
    def wrapper(*args, asynchronous=False, **kwargs):
        if asynchronous:
            return f(*args, **kwargs)
        else:
            # Get Python's current event loop and run the coroutine on it
            loop = asyncio.get_event_loop()
            return loop.run_until_complete(f(*args, **kwargs))
    return wrapper
</code></pre>
<p>This gives you this interface:</p>
<pre><code>result = await my_library_function(asynchronous=True)
result = my_library_function(asynchronous=False)
</code></pre>
<p>I sanity checked this on python's async mailing list and I was lucky enough to have Guido respond and he politely shot it down for this reason:</p>
<blockquote>
<p>Code smell -- being able to call the same function both asynchronously
and synchronously is highly surprising. Also it violates the rule of
thumb that the value of an argument shouldn't affect the return type.</p>
</blockquote>
<p>Nice to know it's possible though if not considered a great interface. Guido essentially suggested Jesse's answer and introducing the wrapping function as a helper util in the library instead of hiding it in a decorator.</p>
| 1 | 2016-09-29T19:54:50Z | 39,781,803 | <p>When you want to call such a function synchronously, use <a href="https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.AbstractEventLoop.run_until_complete" rel="nofollow">run_until_complete</a>:</p>
<pre><code>asyncio.get_event_loop().run_until_complete(here_we_go())
</code></pre>
<p>Of course, if you do this often in your code, you should come up with an abbreviation for this statement, perhaps just:</p>
<pre><code>def sync(fn, *args, **kwargs):
return asyncio.get_event_loop().run_until_complete(fn(*args, **kwargs))
</code></pre>
<p>Then you could do:</p>
<pre><code>result = sync(here_we_go)
</code></pre>
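<p>A self-contained check of the helper. (Creating a fresh loop here is my own precaution so the sketch also runs where no current event loop exists — the original answer uses <code>get_event_loop()</code>.)</p>

```python
import asyncio

def sync(fn, *args, **kwargs):
    # run a coroutine function to completion on a dedicated event loop
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(fn(*args, **kwargs))
    finally:
        loop.close()

async def here_we_go():
    await asyncio.sleep(0)
    return 'done'

print(sync(here_we_go))  # done
```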
| 1 | 2016-09-30T00:07:59Z | [
"python",
"asynchronous",
"tornado",
"synchronous"
]
|
UTF8 Character in URL String | 39,778,909 | <p>I wrote a little Python script that parses a website.
I got a "ä" character in form of <code>\u00e4</code> in a url from a link like <code>http://foo.com/h\u00e4ppo</code>, and I need <code>http://foo.com/häppo</code>.</p>
| 0 | 2016-09-29T19:55:16Z | 39,779,478 | <p>The character <code>\u00e4</code> which you have is already correct. That is in fact <code>ä</code>.</p>
<p>Sometimes, the representation (<code>repr</code>) of a string will display it in the escaped form, just as a backslash <code>\</code> is displayed as the escaped <code>\\</code>. That part is fine.</p>
<h2>The Actual Problem</h2>
<p>The actual problem is that you cannot use ä in a URL. Only a small subset of ASCII characters is valid in URLs (see <a href="http://stackoverflow.com/q/1547899/389289">Which characters make a URL invalid?</a>).</p>
<p>So, you have to escape parts of your URL.</p>
<pre><code>>>> import urllib.parse
>>> urllib.parse.quote('ä')
'%C3%A4'
>>> urllib.parse.quote('\u00e4') # same thing
'%C3%A4'
</code></pre>
<p>But be careful not to escape the whole URL, only parts of it which are actual strings to be escaped. For example, this is wrong:</p>
<pre><code>>>> urllib.parse.quote('https://www.google.com/?q=\u00e4')
'https%3A//www.google.com/%3Fq%3D%C3%A4'
</code></pre>
<p>You'll want to do:</p>
<pre><code>>>> 'https://www.google.com/?q=' + urllib.parse.quote('\u00e4')
'https://www.google.com/?q=%C3%A4'
</code></pre>
<p>Try it and see what happens: <a href="https://www.google.com/?q=%C3%A4" rel="nofollow">https://www.google.com/?q=%C3%A4</a></p>
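<p>A round trip with <code>unquote</code>, for completeness (runnable sketch):</p>

```python
from urllib.parse import quote, unquote

url = 'https://www.google.com/?q=' + quote('\u00e4')
print(url)           # https://www.google.com/?q=%C3%A4
print(unquote(url))  # https://www.google.com/?q=ä
```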
| 0 | 2016-09-29T20:32:15Z | [
"python",
"url",
"encoding"
]
|
UTF8 Character in URL String | 39,778,909 | <p>I wrote a little Python script that parses a website.
I got a "ä" character in form of <code>\u00e4</code> in a url from a link like <code>http://foo.com/h\u00e4ppo</code>, and I need <code>http://foo.com/häppo</code>.</p>
| 0 | 2016-09-29T19:55:16Z | 39,779,578 | <p>Unfortunately this depends heavily on the encoding of the site you parsed, as well as your local I/O encoding.</p>
<p>I'm not really sure if you can translate it after parsing, or if it's really worth the work. If you have the chance to parse it again, you can try using Python's <code>decode()</code> method (note: in Python 3 only <code>bytes</code> has <code>decode()</code>; <code>str</code> does not), like:</p>
<p><code>text.decode('utf8')
</code></p>
<p>Besides that, check that the encoding used above is the same as in your local environment. This is especially important on Windows, which uses <code>cp1252</code> as its default encoding.</p>
<p>In Mac and Linux: <code>export PYTHONIOENCODING=utf8</code>
In Windows: <code>set PYTHONIOENCODING=utf8</code></p>
<p>It's not much, but I hope it helps.</p>
| 0 | 2016-09-29T20:38:48Z | [
"python",
"url",
"encoding"
]
|
Connect to Oracle database using pyodbc | 39,778,968 | <p>I would like to connect to an Oracle database with Python through pyodbc. I have installed the Oracle driver and tried the following script:</p>
<pre><code>import pyodbc
connectString = """
DRIVER={Oracle in OraClient12Home1};
SERVER=some_oracle_db.com:1521;
SID=oracle_test;
UID=user_name;
PWD=user_pass
"""
cnxn = pyodbc.connect(connectString)
</code></pre>
<p>I got the following error message:</p>
<pre><code>cnxn = pyodbc.connect(connectString)
Error: ('HY000', '[HY000] [Oracle][ODBC][Ora]ORA-12560: TNS:protocol adapter error\n (12560) (SQLDriverConnect)')
</code></pre>
<p>What's wrong here?</p>
| 2 | 2016-09-29T19:58:44Z | 39,779,248 | <p>Looks like you're missing a port.</p>
<p>Try this way</p>
<p><strong>NOTE:</strong>
Depending on your server the syntax can differ; this example works on Windows without a DSN, using an SQL Server driver.</p>
<pre><code>connectString = pyodbc.connect('DRIVER={SQL Server};SERVER=localhost;PORT=1433;DATABASE=testdb;UID=me;PWD=pass')
</code></pre>
<p>This is just the connection; you still need a cursor and to call <code>execute()</code> with an SQL statement.</p>
| 0 | 2016-09-29T20:17:12Z | [
"python",
"oracle"
]
|
Connect to Oracle database using pyodbc | 39,778,968 | <p>I would like to connect to an Oracle database with Python through pyodbc. I have installed the Oracle driver and tried the following script:</p>
<pre><code>import pyodbc
connectString = """
DRIVER={Oracle in OraClient12Home1};
SERVER=some_oracle_db.com:1521;
SID=oracle_test;
UID=user_name;
PWD=user_pass
"""
cnxn = pyodbc.connect(connectString)
</code></pre>
<p>I got the following error message:</p>
<pre><code>cnxn = pyodbc.connect(connectString)
Error: ('HY000', '[HY000] [Oracle][ODBC][Ora]ORA-12560: TNS:protocol adapter error\n (12560) (SQLDriverConnect)')
</code></pre>
<p>What's wrong here?</p>
| 2 | 2016-09-29T19:58:44Z | 39,779,556 | <p>You have to specify the server or hostname (or IP address) in the connection string for the machine your database server is running on.</p>
| 0 | 2016-09-29T20:37:20Z | [
"python",
"oracle"
]
|
Connect to Oracle database using pyodbc | 39,778,968 | <p>I would like to connect to an Oracle database with Python through pyodbc. I have installed the Oracle driver and tried the following script:</p>
<pre><code>import pyodbc
connectString = """
DRIVER={Oracle in OraClient12Home1};
SERVER=some_oracle_db.com:1521;
SID=oracle_test;
UID=user_name;
PWD=user_pass
"""
cnxn = pyodbc.connect(connectString)
</code></pre>
<p>I got the following error message:</p>
<pre><code>cnxn = pyodbc.connect(connectString)
Error: ('HY000', '[HY000] [Oracle][ODBC][Ora]ORA-12560: TNS:protocol adapter error\n (12560) (SQLDriverConnect)')
</code></pre>
<p>What's wrong here?</p>
| 2 | 2016-09-29T19:58:44Z | 39,779,729 | <p>Thanks. I solved the problem with cx_Oracle.</p>
| 0 | 2016-09-29T20:47:44Z | [
"python",
"oracle"
]
|
How to identify a string as being a byte literal? | 39,778,978 | <p>In Python 3, if I have a string such that:</p>
<pre><code>print(some_str)
</code></pre>
<p>yields something like this:</p>
<pre><code>b'This is the content of my string.\r\n'
</code></pre>
<p>I know it's a byte literal. </p>
<p>Is there a function that can be used to determine if that string is in byte literal format (versus having, say, the Unicode <code>'u'</code> prefix) without first interpreting? Or is there another best practice for handling this? I have a situation wherein getting a byte literal string needs to be dealt with differently than if it's in Unicode. In theory, something like this:</p>
<pre><code>if is_byte_literal(some_str):
    // handle byte literal case
else:
    // handle unicode case
</code></pre>
| 7 | 2016-09-29T19:59:29Z | 39,779,024 | <p>The easiest and, arguably, best way to do this would be by utilizing the built-in <a href="https://docs.python.org/3/library/functions.html#isinstance"><code>isinstance</code></a> with the <code>bytes</code> type:</p>
<pre><code>some_str = b'hello world'

if isinstance(some_str, bytes):
    print('bytes')
elif isinstance(some_str, str):
    print('str')
else:
    pass  # handle other types
</code></pre>
<p>Since a byte literal will <em>always</em> be an instance of <code>bytes</code>, <code>isinstance(some_str, bytes)</code> will, of course, evaluate to <code>True</code>.</p>
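<p>For example, a small helper built on this check that normalizes either type to <code>str</code> (a sketch; the UTF-8 assumption is mine):</p>

```python
def normalize(s):
    # bytes get decoded (assuming UTF-8 here); str passes through unchanged
    if isinstance(s, bytes):
        return s.decode('utf-8')
    return s

print(normalize(b'hello'))  # hello
print(normalize('world'))   # world
```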
| 13 | 2016-09-29T20:02:36Z | [
"python",
"string",
"python-3.x"
]
|