title | question_id | question_body | question_score | question_date | answer_id | answer_body | answer_score | answer_date | tags
---|---|---|---|---|---|---|---|---|---|
python to split a long string into different rows | 39,710,389 | <p><code>123|china|jack|342|usa|Nick|324345|spin|Amy</code></p>
<p>I want the end result to look like this (I know I need a new line for every 3 elements):</p>
<pre><code>123,china,jack
342,usa,Nick
324345,spin,Amy
</code></pre>
<p>Thank you</p>
 | -1 | 2016-09-26T19:00:48Z | 39,710,547 | <p>Try this:</p>
<pre><code>x = '123|china|jack|342|usa|Nick|324345|spin|Amy'
l = x.split('|')
new_l = [l[i:i+3] for i in range(0,len(l),3)]
</code></pre>
<p>This will give you:</p>
<pre><code>>>> for i in new_l:
...     print ','.join(i)
...
123,china,jack
342,usa,Nick
324345,spin,Amy
</code></pre>
<blockquote>
<p>Update: to read the file</p>
</blockquote>
<p>To read the file as a string, do the following:</p>
<pre><code>with open('your_file.txt', 'r') as myfile:
x=myfile.read().replace('\n', '|')
</code></pre>
| 2 | 2016-09-26T19:09:53Z | [
"python"
]
|
python to split a long string into different rows | 39,710,389 | <p><code>123|china|jack|342|usa|Nick|324345|spin|Amy</code></p>
<p>I want the end result to look like this (I know I need a new line for every 3 elements):</p>
<pre><code>123,china,jack
342,usa,Nick
324345,spin,Amy
</code></pre>
<p>Thank you</p>
 | -1 | 2016-09-26T19:00:48Z | 39,711,099 | <p>It can be done with a single <code>print</code> statement:</p>
<pre><code>>>> d = '123|china|jack|342|usa|Nick|324345|spin|Amy'
>>> l = d.split('|')
>>> print(*list(map(lambda v: ','.join(v),[l[x: x+3] for x in range(0, len(l), 3)])), sep='\n')
123,china,jack
342,usa,Nick
324345,spin,Amy
</code></pre>
<p>So now, step by step:</p>
<pre><code>>>> d = '123|china|jack|342|usa|Nick|324345|spin|Amy'
>>> l = d.split('|') # splitting the input string into a list of str objects separated by '|'
>>> l
['123', 'china', 'jack', '342', 'usa', 'Nick', '324345', 'spin', 'Amy']
>>> [l[x: x+3] for x in range(0, len(l), 3)] # combining elements by 3
[['123', 'china', 'jack'], ['342', 'usa', 'Nick'], ['324345', 'spin', 'Amy']]
>>> list(map(lambda v: ','.join(v),[l[x: x+3] for x in range(0, len(l), 3)])) # merging every sublist to str with ',' as separator
['123,china,jack', '342,usa,Nick', '324345,spin,Amy']
>>> print(*list(map(lambda v: ','.join(v),[l[x: x+3] for x in range(0, len(l), 3)])), sep='\n') # final print
123,china,jack
342,usa,Nick
324345,spin,Amy
</code></pre>
| 1 | 2016-09-26T19:42:38Z | [
"python"
]
|
How can I install a tweepy package via pip? | 39,710,454 | <p>Can't get pip to work; trying to install Tweepy per this <a href="http://mark-kay.net/2013/12/18/collecting-tweets-using-python/" rel="nofollow">article</a>.</p>
<p>This <a href="http://stackoverflow.com/questions/39060397/how-can-i-install-twilio-package-via-pip/39710119#39710119">thread</a> managed to solve it for someone installing a different package, but I've tried all the strategies listed there including changing PATH in my environment variables, and I get "syntax error" for each of these three attempts:</p>
<pre><code>pip install tweepy
python -m pip install tweepy
C:\Python27\Scripts\pip.exe install tweepy
</code></pre>
 | 0 | 2016-09-26T19:03:58Z | 39,710,762 | <p>Are you running the <code>pip install</code> command inside the Python prompt?
If so, you need to run it directly in the command prompt instead. Open a command prompt and type:</p>
<pre><code>pip install tweepy
</code></pre>
| 0 | 2016-09-26T19:23:32Z | [
"python",
"pip",
"package-managers"
]
|
Python: Why does return not actually return anything that I can see | 39,710,576 | <p>I have the following code:</p>
<pre><code>def is_prime(n):
    limit = (n**0.5) + 1
    q = 2
    p = 1
    while p != 0 and q < limit:
        p = n % q
        q = q + 1
        if p == 0 and n != 2:
            return 'false'
        else:
            return 'true'
</code></pre>
<p>But when I send in an integer, there is nothing returned. The console simply moves on to a new command line. What's wrong here?</p>
<p><strong>EDIT 1:</strong>
The following are screenshots of different scenarios. I would like to make it such that I call the function with a particular number and the function will return 'true' or 'false' depending on the primality of the number sent into the function. I guess I don't really understand the <em>return</em> statement very well.</p>
<p>Also, note that when I send in to test 9, it returns true, despite 9 being quite definitely a composite number...should the if/else bits be outside the while loop?</p>
<p>Key to below image:</p>
<p>1: this is the code as it is above and how I call it in the Spyder console</p>
<p>2: adding a print statement outside the function</p>
<p>3: this is a simple factorial function offered by the professor</p>
<p><a href="http://i.stack.imgur.com/LX4hy.jpg" rel="nofollow">image here</a></p>
<p><strong>EDIT 2:</strong></p>
<p>I made a quick change to the structure of the code. I don't really understand why this made it work, but moving the if/else statements outside the while loop produced the expected true/false outputs.</p>
<pre><code>def is_prime(n):
    limit = (n**0.5)+1
    q=2
    p=1
    while p!=0 and q<limit:
        p = n%q
        q = q+1
    if p==0 and n!=2:
        return 'false'
    else:
        return 'true'
</code></pre>
<p>Also, I call the function in the console using is_prime(int_of_choice)</p>
<p>Thanks for the helpful suggestions</p>
| -1 | 2016-09-26T19:11:55Z | 39,710,691 | <p>If you want to print something to the console you have to use a print statement. The return keyword means that you can use this value in a piece of code that calls this function. So to print something:</p>
<pre><code>print (x)
</code></pre>
<p>For more information about the print statement see: <a href="https://en.wikibooks.org/wiki/Python_Programming/Variables_and_Strings" rel="nofollow">https://en.wikibooks.org/wiki/Python_Programming/Variables_and_Strings</a></p>
| 1 | 2016-09-26T19:18:33Z | [
"python",
"python-2.7"
]
|
Python: Why does return not actually return anything that I can see | 39,710,576 | <p>I have the following code:</p>
<pre><code>def is_prime(n):
    limit = (n**0.5) + 1
    q = 2
    p = 1
    while p != 0 and q < limit:
        p = n % q
        q = q + 1
        if p == 0 and n != 2:
            return 'false'
        else:
            return 'true'
</code></pre>
<p>But when I send in an integer, there is nothing returned. The console simply moves on to a new command line. What's wrong here?</p>
<p><strong>EDIT 1:</strong>
The following are screenshots of different scenarios. I would like to make it such that I call the function with a particular number and the function will return 'true' or 'false' depending on the primality of the number sent into the function. I guess I don't really understand the <em>return</em> statement very well.</p>
<p>Also, note that when I send in to test 9, it returns true, despite 9 being quite definitely a composite number...should the if/else bits be outside the while loop?</p>
<p>Key to below image:</p>
<p>1: this is the code as it is above and how I call it in the Spyder console</p>
<p>2: adding a print statement outside the function</p>
<p>3: this is a simple factorial function offered by the professor</p>
<p><a href="http://i.stack.imgur.com/LX4hy.jpg" rel="nofollow">image here</a></p>
<p><strong>EDIT 2:</strong></p>
<p>I made a quick change to the structure of the code. I don't really understand why this made it work, but moving the if/else statements outside the while loop produced the expected true/false outputs.</p>
<pre><code>def is_prime(n):
    limit = (n**0.5)+1
    q=2
    p=1
    while p!=0 and q<limit:
        p = n%q
        q = q+1
    if p==0 and n!=2:
        return 'false'
    else:
        return 'true'
</code></pre>
<p>Also, I call the function in the console using is_prime(int_of_choice)</p>
<p>Thanks for the helpful suggestions</p>
 | -1 | 2016-09-26T19:11:55Z | 39,710,877 | <p>Nothing is wrong, but you have to print out the return value of your function,
like this:</p>
<pre><code>def Test():
    if True:
        return "Hi"

print(Test())
</code></pre>
<p>In this case Python will show "Hi" in your console.</p>
| 1 | 2016-09-26T19:30:22Z | [
"python",
"python-2.7"
]
|
Confusion about Python Functions and Lists | 39,710,632 | <p>I am trying to create a function to remove an item from the passed list either by a specified index, or item passed. </p>
<p>If the user wishes to remove an item from the list using an index, the third argument passed will be <code>"index"</code>; if the user wishes to remove the first item found in the list using the item passed, the second argument will be <code>"{item}"</code>.</p>
<p>For example, to remove the item at index 3 from a list, the command would be <code>myFunction(myList, 3, "index")</code>.</p>
<p>I am quite confused about this function part. I have written code that does exactly what the question seems to ask, but it does not use a function. My code is below:</p>
<pre><code>mylist = ["one" , "two" ,"three" , "four" , "five"]
print "list is composed of: "+ str(mylist)
name = raw_input("Index of item to be removed. ex. 1")
name2 = raw_input('item to be removed. ex. four')
name3 = int(name)
del mylist[name3]
mylist.remove(name2)
print mylist
</code></pre>
<p>It appears that I need to create a function to do this, and then pass in my list, the index/item, etc., but I am very lost on this part.</p>
| -1 | 2016-09-26T19:15:31Z | 39,710,833 | <p>You really need to work on your questionsmithing skills. It's very difficult to understand what you are trying to accomplish. After making about half a dozen assumptions, I think this is what you are trying to do:</p>
<pre><code>def listRemover(mylist, index_or_name, mytype):
    if mytype == "index":
        del mylist[index_or_name]
    if mytype == "name":
        mylist.remove(index_or_name)
</code></pre>
<p>It's obvious though that there are some gaping holes in your basic knowledge of Python. You need to study what a function is, why functions are useful, and how to use them.</p>
| 1 | 2016-09-26T19:27:43Z | [
"python"
]
|
Confusion about Python Functions and Lists | 39,710,632 | <p>I am trying to create a function to remove an item from the passed list either by a specified index, or item passed. </p>
<p>If the user wishes to remove an item from the list using an index, the third argument passed will be <code>"index"</code>; if the user wishes to remove the first item found in the list using the item passed, the second argument will be <code>"{item}"</code>.</p>
<p>For example, to remove the item at index 3 from a list, the command would be <code>myFunction(myList, 3, "index")</code>.</p>
<p>I am quite confused about this function part. I have written code that does exactly what the question seems to ask, but it does not use a function. My code is below:</p>
<pre><code>mylist = ["one" , "two" ,"three" , "four" , "five"]
print "list is composed of: "+ str(mylist)
name = raw_input("Index of item to be removed. ex. 1")
name2 = raw_input('item to be removed. ex. four')
name3 = int(name)
del mylist[name3]
mylist.remove(name2)
print mylist
</code></pre>
<p>It appears that I need to create a function to do this, and then pass in my list, the index/item, etc., but I am very lost on this part.</p>
| -1 | 2016-09-26T19:15:31Z | 39,710,952 | <blockquote>
<p>It appears that I need to create a function to do this, and then pass in my list, the index/item, etc.) but I am very lost on this part.</p>
</blockquote>
<p><a href="https://www.google.com/webhp#q=define+function+python" rel="nofollow">Google it!</a> (query = "define function python")</p>
<p>Show your research. The basic form of a function is:</p>
<pre><code>def funcname(arg1, arg2, arg3):
    # now you can use the vars arg1, arg2, and arg3.
    # rename them to whatever you want.
    arg1[0] = "bannanas"
</code></pre>
<p>so, </p>
<pre><code>array = ['mango', 'apple']
funcname(array, None, None)  # arg2 and arg3 are unused here
print(array) # -> ['bannanas', 'apple']
</code></pre>
| 1 | 2016-09-26T19:34:09Z | [
"python"
]
|
Confusion about Python Functions and Lists | 39,710,632 | <p>I am trying to create a function to remove an item from the passed list either by a specified index, or item passed. </p>
<p>If the user wishes to remove an item from the list using an index, the third argument passed will be <code>"index"</code>; if the user wishes to remove the first item found in the list using the item passed, the second argument will be <code>"{item}"</code>.</p>
<p>For example, to remove the item at index 3 from a list, the command would be <code>myFunction(myList, 3, "index")</code>.</p>
<p>I am quite confused about this function part. I have written code that does exactly what the question seems to ask, but it does not use a function. My code is below:</p>
<pre><code>mylist = ["one" , "two" ,"three" , "four" , "five"]
print "list is composed of: "+ str(mylist)
name = raw_input("Index of item to be removed. ex. 1")
name2 = raw_input('item to be removed. ex. four')
name3 = int(name)
del mylist[name3]
mylist.remove(name2)
print mylist
</code></pre>
<p>It appears that I need to create a function to do this, and then pass in my list, the index/item, etc., but I am very lost on this part.</p>
 | -1 | 2016-09-26T19:15:31Z | 39,711,509 | <p>The question (I think) is: "<em>If the user wishes to remove an item from the list using an index, the third argument passed will be <code>"index"</code>, if the user wishes to remove the first item found in the list using the item passed, the second argument will be <code>"{item}"</code></em>"</p>
<p>The purpose of this exercise (presumably) is to practice writing a function. Yes, you could do it without a function, but right now you need the practice of writing a function and passing parameters. Functions are a very important part of programming, but this is not a good place to go into that.</p>
<p>So first we define our function:</p>
<pre><code>def removeItem( theList, theItem, typeOfItem=None ):
</code></pre>
<p>Notice I have given a default value of <code>None</code> because the third parameter is optional.</p>
<p>The first thing we will do is to test <code>typeOfItem</code>. The question says that if it is an index then it will say <code>"index"</code>; otherwise the second parameter will say <code>"{item}"</code>. So it will be one or the other. (What to do if that is not the case is a question you should ask.)</p>
<p>The index part is easy:</p>
<pre><code>    if typeOfItem == "index":
        del(theList[theItem])
</code></pre>
<p>but now it's a bit more complicated, because of the <code>{ }</code>, which we have to remove:</p>
<pre><code>    else:
        theList.remove(theItem[1:-1])
</code></pre>
<p>This last part is taking a <em>slice</em>, which starts at character 1 (the second character) and stops just before the final character, thus removing the <code>{ }</code>.</p>
<p>So the final function code, with tests, is:</p>
<pre><code>def removeItem( theList, theItem, typeOfItem=None ):
    if typeOfItem == "index":
        del(theList[theItem])
    else:
        theList.remove(theItem[1:-1])

mylist = ["one" , "two" ,"three" , "four" , "five"]
removeItem(mylist, 3, "index")
print mylist

mylist = ["one" , "two" ,"three" , "four" , "five"]
removeItem(mylist, "{two}")
print mylist
</code></pre>
<p>Notice an important feature of the function and the list. If you alter the list inside the function, it is altered <em>outside</em> the function as well - it is the same list. That is not the case with numbers and strings.</p>
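<p>A quick sketch of that difference (the helper names here are just for illustration):</p>
<pre><code>def set_first(lst):
    lst[0] = "changed"    # mutates the caller's list in place

def set_num(x):
    x = 99                # rebinds only the local name; the caller is unaffected

mylist = ["one", "two"]
n = 1
set_first(mylist)
set_num(n)
print mylist, n           # ['changed', 'two'] 1
</code></pre>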
| 1 | 2016-09-26T20:07:16Z | [
"python"
]
|
Pycharm does not find module with one interpreter but does with another, why? | 39,710,675 | <p>I am trying to install a package called "quantecon" through PyCharm. If I have Python 3.5 as an interpreter then I can find the package in the settings menu. But I need to run Anaconda, it has a bunch of other packages I need like scipy, numpy, etc. Once I install Anaconda and use it as the interpreter (it runs on Python 3.5 and a bunch of other packages) quantecon disappears from the menu of modules in PyCharm. Why does quantecon appear with one interpreter and not with another when they both run on python 3.5? This only happens with PyCharm. If I use jupyter/ipython notebook I can have both Anaconda and quantecon.</p>
<p>I prefer working with PyCharm, it would be ideal to be able to have both Anaconda and quantecon there. How can I install quantecon and have Anaconda as the interpreter?</p>
<p>Thanks</p>
| 1 | 2016-09-26T19:17:49Z | 39,711,203 | <p>Inside PyCharm, in Ubuntu, go to <code>File -> Settings -> Project -> Project Interpreter</code> and change the interpreter. If Anaconda is not there, click on the gear, add local and then go to <code>/home/user/anaconda2/bin/python</code></p>
| 0 | 2016-09-26T19:49:17Z | [
"python",
"module",
"pycharm",
"package-managers"
]
|
Pycharm does not find module with one interpreter but does with another, why? | 39,710,675 | <p>I am trying to install a package called "quantecon" through PyCharm. If I have Python 3.5 as an interpreter then I can find the package in the settings menu. But I need to run Anaconda, it has a bunch of other packages I need like scipy, numpy, etc. Once I install Anaconda and use it as the interpreter (it runs on Python 3.5 and a bunch of other packages) quantecon disappears from the menu of modules in PyCharm. Why does quantecon appear with one interpreter and not with another when they both run on python 3.5? This only happens with PyCharm. If I use jupyter/ipython notebook I can have both Anaconda and quantecon.</p>
<p>I prefer working with PyCharm, it would be ideal to be able to have both Anaconda and quantecon there. How can I install quantecon and have Anaconda as the interpreter?</p>
<p>Thanks</p>
| 1 | 2016-09-26T19:17:49Z | 39,711,232 | <p>Did you change the interpreter in PyCharm?</p>
<p>If not, go to File -> Settings -> Project -> Project Interpreter and change the interpreter to the one in Anaconda. It should find the package, unless it's installed in a weird location.</p>
<p>If you don't have the Anaconda interpreter in the list of available interpreters, you can easily add it in that dialog as well. Click the gear icon, select "Add local" and navigate to the <code>python</code> executable from Anaconda.</p>
| 0 | 2016-09-26T19:51:10Z | [
"python",
"module",
"pycharm",
"package-managers"
]
|
Pycharm does not find module with one interpreter but does with another, why? | 39,710,675 | <p>I am trying to install a package called "quantecon" through PyCharm. If I have Python 3.5 as an interpreter then I can find the package in the settings menu. But I need to run Anaconda, it has a bunch of other packages I need like scipy, numpy, etc. Once I install Anaconda and use it as the interpreter (it runs on Python 3.5 and a bunch of other packages) quantecon disappears from the menu of modules in PyCharm. Why does quantecon appear with one interpreter and not with another when they both run on python 3.5? This only happens with PyCharm. If I use jupyter/ipython notebook I can have both Anaconda and quantecon.</p>
<p>I prefer working with PyCharm, it would be ideal to be able to have both Anaconda and quantecon there. How can I install quantecon and have Anaconda as the interpreter?</p>
<p>Thanks</p>
| 1 | 2016-09-26T19:17:49Z | 39,711,364 | <p>I think you want to install quantecon to your anaconda:</p>
<p><a href="https://anaconda.org/pypi/quantecon" rel="nofollow">https://anaconda.org/pypi/quantecon</a></p>
<p>(make sure you use anaconda's version of pip, not system pip)</p>
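<p>For example, something along these lines from a Windows command prompt (the path is illustrative; adjust it to wherever Anaconda is installed):</p>
<pre><code>C:\Anaconda3\Scripts\pip.exe install quantecon
</code></pre>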
<p>You could also try to create a new Conda environment that has quantecon in it:</p>
<p><a href="http://www.quantecon.org/wiki_py_conda_dev_env.html" rel="nofollow">http://www.quantecon.org/wiki_py_conda_dev_env.html</a></p>
| 0 | 2016-09-26T19:59:01Z | [
"python",
"module",
"pycharm",
"package-managers"
]
|
Infer the length of a sequence using the CIGAR | 39,710,796 | <p>To give you a bit of context: I am trying to convert a sam file to bam</p>
<pre><code>samtools view -bT reference.fasta sequences.sam > sequences.bam
</code></pre>
<p>which exits with the following error</p>
<pre><code>[E::sam_parse1] CIGAR and query sequence are of different length
[W::sam_read1] parse error at line 102
[main_samview] truncated file
</code></pre>
<p>and the offending line looks like this:</p>
<pre><code>SRR808297.2571281 99 gi|309056|gb|L20934.1|MSQMTCG 747 80 101M = 790 142 TTGGTATAAAATTTAATAATCCCTTATTAATTAATAAACTTCGGCTTCCTATTCGTTCATAAGAAATATTAGCTAAACAAAATAAACCAGAAGAACAT @@CFDDFD?HFDHIGEGGIEEJIIJJIIJIGIDGIGDCHJJCHIGIJIJIIJJGIGHIGICHIICGAHDGEGGGGACGHHGEEEFDC@=?CACC>CCC NM:i:2 MD:Z:98A1A
</code></pre>
<p>My sequence is 98 characters long but a probable bug when creating the sam file reported 101 in the CIGAR. I can give myself the luxury to lose a couple of reads and I don't have access at the moment to the source code that produced the sam files, so no opportunity to hunt down the bug and re-run the alignment. In other words, I need a pragmatic solution to move on (for now). Therefore, I devised a python script that counts the length of my string of nucleotides, compares it with what is registered in the CIGAR, and saves the "sane" lines in a new file.</p>
<pre><code>#!/usr/bin/python

import itertools
from cigar import Cigar

with open('myfile.sam', 'r') as f:
    for line in itertools.islice(f, 3, None):  # loop through the file and skip the first three lines
        cigar = line.split("\t")[5]
        cigarlength = len(Cigar(cigar))  # use the cigar module to obtain the length reported in the CIGAR string
        seqlength = len(line.split("\t")[9])
        if cigarlength == seqlength:
            ...Preserve the line in a new file...
</code></pre>
<p>As you can see, to translate the CIGAR into an integer showing the length, I am using the module <a href="https://pypi.python.org/pypi/cigar/0.1" rel="nofollow">CIGAR</a>. To be honest, I am a bit wary of its behavior. This module seems to miscalculate the length in very obvious cases. Is there another module or a more explicit strategy to translate the CIGAR into the length of the sequence?</p>
<p><strong>Sidenote:</strong> It is interesting, to say the least, that this problem has been widely reported but no pragmatic solution can be found on the internet. See the links below:</p>
<pre><code>https://github.com/COMBINE-lab/RapMap/issues/9
http://seqanswers.com/forums/showthread.php?t=67253
http://seqanswers.com/forums/showthread.php?t=21120
https://groups.google.com/forum/#!msg/snap-user/FoDsGeNBDE0/nRFq-GhlAQAJ
</code></pre>
| 4 | 2016-09-26T19:25:17Z | 39,812,985 | <p>I suspect the reason there isn't a tool to fix this problem is because there is no general solution, aside from performing the alignment again using software that does not exhibit this problem. In your example, the query sequence aligns perfectly to the reference and so in that case the CIGAR string is not very interesting (just a single <code>M</code>atch operation prefixed by the overall query length). In that case the fix simply requires changing <code>101M</code> to <code>98M</code>. </p>
<p>However, for more complex CIGAR strings (e.g. those that include <code>I</code>nsertions, <code>D</code>eletions, or any other operations), you would have no way of knowing which part of the CIGAR string is too long. If you subtract from the wrong part of the CIGAR string, you'll be left with a misaligned read, which is probably worse for your downstream analysis than just leaving the whole read out. </p>
<p>That said, if it happens to be trivial to get it right (perhaps your broken alignment procedure always adds extra bases to the first or last CIGAR operation), then what you need to know is the correct way to calculate the query length according to the CIGAR string, so that you know what to subtract from it. </p>
<p><code>samtools</code> calculates this using the <code>htslib</code> function <a href="https://github.com/samtools/htslib/blob/19c189438f852e6e62dbda73f854d465cebb3d9f/sam.c#L325-L332" rel="nofollow">bam_cigar2qlen</a>.</p>
<p>The other functions that <code>bam_cigar2qlen</code> calls are defined in <a href="https://github.com/samtools/htslib/blob/bf753361dab9b1640cf64f7886dbfe35357a43c5/htslib/sam.h#L76-L105" rel="nofollow">sam.h</a>, including a helpful comment showing the truth table for which operations consume query sequence vs reference sequence.</p>
<p>In short, to calculate the query length of a CIGAR string the way that samtools (really htslib) does it, you should add the given length for CIGAR operations <code>M</code>, <code>I</code>, <code>S</code>, <code>=</code>, or <code>X</code> and ignore the length of CIGAR operations for any of the other operations. </p>
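<p>As a minimal sketch of that rule (assuming a well-formed CIGAR string), the same calculation can be written directly:</p>
<pre><code>import re

def cigar_query_length(cigar):
    # sum the lengths of operations that consume query sequence (M, I, S, =, X),
    # mirroring htslib's bam_cigar2qlen; D, N, H and P are ignored
    return sum(int(n) for n, op in re.findall(r'(\d+)([MIDNSHP=X])', cigar)
               if op in 'MIS=X')

print(cigar_query_length('101M'))      # 101
print(cigar_query_length('50M2I46M'))  # 98
</code></pre>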
<p>The current version of the python cigar module seems to be using the <a href="https://github.com/brentp/cigar/blob/master/cigar.py#L68" rel="nofollow">same set of operations</a>, and the algorithm for calculating the query length (which is what <code>len(Cigar(cigar))</code> would return) looks right to me. What makes you think that it isn't giving the correct results?</p>
<p>It looks like you should be able to use the cigar python module to hard clip from either the left or right end using the <code>mask_left</code> or <code>mask_right</code> method with <code>mask="H"</code>.</p>
| 0 | 2016-10-02T01:18:55Z | [
"python",
"module",
"bioinformatics",
"samtools"
]
|
pd.read_html() imports a list rather than a dataframe | 39,710,903 | <p>I used <code>pd.read_html()</code> to import a table from a webpage but instead of structuring the data as a dataframe Python imported it as a list. How can I import the data as a dataframe? Thank you!</p>
<p>The code is the following:</p>
<pre><code>import pandas as pd
import html5lib
url = 'http://www.fdic.gov/bank/individual/failed/banklist.html'
dfs = pd.read_html(url)
type(dfs)
Out[1]: list
</code></pre>
| 0 | 2016-09-26T19:31:44Z | 39,710,987 | <p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_html.html" rel="nofollow"><code>.read_html()</code></a> produces a <em>list of dataframes</em> (there could be multiple tables in an HTML source), get the desired one by index. In your case, there is a single dataframe:</p>
<pre><code>dfs = pd.read_html(url)
df = dfs[0]
print(df)
</code></pre>
<p>Note that if there are no <code>table</code>s in the HTML source, it raises an error rather than returning an empty list.</p>
| 1 | 2016-09-26T19:36:10Z | [
"python",
"html",
"pandas"
]
|
Python Iterating through Directories and Renaming | 39,710,929 | <p>I am trying to iterate through a list of subdirectories, open the files within each subdirectory, and rename those files to lowercase. Here is my code:</p>
<pre><code>for root, subdirs, pics in os.walk(rootdir):
    for pic in pics:
        if pic.endswith('.jpg'):
            picpath = os.path.join(pic)
            #print pic
            print picpath
            #os.rename(pic, pic.replace(" ", "-").lower())
            os.rename(picpath, picpath.replace(" ", "-").lower())
</code></pre>
<p>I then get: </p>
<blockquote>
<p><em>Traceback (most recent call last): File "imageresizing-renamefiles.py", line 19, in
os.rename(picpath, picpath.replace(" ", "-").lower()) OSError: [Errno
2] No such file or directory</em></p>
</blockquote>
<p>My file structure is a root directory where the code runs from; within that folder are
<code>folder1</code> with <code>Image1jpg</code> and <code>Image2jpg</code>, <code>folder2</code> with <code>Image3jpg</code> and <code>Image4jpg</code>, and so on. I want to iterate through each to rename the files (not the folders) to lowercase names.</p>
<p>Appreciate any help.</p>
| 0 | 2016-09-26T19:33:00Z | 39,711,017 | <pre><code>picpath = os.path.join(root, pic)
#                      ^^^^^
</code></pre>
<p>looks like it should do the job. Per <a href="https://docs.python.org/2/library/os.html#os.walk" rel="nofollow">the docs</a>,</p>
<blockquote>
<p>Note that the names in the lists contain no path components. To get a full path (which begins with top) to a file or directory in dirpath, do <code>os.path.join(dirpath, name).</code></p>
</blockquote>
<p>That is why you are getting a "No such file" error: you are asking for the filename in the current directory, which isn't <code>root</code> at the point the error happens.</p>
| 1 | 2016-09-26T19:38:09Z | [
"python",
"iteration",
"rename"
]
|
Python Iterating through Directories and Renaming | 39,710,929 | <p>I am trying to iterate through a list of subdirectories, open the files within each subdirectory, and rename those files to lowercase. Here is my code:</p>
<pre><code>for root, subdirs, pics in os.walk(rootdir):
    for pic in pics:
        if pic.endswith('.jpg'):
            picpath = os.path.join(pic)
            #print pic
            print picpath
            #os.rename(pic, pic.replace(" ", "-").lower())
            os.rename(picpath, picpath.replace(" ", "-").lower())
</code></pre>
<p>I then get: </p>
<blockquote>
<p><em>Traceback (most recent call last): File "imageresizing-renamefiles.py", line 19, in
os.rename(picpath, picpath.replace(" ", "-").lower()) OSError: [Errno
2] No such file or directory</em></p>
</blockquote>
<p>My file structure is a root directory where the code runs from; within that folder are
<code>folder1</code> with <code>Image1jpg</code> and <code>Image2jpg</code>, <code>folder2</code> with <code>Image3jpg</code> and <code>Image4jpg</code>, and so on. I want to iterate through each to rename the files (not the folders) to lowercase names.</p>
<p>Appreciate any help.</p>
 | 0 | 2016-09-26T19:33:00Z | 39,711,224 | <p>You have to prepend the directory name to your path, or <code>os.rename</code> cannot find the file it is supposed to rename.</p>
<p>That said, your conversion to lowercase complicates the task. Lowercasing must only be applied to the basename (applying it to the full path would work on a Windows filesystem, because case doesn't matter there, but would fail on Linux if some directories of the path contain uppercase letters; fortunately, you cannot rename a whole directory tree with a single <code>rename</code> command).</p>
<p>And the match for the <code>.jpg</code> extension should be done regardless of casing, especially if you want to convert picture names to lowercase: extensions are likely to be in uppercase too (like all those DCIM cameras).</p>
<pre><code>for root, subdirs, pics in os.walk(rootdir):
    for pic in pics:
        if pic.lower().endswith('.jpg'):  # more powerful: fnmatch.fnmatch(pic, "*.jpg")
            os.rename(os.path.join(root, pic), os.path.join(root, pic.replace(" ", "-").lower()))
</code></pre>
| 1 | 2016-09-26T19:50:17Z | [
"python",
"iteration",
"rename"
]
|
POST related Fields Django rest Framework | 39,710,972 | <p>I'm new at Django. What I am trying to do is POST a model which has a OneToOneField property. How do you POST that?</p>
<p><strong>Models.py</strong></p>
<pre><code>class Article(models.Model):
    name = models.CharField(max_length=50)
    description = models.TextField(max_length=200)

class Characteristic(models.Model):
    article = models.OneToOneField(Article, on_delete=models.CASCADE, primary_key=True)
    other = models.FloatField()
    another = models.IntegerField()
</code></pre>
<p><strong>Serializer.py</strong></p>
<pre><code>class ArticleSerializer(serializers.ModelSerializer):
    class Meta:
        model = Article
        fields = '__all__'

class CharacteristicSerializer(serializers.ModelSerializer):
    article = serializers.StringRelatedField()

    class Meta:
        model = Characteristic
        fields = '__all__'
</code></pre>
<p><strong>Views.py</strong> POST METHOD (API Based)</p>
<pre><code>def post(self, request, format=None):
    serializer = CharacteristicSerializer(data=request.data)
    if serializer.is_valid():
        serializer.save()
        return Response(serializer.data, status=status.HTTP_201_CREATED)
    return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
</code></pre>
<p>If I try to POST with something like this:</p>
<blockquote>
<p>(some_url...)/characteristics/ other=4.4 another=4 post=1</p>
</blockquote>
<p>I get the following error:</p>
<blockquote>
<p>django.db.utils.IntegrityError: (1048, "Column 'post_id' cannot be
null")</p>
</blockquote>
<p>The idea is to receive the id of the model Article and then save the model Characteristic. </p>
<p>Any ideas?</p>
| 0 | 2016-09-26T19:35:15Z | 39,731,574 | <p>Finally I was able to solve it. It is only about dictionaries.</p>
<p><strong>Method POST</strong></p>
<pre><code>def post(self, request, format=None):
    serializer = CharacteristicsSerializer(data=request.data)
    if serializer.is_valid():
        tmp = self.get_article(pk=int(self.request.data['article']))
        serializer.save(article=tmp)
        return Response(serializer.data, status=status.HTTP_201_CREATED)
    return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)
</code></pre>
<p>For now that is working; if there is a better way to do it and someone wants to share it, I'll be thankful.</p>
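<p>One cleaner alternative (a sketch, not tested against this exact project) is to let the serializer resolve the posted pk itself with <code>PrimaryKeyRelatedField</code>, so the plain <code>serializer.save()</code> in the view works unchanged:</p>
<pre><code>class CharacteristicSerializer(serializers.ModelSerializer):
    # DRF looks up the Article instance from the posted "article" pk
    article = serializers.PrimaryKeyRelatedField(queryset=Article.objects.all())

    class Meta:
        model = Characteristic
        fields = '__all__'
</code></pre>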
| 0 | 2016-09-27T18:10:36Z | [
"python",
"django",
"django-rest-framework"
]
|
How to add lines to contour plot in python `matplotlib`? | 39,710,994 | <p>I have the following function to illustrate some contour lines:</p>
<pre><code>"""
Illustrate simple contour plotting, contours on an image with
a colorbar for the contours, and labelled contours.
See also contour_image.py.
"""
import matplotlib
import numpy as np
import matplotlib.cm as cm
import matplotlib.mlab as mlab
import matplotlib.pyplot as plt
matplotlib.rcParams['xtick.direction'] = 'out'
matplotlib.rcParams['ytick.direction'] = 'out'
X = np.arange(-1.2, 1.2, 0.005)
Y = np.arange(-1.2, 1.2, 0.005)
X, Y = np.meshgrid(X, Y)
Z = (np.ones([np.shape(X)[0],np.shape(X)[1]])-X)**2+100*(Y-(X)**2)**2
# Create a simple contour plot with labels using default colors. The
# inline argument to clabel will control whether the labels are draw
# over the line segments of the contour, removing the lines beneath
# the label
levels = np.arange(-100.0, 600, 1.0)
plt.figure()
CS = plt.contour(X,
Y,
Z,
levels=levels,
)
plt.clabel(CS,
np.array(filter(lambda lev: lev <5.0, levels)),
inline=0.5,
fontsize=10,
fmt='%1.1f'
)
plt.hold(True)
plt.plot(np.arange(-1.0, 1.0, 0.005),
np.arange(-1.0, 1.0, 0.005),
np.ones(len(np.arange(-1.0, 1.0, 0.005)))*100, '-k')
plt.title('Contour Lines and Constraint of Rosenbrock Optimiztion Problem')
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/pomiv.png" rel="nofollow"><img src="http://i.stack.imgur.com/pomiv.png" alt="enter image description here"></a></p>
<p>The contour plot looks great if you comment out the following lines:</p>
<pre><code># plt.hold(True)
# plt.plot(np.arange(-1.0, 1.0, 0.005),
# np.arange(-1.0, 1.0, 0.005),
# np.ones(len(np.arange(-1.0, 1.0, 0.005)))*100, '-k')
</code></pre>
<p><a href="http://i.stack.imgur.com/asziR.png" rel="nofollow"><img src="http://i.stack.imgur.com/asziR.png" alt="enter image description here"></a></p>
<p>...but I cannot get the lines to show up overlaid on the plot like I need them. I simply need them overlaid on top of the contour plot. What is the best way to do this?</p>
<p>I know it is <a href="http://stackoverflow.com/questions/28458192/r-add-a-line-to-contour-plot">possible in R</a>, but how to do this in <code>Python</code> using <code>matplotlib</code>?</p>
| 0 | 2016-09-26T19:36:44Z | 39,711,759 | <p><code>plt.plot</code> draws a two-dimensional line from a sequence of x- and y-coordinates. There's no z-coordinate associated with each point, so there's no need to pass in a third array argument. At the moment <code>plt.plot</code> is interpreting those arrays as coordinates for two separate lines, and is doing the equivalent of:</p>
<pre><code>plt.plot(np.arange(-1.0, 1.0, 0.005), np.arange(-1.0, 1.0, 0.005))
plt.plot(np.ones(len(np.arange(-1.0, 1.0, 0.005)))*100, '-k')
</code></pre>
<p>Since the second line contains x and y coordinates of up to 100, the axes will be automatically rescaled so that the contour plot is no longer legible.</p>
<p>I think you might be thinking of the <code>zorder=</code> argument (which should just be a scalar rather than an array). It's not necessary in this case - since you're plotting the line after the contours it should have a higher <code>zorder</code> than the contour lines by default. You can just get rid of the third array argument to <code>plt.plot</code></p>
<p>Also, since you're drawing a straight line with only two points, you only need to pass the start and end coordinates:</p>
<pre><code>plt.plot([-1, 1], [-1, 1], '-k')
</code></pre>
<p><a href="http://i.stack.imgur.com/sYbFF.png" rel="nofollow"><img src="http://i.stack.imgur.com/sYbFF.png" alt="enter image description here"></a></p>
| 1 | 2016-09-26T20:23:50Z | [
"python",
"matplotlib",
"plot"
]
|
NaiveBayes Classification in NLTK using python | 39,711,057 | <p>I have the following datasets...
<a href="http://i.stack.imgur.com/9l9nJ.png" rel="nofollow">dataset</a></p>
<p>I have loaded the data using this:</p>
<pre><code>import numpy as np
import pandas as pd
input_file = "C:/Users/User/Documents/R/exp.csv"
df = pd.read_csv(input_file, header = 0)
</code></pre>
<p>Now, I am trying to do this...</p>
<pre><code>classifier = nltk.NaiveBayesClassifier.train(labeled_featuresets)
</code></pre>
<p>How can I get there?</p>
 | -2 | 2016-09-26T19:40:20Z | 39,712,929 | <p>You can find information on NLTK and its workings in their <a href="http://www.nltk.org/book/" rel="nofollow">online tutorial</a>.</p>
<p>Specifically, you should look into features and classifiers, both of which can be found in <a href="http://www.nltk.org/book/ch06.html" rel="nofollow">Chapter 6</a>.</p>
<p>Features are simply functions that return some value based on an input, so you can build these functions around the data format of Pandas.</p>
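<p>A minimal sketch of what that could look like (the column names <code>question_body</code> and <code>label</code> are assumptions here, since the actual dataframe layout isn't shown):</p>
<pre><code>import nltk

def features(text):
    # a toy feature extractor: one boolean feature per keyword
    return {'contains_python': 'python' in text.lower()}

labeled_featuresets = [(features(row['question_body']), row['label'])
                       for _, row in df.iterrows()]
classifier = nltk.NaiveBayesClassifier.train(labeled_featuresets)
</code></pre>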
| 0 | 2016-09-26T21:45:57Z | [
"python",
"nltk",
"naivebayes"
]
|
Python: random choice executing all function calls listed | 39,711,221 | <p>I'm stuck on what is probably a simple issue:
when using <code>choice</code> with functions, it seems like all of them get executed while only one should.
Example:</p>
<pre><code>from ordereddict import OrderedDict
from random import choice

def PrintStrings():
    Text = choice(["Gutentag!", "Ni hao!", "Hola!"])
    print "Chosen Text is:", Text
    return Text

class Greeting():
    fields = OrderedDict([
        ("Morning", "Hi"),
        ("Afternoon", "Good Afternoon!"),
        ("Evening", "Good Evening!"),
    ])

    def change(self):
        self.fields["Morning"] = "Good morning!"

    def changerandom(self, n = 1):
        function = [
            {self.fields["Morning"]: PrintStrings()},
            {self.fields["Afternoon"]: PrintStrings()},
            {self.fields["Evening"]: PrintStrings()},
        ]
        result = {}
        for i in range(n):
            result.update(choice(function))
        print "Updated string:", result
        return result

text = Greeting()
text.change()
text.changerandom()
</code></pre>
<p>When running this script, I get all 3</p>
<pre><code> {self.fields["Morning"]: PrintStrings()},
{self.fields["Afternoon"]: PrintStrings()},
{self.fields["Evening"]: PrintStrings()},
</code></pre>
<p>executed, while only one should be.
This script returns:</p>
<pre><code>Chosen Text is: Ni hao!
Chosen Text is: Gutentag!
Chosen Text is: Hola!
Updated string: {'Good morning!': 'Hola!'}
</code></pre>
<p>Expected result is:</p>
<pre><code>Chosen Text is: Hola!
Updated string: {'Good morning!': 'Hola!'}
</code></pre>
 | 1 | 2016-09-26T19:50:13Z | 39,711,325 | <p>Putting aside some strange data structure choices, you are calling each function when building <code>function</code>. Remove the parentheses to pass the function as an <em>object</em> instead.</p>
<p><code>PrintStrings()</code> -> <code>PrintStrings</code></p>
<p>Here's a possible solution to get the required output:</p>
<pre><code>def changerandom(self, n = 1):
    result = {}
    for i in range(n):
        key_choice = Greeting.fields[choice(Greeting.fields.keys())]
        result[key_choice] = PrintStrings()
    print "Updated string:", result
    return result
</code></pre>
<p>With this approach, we grab our random key, and call <code>PrintStrings()</code> all in the same iteration.</p>
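<p>Alternatively, a sketch that keeps the original list shape but stores the function <em>object</em>, deferring the call until after <code>choice</code> has picked one entry:</p>
<pre><code>def changerandom(self, n = 1):
    function = [
        ("Morning", PrintStrings),      # no parentheses: nothing is called yet
        ("Afternoon", PrintStrings),
        ("Evening", PrintStrings),
    ]
    result = {}
    for i in range(n):
        key, func = choice(function)
        result[self.fields[key]] = func()   # only the chosen function runs
    print "Updated string:", result
    return result
</code></pre>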
| 1 | 2016-09-26T19:56:51Z | [
"python",
"random",
"choice"
]
|
Python: random choice executing all function calls listed | 39,711,221 | <p>I'm stuck on what is probably a simple issue:
when using <code>choice</code> with functions, it seems like all of them get executed while only one should.
Example:</p>
<pre><code>from ordereddict import OrderedDict
from random import choice

def PrintStrings():
    Text = choice(["Gutentag!", "Ni hao!", "Hola!"])
    print "Chosen Text is:", Text
    return Text

class Greeting():
    fields = OrderedDict([
        ("Morning", "Hi"),
        ("Afternoon", "Good Afternoon!"),
        ("Evening", "Good Evening!"),
    ])

    def change(self):
        self.fields["Morning"] = "Good morning!"

    def changerandom(self, n = 1):
        function = [
            {self.fields["Morning"]: PrintStrings()},
            {self.fields["Afternoon"]: PrintStrings()},
            {self.fields["Evening"]: PrintStrings()},
        ]
        result = {}
        for i in range(n):
            result.update(choice(function))
        print "Updated string:", result
        return result

text = Greeting()
text.change()
text.changerandom()
</code></pre>
<p>When running this script, I get all 3</p>
<pre><code> {self.fields["Morning"]: PrintStrings()},
{self.fields["Afternoon"]: PrintStrings()},
{self.fields["Evening"]: PrintStrings()},
</code></pre>
<p>executed, while only one should be.
This script returns:</p>
<pre><code>Chosen Text is: Ni hao!
Chosen Text is: Gutentag!
Chosen Text is: Hola!
Updated string: {'Good morning!': 'Hola!'}
</code></pre>
<p>Expected result is:</p>
<pre><code>Chosen Text is: Hola!
Updated string: {'Good morning!': 'Hola!'}
</code></pre>
| 1 | 2016-09-26T19:50:13Z | 39,712,031 | <p>An object will give you a means to run code only on serialization, not on instantiation:</p>
<pre><code>class PrintStrings(object):
    def __init__(self):
        self.text = None

    def __str__(self):
        if self.text is None:
            self.text = choice(["Gutentag!", "Ni hao!", "Hola!"])
            print "Chosen Text is:", self.text
        return self.text

    def __repr__(self):
        return str(self)
</code></pre>
<p>The rest of your code can be used as-is, with this class replacing your <code>PrintStrings</code> function.</p>
| 0 | 2016-09-26T20:42:45Z | [
"python",
"random",
"choice"
]
|
TypeError for bar plot with custom date range | 39,711,229 | <p>I'm attempting to display a dataframe as a bar graph with a custom date range for <code>xlim</code>. I'm able to output a graph if I select <code>kind='line'</code> but I get the following error message when attempting <code>kind='bar'</code>: </p>
<pre><code>TypeError: ufunc 'isfinite' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
</code></pre>
<p>the dataframe looks as follows:</p>
<pre><code>df1 =
Date Quantity
0 2010-01-01 1
1 2010-01-02 0
2 2010-01-03 0
3 2010-01-04 2
4 2010-01-05 3
5 2010-01-06 1
6 2010-01-07 0
7 2010-01-08 1
8 2010-01-09 1
9 2010-01-10 2
10 2010-01-11 0
11 2010-01-12 5
12 2010-01-13 2
13 2010-01-14 1
14 2010-01-15 2
...
</code></pre>
<p>This works:</p>
<pre><code>df1.plot(x='Date', y='Quantity', kind='line', grid=False, legend=False,
xlim=['2010-01-01', '2010-01-10'], figsize=(40, 16))
</code></pre>
<p>but this doesn't</p>
<pre><code> df1.plot(x='Date', y='Quantity', kind='bar', grid=False, legend=False,
xlim=['2010-01-01', '2010-01-10'], figsize=(40, 16))
</code></pre>
<p>Yet if I remove <code>xlim</code> from <code>kind='bar'</code>, it produces an output. It would be nice to be able to output a bar graph with a custom x range.</p>
| 2 | 2016-09-26T19:50:51Z | 39,712,085 | <p>What about an alternative approach - filtering your data <strong>before</strong> plotting?</p>
<pre><code>In [10]: df.set_index('Date').ix['2010-01-01' : '2010-01-10']
Out[10]:
Quantity
Date
2010-01-01 1
2010-01-02 0
2010-01-03 0
2010-01-04 2
2010-01-05 3
2010-01-06 1
2010-01-07 0
2010-01-08 1
2010-01-09 1
2010-01-10 2
In [11]: df.set_index('Date').ix['2010-01-01' : '2010-01-10', 'Quantity'].plot.bar(grid=False, legend=False)
</code></pre>
| 2 | 2016-09-26T20:46:35Z | [
"python",
"pandas",
"matplotlib",
"plot",
"dataframe"
]
|
How does a descriptor with __set__ but without __get__ work? | 39,711,281 | <p>I read somewhere that you can have a descriptor with <code>__set__</code> but without <code>__get__</code>.</p>
<p>How does it work? </p>
<p>Does it count as a data descriptor? Is it a non-data descriptor?</p>
<p>Here is a code example:</p>
<pre><code>class Desc:
    def __init__(self, name):
        self.name = name

    def __set__(self, inst, value):
        inst.__dict__[self.name] = value
        print("set", self.name)

class Test:
    attr = Desc("attr")

>>> myinst = Test()
>>> myinst.attr = 1234
set attr
>>> myinst.attr
1234
>>> myinst.attr = 5678
set attr
>>> myinst.attr
5678
</code></pre>
| 3 | 2016-09-26T19:54:31Z | 39,711,282 | <p>The descriptor you've given in the example is a data descriptor.</p>
<p>Upon setting the attribute, like any other data descriptor, it takes the highest priority and is called like so:</p>
<pre><code>type(myinst).__dict__["attr"].__set__(myinst, 1234)
</code></pre>
<p>This, in turn, adds <code>attr</code> to the instance dictionary according to your <code>__set__</code> method.</p>
<p>Upon attribute access, the descriptor is checked for a <code>__get__</code> method; since it has none, the search is redirected to the instance's dictionary, like so:</p>
<pre><code>myinst.__dict__["attr"]
</code></pre>
<p>If it is not found in the instance dictionary, the descriptor itself is returned.</p>
<p>This behavior is shortly documented in the <a href="https://docs.python.org/3.6/reference/datamodel.html#invoking-descriptors" rel="nofollow">data model</a> like so:</p>
<blockquote>
<p>If it does not define <code>__get__()</code>, then accessing the attribute will
return the descriptor object itself unless there is a value in the
objectâs instance dictionary.</p>
</blockquote>
<p>Common use cases include avoiding <code>{instance: value}</code> dictionaries inside the descriptors, and caching values in an efficient way.</p>
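<p>For instance, a minimal validate-on-set sketch in that spirit: writes are checked by <code>__set__</code>, while reads come straight out of the instance dictionary at normal attribute-lookup speed:</p>
<pre><code>class Positive:
    def __init__(self, name):
        self.name = name

    def __set__(self, inst, value):
        if value <= 0:
            raise ValueError(self.name + " must be positive")
        inst.__dict__[self.name] = value   # subsequent reads bypass the descriptor

class Account:
    balance = Positive("balance")
</code></pre>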
<hr>
<p>In Python 3.6, <code>__set_name__</code> was added to the descriptor protocol thus eliminating the need for specifying the name inside the descriptor. This way, your descriptor can be written like so:</p>
<pre><code>class Desc:
    def __set_name__(self, owner, name):
        self.name = name

    def __set__(self, inst, value):
        inst.__dict__[self.name] = value
        print("set", self.name)

class Test:
    attr = Desc()
</code></pre>
| 5 | 2016-09-26T19:54:31Z | [
"python",
"python-descriptors"
]
|
Python - Reading a UTF-8 encoded string byte-by-byte | 39,711,335 | <p>I have a device that returns a UTF-8 encoded string. I can only read from it byte-by-byte and the read is terminated by a byte of value 0x00.</p>
<p>I'm making a Python 2.7 function for others to access my device and return the string.</p>
<p>In a previous design when the device just returned ASCII, I used this in a loop:</p>
<pre><code>x = read_next_byte()
if x == 0:
    break
my_string += chr(x)
</code></pre>
<p>Where x is the latest byte value read from the device.</p>
<p>Now the device can return a UTF-8 encoded string, but I'm not sure how to convert the bytes that I get back into a UTF-8 encoded string/unicode. </p>
<p><code>chr(x)</code> understandably causes an error when x > 127, so I thought that using <code>unichr(x)</code> may work, but that assumes the value passed is a full unicode character value, while I only have a part, 0-255.</p>
<p>So how can I convert the bytes that I get back from the device into a string that can be used in Python and still handle the full UTF-8 string?</p>
<p>Likewise, if I was given a UTF-8 string in Python, how would I break that down into individual bytes to send to my device and still maintain UTF-8?</p>
| 3 | 2016-09-26T19:57:24Z | 39,711,380 | <p>The correct solution would be to read until you hit the terminating byte, then convert to UTF-8 at that time (so you have all characters):</p>
<pre><code>mybytes = bytearray()
while True:
    x = read_next_byte()
    if x == 0:
        break
    mybytes.append(x)
my_string = mybytes.decode('utf-8')
</code></pre>
<p>The above is the most direct translation of your original code. Interestingly, this is one of those cases where <a href="https://docs.python.org/3/library/functions.html#iter" rel="nofollow">two arg <code>iter</code></a> can be used to dramatically simplify the code by making your C-style stateful byte reader function into a Python iterator that lets you one-line the work:</p>
<pre><code># If this were Python 3 code, you'd use the bytes constructor instead of bytearray
my_string = bytearray(iter(read_next_byte, 0)).decode('utf-8')
</code></pre>
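<p>For the reverse direction asked about at the end of the question, a sketch (where <code>send_next_byte</code> is a hypothetical write counterpart of <code>read_next_byte</code>): encode to UTF-8, append the <code>0x00</code> terminator, and send byte by byte:</p>
<pre><code>payload = my_string.encode('utf-8') + b'\x00'
for b in bytearray(payload):  # iterates as integer byte values 0-255
    send_next_byte(b)         # hypothetical single-byte write
</code></pre>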
| 3 | 2016-09-26T19:59:45Z | [
"python",
"python-2.7",
"unicode",
"encoding",
"utf-8"
]
|
How to use base classes in Django | 39,711,366 | <p>I'm trying to alter an app I've created so that it is reusable. It's based around a single model which sites using the app will subclass. As it stands, my non-reusable version has the following kind of structure:</p>
<pre><code># models.py
class Document(models.Model):
    contents = models.TextField()
    date = models.DateTimeField()

# views.py
from .models import Document
# ...
class MyView(ListView):
    def some_method(self, list_of_pks):
        model_vals = Document.objects.filter(pk__in = list_of_pks).values()

def perform_action(request):
    obj_pk = request.POST.get('obj_pk')
    obj = Document.objects.filter(pk = obj_pk)
    MySignal.send(sender=Document, instance = obj)
    #etc, etc
</code></pre>
<p>This works well enough. But my use case calls for different types of <code>Document</code>, one per site, that will have additional fields that aren't known in advance. Based on reading the documentation on <a href="https://docs.djangoproject.com/en/1.10/topics/db/models/#abstract-base-classes" rel="nofollow">abstract base classes</a>, I thought a reasonable solution would look like:</p>
<pre><code># models.py for the app
class BaseDocument(models.Model):
    contents = models.TextField()

    class Meta:
        abstract = True

# models.py for a hypothetical site using the app
class SiteDocument(myapp.BaseDocument):
    date = models.DateTimeField()
    # other site-specific fields
</code></pre>
<p>What I don't understand is how to then reference the model in the app's <code>views.py</code>, <code>forms.py</code>, etc. I know <code>BaseDocument.objects.all()</code>, for example, won't return anything since it isn't connected to a database. Conversely, I can't have <code>Document.objects.all()</code> because <code>Document</code> hasn't been created yet and is specific to each site. Is an abstract base class not the correct solution, and if so, what is?</p>
<p><strong>Edit:</strong></p>
<p>It looks like using a <code>OneToOneField</code> may be best suited to my use case, although it looks like that precludes inheriting methods from the superclass and that <code>BaseDocument.objects.all()</code> won't list out all its children.</p>
<p>Alternatively, I was wondering if I could just add a <code>get_document_model()</code> method to my abstract base class, in the style of <a href="https://docs.djangoproject.com/en/1.8/_modules/django/contrib/auth/#get_user_model" rel="nofollow"><code>get_user_model()</code></a>?</p>
| 1 | 2016-09-26T19:59:06Z | 39,711,935 | <p>You can't query your abstract classes directly like that since they won't have managers, only the inherited classes. If you really must do inheritance, you can use a concrete base model and inherit from that at the cost of a join on every query.</p>
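<p>A minimal sketch of that concrete-base option (multi-table inheritance; Django links the tables with an implicit <code>OneToOneField</code>):</p>
<pre><code>class BaseDocument(models.Model):    # concrete: no abstract = True, gets its own table
    contents = models.TextField()

class SiteDocument(BaseDocument):    # one extra table, joined on every query
    date = models.DateTimeField()

# BaseDocument.objects.all() now also returns rows for every SiteDocument
</code></pre>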
<p>Think long and hard about whether this is truly necessary, or if you can represent your data in a more generic way. Models make inheritance seem easy, but they're not magic. There are very real performance and complexity considerations.</p>
<p>It might be as easy as adding a <code>type</code> field to your model</p>
<pre><code>class Document(models.Model):
    DOCUMENT_TYPES = ['site', 'another', 'whatever']
    document_type = models.CharField(choices=DOCUMENT_TYPES)
    ...
</code></pre>
<p>For more information about abstract vs concrete classes and querying, visit <a href="http://stackoverflow.com/questions/3797982/how-to-query-abstract-class-based-objects-in-django">How to query abstract-class-based objects in Django?</a></p>
| 0 | 2016-09-26T20:36:57Z | [
"python",
"django",
"django-models"
]
|
How to use base classes in Django | 39,711,366 | <p>I'm trying to alter an app I've created so that it is reusable. It's based around a single model which sites using the app will subclass. As it stands, my non-reusable version has the following kind of structure:</p>
<pre><code># models.py
class Document(models.Model):
    contents = models.TextField()
    date = models.DateTimeField()

# views.py
from .models import Document
# ...
class MyView(ListView):
    def some_method(self, list_of_pks):
        model_vals = Document.objects.filter(pk__in = list_of_pks).values()

def perform_action(request):
    obj_pk = request.POST.get('obj_pk')
    obj = Document.objects.filter(pk = obj_pk)
    MySignal.send(sender=Document, instance = obj)
    #etc, etc
</code></pre>
<p>This works well enough. But my use case calls for different types of <code>Document</code>, one per site, that will have additional fields that aren't known in advance. Based on reading the documentation on <a href="https://docs.djangoproject.com/en/1.10/topics/db/models/#abstract-base-classes" rel="nofollow">abstract base classes</a>, I thought a reasonable solution would look like:</p>
<pre><code># models.py for the app
class BaseDocument(models.Model):
    contents = models.TextField()

    class Meta:
        abstract = True

# models.py for a hypothetical site using the app
class SiteDocument(myapp.BaseDocument):
    date = models.DateTimeField()
    # other site-specific fields
</code></pre>
<p>What I don't understand is how to then reference the model in the app's <code>views.py</code>, <code>forms.py</code>, etc. I know <code>BaseDocument.objects.all()</code>, for example, won't return anything since it isn't connected to a database. Conversely, I can't have <code>Document.objects.all()</code> because <code>Document</code> hasn't been created yet and is specific to each site. Is an abstract base class not the correct solution, and if so, what is?</p>
<p><strong>Edit:</strong></p>
<p>It looks like using a <code>OneToOneField</code> may be best suited to my use case, although it looks like that precludes inheriting methods from the superclass and that <code>BaseDocument.objects.all()</code> won't list out all its children.</p>
<p>Alternatively, I was wondering if I could just add a <code>get_document_model()</code> method to my abstract base class, in the style of <a href="https://docs.djangoproject.com/en/1.8/_modules/django/contrib/auth/#get_user_model" rel="nofollow"><code>get_user_model()</code></a>?</p>
| 1 | 2016-09-26T19:59:06Z | 39,772,562 | <p>I ended up going with a solution mentioned in my edit, namely creating a <code>get_document_model()</code> method inspired by <a href="https://docs.djangoproject.com/en/1.8/_modules/django/contrib/auth/#get_user_model" rel="nofollow"><code>get_user_model()</code></a>. This gives me exactly the desired behavior. </p>
<pre><code># models.py in app1
from django.db import models
from django.apps import apps as django_apps
from django.conf import settings

class BaseDocument(models.Model):
    contents = models.TextField()

    class Meta:
        abstract = True

    def get_document_model(self):
        # exception handling removed for concision's sake
        return django_apps.get_model(settings.DOCUMENT_MODEL)

# models.py in app2
from django.db import models
from app1.models import BaseDocument

class SiteDocument(BaseDocument):
    date = models.DateTimeField()
</code></pre>
<p>Throughout <code>views.py</code> and elsewhere, I changed things that would have been of the form <code>Document.objects.all()</code> to <code>BaseDocument().get_document_model().objects.all()</code>.</p>
| 0 | 2016-09-29T14:04:41Z | [
"python",
"django",
"django-models"
]
|
python add array of hours to datetime | 39,711,370 | <p><code>from datetime import timedelta as td</code></p>
<p>I have a datetime and I want to add an array of hours to it.</p>
<p>i.e. </p>
<pre><code>Date[0]
datetime.datetime(2011, 1, 1, 0, 0)
Date[0] + td(hours=9)
datetime.datetime(2011, 1, 1, 9, 0)
hrs = [1,2,3,4]
Date[0] + td(hours=hrs)
</code></pre>
<p>But obviously it is not supported.</p>
<p>The Date array above is a giant array of size 100x1, and I want to add hrs = [1,2,3,4] to each row of Date to get a datetime array of size 100x4, so a for loop is not going to work in my case.</p>
| 1 | 2016-09-26T19:59:22Z | 39,711,427 | <p>Use a nested <em>list comprehension</em> and <a href="https://docs.python.org/2/library/datetime.html#datetime.datetime.replace" rel="nofollow"><code>.replace()</code> method</a>. Sample for a list with 2 datetimes:</p>
<pre><code>In [1]: from datetime import datetime
In [2]: l = [datetime(2011, 1, 1, 0, 0), datetime(2012, 1, 1, 0, 0)]
In [3]: hours = [1, 2, 3, 4]
In [4]: [[item.replace(hour=hour) for hour in hours] for item in l]
Out[4]:
[[datetime.datetime(2011, 1, 1, 1, 0),
datetime.datetime(2011, 1, 1, 2, 0),
datetime.datetime(2011, 1, 1, 3, 0),
datetime.datetime(2011, 1, 1, 4, 0)],
[datetime.datetime(2012, 1, 1, 1, 0),
datetime.datetime(2012, 1, 1, 2, 0),
datetime.datetime(2012, 1, 1, 3, 0),
datetime.datetime(2012, 1, 1, 4, 0)]]
</code></pre>
<p>The result is a 2x4 list of lists.</p>
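<p>If the dates live in a NumPy array (the 100x1 phrasing suggests they might), a loop-free sketch that genuinely <em>adds</em> the hours by broadcasting a <code>timedelta64</code> array (assuming <code>Date</code> converts cleanly to <code>datetime64</code>):</p>
<pre><code>import numpy as np

dates = np.asarray(Date, dtype='datetime64[s]')        # shape (100,)
hours = np.array([1, 2, 3, 4], dtype='timedelta64[h]') # shape (4,)
result = dates[:, None] + hours                        # broadcasts to shape (100, 4)
</code></pre>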
| 2 | 2016-09-26T20:02:55Z | [
"python",
"datetime"
]
|
Getting percentages after binning pandas dataframe | 39,711,422 | <p>Based on the following mock DF:</p>
<pre><code>df = pd.DataFrame({'State': {0: "AZ", 1: "AZ", 2:"AZ", 3: "AZ", 4: "AK", 5: "AK", 6 : "AK", 7: "AK"},
'# of Boxes': {0: 1, 1: 2, 2:2, 3: 1, 4: 2, 5: 2, 6 : 1, 7: 2},
'Price': {0: 2, 1: 4, 2:15, 3: 25, 4: 17, 5: 13, 6 : 3, 7: 3}},
columns=['State', '# of Boxes', 'Price'])
print(df)
State # of Boxes Price
0 AZ 1 2
1 AZ 2 4
2 AZ 2 15
3 AZ 1 25
4 AK 2 17
5 AK 2 13
6 AK 1 3
7 AK 2 3
</code></pre>
<p>I want to bin the Prices as (0, 15], (15, 30], then get the % of the total by box, by state.</p>
<pre><code>State Box Price (0,15] Price (15,30]
AZ 1 .5 .5
AZ 2 1 0
AK 1 1 0
AK 2 .66 .33
</code></pre>
<p>I've tried pivoting using an agg function but I can't seem to figure it out.</p>
<p>Thank you!</p>
| 3 | 2016-09-26T20:02:07Z | 39,711,931 | <p>Here is a solution using the <code>pivot_table()</code> method:</p>
<pre><code>In [57]: pvt = (df.assign(bins=pd.cut(df.Price, [0,15,30]))
....: .pivot_table(index=['State','# of Boxes'],
....: columns='bins', aggfunc='size', fill_value=0)
....: )
In [58]: pvt
Out[58]:
bins (0, 15] (15, 30]
State # of Boxes
AK 1 1 0
2 2 1
AZ 1 1 1
2 2 0
In [59]: pvt.apply(lambda x: x/pvt.sum(1))
Out[59]:
bins (0, 15] (15, 30]
State # of Boxes
AK 1 1.000000 0.000000
2 0.666667 0.333333
AZ 1 0.500000 0.500000
2 1.000000 0.000000
</code></pre>
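<p>As a side note, on newer pandas versions (0.18.1+) <code>pd.crosstab()</code> can do the row normalization in one step:</p>
<pre><code>binned = pd.cut(df.Price, [0, 15, 30])
pd.crosstab([df['State'], df['# of Boxes']], binned, normalize='index')
</code></pre>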
| 2 | 2016-09-26T20:36:32Z | [
"python",
"pandas",
"dataframe"
]
|
Getting percentages after binning pandas dataframe | 39,711,422 | <p>Based on the following mock DF:</p>
<pre><code>df = pd.DataFrame({'State': {0: "AZ", 1: "AZ", 2:"AZ", 3: "AZ", 4: "AK", 5: "AK", 6 : "AK", 7: "AK"},
'# of Boxes': {0: 1, 1: 2, 2:2, 3: 1, 4: 2, 5: 2, 6 : 1, 7: 2},
'Price': {0: 2, 1: 4, 2:15, 3: 25, 4: 17, 5: 13, 6 : 3, 7: 3}},
columns=['State', '# of Boxes', 'Price'])
print(df)
State # of Boxes Price
0 AZ 1 2
1 AZ 2 4
2 AZ 2 15
3 AZ 1 25
4 AK 2 17
5 AK 2 13
6 AK 1 3
7 AK 2 3
</code></pre>
<p>I want to bin the Prices as (0, 15], (15, 30], then get the % of the total by box, by state.</p>
<pre><code>State Box Price (0,15] Price (15,30]
AZ 1 .5 .5
AZ 2 1 0
AK 1 1 0
AK 2 .66 .33
</code></pre>
<p>I've tried pivoting using an agg function but I can't seem to figure it out.</p>
<p>Thank you!</p>
| 3 | 2016-09-26T20:02:07Z | 39,711,997 | <p>I think you can use <code>groupby</code> by columns with binned <code>Series</code> created by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.cut.html" rel="nofollow"><code>cut</code></a>, aggregated by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow"><code>size</code></a> and reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow"><code>unstack</code></a>:</p>
<pre><code>print (pd.cut(df['Price'], bins=[0,15,30]))
0 (0, 15]
1 (0, 15]
2 (0, 15]
3 (15, 30]
4 (15, 30]
5 (0, 15]
6 (0, 15]
7 (0, 15]
Name: Price, dtype: category
Categories (2, object): [(0, 15] < (15, 30]
df1 = (df.Price.groupby([df['State'], df['# of Boxes'], pd.cut(df['Price'], bins=[0,15,30])])
              .size()
              .unstack(fill_value=0))
print (df1)
Price (0, 15] (15, 30]
State # of Boxes
AK 1 1 0
2 2 1
AZ 1 1 1
2 2 0
</code></pre>
<p>Then divide all values by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sum.html" rel="nofollow"><code>sum</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.div.html" rel="nofollow"><code>div</code></a></p>
<pre><code>df1 = df1.div(df1.sum(axis=1), axis=0)
print (df1)
Price (0, 15] (15, 30]
State # of Boxes
AK 1 1.000000 0.000000
2 0.666667 0.333333
AZ 1 0.500000 0.500000
2 1.000000 0.000000
</code></pre>
<p><strong>Timings</strong>:</p>
<pre><code>In [135]: %timeit (jez(df))
100 loops, best of 3: 3.51 ms per loop
In [136]: %timeit (maxu(df))
100 loops, best of 3: 6.21 ms per loop
def jez(df):
df1 = df.Price.groupby([df['State'],df['# of Boxes'],pd.cut(df['Price'], bins=[0,15,30])]).size().unstack(fill_value=0)
return df1.div(df1.sum(1), axis=0)
def maxu(df):
pvt = df.assign(bins=pd.cut(df.Price, [0,15,30])).pivot_table(index=['State','# of Boxes'], columns='bins', aggfunc='size', fill_value=0)
return pvt.apply(lambda x: x/pvt.sum(1))
</code></pre>
| 3 | 2016-09-26T20:40:43Z | [
"python",
"pandas",
"dataframe"
]
|
Matplotlib: Points do not show in SVG | 39,711,461 | <p>I have a scatter plot that I'd like to output as SVG (Python 3.5). However, when used with <code>agg</code> as backend, some points are simply missing. See the data and the PNG and SVG output. Is this some kind of misconfiguration or a bug?</p>
<p>Code:</p>
<pre><code>import matplotlib
matplotlib.use('agg')
import matplotlib.pyplot as plt
x = [22752.9597858324,33434.3100283611,None,None,3973.2239542398,None,None,None
,None,None,None,None,None,960.6513071797,None,None,None,None,None,None,None
,None,None,None,None,None,749470.931292081,None,None,None,None,None,None
,None,None,None,None,None,None,None,None,23045.262784499,None,None,None
,None,None,None,None,1390.8383822667,None,None,9802.5632611025
,3803.3240362092,None,None,None,None,None,2058.1191666219,None
,3777.5383953988,None,91224.0759036624,23296.1857550166,27956.249381887
,None,237247.707648005,None,None,None,None,None,None,None,None,None
,760.3493458787,None,321687.799104496,None,None,22339.5617383239,None,None
,None,None,None,28135.0261453192,None,None,None,None,None,None,None
,1687.4387356974,None,None,29037.8494868489,None,None,None,None,None,None
,None,3937.3066755226,None,None,None,None]
y = [63557.4319306279,None,None,None,9466.0204228915,None,None,None,None,None
,None,None,None,3080.3393940948,None,None,None,None,None,None,None,None
,None,None,None,None,592184.803802073,None,None,None,None,None,None,None
,None,None,None,None,None,None,None,18098.725166318,None,None,None,None
,None,None,None,789.2710621298,None,None,7450.9539135753,4251.6033622036
,None,None,None,None,None,1277.1691956597,None,4273.5950324508,None
,51861.5572682614,19415.3369388317,2117.2407148378,None,160776.887146683
,None,None,None,None,None,None,None,None,None,1550.3003177484,None
,402333.163939038,None,None,16604.3340243551,None,None,None,None,None
,32545.0784355136,None,None,None,None,None,None,None,2567.9264180605,None
,None,45786.935597305,None,None,None,None,None,None,None,5645.5218715636
,None,None,None,None]
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot(x, y, '.')
fig.savefig('/home/me/test_svg', format='svg')
fig.savefig('/home/me/test_png', format='png')
</code></pre>
<p>The result:</p>
<p>PNG:</p>
<p><a href="http://i.stack.imgur.com/9XIaG.png" rel="nofollow"><img src="http://i.stack.imgur.com/9XIaG.png" alt="PNG output"></a></p>
<p>SVG:</p>
<p><a href="http://i.stack.imgur.com/9dHws.png" rel="nofollow"><img src="http://i.stack.imgur.com/9dHws.png" alt="SVG output"></a></p>
| 1 | 2016-09-26T20:04:30Z | 39,712,842 | <p>The problem seems to be related to the <code>None</code> values. Although a point is simply omitted when one coordinate is missing, the <code>None</code> values still seem to influence the rendering of the SVG. Removing both entries of a pair whenever one of them is <code>None</code> fixes the issue.</p>
<pre><code>import numpy as np

data = np.array([x, y])
data = data.transpose()
# Filter out pairs of points of which at least one is None.
# (Test against None explicitly so that legitimate zero values are kept.)
data = [pair for pair in data if pair[0] is not None and pair[1] is not None]
data = np.array(data).transpose()
x = data[0]
y = data[1]
ax.plot(x, y, '.')
fig.savefig('/home/me/test_svg', format='svg')
fig.savefig('/home/me/test_png', format='png')
</code></pre>
| 0 | 2016-09-26T21:39:05Z | [
"python",
"svg",
"matplotlib"
]
|
Cannot find django.views.generic . Where is generic. Looked in all folders for the file | 39,711,473 | <p>I know this is a strange question but I am lost on what to do. I cloned pinry... It is working and up. I am trying to find django.views.generic. I have searched the directory in my text editor, and I have looked in django.views, but I cannot see generic (only a folder with the name "generic"). I can't understand where the generic file is. It is used in many imports and to extend classes, but I cannot find the file to see the imported functions. I have a good understanding of files and imports and I would say at this stage I am just above noob level. So is there something I am missing here? How come I cannot find this file? If I go to <code>from django.core.urlresolvers import reverse</code>, I can easily find this, but not
e.g.: <code>from django.views.generic import CreateView</code></p>
<p>Where is generic?</p>
| 1 | 2016-09-26T20:05:25Z | 39,711,943 | <p>Try running this from a Python interpreter: </p>
<pre><code>>>> import django.views.generic
>>> django.views.generic.__file__
</code></pre>
<p>This will show you the location of the <code>generic</code> module as a string path. In my case the output is:</p>
<pre><code>'/.../python3.5/site-packages/django/views/generic/__init__.py'
</code></pre>
<p>If you look at this <code>__init__.py</code> you will not see the code for any of the generic <code>*View</code> classes. However, these classes can still be imported from the path <code>django.views.generic</code> (if I am not mistaken, this is because the <code>*View</code> classes are part of the <a href="http://stackoverflow.com/a/44843/3642398"><code>__all__</code></a> list in <code>django/views/generic/__init__.py</code>). In the case of <code>CreateView</code>, it is actually in <code>django/views/generic/edit.py</code>, although it can be imported from <code>django.views.generic</code>, because of the way the <code>__init__.py</code> is set up.</p>
<p>This technique is generally useful when you want to find the path to a <code>.py</code> file. Also useful: if you use it on its own in a script (<code>print(__file__)</code>), it will give you the path to the script itself.</p>
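<p>A related trick, if you want the file where a particular class is defined rather than the package path — a small sketch using the standard library's <code>inspect</code> module:</p>
<pre><code>>>> import inspect
>>> from django.views.generic import CreateView
>>> inspect.getsourcefile(CreateView)
'/.../python3.5/site-packages/django/views/generic/edit.py'
</code></pre>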
| 3 | 2016-09-26T20:37:33Z | [
"python",
"django",
"generics"
]
|
Do you and should you rename a custom User model in Django 1.9? | 39,711,484 | <p>I am creating a new User model for my Django project. I have seen many people calling their custom user model XXXUser and their custom user manager XXXUserManager.</p>
<p>I was wondering if there is a reason for this. Can you just create a custom user and still call it User? Does this create conflicts in the code?</p>
| 1 | 2016-09-26T20:06:00Z | 39,711,543 | <p>If you want custom user models then take a look at subclassing either <code>AbstractUser</code> or <code>AbstractBaseUser</code> detailed in the documentation here <a href="https://docs.djangoproject.com/en/1.10/topics/auth/customizing/#extending-django-s-default-user" rel="nofollow">https://docs.djangoproject.com/en/1.10/topics/auth/customizing/#extending-django-s-default-user</a></p>
<p>You can then do something like the following:</p>
<pre><code>from django.contrib.auth.models import AbstractUser
from django.db import models
from django.utils.translation import ugettext_lazy as _
class KarmaUser(AbstractUser):
    karma = models.PositiveIntegerField(verbose_name=_("karma"), default=0, blank=True)

# Inside project/settings.py
AUTH_USER_MODEL = "profiles.KarmaUser"
</code></pre>
<p>You could also create your own completely separate model and tie it back to the user with a one-to-one relationship.</p>
| 0 | 2016-09-26T20:09:31Z | [
"python",
"django",
"django-admin",
"django-1.9"
]
|
Do you and should you rename a custom User model in Django 1.9? | 39,711,484 | <p>I am creating a new User model for my Django project. I have seen many people calling their custom user model XXXUser and their custom user manager XXXUserManager.</p>
<p>I was wondering if there is a reason for this. Can you just create a custom user and still call it User? Does this create conflicts in the code?</p>
| 1 | 2016-09-26T20:06:00Z | 39,711,775 | <p>Basically you can. But for readability purposes it's better to name it <code>XxxUser</code>: if you see <code>XxxUser</code> you instantly understand that it is a custom one. And you need to keep in mind that you should replace some code that is common for base usage.</p>
<p>Such as (should be replaced):</p>
<pre><code>from django.contrib.auth.models import User
</code></pre>
<p>Should be </p>
<pre><code>from django.contrib.auth import get_user_model
User = get_user_model()
</code></pre>
<p>And if you reference the <code>User</code> model in your <em>models.py</em> you need to</p>
<pre><code>from django.conf import settings
class SomeModel(models.Model):
    field = models.ForeignKey(settings.AUTH_USER_MODEL)  # ForeignKey shown here; "RetationField" is not a real Django field
</code></pre>
<p>Also do not forget to set <code>settings.AUTH_USER_MODEL</code> so that it references your custom one.</p>
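<p>For example (the app label here is hypothetical):</p>
<pre><code># settings.py
AUTH_USER_MODEL = 'accounts.XxxUser'
</code></pre>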
| 0 | 2016-09-26T20:24:55Z | [
"python",
"django",
"django-admin",
"django-1.9"
]
|
Firefox blank webbrowser with selenium | 39,711,674 | <p>When I call a firefox webbrowser with the python firefox webdriver, firefox opens with a blank page (nothing in the navigation bar), waits, and closes.</p>
<p>The python console gives me this error:</p>
<pre><code>Traceback (most recent call last):
  File "firefox_selenium2.py", line 4, in <module>
    driver = webdriver.Firefox()
  File "/usr/local/lib/python3.5/dist-packages/selenium/webdriver/firefox/webdriver.py", line 80, in __init__
    self.binary, timeout)
  File "/usr/local/lib/python3.5/dist-packages/selenium/webdriver/firefox/extension_connection.py", line 52, in __init__
    self.binary.launch_browser(self.profile, timeout=timeout)
  File "/usr/local/lib/python3.5/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 68, in launch_browser
    self._wait_until_connectable(timeout=timeout)
  File "/usr/local/lib/python3.5/dist-packages/selenium/webdriver/firefox/firefox_binary.py", line 108, in _wait_until_connectable
    % (self.profile.path))
selenium.common.exceptions.WebDriverException: Message: Can't load the profile. Profile Dir: /tmp/tmpngm7g76x If you specified a log_file in the FirefoxBinary constructor, check it for details.
</code></pre>
<p>My code is the example from the python selenium read-the-docs:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys

driver = webdriver.Firefox()
driver.get("http://www.python.org")
assert "Python" in driver.title
elem = driver.find_element_by_name("q")
elem.clear()
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)
assert "No results found." not in driver.page_source
driver.close()
</code></pre>
<p>Any help would be appreciated</p>
<p>PS : firefox version 49
selenium version 2.53.6
python 3.5</p>
| 3 | 2016-09-26T20:17:24Z | 39,712,057 | <p>According to this post
<a href="https://github.com/SeleniumHQ/selenium/issues/2739#issuecomment-249479530" rel="nofollow">https://github.com/SeleniumHQ/selenium/issues/2739#issuecomment-249479530</a>
the issue is that you need to use something called Gecko Driver, found here <a href="https://github.com/mozilla/geckodriver" rel="nofollow">https://github.com/mozilla/geckodriver</a>. Other people have had success going back to a previous version (before 48) of Firefox as well. I'm also experiencing this problem and don't actually understand how to do either solution and am making slow progress.</p>
<p>Hi Dennis, I'll post my step by step solution now that I got it to work.</p>
<h1>Step By Step solution</h1>
<p>The problem is that Selenium and Firefox don't support one another anymore. I don't actually understand why but hopefully someone can comment and explain in more thorough detail than I. There are two possible solutions. One is to install something called Geckodriver. I got that installed but had difficulty adding it to my PATH and generally found myself frustrated. </p>
<p><strong>Instead</strong> I went a simpler route.
First I uninstalled firefox with the command</p>
<pre><code>sudo apt-get purge firefox
</code></pre>
<p>Then I downloaded <a href="https://ftp.mozilla.org/pub/firefox/releases/47.0.1/" rel="nofollow">Firefox 47.0.1</a> from here (I selected the english US version). I then moved it from my downloads folder to my home folder. Then I extracted it using this command.</p>
<pre><code>tar xjf firefox-47.0.1.tar.bz2
</code></pre>
<p>Your Firefox version number may be different from mine. Then I cd'd into that directory</p>
<pre><code>cd firefox
</code></pre>
<p>which brought me into that directory. Then all that was left was to run the command </p>
<pre><code>sudo apt install firefox
</code></pre>
<p>After that, the version of Selenium I have worked again. Happily I'm back to writing code, not configuring things!</p>
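<p>For completeness, if you go the Geckodriver route instead, a sketch of the two usual options (paths are hypothetical; <code>FirefoxBinary</code> is the Selenium 2.x way to point at a downgraded Firefox):</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary

# Option 1 (Selenium 3+): put geckodriver on your PATH, then webdriver.Firefox() just works.

# Option 2 (Selenium 2.x): point Selenium at the downgraded Firefox binary
binary = FirefoxBinary('/home/me/firefox/firefox')  # hypothetical path to the 47.0.1 install
driver = webdriver.Firefox(firefox_binary=binary)
</code></pre>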
| 0 | 2016-09-26T20:44:34Z | [
"python",
"python-3.x",
"selenium",
"firefox"
]
|
Writing to different columns in Excel from python | 39,711,698 | <p>I'm using Python Selenium webdriver to scrape a web table by iterating over the rows in the table, and CSV module to write to excel.</p>
<p>When printing in python my scraped web table shows:</p>
<pre><code>['PS208\n43:51\nOUTBOUND\nFDEX\n708135']
['PS207\n01:24\nINBOUND\nUPSS\n889058']
['PS206\n12:34\nOTHER\nFDEG\n506796']
</code></pre>
<p>when writing to excel csv</p>
<p>all the data goes into column A</p>
<pre><code>      A
1   PS208
2   43:51
3   OUTBOUND
4   FDEX
5   708135
6
7   PS207
8   01:24
9   INBOUND
10  UPSS
11  889058
</code></pre>
<p>I need each piece of data between the \n to be on a different column
and each block of data between the ' to be on a different row.</p>
<pre><code>    A      B      C         D     E
1   PS208  43:51  OUTBOUND  FDEX  708135
2   PS207  01:24  INBOUND   UPSS  889058
</code></pre>
<pre><code>import csv
import time
from selenium import webdriver
# ****Logging****
# Current time
now = ("Run Time = " + time.ctime())
print(now)
# ****Visible Chrome Browser****
# Path to Chrome exe
chrome_path = r"C:\Users/userr\Desktop\chromedriver.exe"
# Chrome as browser
browser = webdriver.Chrome(chrome_path)
# URL to open
url = "https://somewebsite"
browser.maximize_window()
browser.get(url)
time.sleep(5)
# Open csv to write to
outfile = open("YMS.csv", "w")
# Parse by spaces
writer = csv.writer(outfile, delimiter=" ")
# ******Query******
# Select "empty trailers"
browser.find_element_by_xpath("""//*[@id="empty_checkbox"]""").click()
time.sleep(5)
table = browser.find_element_by_xpath("""//*[@id="ship-clerk-dashboard-table"]""")
rows = table.find_elements_by_tag_name("""tr""")
for row in rows:
    foo = row.text
    print([foo])
    writer.writerow([foo])
# Close CSV file
outfile.close()
# Close browser
browser.close()
</code></pre>
| 1 | 2016-09-26T20:19:32Z | 39,712,275 | <pre><code>import csv
output = ['PS208\n43:51\nOUTBOUND\nFDEX\n708135']
output = output[0].split('\n')
output_file = 'YMS.csv'  # the CSV path used in the question
with open(output_file, 'ab') as csvfile:
    writer = csv.writer(csvfile)
    writer.writerow(output)
</code></pre>
<p>Each row can be written into columns like this. Adding a loop for each block of data and splitting each of them should work in your case, as sketched below.</p>
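<p>A minimal sketch of that loop, reusing the <code>rows</code> variable from the question (untested against the live page):</p>
<pre><code>import csv

with open('YMS.csv', 'wb') as csvfile:  # 'wb' for the csv module on Python 2
    writer = csv.writer(csvfile)
    for row in rows:
        # each '\n'-separated field becomes its own column
        writer.writerow(row.text.split('\n'))
</code></pre>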
| 0 | 2016-09-26T20:58:11Z | [
"python",
"csv",
"selenium",
"webdriver"
]
|
Find all-zero columns in pandas sparse matrix | 39,711,838 | <p>For example I have a coo_matrix A :</p>
<pre><code>import scipy.sparse as sp
A = sp.coo_matrix([[3,0,3,0],
                   [0,0,2,0],
                   [2,5,1,0],
                   [0,0,0,0]])
</code></pre>
<p>How can I get the result [0,0,0,1], which indicates that the first 3 columns contain non-zero values and only the 4th column is all zeros?</p>
<p>PS : cannot convert A to other type.<br>
PS2 : I tried using <code>np.nonzero</code> but it seems that my implementation is not very elegant.</p>
| 2 | 2016-09-26T20:29:06Z | 39,711,946 | <p><strong>Approach #1</strong> We could do something like this -</p>
<pre><code># Get the columns indices of the input sparse matrix
C = sp.find(A)[1]
# Use np.in1d to create a mask of non-zero columns.
# So, we invert it and convert to int dtype for desired output.
out = (~np.in1d(np.arange(A.shape[1]),C)).astype(int)
</code></pre>
<p>Alternatively, to make the code shorter, we can use subtraction -</p>
<pre><code>out = 1-np.in1d(np.arange(A.shape[1]),C)
</code></pre>
<p>Step-by-step run -</p>
<p>1) Input array and sparse matrix from it :</p>
<pre><code>In [137]: arr # Regular dense array
Out[137]:
array([[3, 0, 3, 0],
[0, 0, 2, 0],
[2, 5, 1, 0],
[0, 0, 0, 0]])
In [138]: A = sp.coo_matrix(arr) # Convert to sparse matrix as input here on
</code></pre>
<p>2) Get non-zero column indices :</p>
<pre><code>In [139]: C = sp.find(A)[1]
In [140]: C
Out[140]: array([0, 2, 2, 0, 1, 2], dtype=int32)
</code></pre>
<p>3) Use <code>np.in1d</code> to get mask of non-zero columns :</p>
<pre><code>In [141]: np.in1d(np.arange(A.shape[1]),C)
Out[141]: array([ True, True, True, False], dtype=bool)
</code></pre>
<p>4) Invert it :</p>
<pre><code>In [142]: ~np.in1d(np.arange(A.shape[1]),C)
Out[142]: array([False, False, False, True], dtype=bool)
</code></pre>
<p>5) Finally convert to int dtype :</p>
<pre><code>In [143]: (~np.in1d(np.arange(A.shape[1]),C)).astype(int)
Out[143]: array([0, 0, 0, 1])
</code></pre>
<p>Alternative subtraction approach :</p>
<pre><code>In [145]: 1-np.in1d(np.arange(A.shape[1]),C)
Out[145]: array([0, 0, 0, 1])
</code></pre>
<p><strong>Approach #2</strong> Here's another way and possibly a faster one using <code>matrix-multiplication</code> -</p>
<pre><code>out = 1-np.ones(A.shape[0],dtype=bool)*A.astype(bool)
</code></pre>
<hr>
<p><strong>Runtime test</strong> </p>
<p>Let's test out all the posted approaches on a big and really sparse matrix -</p>
<pre><code>In [29]: A = sp.coo_matrix((np.random.rand(4000,4000)>0.998).astype(int))
In [30]: %timeit 1-np.in1d(np.arange(A.shape[1]),sp.find(A)[1])
100 loops, best of 3: 4.12 ms per loop # Approach1
In [31]: %timeit 1-np.ones(A.shape[0],dtype=bool)*A.astype(bool)
1000 loops, best of 3: 771 µs per loop # Approach2
In [32]: %timeit 1 - (A.col==np.arange(A.shape[1])[:,None]).any(axis=1)
1 loops, best of 3: 236 ms per loop # @hpaulj's soln
In [33]: %timeit (A!=0).sum(axis=0)==0
1000 loops, best of 3: 1.03 ms per loop # @jez's soln
In [34]: %timeit (np.sum(np.absolute(A.toarray()), 0) == 0) * 1
10 loops, best of 3: 86.4 ms per loop # @wwii's soln
</code></pre>
| 1 | 2016-09-26T20:37:43Z | [
"python",
"numpy",
"scipy",
"sparse-matrix"
]
|
Find all-zero columns in pandas sparse matrix | 39,711,838 | <p>For example I have a coo_matrix A :</p>
<pre><code>import scipy.sparse as sp
A = sp.coo_matrix([[3,0,3,0],
                   [0,0,2,0],
                   [2,5,1,0],
                   [0,0,0,0]])
</code></pre>
<p>How can I get the result [0,0,0,1], which indicates that the first 3 columns contain non-zero values and only the 4th column is all zeros?</p>
<p>PS : cannot convert A to other type.<br>
PS2 : I tried using <code>np.nonzero</code> but it seems that my implementation is not very elegant.</p>
| 2 | 2016-09-26T20:29:06Z | 39,712,076 | <p>The actual logical operation can be performed like this:</p>
<pre><code>b = (A!=0).sum(axis=0)==0
# matrix([[False, False, False, True]], dtype=bool)
</code></pre>
<p>Now, to ensure that I'm answering your question exactly, I'd better tell you how you <em>could</em> convert from booleans to integers (although really, for most applications I can think of, you can do a lot more in <code>numpy</code> and friends if you stick with an array of <code>bool</code>s):</p>
<pre><code>b = b.astype(int)
#matrix([[0, 0, 0, 1]])
</code></pre>
<p>Either way, to then convert from a <code>matrix</code> to a <code>list</code>, you could do this:</p>
<pre><code>c = list(b.flat)
# [0, 0, 0, 1]
</code></pre>
<p>...although again, I'm not sure this is the best thing to do: for most applications I can imagine, I would perhaps just convert to a one-dimensional <code>numpy.array</code> with <code>c = b.A.flatten()</code> instead.</p>
| 1 | 2016-09-26T20:45:35Z | [
"python",
"numpy",
"scipy",
"sparse-matrix"
]
|
Find all-zero columns in pandas sparse matrix | 39,711,838 | <p>For example I have a coo_matrix A :</p>
<pre><code>import scipy.sparse as sp
A = sp.coo_matrix([[3,0,3,0],
                   [0,0,2,0],
                   [2,5,1,0],
                   [0,0,0,0]])
</code></pre>
<p>How can I get the result [0,0,0,1], which indicates that the first 3 columns contain non-zero values and only the 4th column is all zeros?</p>
<p>PS : cannot convert A to other type.<br>
PS2 : I tried using <code>np.nonzero</code> but it seems that my implementation is not very elegant.</p>
| 2 | 2016-09-26T20:29:06Z | 39,713,115 | <p>Recent</p>
<p><a href="http://stackoverflow.com/questions/39683931/scipy-sparse-coo-matrix-how-to-fast-find-all-zeros-column-fill-with-1-and-norma">scipy.sparse.coo_matrix how to fast find all zeros column, fill with 1 and normalize</a></p>
<p>similar, except it wants to fill those columns with 1s and normalize them.</p>
<p>I immediately suggested the <code>lil</code> format of the transpose. All-0 columns will be empty lists in this format. But sticking with the <code>coo</code> format I suggested</p>
<pre><code>np.nonzero(~(Mo.col==np.arange(Mo.shape[1])[:,None]).any(axis=1))[0]
</code></pre>
<p>or for this 1/0 format</p>
<pre><code>1 - (Mo.col==np.arange(Mo.shape[1])[:,None]).any(axis=1)
</code></pre>
<p>which is functionally the same as:</p>
<pre><code>1 - np.in1d(np.arange(Mo.shape[1]),Mo.col)
</code></pre>
<p><code>sparse.find</code> converts the matrix to <code>csr</code> to sum duplicates and eliminate explicit zeros, and then back to <code>coo</code> to get the <code>data</code>, <code>row</code>, and <code>col</code> attributes (which it returns).</p>
<p><code>Mo.nonzero</code> uses <code>A.data != 0</code> to eliminate 0s before returning the <code>col</code> and <code>row</code> attributes.</p>
<p>The <code>np.ones(A.shape[0],dtype=bool)*A.astype(bool)</code> solution requires converting <code>A</code> to <code>csr</code> format for multiplication. </p>
<p><code>(A!=0).sum(axis=0)</code> also converts to <code>csr</code> because column (or row) sum is done with a matrix multiplication.</p>
<p>So the no-convert requirement is unrealistic, at least within the bounds of sparse formats. </p>
<p>===============</p>
<p>For Divakar's test case my <code>==</code> version is quite slow; it's ok with small ones, but creates too large a test array with the 1000 columns.</p>
<p>Testing on a matrix that is sparse enough to have a number of 0 columns:</p>
<pre><code>In [183]: Arr=sparse.random(1000,1000,.001)
In [184]: (1-np.in1d(np.arange(Arr.shape[1]),Arr.col)).any()
Out[184]: True
In [185]: (1-np.in1d(np.arange(Arr.shape[1]),Arr.col)).sum()
Out[185]: 367
In [186]: timeit 1-np.ones(Arr.shape[0],dtype=bool)*Arr.astype(bool)
1000 loops, best of 3: 334 µs per loop
In [187]: timeit 1-np.in1d(np.arange(Arr.shape[1]),Arr.col)
1000 loops, best of 3: 323 µs per loop
In [188]: timeit 1-(Arr.col==np.arange(Arr.shape[1])[:,None]).any(axis=1)
100 loops, best of 3: 3.9 ms per loop
In [189]: timeit (Arr!=0).sum(axis=0)==0
1000 loops, best of 3: 820 µs per loop
</code></pre>
| 1 | 2016-09-26T22:02:40Z | [
"python",
"numpy",
"scipy",
"sparse-matrix"
]
|
Find all-zero columns in pandas sparse matrix | 39,711,838 | <p>For example I have a coo_matrix A :</p>
<pre><code>import scipy.sparse as sp
A = sp.coo_matrix([[3,0,3,0],
                   [0,0,2,0],
                   [2,5,1,0],
                   [0,0,0,0]])
</code></pre>
<p>How can I get the result [0,0,0,1], which indicates that the first 3 columns contain non-zero values and only the 4th column is all zeros?</p>
<p>PS : cannot convert A to other type.<br>
PS2 : I tried using <code>np.nonzero</code> but it seems that my implementation is not very elegant.</p>
| 2 | 2016-09-26T20:29:06Z | 39,715,773 | <p>Convert to an array or dense matrix, sum the absolute value along the first axis, test the result against zero, convert to int</p>
<pre><code>>>> import numpy as np
>>> (np.sum(np.absolute(a.toarray()), 0) == 0) * 1
array([0, 0, 0, 1])
>>> (np.sum(np.absolute(a.todense()), 0) == 0) * 1
matrix([[0, 0, 0, 1]])
>>>
>>> np.asarray((np.sum(np.absolute(a.todense()), 0) == 0), dtype = np.int32)
array([[0, 0, 0, 1]])
>>>
</code></pre>
<hr>
<p>The first is the fastest - 24 uS for your example on my machine.</p>
<p>For a matrix made with <code>np.random.randint(0,3,(1000,1000))</code>, all are right at 25 mS on my machine.</p>
| 0 | 2016-09-27T04:04:03Z | [
"python",
"numpy",
"scipy",
"sparse-matrix"
]
|
Python - DNS lookup from SQL database | 39,711,864 | <p>I was hoping you could point me in the right direction about a little problem I'm having.</p>
<p>I want to create a basic web app that performs DNS queries to validate whether a specific set of DNS records are active or not. The DNS records are all A records and are hosted on an SQL-based CMDB. </p>
<p>I'm a fan of python and would prefer to use it but I have no idea where to start. The use case I'm looking at is as follows:</p>
<blockquote>
<ul>
<li>Browse to <a href="http://hostname/pythonDnsTool.html" rel="nofollow">http://hostname/pythonDnsTool.html</a></li>
<li>Click [RUN QUERY] button</li>
<li>Script queries the <strong>[DnsARecord]</strong> table to validate whether a list of dnsNames exist or not</li>
<li>Upon completion <a href="http://hostname/pythonDnsTool.html" rel="nofollow">http://hostname/pythonDnsTool.html</a> returns a list of queried domains with a YES or NO</li>
</ul>
</blockquote>
<p>Any advice or guidance would be greatly appreciated.</p>
<p>Many thanks,</p>
<p>Westie5017</p>
| 1 | 2016-09-26T20:31:27Z | 39,712,126 | <p>There are a few tools; which one is best depends on your needs.</p>
<p><strong>1]</strong> A great python tool that does what you ask and a lot more is <strong>scapy</strong>.
For example:</p>
<pre><code>>>> packet = DNSQR()
>>> packet.show()
###[ DNS Question Record ]###
qname= '.'
qtype= A
qclass= IN
</code></pre>
<p>As you can see it gives you the ability to craft your packets at a high level.
For example, if you wanted to send a simple DNS query and get an answer back you could do something like this:</p>
<pre><code>sr1(IP(dst="some_ip")/UDP()/DNS(rd=1,qd=DNSQR(qname="")))
</code></pre>
<p>But again this is only a minimal example of the things you can do.
For more information take a look at:
<a href="http://www.secdev.org/projects/scapy/doc/usage.html" rel="nofollow">here</a> and <a href="http://www.secdev.org/projects/scapy/demo.html" rel="nofollow">here</a> (scapy's general documentation)</p>
<p>and specifically for <strong>DNS</strong> take a look at <a href="https://thepacketgeek.com/scapy-p-09-scapy-and-dns/" rel="nofollow">here</a> where you can find a lot of examples.</p>
<p><strong>2]</strong> Alternatively you can use the built-in <code>socket</code> module:</p>
<pre><code>import socket
info = socket.getaddrinfo('a_domain', a_port)
socket.gethostbyname('a_domain') # but this will give only the ip
</code></pre>
<p><strong>3]</strong> Finally you can use <a href="http://www.dnspython.org/examples.html" rel="nofollow">dnspython</a>.</p>
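<p>A small sketch with dnspython (1.x API), checking whether an A record resolves — looping this over the names from your SQL table gives the YES/NO list your page needs:</p>
<pre><code>import dns.resolver

def has_a_record(name):
    try:
        dns.resolver.query(name, 'A')
        return True
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return False
</code></pre>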
| 0 | 2016-09-26T20:49:03Z | [
"python",
"sql-server",
"dns"
]
|
Reading audio data from an ALSA buffer to a numpy array | 39,711,867 | <p>I'm trying to pull data from an ALSA buffer in order to generate count noise from a mic. However, when I try to convert the data to an array I get an incorrect result.</p>
<p>Below is part of my code:</p>
<pre><code>#!/usr/bin/env python
from __future__ import print_function
import alsaaudio
import numpy
card = 'default'
buf = [64]
numpy.set_printoptions(threshold=numpy.inf)
stream = alsaaudio.PCM(alsaaudio.PCM_CAPTURE, alsaaudio.PCM_NORMAL, card)
stream.setchannels(1)
stream.setrate(44100)
stream.setformat(alsaaudio.PCM_FORMAT_S16_LE)
stream.setperiodsize(64)
def listen():
    print("Listening")
    error_count = 0  # initialize so the except branch can increment it
    while True:
        try:
            l, data = stream.read()
            f = open('test.raw', 'wb')
            if l:
                f.write(data)
            f.close()
        except IOError, e:
            error_count += 1
            print(" (%d) Error recording: %s" % (error_count, e))
        else:
            decoded_block = numpy.frombuffer(data, dtype='i2')
            print('Array PCM: \n', decoded_block)
            return 0

listen()
</code></pre>
<p><img src="http://i.stack.imgur.com/v9o3n.jpg" alt="Image with result"></p>
| 1 | 2016-09-26T20:31:38Z | 39,712,320 | <p><code>hexdump -d</code> displays the contents as <em>unsigned</em> 16bit integers, whereas you are converting it to a <em>signed</em> int16 numpy array.</p>
<p>Try converting to a dtype of <code>'u2'</code> (or equivalently, <code>np.uint16</code>) and you'll see that the output matches that of <code>hexdump</code>:</p>
<pre><code>import numpy as np

print(np.array([-7, -9, -10, -6, -2], dtype=np.uint16))
# [65529 65527 65526 65530 65534]
</code></pre>
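<p>So in the original code, one way to match what <code>hexdump -d</code> shows without copying the data is to reinterpret the same buffer (building on the question's variables):</p>
<pre><code>decoded_block = numpy.frombuffer(data, dtype='i2')  # signed, as before
as_hexdump_sees_it = decoded_block.view('u2')       # same bytes, viewed as unsigned
</code></pre>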
| 0 | 2016-09-26T21:01:32Z | [
"python",
"arrays",
"numpy",
"alsa"
]
|
Python 2.7 smtplib how to send attachment with Error 13 permission denied? | 39,711,974 | <p>Hope you are well. I'm using python 2.7 with PyCharm on Windows 7 and am new at it.
I'm trying to send an email with an attachment but get this error:<br>
IOError: [Errno 13] Permission denied: 'C:\Users\Myname\Desktop'. This is my code:</p>
<pre><code>import smtplib
from email.MIMEMultipart import MIMEMultipart
from email.MIMEText import MIMEText
from email.MIMEBase import MIMEBase
from email import encoders
fromaddr = "mail@gmail.com"
toaddr = "mail@gmail.com"
msg = MIMEMultipart()
msg['From'] = fromaddr
msg['To'] = toaddr
msg['Subject'] = "Something bla bla bla"
body = "Something bla bla bla"
msg.attach(MIMEText(body, 'plain'))
filename = "CV.txt"
attachment = open("C:\Users\MyName\Desktop","rb")
part = MIMEBase('application', 'octet-stream')
part.set_payload((attachment).read())
encoders.encode_base64(part)
part.add_header('Content-Disposition', "attachment; filename= %s" % filename)
msg.attach(part)
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
server.login(fromaddr, "PASSWORD")
text = msg.as_string()
server.sendmail(fromaddr, toaddr, text)
server.quit()
</code></pre>
<p>I read the other articles and the most common problem seems to be not having sufficient permission; however, I am the Administrator. Anyhow, if that were the case, what would I need to do, exactly, step by step, to get it going? Or is there another problem not related to permissions?
Thanks in advance.
Warmest regards </p>
| 0 | 2016-09-26T20:39:08Z | 39,712,004 | <p>You are trying to open a <em>directory</em> as a file; you need to pass the actual file you want to open (keeping the binary mode from your original code):</p>
<pre><code>attachment = open(r"C:\Users\MyName\Desktop\the_file", "rb")
</code></pre>
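<p>A slightly more robust sketch, joining the directory and the <code>filename</code> variable the question already defines:</p>
<pre><code>import os

path = os.path.join(r"C:\Users\MyName\Desktop", filename)  # filename = "CV.txt"
attachment = open(path, "rb")
</code></pre>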
| 1 | 2016-09-26T20:40:57Z | [
"python",
"python-2.7",
"mime-types",
"mime",
"smtplib"
]
|
How can I filter for string values in a mixed datatype object in Python Pandas Dataframe | 39,711,977 | <p>I have a column in a Pandas Dataframe like:(whose value_counts are shown below)</p>
<pre><code>1 246804
2 135272
5 8983
8 3459
4 3177
6 1278
9 522
D 314
E 91
0 29
F 20
Name: Admission_Source_Code, dtype: int64
</code></pre>
<p>As you can see it contains both integers and letters. I'm having to write a function where I would have to filter and search for the lettered values.</p>
<p>I was initially importing this dataset using pd.read_excel, but after having read multiple bug reports, it seems that read_excel doesn't have an option to explicitly read a column as a string.</p>
<p>So I tried reading using pd.read_csv which has the dtype option. Initially this column was being stored as float64 by default, now even though I have tried to run</p>
<pre><code>Df_name['Admission_Source_Code'] = Df_name['Admission_Source_Code'].astype(int).astype('str')
</code></pre>
<p>I'm unable to format it as a string column.</p>
<p>Hence, when I filter for </p>
<pre><code>Accepted[Accepted['Admission_Source_Code']==1]
</code></pre>
<p>it works, but </p>
<pre><code>Accepted[Accepted['Admission_Source_Code']=='E']
</code></pre>
<p>still returns no results. When I try to use str(column_name) in the mask, it says invalid literal.
Can someone please help me with how I would go about either changing the dtype or filtering for lettered values?</p>
<p>Thanks.</p>
<p>P.S. even formatting as object doesn't help</p>
| 3 | 2016-09-26T20:39:29Z | 39,712,236 | <p>I think you should be able to filter your <code>value_counts</code> series using <code>.ix[]</code> or <code>.loc[]</code> indexers, filtering (indexing) by strings</p>
<p>Demo:</p>
<pre><code>In [27]: df
Out[27]:
Count
Admission_Source_Code
1 246804
2 135272
5 8983
8 3459
4 3177
6 1278
9 522
D 314
E 91
0 29
F 20
In [28]: df.index.dtype
Out[28]: dtype('O')
In [29]: df.ix['2']
Out[29]:
Count 135272
Name: 2, dtype: int64
In [30]: df.ix[['2','E','5','D']]
Out[30]:
Count
Admission_Source_Code
2 135272
E 91
5 8983
D 314
</code></pre>
<p>List index values:</p>
<pre><code>In [36]: df.index.values
Out[36]: array(['1', '2', '5', '8', '4', '6', '9', 'D', 'E', '0', 'F'], dtype=object)
</code></pre>
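<p>As a side note: if the underlying goal is simply to keep that column as strings at load time, a sketch using <code>read_csv</code>'s <code>dtype</code> parameter (same column name as in the question):</p>
<pre><code>df = pd.read_csv('your_file.csv', dtype={'Admission_Source_Code': str})
df[df['Admission_Source_Code'] == 'E']  # now matches the lettered values
</code></pre>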
| 2 | 2016-09-26T20:55:54Z | [
"python",
"pandas",
"dataframe"
]
|
How can I filter for string values in a mixed datatype object in Python Pandas Dataframe | 39,711,977 | <p>I have a column in a Pandas Dataframe like:(whose value_counts are shown below)</p>
<pre><code>1 246804
2 135272
5 8983
8 3459
4 3177
6 1278
9 522
D 314
E 91
0 29
F 20
Name: Admission_Source_Code, dtype: int64
</code></pre>
<p>As you can see it contains both integers and letters. I'm having to write a function where I would have to filter and search for the lettered values.</p>
<p>I was initially importing this dataset using pd.read_excel, but after having read multiple bug reports, it seems that read_excel doesn't have an option to explicitly read a column as a string.</p>
<p>So I tried reading using pd.read_csv which has the dtype option. Initially this column was being stored as float64 by default, now even though I have tried to run</p>
<pre><code>Df_name['Admission_Source_Code'] = Df_name['Admission_Source_Code'].astype(int).astype('str')
</code></pre>
<p>I'm unable to format it as a string column.</p>
<p>Hence, when I filter for </p>
<pre><code>Accepted[Accepted['Admission_Source_Code']==1]
</code></pre>
<p>it works, but </p>
<pre><code>Accepted[Accepted['Admission_Source_Code']=='E']
</code></pre>
<p>still returns no results. When I try to use str(column_name) in the mask, it says invalid literal.
Can someone please help me with how I would go about either changing the dtype or filtering for lettered values?</p>
<p>Thanks.</p>
<p>P.S. even formatting as object doesn't help</p>
| 3 | 2016-09-26T20:39:29Z | 39,731,770 | <p>I did some tests using your example and the filters work well, for example:</p>
<pre><code>import pandas

df = pandas.read_csv('Yourfile.csv')
df['Admission_Source_Code'].value_counts()
1 246804
2 135272
5 8983
8 3459
4 3177
6 1278
9 522
D 314
E 91
0 29
F 20
Name: Admission_Source_Code, dtype: int64
</code></pre>
<p>If I try:</p>
<pre><code>print (df[(df['Admission_Source_Code']==1)])
</code></pre>
<p>I got:</p>
<pre><code>Empty DataFrame
Columns: [Admission_Source_Code]
Index: []
</code></pre>
<p>However, with a <code>list comprehension</code>:</p>
<pre><code>df['Admission_Source_Code'] = [str(i) for i in df['Admission_Source_Code']]
</code></pre>
<p>Using a data example:</p>
<p><a href="http://i.stack.imgur.com/Pmjfc.png" rel="nofollow"><img src="http://i.stack.imgur.com/Pmjfc.png" alt="enter image description here"></a></p>
<p>If problems persist, have you considered cleaning the items from the csv columns? <em>(e.g. stripping whitespace)</em>.</p>
<p>For example, using the same <code>list comprehension</code> and <code>strip()</code>:</p>
<pre><code>df['Admission_Source_Code'] = [str(i.strip()) for i in df['Admission_Source_Code']]
</code></pre>
| 0 | 2016-09-27T18:22:53Z | [
"python",
"pandas",
"dataframe"
]
|
Plot an R function curve in rpy2 | 39,712,048 | <p>I am trying to plot a simple curve in rpy2.</p>
<p><code>curve((x))</code> in R behaves as expected, but I cannot implement this in rpy2. </p>
<p>When I issue the following commands in sequence:</p>
<pre><code>import rpy2.robjects as ro
R = ro.r
R.curve(R.x)
</code></pre>
<p>I get the error that <code>AttributeError: 'R' object has no attribute 'x'</code>...</p>
<p>How do I access <code>x</code> as the vectorizing function within python? (I can issue <code>ro.r('curve((x))')</code> and it works as expected, but I need to be able to pass arguments from python to the curve function).</p>
<p>More generally, how do I plot a function curve in rpy2 ala this post: <a href="http://stackoverflow.com/questions/26091323/how-to-plot-a-function-curve-in-r">plotting function curve in R</a></p>
<p><strong><em>EDIT 1</em></strong></p>
<p>Some context:</p>
<p>I am trying to plot a curve of the inverse logit:</p>
<pre><code>invlogit = function(x){ + exp(x)/(1 + exp(x)) }
</code></pre>
<p>of the linear function:</p>
<pre><code>invlogit(coef(mod1)[1] + coef(mod1)[2]*x
</code></pre>
<p>Where coef(mod1) are the coefficients of a GLM I ran.</p>
<p>In R, I can do the following:</p>
<pre><code>plot(outcome~survrate, data = d, ylab = "P(outcome = 1 |
survrate)", xlab = "SURVRATE: Probability of Survival after 5
Years", xaxp = c(0, 95, 19))
curve(invlogit(coef(mod1)[1] + coef(mod1)[2]*x), add = TRUE)
</code></pre>
<p>And I get the expected sigmoidal curve.</p>
<p>In python/rpy2, I get my model and coefficients:</p>
<pre><code>formula = 'outcome~survrate'
mod1 = R.glm(formula=R(formula), data=r_analytical_set, family=R('binomial(link="logit")'))
s = R.summary(mod1)
print(mod1)
print(R.summary(mod1))
</code></pre>
<p>Set up the plot</p>
<pre><code>formula = Formula('outcome~survrate')
formula.getenvironment()['outcome'] = data.rx2('outcome')
formula.getenvironment()['survrate'] = data.rx2('survrate')
R.plot(formula, data=data, ylab='P(outcome = 1 | outcome)',
       xlab='SURVRATE: Probability of Survival after 5 Years',
       xaxp=ro.FloatVector([0, 95, 19]))  # R's c(0, 95, 19) expressed as an rpy2 vector
</code></pre>
<p>So far so good...</p>
<p>Then, I get my coefficients from the model:</p>
<pre><code>a = R.coef(mod1)[0]
b = R.coef(mod1)[1]
</code></pre>
<p>And then try to run the curve function by passing in these arguments, all to no avail, trying such constructs as</p>
<pre><code>R.curve(invlogit(a + b*R.x))
</code></pre>
<p>I've tried many others too besides this, all of which are embarrassingly weird.</p>
<p>First, the naive question: If term (x) in curve() is a special R designation for last environment expression, I assume I should be able to access this somehow through python/rpy2. </p>
<p>I understand that its representation in the curve function is a ListVector of 101 elements. I do not follow what it means though that it "is a special R designation for last environment expression." Could someone please elaborate? If this is an object in R, should I not be able to access it through the at least the low-level interface?</p>
<p>Or, do I actually have to create <code>x</code> as a python function to represent my x, y tuples as two lists and then convert them to a ListVector for use in the function to plot its curve.</p>
<p>Second: Should I not be able to construct my function, <code>invlogit(a + b*x)</code> in python and pass it for evaluation in R's curve function? </p>
<p>I am grabbing <code>invlogit</code> from an R file by reading it in using the STAP library: <code>from rpy2.robjects.packages import STAP</code>. </p>
<p>Third: Am I over complicating things? My goal is to recreate an analysis I had previously done in R using python/rpy2 to work through all the idiosyncrasies, before I try doing a new one in python/rpy2.</p>
| 0 | 2016-09-26T20:43:46Z | 39,737,102 | <p>Simply pass in an actual function, call, or expression like <code>sin</code> as <code>x</code> is not assigned in Python. Below uses the example from the R documentation for <a href="http://stat.ethz.ch/R-manual/R-devel/library/graphics/html/curve.html" rel="nofollow">curve</a>: <code>curve(sin, -2*pi, 2*pi)</code>. Also, because you output a graph use <code>grDevices</code> (built-in R package) to save image to file:</p>
<pre><code>import rpy2.robjects as ro
from rpy2.robjects.packages import importr
grdevices = importr('grDevices')
grdevices.png(file="Rpy2Curve.png", width=512, height=512)
p = ro.r('curve(sin, -2*pi, 2*pi)')
grdevices.dev_off()
</code></pre>
<p><a href="http://i.stack.imgur.com/LabOp.png" rel="nofollow"><img src="http://i.stack.imgur.com/LabOp.png" alt="RPy2 curve plot image 1"></a></p>
<p>Alternatively, you can define <code>(x)</code> just as your link shows:</p>
<pre><code>grdevices.png(file="Rpy2Curve.png", width=512, height=512)
ro.r('''eq <- function(x) {x*x}''')
p = ro.r('curve(eq,1,1000)') # OUTPUTS TO FILE
grdevices.dev_off()
p = ro.r('curve(eq,1,1000)') # OUTPUTS TO SCREEN
</code></pre>
<p><a href="http://i.stack.imgur.com/csF80.png" rel="nofollow"><img src="http://i.stack.imgur.com/csF80.png" alt="RPy2 curve plot image 2"></a></p>
<hr>
<p>UPDATE</p>
<p>Specifically to the OP's issue, to plot the inverse logit curve with the Python variables, <em>a</em> and <em>b</em>, derived from model coefficients, consider concatenating them to the <code>robjects.r()</code> string parameter:</p>
<pre><code>import rpy2.robjects as ro
ro.r('invlogit <- function(x){ + exp(x)/(1 + exp(x)) }')
p = ro.r('curve(invlogit({0} + {1}*x), add = TRUE)'.format(a,b))
</code></pre>
| 1 | 2016-09-28T02:31:05Z | [
"python",
"curve",
"rpy2"
]
|
convert tripple pointer array to numpy array or list python | 39,712,181 | <p>I'm speeding up my program by integrating c code into my python program. I'm using <code>ctypes</code> to execute functions in c from python. </p>
<p>c program: </p>
<pre><code> #include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#define MAX_ITERATIONS 1000
static void calculate_pixel(int x, int y, int size, float*** img){
    int i = 0;
    float a = 0.0, b = 0.0;
    while(a*a+b*b <= 4.0 && i < MAX_ITERATIONS){
        float temp_a = a;
        a = a*a - b*b + x;
        b = 2*temp_a*b + y;  /* use the pre-update value of a */
        i++;
    }
    float temp[3];
    memset(&temp, 0, sizeof(float)*3);
    if(i != MAX_ITERATIONS){
        float brightness = logf(1.75 + i - logf(log(a*a + b*b))) / log(MAX_ITERATIONS);
        temp[0] = brightness;
        temp[1] = brightness;
        temp[2] = 1;
    }
    memcpy(img[x][y], &temp, sizeof(float)*3);
}

float*** mandel(int size){
    ssize_t len = (size)*sizeof(float*);
    float*** img = malloc(len);
    int x, y;
    for(x = 0; x < size; x++){
        img[x] = malloc(len);
        for(y = 0; y < size; y++){
            img[x][y] = malloc(sizeof(float)*3);
            calculate_pixel(x, y, size, img);
        }
    }
    return img;
}
</code></pre>
<p>Python program:</p>
<pre><code> from ctypes import *
import matplotlib.pyplot as pl
size = 1000
lib = './mandelbrot3.so'
dll = cdll.LoadLibrary(lib)
dll.mandel.argtypes = [c_int]
#what i get in return from the c program
dll.mandel.restype = POINTER(POINTER(POINTER(c_float*3)*size)*size)
#calling function "mandel" in c program
res = dll.mandel(size)
# printing the first value works this way (the ctypes attribute is .contents, not .contains)
print(res.contents[0].contents[0].contents[0])
# creating a picture with pyplot and inserting the array this way
# does not work because these are pointers
pl.imshow(res.contents)
pl.show()
</code></pre>
<p><code>dll.mandel.restype</code>is a tripple pointer with size: 1000*1000*3. This is creates a picture where the size is 1000*1000 pixels and the 3 floats is the rgb values. </p>
<p>so my problem is that what i get back from the c program is just a tripple pointer. And i need to be able to convert it over to a normal 3d python list or numpy array. Is there a better way to do this than just reading all the elements in the pointer array with a for loop and inserting them in to the new list or numpy array?</p>
| 0 | 2016-09-26T20:53:00Z | 39,863,021 | <p>I'm not sure about this triple-pointer business, but getting arrays from C to numpy works like this:</p>
<p>Assuming you have already received a ctypes <code>POINTER</code> to your image buffer, and you know the <code>height</code>, <code>width</code> and data type...</p>
<p>You can make a <code>ctypes</code> array the same size as your image from your buffer like so:</p>
<pre><code>from ctypes import c_float, addressof

arr = (c_float * height * width * 3).from_address(addressof(data.contents))
</code></pre>
<p>In this case <code>data</code> is the <code>POINTER</code>.</p>
<p>Then create an <code>ndarray</code> from it using the <code>buffer</code> kwarg:</p>
<pre><code>import numpy as np
img = np.ndarray(buffer=arr, dtype=np.float32, shape=(height, width, 3))
</code></pre>
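<p>An alternative sketch with <code>numpy.ctypeslib</code>, which wraps a ctypes pointer directly (same assumptions about <code>data</code>, <code>height</code> and <code>width</code>):</p>
<pre><code>import numpy as np

img = np.ctypeslib.as_array(data, shape=(height, width, 3))
</code></pre>
<p>Either way, note this only works if the C side returns one contiguous buffer — the question's per-row <code>malloc</code> calls would need to be replaced by a single allocation first.</p>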
| 0 | 2016-10-04T22:58:24Z | [
"python",
"c",
"arrays",
"python-3.x",
"ctypes"
]
|
In Python (without Pickle), Can Arbitrary JSON strings be Constructed Dynamically? | 39,712,225 | <p>In Python (2.7), is there a way to dynamically generate arbitrary JSON strings without using Pickle? </p>
<p>(To be clear, this inquiry dwells on constructing the JSON payload...long before invoking json.dumps().)</p>
<p>WHY?</p>
<p>Imagine you are undertaking a project that will involve interacting with several complex APIs (using JSON). </p>
<p>Each instance of EVERY JSON object that will be built and POSTed to the API(s) could well require a unique hierarchical structure (many, many levels of objects...a blizzard of ':', ',', '[]', '{}' ... you get the idea).</p>
<p>Other than instantiating a Python object and serializing it via Pickle, Is there an easier way to construct complex, arbitrary JSON strings that doesn't rely on hand/hard-coding a structure in place; such as this [poor] example:</p>
<pre><code>json_string = '{' + '"field_a": [ {' + '"id":' + id + '", "category":' + category . . . + '] }'
</code></pre>
<p>Your thoughts? Suggestions? Comments?</p>
| -1 | 2016-09-26T20:55:22Z | 39,712,675 | <p>The <strong>default JSON encoder</strong> is capable of creating JSON from some basic types (e.g. int, str, list, dict), but it can't handle custom objects.</p>
<p>For example:</p>
<pre><code>json.dumps([1, 2, 3])  # this works: '[1, 2, 3]'

class A(object):
    def __init__(self):
        self.value1 = 1
        self.value2 = {'foo': ['bar', 'baz']}

json.dumps(A())  # this fails
</code></pre>
<h2>Custom JSON Encoders</h2>
<p>Luckily, json allows creating custom JSON encoders.</p>
<p>A custom JSON encoder does not have to create JSON. It just has to convert an object into something that can be converted to JSON!</p>
<p>So, it is enough to create a dictionary.</p>
<p>The simplest solution is to use the <code>__dict__</code> - that will work for most custom classes, because <code>__dict__</code> contains a dictionary of all member variables, and dictionaries can be encoded into JSON by the built-in encoder:</p>
<pre><code>class DictJsonEncoder(json.JSONEncoder):
    def default(self, obj):
        return obj.__dict__

json.dumps(A(), cls=DictJsonEncoder)  # '{"value2": {"foo": ["bar", "baz"]}, "value1": 1}'
</code></pre>
<h2>A More Flexible Solution</h2>
<p>For more flexibility, don't always use <code>__dict__</code> (but use it as the default).</p>
<p>A common pattern is to implement <code>__json__</code> method which does what is needed and use that when available:</p>
<pre><code>class MyJsonEncoder(json.JSONEncoder):
    def default(self, obj):
        if hasattr(obj, '__json__'):
            return obj.__json__()
        else:
            return super(MyJsonEncoder, self).default(obj)
</code></pre>
<p>Then implement the default <code>__json__</code> method which returns <code>__dict__</code>:</p>
<pre><code>class JsonCapable(object):
    def __json__(self):
        return self.__dict__

class A(JsonCapable):
    def __init__(self):
        self.value1 = 1
        self.value2 = {'foo': ['bar', 'baz']}
    # For special needs, override __json__ here

json.dumps(A(), cls=MyJsonEncoder)  # '{"value2": {"foo": ["bar", "baz"]}, "value1": 1}'
</code></pre>
<hr>
<p>Alternatively, the encoder could use <code>__json__</code> if it finds it and <code>__dict__</code> as backup. Then there is no need for a <code>JsonCapable</code> class.</p>
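<p>As a one-liner variant, <code>json.dumps</code> also accepts a <code>default=</code> callable, which avoids defining an encoder class for simple cases:</p>
<pre><code>json.dumps(A(), default=lambda o: o.__dict__)
</code></pre>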
| 1 | 2016-09-26T21:25:57Z | [
"python",
"json",
"python-2.7",
"pickle"
]
|
How can i call URL's from text file one by one | 39,712,304 | <p>I want to parse on one website with some URL's and i created a text file has all links that i want to parse. How can i call this URL's from the text file one by one on python program. </p>
<pre><code>from bs4 import BeautifulSoup
import requests
import json

soup = BeautifulSoup(requests.get("https://www.example.com").content, "html.parser")
for d in soup.select("div[data-selenium=itemDetail]"):
    url = d.select_one("h3[data-selenium] a")["href"]
    upc = BeautifulSoup(requests.get(url).content, "html.parser").select_one("span.upcNum")
    if upc:
        data = json.loads(d["data-itemdata"])
        text = (upc.text.strip())
        print(upc.text)
        outFile = open('/Users/Burak/Documents/new_urllist.txt', 'a')
        outFile.write(str(data))
        outFile.write(",")
        outFile.write(str(text))
        outFile.write("\n")
        outFile.close()
</code></pre>
<p><strong>urllist.txt</strong></p>
<pre><code>https://www.example.com/category/1
category/2
category/3
category/4
</code></pre>
<p>Thanks in advance</p>
| -2 | 2016-09-26T21:00:30Z | 39,712,397 | <p>Use a context manager :</p>
<pre><code>with open("/file/path") as f:
urls = [u.strip('\n') for u in f.readlines()]
</code></pre>
<p>You obtain your list with all urls in your file and can then call them as you like.</p>
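<p>Since your urllist.txt mixes one absolute URL with relative paths, a sketch that normalizes them before requesting (building on the question's imports; Python 2.7 shown):</p>
<pre><code>from urlparse import urljoin  # urllib.parse on Python 3

base = 'https://www.example.com/'
full_urls = [urljoin(base, u) for u in urls]
for url in full_urls:
    soup = BeautifulSoup(requests.get(url).content, "html.parser")
    # ... run the same scraping loop per page
</code></pre>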
| 0 | 2016-09-26T21:06:54Z | [
"python",
"parsing",
"web-scraping",
"beautifulsoup"
]
|
Is there a way with pygame to get keyboard input when the window is minimized? | 39,712,307 | <p>Title says it all. Is there a way with pygame to get keyboard input when the window is minimized? </p>
<p>Thanks.</p>
| 0 | 2016-09-26T21:00:43Z | 39,712,596 | <p>Yes, although you have to have it focused.</p>
<pre><code>import pygame

pygame.init()
pygame.display.iconify()
while 1:
    for event in pygame.event.get():
        if event.type == pygame.KEYDOWN:
            print(event.key)
</code></pre>
<p>Otherwise it seems to not be possible according to <a href="http://stackoverflow.com/questions/9815995/read-console-input-using-pygame">this answer</a> and <a href="http://stackoverflow.com/questions/32567480/use-keyboard-without-calling-pygame-display-set-mode">this answer</a></p>
| 0 | 2016-09-26T21:20:00Z | [
"python",
"pygame"
]
|
Python Jinja2 macro whitespace issues | 39,712,338 | <p>This is kind of extension to my other question <a href="http://stackoverflow.com/questions/39626767/python-jinja2-call-to-macro-results-in-undesirable-newline">Python Jinja2 call to macro results in (undesirable) newline</a>. </p>
<p>My python program is</p>
<pre><code>import jinja2
template_env = jinja2.Environment(trim_blocks=True, lstrip_blocks=True, autoescape=False, undefined=jinja2.StrictUndefined)
template_str = '''
{% macro print_car_review(car) %}
{% if car.get('review') %}
{{'Review: %s' % car['review']}}
{% endif %}
{% endmacro %}
hi there
car {{car['name']}} reviews:
{{print_car_review(car)}}
2 spaces before me
End of car details
'''
ctx_car_with_reviews = {'car':{'name':'foo', 'desc': 'foo bar', 'review':'good'}}
ctx_car_without_reviews = {'car':{'name':'foo', 'desc': 'foo bar'}}
print 'Output for car with reviews:'
print template_env.from_string(template_str).render(ctx_car_with_reviews)
print 'Output for car without reviews:'
print template_env.from_string(template_str).render(ctx_car_without_reviews)
</code></pre>
<p>Actual output:</p>
<pre><code>Output for car with reviews:
hi there
car foo reviews:
Review: good
2 spaces before me
End of car details
Output for car without reviews:
hi there
car foo reviews:
2 spaces before me
End of car details
</code></pre>
<p>Expected output:</p>
<pre><code>Output for car with reviews:
hi there
car foo reviews:
Review: good
2 spaces before me
End of car details
Output for car without reviews:
hi there
car foo reviews:
2 spaces before me
End of car details
</code></pre>
<p>What is undesirable (per car) is an extra newline at the beginning and an extra line before the line '2 spaces before me'</p>
<p>Thanks Rags</p>
| 1 | 2016-09-26T21:02:35Z | 39,720,257 | <p>Complete edit of answer. I see what you're going for now, and I have a working solution (I added in one <code>if</code> statement to your template). Here is what I used, with all other lines of your code unchanged:</p>
<pre><code>template_str = '''{% macro print_car_review(car) %}
{% if car.get('review') %}
{{'Review: %s' % car['review']}}
{% endif %}
{% endmacro %}
hi there
car {{car['name']}} reviews:
{% if 'review' in car %}
{{print_car_review(car)-}}
{% endif %}
2 spaces before me
End of car details
'''
</code></pre>
<p>The spacing is exactly how I'm running it on my end, giving exactly the desired output you put in your question. I confess that I am myself confused by one point, which is that I had to move the first line, <code>{% macro print_car_review(car) %}</code>, up onto the same line as <code>template_str = '''</code>. Based on my understanding of the docs, setting <code>trim_blocks=True</code> should make that unnecessary, but I must be understanding it wrong.</p>
<p>Hopefully that gets you what you needed.</p>
| 0 | 2016-09-27T08:52:55Z | [
"python",
"macros",
"whitespace",
"jinja2"
]
|
Why is my django form validating when I am not calling is_valid | 39,712,398 | <p>I have a form whose action points to my home view; it contains two buttons, save and cancel. When run on the local dev server (manage.py runserver) this works fine. When I pushed this to production, the cancel button returns form validation errors, despite my not calling the is_valid method.</p>
<p>Here is the view:</p>
<pre><code>def home(request):
    #uses home.html
    if request.method == 'POST':
        #Figure out which button was pressed
        #Cancel Button - Back to home
        if request.POST.get("cancel"):
            #return HttpResponseRedirect(request.META.get('HTTP_REFERER'))
            footer = request
            lineitems = Budget.build(request.user)
            c = {'lineitems': lineitems,
                 'footer': footer,}
            return render(request, 'home.html', c)
        #Save button on config.html IncomeForm/Expenses Form
        if request.POST.get("config_save"):
            #ExpensesForm submitted
            if 'expenseName' in request.POST:
                form = ExpensesForm(request.POST)
                if form.is_valid():
                    form.save()
                else:
                    temp = 'config.html'
                    footer = 'Expense Form Invalid'
                    c = {'form': form,
                         'footer': footer,}
                    return render(request, temp, c)
            #IncomeForm submitted
            else:
                form = IncomeForm(request.POST)
                if form.is_valid():
                    form.save()
                else:
                    form = IncomeForm(request.POST)
                    temp = 'config.html'
                    footer = 'Form Invalid'
                    c = {'form': form,
                         'footer': footer,}
                    return render(request, temp, c)
        #Use Budget Class to populate a table in template
        Budget.update_data({'months': 12,
                            'user': request.user})
        temp = 'home.html'
        footer = '* Line Modified'
        lineitems = Budget.build(request.user)
        c = {'lineitems': lineitems,
             'footer': footer,}
        return render(request, temp, c)
    # if a GET (or any other method) we'll load the budget
    else:
        footer = '* Line item modified'
        footer = request
        Budget.update_data({'user': request.user,
                            'months': 12})
        lineitems = Budget.build(request.user)
        c = {'lineitems': lineitems,
             'footer': footer,}
        return render(request, 'home.html', c)
</code></pre>
<p>Here is the template:</p>
<pre><code>{% extends "base.html" %}
{% load bootstrap3 %}
{% block title %}
<h1>Add {{ itemtype }}</h1>
{% endblock %}
{% block content %}
<form action="{% url 'home' %}" method="post">
{% csrf_token %}
{{ form.as_p }}
<div class="btn-group">
<input type="submit" name="config_save" value="Save" class="btn btn-primary"/>
<input type="submit" name="cancel" value="Cancel" class="btn btn-default"/>
</div>
</form>
{% endblock %}
{% block footer %}
{{ footer }}
{% endblock %}
</code></pre>
<p>EDIT*</p>
<p>I was able to recreate this issue in the dev environment when I replaced</p>
<pre><code>{{ form.as_p }}
</code></pre>
<p>With</p>
<pre><code>{% bootstrap_form form layout='vertical' %}
</code></pre>
<p>But unfortunately neither of these worked when run on the apache/wsgi server.</p>
<p>Here are my forms. Note I also tried removing the class:form-control and it made no difference. I have another form and view that behaves almost identically to this (cancel is handled with an else, the form is a ModelForm) and works, the only difference being there are no date fields. To rule that out as the issue, I excluded the date fields but still had the same problem.</p>
<pre><code>#Edit form to add/edit Expenses and Bills
class ExpensesForm(forms.ModelForm):
class Meta:
model = Items
exclude = ('skiplst',)
widgets = {'user': forms.HiddenInput(),
'itemType': forms.HiddenInput(),
'itemName': forms.TextInput(attrs={'class':'form-control',}),
'category': forms.Select(attrs={'class':'form-control',}),
'itemAmount': forms.NumberInput(attrs={'class':'form-control',}),
'payCycle': forms.Select(attrs={'class':'form-control',}),
'itemNote': forms.TextInput(attrs={'class':'form-control',}),
'nextDueDate': forms.DateInput(attrs={'name': 'date',
'class':'form-control'}),
'endDate': forms.DateInput(attrs={'name': 'date',
'class':'form-control'})}
#Edit form to add/edit Income Sources
class IncomeForm(forms.ModelForm):
class Meta:
model = Items
exclude = ('category','skiplst')
widgets = {'user': forms.HiddenInput(),
'itemType': forms.HiddenInput(),
'itemName': forms.TextInput(attrs={'class':'form-control',}),
'itemAmount': forms.NumberInput(attrs={'class':'form-control',}),
'payCycle': forms.Select(attrs={'class':'form-control',}),
'itemNote': forms.TextInput(attrs={'class':'form-control',}),
'nextDueDate': forms.DateInput(attrs={'name': 'date',
'class':'form-control'}),
'endDate': forms.DateInput(attrs={'name': 'date',
'class':'form-control'})}
</code></pre>
| 0 | 2016-09-26T21:06:56Z | 39,724,255 | <p>It looks like (slightly hard to tell as the formatting/indentation is off slightly) when your form is submitting the cancel value, it's submitting <code>Cancel</code>, and you're checking for <code>cancel</code>, so that logic is never executed. </p>
| 0 | 2016-09-27T12:08:07Z | [
"python",
"django",
"python-2.7",
"django-1.9"
]
|
sum of 'float64' column type in pandas return float instead of numpy.float64 | 39,712,435 | <p>I have a dataframe in pandas. I am taking sum of a column of a dataframe as:</p>
<pre><code>x = data['col1'].sum(axis=0)
print(type(x))
</code></pre>
<p>I have checked that <code>col1</code> column in <code>data</code> dataframe is of type <code>float64</code>. But the type of <code>x</code> is <code><class 'float'></code>. I was expecting the type of <code>x</code> to be <code>numpy.float64</code>.</p>
<p>What is it that I am missing here?</p>
<p>pandas version - '0.18.0', numpy version - '1.10.4', python version - 3.5.2</p>
| 4 | 2016-09-26T21:09:26Z | 39,720,343 | <p>This seems to be from the way that pandas is handling nans. When I set <code>skipna=False</code> in the <code>sum</code> method I get the <code>numpy</code> datatype</p>
<pre><code>import pandas as pd
import numpy as np
type(pd.DataFrame({'col1':[.1,.2,.3,.4]}).col1.sum(skipna=True))
#float
type(pd.DataFrame({'col1':[.1,.2,.3,.4]}).col1.sum(skipna=False))
#numpy.float64
</code></pre>
<p>The <code>sum</code> method is calling <code>nansum</code> from <code>pandas/core/nanops.py</code>, which produces the same behaviours. </p>
<pre><code>from pandas.core.nanops import nansum
type(sum(np.arange(10.0)))
# numpy.float64
type(nansum(np.arange(10.0)))
# float
</code></pre>
<p>Why <code>nansum</code> is converting from <code>numpy.float64</code> to <code>float</code>, I couldn't tell you. I've looked at the <code>nansum</code> source code, but none of the functions it itself calls seem to be producing that change.</p>
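<p>If you do need the numpy scalar type back, a minimal workaround sketch (an explicit cast, not a fix of the underlying behaviour):</p>
<pre><code>import numpy as np
import pandas as pd

s = pd.Series([.1, .2, .3, .4])
x = s.sum()          # may come back as a plain Python float
x64 = np.float64(x)  # the explicit cast restores the numpy scalar type
print(type(x64))     # <class 'numpy.float64'>
</code></pre>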
| 3 | 2016-09-27T08:57:17Z | [
"python",
"pandas",
"numpy"
]
|
Django: Which is the better way to find (search for) an equal data object? | 39,712,490 | <p>I am implementing search functions in <code>Django</code>; which of these would be better?</p>
<pre><code>def same_cart_item_in_cart(cart, new_cart_item):
already_exist_cart_item = cart.cartitem_set.filter(
Q(variation__product=new_cart_item.variation.product),
Q(variation=new_cart_item.variation),
Q(width=new_cart_item.width),
Q(height=new_cart_item.height),
).first()
return already_exist_cart_item # It can be None
</code></pre>
<p>Override <code>CartItem</code>'s <code>__eq__</code> first.</p>
<pre><code>class CartItem(TimeStampedModel):
cart = models.ForeignKey("Cart")
variation = models.ForeignKey(Variation)
    # mural width
width = models.PositiveIntegerField(
default=1,
validators=[MinValueValidator(1)],
)
    # mural height
height = models.PositiveIntegerField(
default=1,
validators=[MinValueValidator(1)],
)
quantity = models.PositiveIntegerField(
default=1,
validators=[MinValueValidator(1)],
)
class Meta:
ordering = ('-created',)
def __str__(self):
return str(self.variation.product) + ' - ' + str(self.variation)
def __eq__(self, other):
return (
self.variation.product == other.variation.product and
self.variation == other.variation and
self.width == other.width and
self.height == other.height
)
</code></pre>
<p>and then,</p>
<pre><code>def same_cart_item_in_cart(cart, new_cart_item):
for cart_item in cart.cartitem_set.all():
if cart_item == new_cart_item:
return cart_item
return None
</code></pre>
 | 0 | 2016-09-26T21:13:31Z | 39,712,791 | <p>As I said in the comment, option 1 is better.</p>
<p>And if you are trying to save a new instance and check it before saving, Django has this built in. You can add <a href="https://docs.djangoproject.com/en/1.10/ref/models/options/#unique-together" rel="nofollow"><code>unique_together</code></a> to your model's <code>Meta</code>, and it will raise an error if you try to save a partially identical object.</p>
<p><strong>UPDATE</strong></p>
<p>You can further optimize your code by <a href="https://docs.djangoproject.com/en/1.10/topics/db/optimization/#use-foreign-key-values-directly" rel="nofollow">using foreign key values directly</a>.</p>
<p>So your code would be </p>
<pre><code>def same_cart_item_in_cart(cart, new_cart_item):
already_exist_cart_item = cart.cartitem_set.filter(
Q(variation_id=new_cart_item.variation_id),
Q(width=new_cart_item.width),
Q(height=new_cart_item.height),
).first()
return already_exist_cart_item
</code></pre>
<p>I removed <code>Q(variation__product=...)</code>: if the <code>Variation</code> instance is the same, you don't need to query on its fields. I also changed <code>Q(variation=...)</code> to <code>Q(variation_id=...)</code>.</p>
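<p>A hedged sketch of the <code>unique_together</code> idea mentioned above, applied to the <code>CartItem</code> model from the question (adjust the field tuple to whatever actually defines "the same item"):</p>
<pre><code>class CartItem(TimeStampedModel):
    # ... fields as in the question ...
    class Meta:
        ordering = ('-created',)
        # The database will now reject a second row with the same combination:
        unique_together = ('cart', 'variation', 'width', 'height')
</code></pre>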
| 0 | 2016-09-26T21:34:41Z | [
"python",
"django"
]
|
calculate the total value dict | 39,712,512 | <p>I have a problem: how can I calculate the totals for the same keys across the nested dicts? I have a dict:</p>
<pre><code>{'learning': {'DOC1': 0.14054651081081646,
'DOC2': 0,
'DOC3': 0.4684883693693881},
'life': {'DOC1': 0.14054651081081646,
'DOC2': 0.20078072972973776,
'DOC3': 0}
}
</code></pre>
<p>and I hope the results as:</p>
<pre><code>{'learning life': {
'DOC1': DOC1 in learning + DOC1 in life,
'DOC2': DOC2 in learning + DOC2 in life,
'DOC3': DOC3 in learning + DOC3 in life,}}
</code></pre>
| 0 | 2016-09-26T21:14:44Z | 39,712,576 | <p>Pretty simple:</p>
<pre><code>for k in d['learning']:
print(d['learning'][k] + d['life'][k])
</code></pre>
<p>... with <code>d</code> being your <code>dict</code> and no error checking whatsoever (does the key exist, is it really a number, etc.).</p>
<hr>
<p>As a whole code snippet with a comprehension:</p>
<pre><code>d = {'learning': {'DOC1': 0.14054651081081646,
'DOC2': 0,
'DOC3': 0.4684883693693881},
'life': {'DOC1': 0.14054651081081646,
'DOC2': 0.20078072972973776,
'DOC3': 0}
}
d['sum'] = [d['learning'][k] + d['life'][k]
for k in d['learning']]
print(d)
</code></pre>
<p>See <a href="http://ideone.com/BDmyrw" rel="nofollow"><strong>a demo on ideone.com</strong></a>.</p>
| 1 | 2016-09-26T21:18:32Z | [
"python",
"dictionary",
"tf-idf"
]
|
calculate the total value dict | 39,712,512 | <p>I have a problem: how can I calculate the totals for the same keys across the nested dicts? I have a dict:</p>
<pre><code>{'learning': {'DOC1': 0.14054651081081646,
'DOC2': 0,
'DOC3': 0.4684883693693881},
'life': {'DOC1': 0.14054651081081646,
'DOC2': 0.20078072972973776,
'DOC3': 0}
}
</code></pre>
<p>and I hope the results as:</p>
<pre><code>{'learning life': {
'DOC1': DOC1 in learning + DOC1 in life,
'DOC2': DOC2 in learning + DOC2 in life,
'DOC3': DOC3 in learning + DOC3 in life,}}
</code></pre>
| 0 | 2016-09-26T21:14:44Z | 39,712,755 | <p>You can use a dictionary comprehension to add all the numbers nested in a dictionary <code>d</code> just like so:</p>
<pre><code>totals = {k: sum(v.get(k, 0) for v in d.values())   # dict of totals
          for k in next(iter(d.values()))}           # works on Python 2 and 3, unlike d.values()[0]
</code></pre>
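<p>And a sketch producing the exact shape asked for in the question (a single combined entry keyed by DOC name; note the joined key depends on the dict's key order):</p>
<pre><code>d = {'learning': {'DOC1': 0.14, 'DOC2': 0, 'DOC3': 0.47},
     'life':     {'DOC1': 0.14, 'DOC2': 0.20, 'DOC3': 0}}
combined = {' '.join(d): {k: sum(v.get(k, 0) for v in d.values())
                          for k in next(iter(d.values()))}}
print(combined)  # e.g. {'learning life': {'DOC1': 0.28, 'DOC2': 0.2, 'DOC3': 0.47}}
</code></pre>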
| 0 | 2016-09-26T21:31:49Z | [
"python",
"dictionary",
"tf-idf"
]
|
Create groups/classes based on conditions within columns | 39,712,602 | <p>I need help transforming my data so I can read through transaction data.</p>
<p><strong>Business Case</strong></p>
<p>I'm trying to group together some related transactions to create some groups or classes of events. This data set represents workers going out on various leaves of absence events. I want to create one class of leaves based on any transaction falling within 365 days of the leave event class. For charting trends, I want to number the classes so I get a sequence/pattern.</p>
<p>My code allows me to see when the very first event occurred, and it can identify when a new class starts, but it doesn't bucket each transaction into a class.</p>
<p><strong>Requirements:</strong></p>
<ul>
<li>Tag all rows based on what leave class they fall into.</li>
<li>Number each Unique Leave Event. Using this example, indices 0, 1, and 2 would be Unique Leave Event 2, index 3 would be Unique Leave Event 1, index 4 would be Unique Leave Event 2, etc. (see the Desired Output column below).</li>
</ul>
<p>I added in a column for the desired output, labeled as "Desired Output". Note, there can be many more rows/events per person; and there can be many more people.</p>
<p><strong>Some Data</strong></p>
<pre><code>import pandas as pd
data = {'Employee ID': ["100", "100", "100","100","200","200","200","300"],
'Effective Date': ["2016-01-01","2015-06-05","2014-07-01","2013-01-01","2016-01-01","2015-01-01","2013-01-01","2014-01"],
'Desired Output': ["Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 1"]}
df = pd.DataFrame(data, columns=['Employee ID','Effective Date','Desired Output'])
</code></pre>
<p><strong>Some Code I've Tried</strong></p>
<pre><code>df['Effective Date'] = df['Effective Date'].astype('datetime64[ns]')
df['EmplidShift'] = df['Employee ID'].shift(-1)
df['Effdt-Shift'] = df['Effective Date'].shift(-1)
df['Prior Row in Same Emplid Class'] = "No"
df['Effdt Diff'] = df['Effdt-Shift'] - df['Effective Date']
df['Effdt Diff'] = (pd.to_timedelta(df['Effdt Diff'], unit='d') + pd.to_timedelta(1,unit='s')).astype('timedelta64[D]')
df['Cumul. Count'] = df.groupby('Employee ID').cumcount()
df['Groupby'] = df.groupby('Employee ID')['Cumul. Count'].transform('max')
df['First Row Appears?'] = ""
df['First Row Appears?'][df['Cumul. Count'] == df['Groupby']] = "First Row"
df['Prior Row in Same Emplid Class'][ df['Employee ID'] == df['EmplidShift']] = "Yes"
df['Prior Row in Same Emplid Class'][ df['Employee ID'] == df['EmplidShift']] = "Yes"
df['Effdt > 1 Yr?'] = ""
df['Effdt > 1 Yr?'][ ((df['Prior Row in Same Emplid Class'] == "Yes" ) & (df['Effdt Diff'] < -365)) ] = "Yes"
df['Unique Leave Event'] = ""
df['Unique Leave Event'][ (df['Effdt > 1 Yr?'] == "Yes") | (df['First Row Appears?'] == "First Row") ] = "Unique Leave Event"
df
</code></pre>
| 7 | 2016-09-26T21:20:28Z | 39,791,114 | <p>This is a bit clunky but it yields the right output at least for your small example:</p>
<pre><code>import pandas as pd
data = {'Employee ID': ["100", "100", "100","100","200","200","200","300"],
'Effective Date': ["2016-01-01","2015-06-05","2014-07-01","2013-01-01","2016-01-01","2015-01-01","2013-01-01","2014-01-01"],
'Desired Output': ["Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 1"]}
df = pd.DataFrame(data, columns=['Employee ID','Effective Date','Desired Output'])
df["Effective Date"] = pd.to_datetime(df["Effective Date"])
df = df.sort_values(["Employee ID","Effective Date"]).reset_index(drop=True)
for i,_ in df.iterrows():
df.ix[0,"Result"] = "Unique Leave Event 1"
if i < len(df)-1:
if df.ix[i+1,"Employee ID"] == df.ix[i,"Employee ID"]:
if df.ix[i+1,"Effective Date"] - df.ix[i,"Effective Date"] > pd.Timedelta('365 days'):
df.ix[i+1,"Result"] = "Unique Leave Event " + str(int(df.ix[i,"Result"].split()[-1])+1)
else:
df.ix[i+1,"Result"] = df.ix[i,"Result"]
else:
df.ix[i+1,"Result"] = "Unique Leave Event 1"
</code></pre>
<p>Note that this code assumes that the first row always contains the string <code>Unique Leave Event 1</code>.</p>
<p>EDIT: Some explanation.</p>
<p>First I convert the dates to datetime format and then reorder the dataframe such that the dates for every Employee ID are ascending.</p>
<p>Then I iterate over the rows of the frame using the built-in iterator <code>iterrows</code>. The <code>_</code> in <code>for i,_</code> is merely a placeholder for the second variable, which I do not use because the iterator gives back both row numbers and rows; I only need the numbers here.</p>
<p>In the iterator I'm doing row-wise comparisons so by default I fill in the first row by hand and then assign to the <code>i+1</code>-th row. I do it like this because I know the value of the first row but not the value of the last row. Then I compare the <code>i+1</code>-th row with the <code>i</code>-th row within an <code>if</code>-safeguard because <code>i+1</code> would give an index-error on the last iteration.</p>
<p>In the loop I first check if the <code>Employee ID</code> has changed between the two rows. If it has not then I compare the dates of the two rows and see if they are apart more than 365 days. If this is the case I read the string <code>"Unique Leave Event X"</code> from the <code>i</code>-th row, increase the number by one and write it in the <code>i+1</code>-row. If the dates are closer I just copy the string from the previous row.</p>
<p>If the <code>Employee ID</code> does change on the other hand I just write <code>"Unique Leave Event 1"</code> to start over.</p>
<p>Note 1: <code>iterrows()</code> has no options to set so I can't iterate only over a subset.</p>
<p>Note 2: Always iterate using one of the built-in iterators and only iterate if you can't solve the problem otherwise.</p>
<p>Note 3: When assigning values in an iteration always use <code>ix</code>, <code>loc</code>, or <code>iloc</code>.</p>
| 3 | 2016-09-30T12:04:16Z | [
"python",
"pandas"
]
|
Create groups/classes based on conditions within columns | 39,712,602 | <p>I need help transforming my data so I can read through transaction data.</p>
<p><strong>Business Case</strong></p>
<p>I'm trying to group together some related transactions to create some groups or classes of events. This data set represents workers going out on various leaves of absence events. I want to create one class of leaves based on any transaction falling within 365 days of the leave event class. For charting trends, I want to number the classes so I get a sequence/pattern.</p>
<p>My code allows me to see when the very first event occurred, and it can identify when a new class starts, but it doesn't bucket each transaction into a class.</p>
<p><strong>Requirements:</strong></p>
<ul>
<li>Tag all rows based on what leave class they fall into.</li>
<li>Number each Unique Leave Event. Using this example, indices 0, 1, and 2 would be Unique Leave Event 2, index 3 would be Unique Leave Event 1, index 4 would be Unique Leave Event 2, etc. (see the Desired Output column below).</li>
</ul>
<p>I added in a column for the desired output, labeled as "Desired Output". Note, there can be many more rows/events per person; and there can be many more people.</p>
<p><strong>Some Data</strong></p>
<pre><code>import pandas as pd
data = {'Employee ID': ["100", "100", "100","100","200","200","200","300"],
'Effective Date': ["2016-01-01","2015-06-05","2014-07-01","2013-01-01","2016-01-01","2015-01-01","2013-01-01","2014-01"],
'Desired Output': ["Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 1"]}
df = pd.DataFrame(data, columns=['Employee ID','Effective Date','Desired Output'])
</code></pre>
<p><strong>Some Code I've Tried</strong></p>
<pre><code>df['Effective Date'] = df['Effective Date'].astype('datetime64[ns]')
df['EmplidShift'] = df['Employee ID'].shift(-1)
df['Effdt-Shift'] = df['Effective Date'].shift(-1)
df['Prior Row in Same Emplid Class'] = "No"
df['Effdt Diff'] = df['Effdt-Shift'] - df['Effective Date']
df['Effdt Diff'] = (pd.to_timedelta(df['Effdt Diff'], unit='d') + pd.to_timedelta(1,unit='s')).astype('timedelta64[D]')
df['Cumul. Count'] = df.groupby('Employee ID').cumcount()
df['Groupby'] = df.groupby('Employee ID')['Cumul. Count'].transform('max')
df['First Row Appears?'] = ""
df['First Row Appears?'][df['Cumul. Count'] == df['Groupby']] = "First Row"
df['Prior Row in Same Emplid Class'][ df['Employee ID'] == df['EmplidShift']] = "Yes"
df['Prior Row in Same Emplid Class'][ df['Employee ID'] == df['EmplidShift']] = "Yes"
df['Effdt > 1 Yr?'] = ""
df['Effdt > 1 Yr?'][ ((df['Prior Row in Same Emplid Class'] == "Yes" ) & (df['Effdt Diff'] < -365)) ] = "Yes"
df['Unique Leave Event'] = ""
df['Unique Leave Event'][ (df['Effdt > 1 Yr?'] == "Yes") | (df['First Row Appears?'] == "First Row") ] = "Unique Leave Event"
df
</code></pre>
 | 7 | 2016-09-26T21:20:28Z | 39,903,229 | <p>You can do this without having to loop or iterate through your dataframe. Per <a href="http://stackoverflow.com/a/10964938/5066140">Wes McKinney</a>, you can use <code>.apply()</code> with a groupby object, defining a function to apply to each group. If you use this with <code>.shift()</code> (<a href="http://stackoverflow.com/a/22082596/5066140">like here</a>) you can get your result without using any loops.</p>
<p><strong>Terse example:</strong></p>
<pre><code># Group by Employee ID
grouped = df.groupby("Employee ID")
# Define function
def get_unique_events(group):
# Convert to date and sort by date, like @Khris did
group["Effective Date"] = pd.to_datetime(group["Effective Date"])
group = group.sort_values("Effective Date")
event_series = (group["Effective Date"] - group["Effective Date"].shift(1) > pd.Timedelta('365 days')).apply(lambda x: int(x)).cumsum()+1
return event_series
event_df = pd.DataFrame(grouped.apply(get_unique_events).rename("Unique Event")).reset_index(level=0)
df = pd.merge(df, event_df[['Unique Event']], left_index=True, right_index=True)
df['Output'] = df['Unique Event'].apply(lambda x: "Unique Leave Event " + str(x))
df['Match'] = df['Desired Output'] == df['Output']
print(df)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code> Employee ID Effective Date Desired Output Unique Event \
3 100 2013-01-01 Unique Leave Event 1 1
2 100 2014-07-01 Unique Leave Event 2 2
1 100 2015-06-05 Unique Leave Event 2 2
0 100 2016-01-01 Unique Leave Event 2 2
6 200 2013-01-01 Unique Leave Event 1 1
5 200 2015-01-01 Unique Leave Event 2 2
4 200 2016-01-01 Unique Leave Event 2 2
7 300 2014-01 Unique Leave Event 1 1
Output Match
3 Unique Leave Event 1 True
2 Unique Leave Event 2 True
1 Unique Leave Event 2 True
0 Unique Leave Event 2 True
6 Unique Leave Event 1 True
5 Unique Leave Event 2 True
4 Unique Leave Event 2 True
7 Unique Leave Event 1 True
</code></pre>
<p><strong>More verbose example for clarity:</strong></p>
<pre><code>import pandas as pd
data = {'Employee ID': ["100", "100", "100","100","200","200","200","300"],
'Effective Date': ["2016-01-01","2015-06-05","2014-07-01","2013-01-01","2016-01-01","2015-01-01","2013-01-01","2014-01"],
'Desired Output': ["Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 2","Unique Leave Event 2","Unique Leave Event 1","Unique Leave Event 1"]}
df = pd.DataFrame(data, columns=['Employee ID','Effective Date','Desired Output'])
# Group by Employee ID
grouped = df.groupby("Employee ID")
# Define a function to get the unique events
def get_unique_events(group):
# Convert to date and sort by date, like @Khris did
group["Effective Date"] = pd.to_datetime(group["Effective Date"])
group = group.sort_values("Effective Date")
# Define a series of booleans to determine whether the time between dates is over 365 days
# Use .shift(1) to look back one row
is_year = group["Effective Date"] - group["Effective Date"].shift(1) > pd.Timedelta('365 days')
# Convert booleans to integers (0 for False, 1 for True)
is_year_int = is_year.apply(lambda x: int(x))
# Use the cumulative sum function in pandas to get the cumulative adjustment from the first date.
# Add one to start the first event as 1 instead of 0
event_series = is_year_int.cumsum() + 1
return event_series
# Run function on df and put results into a new dataframe
# Convert Employee ID back from an index to a column with .reset_index(level=0)
event_df = pd.DataFrame(grouped.apply(get_unique_events).rename("Unique Event")).reset_index(level=0)
# Merge the dataframes
df = pd.merge(df, event_df[['Unique Event']], left_index=True, right_index=True)
# Add string to match desired format
df['Output'] = df['Unique Event'].apply(lambda x: "Unique Leave Event " + str(x))
# Check to see if output matches desired output
df['Match'] = df['Desired Output'] == df['Output']
print(df)
</code></pre>
<p><strong>You get the same output:</strong></p>
<pre><code> Employee ID Effective Date Desired Output Unique Event \
3 100 2013-01-01 Unique Leave Event 1 1
2 100 2014-07-01 Unique Leave Event 2 2
1 100 2015-06-05 Unique Leave Event 2 2
0 100 2016-01-01 Unique Leave Event 2 2
6 200 2013-01-01 Unique Leave Event 1 1
5 200 2015-01-01 Unique Leave Event 2 2
4 200 2016-01-01 Unique Leave Event 2 2
7 300 2014-01 Unique Leave Event 1 1
Output Match
3 Unique Leave Event 1 True
2 Unique Leave Event 2 True
1 Unique Leave Event 2 True
0 Unique Leave Event 2 True
6 Unique Leave Event 1 True
5 Unique Leave Event 2 True
4 Unique Leave Event 2 True
7 Unique Leave Event 1 True
</code></pre>
| 2 | 2016-10-06T18:29:09Z | [
"python",
"pandas"
]
|
InvalidStateError with asyncio futures and RuntimeError with aiohttp when using Futures with callback | 39,712,656 | <p>I'm new to <code>asyncio</code> and <code>aiohttp</code>. I am currently getting this error and am not sure why I am getting <code>InvalidStateError</code> for my <code>asyncio</code> future and <code>RuntimeError</code> for my session:</p>
<pre><code>Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/runpy.py", line 184, in _run_module_as_main
"__main__", mod_spec)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/Users/bli1/Development/QE/idea/trinity-tracer/tracer/tests/tracer.py", line 100, in <module>
sys.exit(main(sys.argv))
File "/Users/bli1/Development/QE/idea/trinity-tracer/tracer/tests/tracer.py", line 92, in main
poster.post()
File "/Users/bli1/Development/QE/idea/trinity-tracer/tracer/utils/poster.py", line 87, in post
results = event_loop.run_until_complete(self.async_post_events(events))
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncio/base_events.py", line 387, in run_until_complete
return future.result()
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncio/futures.py", line 274, in result
raise self._exception
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncio/tasks.py", line 239, in _step
result = coro.send(None)
File "/Users/bli1/Development/QE/idea/trinity-tracer/tracer/utils/poster.py", line 79, in async_post_events
task.add_done_callback(self.send_oracle, task.result(), session)
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncio/futures.py", line 268, in result
raise InvalidStateError('Result is not ready.')
asyncio.futures.InvalidStateError: Result is not ready.
Task exception was never retrieved
future: <Task finished coro=<Poster.async_post_event() done, defined at /Users/bli1/Development/QE/idea/trinity-tracer/tracer/utils/poster.py:62> exception=RuntimeError('Session is closed',)>
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.5/lib/python3.5/asyncio/tasks.py", line 239, in _step
result = coro.send(None)
File "/Users/bli1/Development/QE/idea/trinity-tracer/tracer/utils/poster.py", line 64, in async_post_event
async with session.post(self.endpoint, data=event) as resp:
File "/Users/bli1/Development/QE/idea/trinity-tracer/lib/python3.5/site-packages/aiohttp/client.py", line 565, in __aenter__
self._resp = yield from self._coro
File "/Users/bli1/Development/QE/idea/trinity-tracer/lib/python3.5/site-packages/aiohttp/client.py", line 161, in _request
raise RuntimeError('Session is closed')
RuntimeError: Session is closed
</code></pre>
<p>What I am trying to do is <code>POST</code> to an endpoint, and then use the same event that was <code>POST</code>ed to post to another endpoint. This second post is meant to run as another <code>async</code> method via a <code>callback</code>.</p>
<p>Here is my code:</p>
<pre><code> async def async_post_event(self, event, session):
async with session.post(self.endpoint, data=event) as resp:
event["tracer"]["post"]["timestamp"] = time.time() * 1000.0
event["tracer"]["post"]["statusCode"] = await resp.status
return event
async def send_oracle(self, event, session):
async with session.post(self.oracle, data=event) as resp:
return event["event"]["event_header"]["event_id"], await resp.status
async def async_post_events(self, events):
tasks = []
conn = aiohttp.TCPConnector(verify_ssl=self.secure)
async with aiohttp.ClientSession(connector=conn) as session:
for event in events:
task = asyncio.ensure_future(self.async_post_event(event, session))
task.add_done_callback(self.send_oracle, task.result(), session)
tasks.append(task)
return await asyncio.gather(*tasks)
def post(self):
event_loop = asyncio.get_event_loop()
try:
events = [self.gen_random_event() for i in range(self.num_post)]
results = event_loop.run_until_complete(self.async_post_events(events))
print(results)
finally:
event_loop.close()
</code></pre>
| 1 | 2016-09-26T21:24:55Z | 39,727,049 | <p><code>add_done_callback</code> accepts a <em>callback</em>, not a <em>coroutine</em>.
Moreover, it's part of a very low-level API that casual developers should avoid.</p>
<p>But your main mistake is calling <code>session.post()</code> outside of the <code>ClientSession</code> async context manager; the stack trace points to this explicitly.</p>
<p>I've modified your snippet for getting something which looks like a working code:</p>
<pre><code>async def async_post_event(self, event, session):
async with session.post(self.endpoint, data=event) as resp:
event["tracer"]["post"]["timestamp"] = time.time() * 1000.0
event["tracer"]["post"]["statusCode"] = await resp.status
async with session.post(self.oracle, data=event) as resp:
return event["event"]["event_header"]["event_id"], await resp.status
async def async_post_events(self, events):
coros = []
conn = aiohttp.TCPConnector(verify_ssl=self.secure)
async with aiohttp.ClientSession(connector=conn) as session:
for event in events:
coros.append(self.async_post_event(event, session))
return await asyncio.gather(*coros)
def post(self):
event_loop = asyncio.get_event_loop()
try:
events = [self.gen_random_event() for i in range(self.num_post)]
results = event_loop.run_until_complete(self.async_post_events(events))
print(results)
finally:
event_loop.close()
</code></pre>
<p>You may extract the two <em>posts</em> from <code>async_post_event</code> into separate coroutines, but the main idea remains the same.</p>
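<p>For illustration, a hedged sketch of that split (my own; names follow the question, and both coroutines still run inside the session's async context):</p>
<pre><code>async def async_post_event(self, event, session):
    async with session.post(self.endpoint, data=event) as resp:
        event["tracer"]["post"]["statusCode"] = resp.status
    return event

async def post_then_oracle(self, event, session):
    # Chain the two requests: post the event, then report it to the oracle.
    event = await self.async_post_event(event, session)
    return await self.send_oracle(event, session)
</code></pre>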
| 2 | 2016-09-27T14:14:30Z | [
"python",
"python-3.5",
"python-asyncio",
"aiohttp"
]
|
pyspark: difference between using (,) and [,] for pair representation for reducedByKey | 39,712,696 | <p>I am applying a map and then reduceByKey transformations on an RDD using pyspark. I tried both of the following syntax, and both of them seem to work:</p>
<p>case 1:</p>
<pre><code>my_rdd_out = my_rdd.map(lambda r: [r['my_id'], [[r['my_value']]]])\
.reduceByKey(lambda a, b: a+b)\
.map(lambda r: r[1])
</code></pre>
<p>case 2:</p>
<pre><code>my_rdd_out = my_rdd.map(lambda r: (r['my_id'], [[r['my_value']]]))\
.reduceByKey(lambda a, b: a+b)\
.map(lambda r: r[1])
</code></pre>
<p>The <code>r</code> here is an instance of the <code>Row</code> class (<code>from pyspark.sql import Row</code>).
In case 1, the map output pair is in brackets; in case 2, it is in parentheses. Though both work, I am wondering: is there any difference between using [] and () to represent a pair that will later be the input for reduceByKey?</p>
 | 0 | 2016-09-26T21:27:33Z | 39,712,807 | <p>The difference between a <code>tuple</code> and a <code>list</code> in Python is that <code>tuple</code> objects are immutable, and therefore hashable. <code>list</code> objects are not hashable since they can be modified through their references.</p>
<p>In your case you can use either of them (otherwise the <code>reduceByKey</code> method wouldn't accept both tuples and lists); it's just a convenience that avoids casting one into the other when you get the object from some caller. The method only needs to iterate through the collection and does not care what kind of collection it is.</p>
<p>here is an implementation of <code>reduceByKey</code> lifted from <a href="https://gist.github.com/Juanlu001/562d1ec55be970403442" rel="nofollow">here</a></p>
<pre><code>from functools import reduce   # works on Python 2.6+ and Python 3
from itertools import groupby

def reduceByKey(func, iterable):
"""Reduce by key.
Equivalent to the Spark counterpart
Inspired by http://stackoverflow.com/q/33648581/554319
1. Sort by key
2. Group by key yielding (key, grouper)
3. For each pair yield (key, reduce(func, last element of each grouper))
"""
get_first = lambda p: p[0]
get_second = lambda p: p[1]
# iterable.groupBy(_._1).map(l => (l._1, l._2.map(_._2).reduce(func)))
return map(
lambda l: (l[0], reduce(func, map(get_second, l[1]))),
groupby(sorted(iterable, key=get_first), get_first)
)
</code></pre>
<p>In your example you have <code>tuple(<something>).reduceByKey(lambda <something>)</code>. Obviously, the iterable is the <code>tuple</code> and the <code>func</code> is the lambda expression.</p>
<p>As you can see, the input just needs to be an iterable; index access is not even required.</p>
<p>You could have passed a <code>set</code>, a <code>deque</code>, a generator comprehension, whatever. It contains no conversion to list or tuple whatsoever.</p>
<p>It doesn't even need to get all the data at the same time, just one item at a time (generator functions/comprehensions would work too): an elegant way of avoiding useless temporary object creation.</p>
<p>This requires that <code>iterable</code> is only iterated through once in the function, which is the case here with the <code>sorted</code> function that generates a <code>list</code>.</p>
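<p>A quick usage sketch of that pure-Python <code>reduceByKey</code>, showing that tuples and lists both work as the (key, value) pairs:</p>
<pre><code>pairs_as_tuples = [('a', 1), ('b', 2), ('a', 3)]
pairs_as_lists = [['a', 1], ['b', 2], ['a', 3]]
print(list(reduceByKey(lambda x, y: x + y, pairs_as_tuples)))  # [('a', 4), ('b', 2)]
print(list(reduceByKey(lambda x, y: x + y, pairs_as_lists)))   # [('a', 4), ('b', 2)]
</code></pre>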
| 1 | 2016-09-26T21:35:43Z | [
"python",
"lambda",
"row",
"pyspark"
]
|
Open automatically python file as Administrator | 39,712,751 | <p>I want to use the third party module called "ping", <a href="https://pypi.python.org/pypi/python-ping/2011.10.17.376a019" rel="nofollow">found here</a></p>
<p>But it requires to open the file as "Administrator" in Windows.</p>
<p>I want this file to open as Administrator every time the computer starts.
To open a file at startup, I just need to put it in <code>shell:startup</code>.</p>
<p>But the issue is that I need to do it as Administrator.
How can I achieve my goal?</p>
| -1 | 2016-09-26T21:31:32Z | 39,713,305 | <p>So there's this program called </p>
<blockquote>
<p>Task Scheduler you can find it just by pressing start and searching
"Task Scheduler"</p>
</blockquote>
<p><a href="http://i.stack.imgur.com/mOi8Z.png" rel="nofollow"><img src="http://i.stack.imgur.com/mOi8Z.png" alt="Looks like this."></a></p>
<blockquote>
<p>First, go to the Action menu, then click Create Task.</p>
</blockquote>
<p><a href="http://i.stack.imgur.com/BDtUZ.png" rel="nofollow"><img src="http://i.stack.imgur.com/BDtUZ.png" alt="Should open a window that looks like this."></a></p>
<blockquote>
<p>Go to triggers then select create new trigger configure it so it looks
like this.</p>
</blockquote>
<p><a href="http://i.stack.imgur.com/sjnJV.png" rel="nofollow"><img src="http://i.stack.imgur.com/sjnJV.png" alt="Trigger"></a></p>
<p>Then create a batch script for launching your Python file,
something like this:</p>
<pre><code>@echo off
python c:\somescript.py %*
pause
</code></pre>
<blockquote>
<p>Then go to the Actions tab in the window and create a new action.
<a href="http://i.stack.imgur.com/QcSPD.png" rel="nofollow"><img src="http://i.stack.imgur.com/QcSPD.png" alt="action page"></a></p>
<p>Lastly, go to the main page, configure it like this, and click OK. Done.
<a href="http://i.stack.imgur.com/YNhpD.png" rel="nofollow"><img src="http://i.stack.imgur.com/YNhpD.png" alt="enter image description here"></a></p>
</blockquote>
| 0 | 2016-09-26T22:19:32Z | [
"python"
]
|
Keeping the user's input intact when outputing to terminal at the same time | 39,712,754 | <p>To simplify, let's say I'm trying to write a command line two-way chat in Python. I would like the user to input his message with <code>input()</code> at the command prompt, but a listening thread could print a message at any moment. By default, this would "break" the user's input. Visually something like this:</p>
<pre><code>userB>Stop interuserA wrote:Hey check this out!
rupting me!
</code></pre>
<p>The closest I was able to find was <a href="http://stackoverflow.com/questions/27304078/how-to-print-over-raw-inputs-line-in-python">this answer here</a>, which is almost, but not exactly, what I'm looking for; it did point me towards the <code>blessings</code> package, which seems to be what I need (although I'm happy with an answer for any package, or even pure ANSI).</p>
<p>What I'm trying to achieve is to print incoming output from a Thread <em>above</em> the user's input, so that his text doesn't break. Let's say the user is typing:</p>
<pre><code>userB>Stop inter
</code></pre>
<p>Suddenly a message comes in from the thread, but our user's input doesn't brake:</p>
<pre><code>userA says: Ok I won't interrupt you
userB>Stop inter
</code></pre>
<p>What should my threads theoretical <code>print_incoming_message()</code> method look like to achieve this?</p>
<p>NB: I'm using Linux and am not interested in cross-platform compatibility.</p>
| 1 | 2016-09-26T21:31:44Z | 39,712,946 | <p>There are two ways of doing this.</p>
<p>One is to use <code>ncurses</code>. There are python bindings for this. With <code>ncurses</code>, the terminal screen is under your complete control, and you can print characters at any point.</p>
<p>Without <code>ncurses</code>, you can't write above the current line. What you <em>can</em> do, however, is print a <code>\r</code> character and go back to the beginning of the line.</p>
<p>If you save the user's input (say he wrote <code>foo</code>), and you want to print the line <code>bar</code> above that, you can output:</p>
<pre><code>\rbar\nfoo
</code></pre>
<p>This will overwrite the current line, and introduce a newline, moving the user's input down. The effect is similar, but it won't be as tamper-proof as <code>ncurses</code>.</p>
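<p>A minimal sketch of that carriage-return trick (my own illustration, assuming a single-line prompt):</p>
<pre><code>import sys

def print_above(message, pending_input, prompt="userB>"):
    line = prompt + pending_input
    padded = message.ljust(len(line))  # pad so no residue of the old line is left behind
    # \r returns to column 0; the message overwrites the prompt line,
    # then the prompt and the user's pending text are redrawn below it.
    sys.stdout.write("\r" + padded + "\n" + line)
    sys.stdout.flush()
</code></pre>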
| 1 | 2016-09-26T21:47:35Z | [
"python",
"multithreading",
"python-3.x",
"output"
]
|
How to set size for scatter plot | 39,712,767 | <p>I have three lines setting up a plot. I expect the size of pic.png to be 640x640 pixels, but I got an 800x800 picture.</p>
<pre><code>plt.figure(figsize=(8, 8), dpi=80)
plt.scatter(X[:],Y[:])
plt.savefig('pic.png')
</code></pre>
<p>BTW, I have no problem setting the size with the object-oriented interface, but I need to use the pyplot style.</p>
 | 0 | 2016-09-26T21:32:46Z | 39,713,009 | <p>The following code produces a 576x576 PNG image on my machine:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.arange(10)
y = np.random.rand(10)
plt.figure(figsize=(8, 8), dpi=80)
plt.scatter(x, y)
plt.savefig('pic.png')
</code></pre>
<p>Shifting <code>dpi=80</code> to the <code>plt.savefig</code> call correctly results in a 640x640 PNG image:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.arange(10)
y = np.random.rand(10)
plt.figure(figsize=(8, 8))
plt.scatter(x, y)
plt.savefig('pic.png', dpi=80)
</code></pre>
<p>I can't offer any explanation as to why this happens though.</p>
| 0 | 2016-09-26T21:54:05Z | [
"python",
"matplotlib"
]
|
How to set size for scatter plot | 39,712,767 | <p>I have three lines setting up a plot. I expect the size of pic.png to be 640x640 pixels, but I got an 800x800 picture.</p>
<pre><code>plt.figure(figsize=(8, 8), dpi=80)
plt.scatter(X[:],Y[:])
plt.savefig('pic.png')
</code></pre>
<p>BTW, I have no problem setting the size with the object-oriented interface, but I need to use the pyplot style.</p>
| 0 | 2016-09-26T21:32:46Z | 39,713,472 | <p>While Alberto's answer gives you the correct work around, looking at the <a href="http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.savefig" rel="nofollow">documentation for <code>plt.savefig</code></a> gives you a better idea as to why this behavior happens.</p>
<blockquote>
<p>dpi: [ None | scalar > 0 | 'figure' ]
The resolution in dots per inch. If None, it will default to the value savefig.dpi in the matplotlibrc file. If 'figure', it will set the dpi to the value of the figure.</p>
</blockquote>
<p><strong>Short answer:</strong> use <code>plt.savefig(..., dpi='figure')</code> to use the dpi value set at figure creation.</p>
<p><strong>Longer answer:</strong> with a <code>plt.figure(figsize=(8,8), dpi=80)</code>, you can get a 640x640 figure in the following ways:</p>
<ol>
<li><p>Pass the <code>dpi='figure'</code> keyword argument to <code>plt.savefig</code>:</p>
<pre><code>plt.figure(figsize=(8, 8), dpi=80)
...
plt.savefig('pic.png', dpi='figure')
</code></pre></li>
<li><p>Explicitly give the dpi you want to <code>savefig</code>:</p>
<pre><code>plt.figure(figsize=(8, 8))
...
plt.savefig('pic.png', dpi=80)
</code></pre></li>
<li><p>Edit your <a href="http://matplotlib.org/users/customizing.html#the-matplotlibrc-file" rel="nofollow">matplotlibrc file</a> at the following line:</p>
<pre><code>#figure.dpi : 80 # figure dots per inch
</code></pre>
<p>then</p>
<pre><code>plt.figure(figsize=(8, 8))
...
plt.savefig('pic.png') #dpi=None implicitly defaults to rc file value
</code></pre></li>
</ol>
| 2 | 2016-09-26T22:37:57Z | [
"python",
"matplotlib"
]
|
Search and replace placeholder text in PDF with Python | 39,712,828 | <p>I need to generate a customized PDF copy of a template document.
The easiest way, I thought, was to create a source PDF that has some placeholder text where customization needs to happen, i.e. <code><first_name></code> and <code><last_name></code>, and then replace these with the correct values.</p>
<p>I've searched high and low, but is there really no way of basically taking the source template PDF, replace the placeholders with actual values and write to a new PDF?</p>
<p>I looked at PyPDF2 and ReportLab, but neither seems to be able to do so.
Any suggestions? Most of my searches lead to using a Perl app, CAM::PDF, but I'd prefer to keep it all in Python.</p>
| 1 | 2016-09-26T21:37:29Z | 39,713,796 | <p>There is no direct way to do this that will work reliably. PDFs are not like HTML: they specify the positioning of text character-by-character. They may not even include the whole font used to render the text, just the characters needed to render the specific text in the document. No library I've found will do nice things like re-wrap paragraphs after updating the text. PDFs are for the most part a display-only format, so you'll be much better off using a tool that turns markup into a PDF than updating the PDF in-place.</p>
<p>If that's not an option, you can create a <a href="http://www.adobe.com/devnet/acrobat/fdftoolkit.html" rel="nofollow">PDF form</a> in something like Acrobat, then use a PDF manipulation library like <a href="http://itextpdf.com/" rel="nofollow">iText (AGPL)</a> or <a href="https://pdfbox.apache.org/" rel="nofollow">pdfbox</a>, which has a nice clojure wrapper called <a href="https://github.com/dotemacs/pdfboxing#fill-in-pdf-forms" rel="nofollow">pdfboxing</a> that can handle some of that.</p>
<p>In my experience, Python's support for writing to PDFs is pretty limited. Java has, by far, the best language support. Also, you get what you pay for, so it would probably be worth paying for an iText license if you're using this for commercial purposes. I've had pretty good results writing Python wrappers around PDF-manipulation CLI tools like pdfboxing and ghostscript. That will probably be <em>much</em> easier for your use case than trying to shoehorn this into Python's PDF ecosystem.</p>
| 3 | 2016-09-26T23:18:43Z | [
"python",
"pdf"
]
|
getaddrinfow fails if running process from Python | 39,712,983 | <p>I try to run a 3rd-party process (nsqd.exe) from my Python script, but when I do, nsqd fails to bind its socket. I've no idea why.</p>
<p>The script I'm using:</p>
<pre><code>import subprocess
import sys
proc = subprocess.Popen(['nsqd.exe', '-tcp-address="127.0.0.1:{}"'.format(sys.argv[1]),
'-http-address="127.0.0.1:{}"'.format(sys.argv[2])])
print("the commandline is {}".format(proc.args))
proc.wait()
sys.exit(proc.returncode)
</code></pre>
<p>And the output:</p>
<pre><code>D:\bsm.tar\bsm\final\nsqd>python nsqd.py 4150 4151
the commandline is ['nsqd.exe', '-tcp-address="127.0.0.1:4150"', '-http-address="127.0.0.1:4151"']
[nsqd] 2016/09/26 21:41:51.974681 nsqd v0.3.8 (built w/go1.6.2)
[nsqd] 2016/09/26 21:41:51.975681 ID: 864
[nsqd] 2016/09/26 21:41:51.979675 NSQ: persisting topic/channel metadata to nsqd.864.dat
[nsqd] 2016/09/26 21:41:52.004711 FATAL: listen ("127.0.0.1:4150") failed - listen tcp: lookup "127.0.0.1: getaddrinfow: No such host is known.
</code></pre>
<p>If I run that by myself directly everything works fine:</p>
<pre><code>D:\bsm.tar\bsm\final\nsqd>nsqd.exe -tcp-address="127.0.0.1:4150" -http-address="127.0.0.1:4151"
[nsqd] 2016/09/26 21:42:20.093848 nsqd v0.3.8 (built w/go1.6.2)
[nsqd] 2016/09/26 21:42:20.094850 ID: 864
[nsqd] 2016/09/26 21:42:20.095851 NSQ: persisting topic/channel metadata to nsqd.864.dat
[nsqd] 2016/09/26 21:42:20.127984 TCP: listening on 127.0.0.1:4150
[nsqd] 2016/09/26 21:42:20.127984 HTTP: listening on 127.0.0.1:4151
[nsqd] 2016/09/26 21:42:22.111580 NSQ: persisting topic/channel metadata to nsqd.864.dat
[nsqd] 2016/09/26 21:42:22.111580 TCP: closing 127.0.0.1:4150
[nsqd] 2016/09/26 21:42:22.112553 HTTP: closing 127.0.0.1:4151
[nsqd] 2016/09/26 21:42:22.135635 NSQ: closing topics
[nsqd] 2016/09/26 21:42:22.135635 QUEUESCAN: closing
[nsqd] 2016/09/26 21:42:22.135635 LOOKUP: closing
[nsqd] 2016/09/26 21:42:22.135635 ID: closing
D:\bsm.tar\bsm\final\nsqd>
</code></pre>
<p>Maybe somebody has an idea of what is wrong?</p>
<p>Win10, python352. Running as admin does not help.</p>
<p>Thanks.</p>
| 0 | 2016-09-26T21:52:07Z | 39,713,824 | <p>Remove the double quotes in your <code>Popen</code> so it becomes:</p>
<pre><code>proc = subprocess.Popen(['nsqd.exe',
'-tcp-address=127.0.0.1:{}'.format(sys.argv[1]),
'-http-address=127.0.0.1:{}'.format(sys.argv[2])
])
</code></pre>
<p>Before passing your command to <code>CreateProcess</code>, Python converts the list to a string using <code>subprocess.list2cmdline</code>: </p>
<pre><code>>>> subprocess.list2cmdline(['nsqd.exe', '-tcp-address="127.0.0.1:4150"', '-http-address="127.0.0.1:4151"'])
'nsqd.exe -tcp-address=\\"127.0.0.1:1234\\" -http-address=\\"127.0.0.1:1234\\"
</code></pre>
<p>nsqd.exe thinks <code>"127.0.0.1</code> is the hostname - hence the failed lookup.</p>
<h2>Additional Information</h2>
<p>The reason the double quotes work on the command line is that they have special meaning when a function such as <a href="https://msdn.microsoft.com/en-us/library/windows/desktop/bb776391(v=vs.85).aspx" rel="nofollow">CommandLineToArgvW</a> is used to split the command line into individual arguments: normally arguments are delimited by whitespace, but when a quoted string is encountered, the quotes are stripped and the entire string becomes one argument.</p>
<p>This is also why Python is \-escaping the quotes: it expects the resultant line to be parsed in the above manner.</p>
<p>If you pass <code>Popen</code> a string rather than a list, <code>list2cmdline</code> will not be called and you should get the same results as removing the double quotes (i.e. it will be like running it from the command line):</p>
<pre><code>proc = subprocess.Popen('nsqd.exe "-tcp-address=127.0.0.1:{}" '
'"-http-address=127.0.0.1:{}"'
.format(sys.argv[1], sys.argv[2]))
</code></pre>
<p>You can see this illustrated in the following (perhaps contrived) example:</p>
<pre><code>import subprocess
subprocess.Popen('c:\python27\python.exe "--version"')
subprocess.Popen(['c:\python27\python.exe', '"--version"'])
</code></pre>
<p>The first <code>Popen</code> will print the python version. The second will look for a file named <code>"--version"</code>: <code>can't open file '"--version"': [Errno 22] Invalid argument</code></p>
| 2 | 2016-09-26T23:22:07Z | [
"python",
"windows",
"tcp",
"subprocess",
"nsq"
]
|
Speech to Text Watson interrupt by silence | 39,713,014 | <p>We are using the sessionless API with Python, and we have set the 'continuous': True parameter like this:</p>
<p>def make_completed_audio_request(url, API_name=None, language=None, time=None):</p>
<pre><code>username, password, endpoint, lan=select_voice_api(name=API_name, language=language)
audio = get_complete_audio(url, api_name=API_name, time=time)
endpoint=get_post_endpoint(url=endpoint, api_name=API_name)
if audio:
list_audio=get_speakers_by_audio(audio[1].name)
headers={'content-type': audio[2]}
params = {'model': lan,
'continuous':True,
'timestamps': True}
if language and (API_name == 'watson' or API_name == 'WATSON'):
print 'enviando request'
response = requests.post(url=endpoint, auth=(username, password),
params=params, data=audio[1], headers=headers)
print 'cladificando error'
error_clasifier(code=response.status_code)
else:
response = requests.post(url=endpoint, auth=(username, password),
params=params, data=audio[1], headers=headers)
error_clasifier(code=response.status_code)
if response:
return response, list_audio, True, None
else:
return None, None, False, None
</code></pre>
<p>But it still does not work; it cuts the transcription off at the first silence it finds.</p>
<p>What am I doing wrong? Is there another way to send it to the API?</p>
 | 0 | 2016-09-26T21:54:47Z | 39,748,499 | <p>I am using the watson_developer_cloud API. It's easy to use and, more importantly, it works. Here is the code sample:</p>
<pre><code>import json
from os.path import join, dirname
from watson_developer_cloud import SpeechToTextV1
speech_to_text = SpeechToTextV1(
username="yourusername",
password="yourpassword",
x_watson_learning_opt_out=False)
with open(join(dirname(__file__), 'test.wav'), 'rb') as audio_file:
data = json.dumps(speech_to_text.recognize(audio_file, content_type='audio/wav', word_confidence=True, continuous=True, word_alternatives_threshold=0, max_alternatives=10))
</code></pre>
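<p>As a hedged follow-up (field names per the Speech to Text JSON response layout), you can then pull the transcripts out of <code>data</code> like this:</p>
<pre><code>result = json.loads(data)
for chunk in result.get('results', []):
    print(chunk['alternatives'][0]['transcript'])
</code></pre>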
| 0 | 2016-09-28T13:08:31Z | [
"python",
"speech-to-text",
"watson"
]
|
python 27 - Creating and running instances of another script in parallel | 39,713,080 | <p>I'm attempting to build a multiprocessing script that retrieves dicts of attributes from a MySQL table and then runs instances of my main script in <strong>parallel</strong>, using each dict retrieved from the MySQL table as an argument to each instance of the main script. The main script has a method called queen_bee() that's responsible for ensuring that all the other methods have the correct information and are executed in the proper order.</p>
<p>I have tried to iterate through the list of dicts in order to create/run parallel processes of the main script using the multiprocessing library. But they end up running consecutively, not concurrently:</p>
<pre><code>from my_main_script import my_main_class as main
import multiprocessing as mp
def create_list_of_attribute_dicts():
...
return list_of_dicts
for each_dict in list_of_dicts:
instance = main(each_dict)
p = mp.Process(target=instance.queen_bee(),args=(each_dict,))
p.start()
...
</code></pre>
<p>I have also tried using the multiprocessing library's Pool.map() method. But I can't figure out how to instantiate the main script one time for each dict using Pool.map():</p>
<pre><code>...
pool = mp.Pool()
jobs = pool.map(main.queen_bee(),list_of_dicts)
</code></pre>
<p>The Pool.map method seems to be the cleanest, most pythonic way to get these instances to run in parallel, but I'm hung up on the proper way to do that in this case. I know the above 'jobs' variable will fail because 'main' has not been instantiated. However, I can't figure out how to pass each dict as an argument to separate instances of the main class and then run those instances using the map method. I'm open to trying a different approach.</p>
| 0 | 2016-09-26T22:00:05Z | 39,713,378 | <p>You could store your dictionaries in a list and then try something like that:</p>
<pre><code>from multiprocessing import Pool as ThreadPool
# ...
def parallel_function(list_of_dictionaries,threads=20):
thread_pool = ThreadPool(threads)
    results = thread_pool.map(lambda d: main(d).queen_bee(), list_of_dictionaries)  # pass a callable, don't call it; 'main' is the question's my_main_class
thread_pool.close()
thread_pool.join()
return results
</code></pre>
<p><strong>Note:</strong> I had problems using multiprocessing on my PC, so for me it worked after replacing <code>multiprocessing</code> with <code>multiprocessing.dummy</code> in the <code>import</code> statement.</p>
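<p>A hypothetical call site, assuming <code>list_of_dicts</code> comes from the question's <code>create_list_of_attribute_dicts()</code>:</p>
<pre><code>list_of_dicts = create_list_of_attribute_dicts()
all_results = parallel_function(list_of_dicts, threads=8)
</code></pre>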
| 0 | 2016-09-26T22:28:22Z | [
"python",
"parallel-processing",
"multiprocessing"
]
|
python 27 - Creating and running instances of another script in parallel | 39,713,080 | <p>I'm attempting to build a multiprocessing script that retrieves dicts of attributes from a MySQL table and then runs instances of my main script in <strong>parallel</strong>, using each dict retrieved from the MySQL table as an argument to each instance of the main script. The main script has a method called queen_bee() that's responsible for ensuring that all the other methods have the correct information and are executed in the proper order.</p>
<p>I have tried to iterate through the list of dicts in order to create/run parallel processes of the main script using the multiprocessing library. But they end up running consecutively, not concurrently:</p>
<pre><code>from my_main_script import my_main_class as main
import multiprocessing as mp
def create_list_of_attribute_dicts():
...
return list_of_dicts
for each_dict in list_of_dicts:
instance = main(each_dict)
p = mp.Process(target=instance.queen_bee(),args=(each_dict,))
p.start()
...
</code></pre>
<p>I have also tried using the multiprocessing library's Pool.map() method. But I can't figure out how to instantiate the main script one time for each dict using Pool.map():</p>
<pre><code>...
pool = mp.Pool()
jobs = pool.map(main.queen_bee(),list_of_dicts)
</code></pre>
<p>The Pool.map method seems to be the cleanest, most pythonic way to get these instances to run in parallel, but I'm hung up on the proper way to do that in this case. I know the above 'jobs' variable will fail because 'main' has not been instantiated. However, I can't figure out how to pass each dict as an argument to separate instances of the main class and then run those instances using the map method. I'm open to trying a different approach. Thanks in advance for your help.</p>
| 0 | 2016-09-26T22:00:05Z | 39,731,021 | <p>I think I have it licked. The answer appears to have been to use Pool's map function to call a worker function that's <strong>inside</strong> the same multiprocessing script. The worker function then instantiates the main script that calls its own worker function ('queen_bee' in this case).</p>
<p>Using my code example from above:</p>
<pre><code>from my_main_script import my_main_class as main
import multiprocessing as mp
def create_list_of_active_job_attribute_dicts():
...
return active_jobs
def call_the_queen(individual_active_job):
instance = main(individual_active_job)
instance.queen_bee()
if __name__ == '__main__':
    active_jobs = create_list_of_active_job_attribute_dicts()  # build the list before mapping over it
    pool = mp.Pool(processes = 4)
pool.map(call_the_queen,active_jobs)
pool.close()
pool.join()
</code></pre>
<p>I have no idea <strong>why</strong> this approach worked and the other one didn't. (The likely reason: in the original loop, <code>target=instance.queen_bee()</code> <em>calls</em> the method immediately in the parent process and passes its return value as the target, so the work runs sequentially before each <code>Process</code> even starts; passing an uncalled, module-level function like <code>call_the_queen</code> lets each worker invoke it in parallel.) Thanks @coder for your help!</p>
| 0 | 2016-09-27T17:35:30Z | [
"python",
"parallel-processing",
"multiprocessing"
]
|
How can I sort datetime columns by row value in a Pandas dataframe? | 39,713,137 | <p>I'm new to Python and Pandas, and I've pulled in a database table that contains 15+ different datetime columns. My task is to sort these columns generally by earliest to latest value in the rows. However, the data is not clean; sometimes, where Column A's date would come before Column B's date in Row 0, A would come after B in Row 1. </p>
<p>I wrote a few functions (redacted here for simplicity) that compare two columns by calculating the percentage of times dates in A come before and after B, and then sorting the columns based on that percentage:</p>
<pre><code>def get_percentage(df, df_subset):
return len(df_subset)/float(len(df))
def duration_report(df, earlier_column, later_column):
results = {}
td = df[later_column] - df[earlier_column]
results["Before"] = get_percentage(df, df.loc[td >= pd.Timedelta(0)])
results["After"] = get_percentage(df, df.loc[td <= pd.Timedelta(0)])
ind = "%s vs %s" % (earlier_column, later_column)
return pd.DataFrame(data=results, index=[ind])
def order_date_columns(df, col1, col2):
before = duration_report(df, col1, col2).Before.values[0]
after = duration_report(df, col1, col2).After.values[0]
if before >= after:
return [col1, col2]
else:
return [col2, col1]
</code></pre>
<p>My goal with the above code is to programmatically implement the following: </p>
<blockquote>
<p>If Col A dates come before Col B dates 50+% of the time, Col A should come before Col B in the list of earliest to latest datetime columns.</p>
</blockquote>
<p>The <code>order_date_columns()</code> function successfully sorts two columns into the correct order, but how do I apply this sorting to the 15+ columns at once? I've looked into <code>df.apply()</code>, <code>lambda</code>, and <code>map()</code>, but haven't been able to crack this problem. </p>
<p>Any help (with code clarity/efficiency, too) would be appreciated!</p>
| 2 | 2016-09-26T22:05:19Z | 39,713,482 | <p>If you don't mind taking a bit of a shortcut and using the median of each date column, this should work:</p>
<pre><code>def order_date_columns(df, date_columns_to_sort):
    x = [(col, df[col].astype(np.int64).median()) for col in date_columns_to_sort]
    return [x[0] for x in sorted(x, key=lambda x: x[1])]
</code></pre>
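<p>For illustration, a made-up two-column frame (the column names here are hypothetical) showing the call:</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'A': pd.to_datetime(['2016-01-01', '2016-01-05']),
                   'B': pd.to_datetime(['2016-01-02', '2016-01-03'])})

print(order_date_columns(df, ['A', 'B']))
# ['B', 'A'] -- B's median timestamp is earlier than A's
</code></pre>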
| 2 | 2016-09-26T22:39:10Z | [
"python",
"python-2.7",
"sorting",
"datetime",
"pandas"
]
|
How can I sort datetime columns by row value in a Pandas dataframe? | 39,713,137 | <p>I'm new to Python and Pandas, and I've pulled in a database table that contains 15+ different datetime columns. My task is to sort these columns generally by earliest to latest value in the rows. However, the data is not clean; sometimes, where Column A's date would come before Column B's date in Row 0, A would come after B in Row 1. </p>
<p>I wrote a few functions (redacted here for simplicity) that compare two columns by calculating the percentage of times dates in A come before and after B, and then sorting the columns based on that percentage:</p>
<pre><code>def get_percentage(df, df_subset):
    return len(df_subset)/float(len(df))

def duration_report(df, earlier_column, later_column):
    results = {}
    td = df[later_column] - df[earlier_column]
    results["Before"] = get_percentage(df, df.loc[td >= pd.Timedelta(0)])
    results["After"] = get_percentage(df, df.loc[td <= pd.Timedelta(0)])
    ind = "%s vs %s" % (earlier_column, later_column)
    return pd.DataFrame(data=results, index=[ind])

def order_date_columns(df, col1, col2):
    before = duration_report(df, col1, col2).Before.values[0]
    after = duration_report(df, col1, col2).After.values[0]
    if before >= after:
        return [col1, col2]
    else:
        return [col2, col1]
</code></pre>
<p>My goal with the above code is to programmatically implement the following: </p>
<blockquote>
<p>If Col A dates come before Col B dates 50+% of the time, Col A should come before Col B in the list of earliest to latest datetime columns.</p>
</blockquote>
<p>The <code>order_date_columns()</code> function successfully sorts two columns into the correct order, but how do I apply this sorting to the 15+ columns at once? I've looked into <code>df.apply()</code>, <code>lambda</code>, and <code>map()</code>, but haven't been able to crack this problem. </p>
<p>Any help (with code clarity/efficiency, too) would be appreciated!</p>
| 2 | 2016-09-26T22:05:19Z | 39,729,862 | <p>Since you're using Python 2.7, you can use the <code>cmp</code> keyword argument to <code>sorted</code>. To get the column names in the order that you're looking for, I would do something like:</p>
<pre><code># Returns -1 if first_column[i] > second_column[i] more often.
# Returns 1 if vice versa.
# Returns 0 if equal.
# Assumes df[first_column] and df[second_column] are the same length.
def compare_two(first_column, second_column):
    c1_greater_count = 0
    c2_greater_count = 0
    # Iterate over the two columns in the dataframe. df must be in accessible scope.
    for i in range(len(df[first_column])):
        if df[first_column].iloc[i] > df[second_column].iloc[i]:
            c1_greater_count += 1
        elif df[second_column].iloc[i] > df[first_column].iloc[i]:
            c2_greater_count += 1
    if c1_greater_count > c2_greater_count:
        return -1
    if c2_greater_count > c1_greater_count:
        return 1
    return 0
df = get_dataframe_from_somewhere()
relevant_column_names = get_relevant_column_names(df) # e.g., get all the dates.
sorted_column_names = sorted(relevant_column_names, cmp=compare_two)
# sorted_column_names holds the names of the relevant columns,
# sorted according to the given ordering.
</code></pre>
<p>I'm sure there's a more Pythonic way to do it, but this should work. Note that for Python 3, you can use the <a href="https://wiki.python.org/moin/HowTo/Sorting#The_Old_Way_Using_the_cmp_Parameter" rel="nofollow"><code>cmp_to_key</code></a> utility.</p>
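<p>For reference, on Python 3 (where the <code>cmp</code> argument is gone) the same comparator can be wrapped like this:</p>
<pre><code>from functools import cmp_to_key

sorted_column_names = sorted(relevant_column_names, key=cmp_to_key(compare_two))
</code></pre>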
| 1 | 2016-09-27T16:29:05Z | [
"python",
"python-2.7",
"sorting",
"datetime",
"pandas"
]
|
To remove a particular element from the xml-string using lxml Python 3.5 | 39,713,146 | <p>I have the below xml as an input to a Python function. I want to find each element which has a null value ((firstChild.nodeValue)) and delete it entirely from the xml, then return the string. I am constrained to using only the lxml module. Can I get help with this? </p>
<pre><code><country name="Liechtenstein">
<rank></rank>
<a></a>
<b></b>
<year>2008</year>
<gdppc>141100</gdppc>
<neighbor name="Austria" direction="E">345</neighbor>
</country>
</code></pre>
<p>I want the output to be:-</p>
<pre><code><country name="Liechtenstein">
<year>2008</year>
<gdppc>141100</gdppc>
<neighbor name="Austria" direction="E">345</neighbor>
</country>
</code></pre>
<p>I basically have the flexibility of a constant list containing tag names which I can iterate over to find the text. Below is the list:
<code>a = ('rank','year','a','b','gdppc','neighbor')</code></p>
<p>Please help !</p>
| -1 | 2016-09-26T22:06:21Z | 39,713,463 | <p>You can use a union to look for all nodes in a single xpath, then presuming you want to remove nodes with no text you can just call <code>tree.remove(node)</code>:</p>
<pre><code>x = """<country name="Liechtenstein">
<rank></rank>
<a></a>
<b></b>
<year>2008</year>
<gdppc>141100</gdppc>
<neighbor name="Austria" direction="E">345</neighbor>
</country>"""
from lxml import etree
tree = etree.fromstring(x)
a = ('rank','year','a','b','gdppc','neighbor')
for node in tree.xpath("|".join(map("//{}".format, a))):
    if not node.text:
        tree.remove(node)
print(etree.tostring(tree).decode("utf-8"))
</code></pre>
<p>Which would give you:</p>
<pre><code><country name="Liechtenstein">
<year>2008</year>
<gdppc>141100</gdppc>
<neighbor name="Austria" direction="E">345</neighbor>
</country>
</code></pre>
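<p>One caveat: <code>tree.remove(node)</code> only works here because every matched node is a direct child of the root. If the empty elements could sit deeper in the tree, a variant that asks each node for its own parent handles any depth:</p>
<pre><code>for node in tree.xpath("|".join(map("//{}".format, a))):
    if not node.text:
        node.getparent().remove(node)  # remove from the actual parent, at any depth
</code></pre>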
| 0 | 2016-09-26T22:36:49Z | [
"python",
"xml",
"lxml",
"python-3.5"
]
|
To remove a particular element from the xml-string using lxml Python 3.5 | 39,713,146 | <p>I have the below xml as an input to a Python function. I want to find each element which has a null value ((firstChild.nodeValue)) and delete it entirely from the xml, then return the string. I am constrained to using only the lxml module. Can I get help with this? </p>
<pre><code><country name="Liechtenstein">
<rank></rank>
<a></a>
<b></b>
<year>2008</year>
<gdppc>141100</gdppc>
<neighbor name="Austria" direction="E">345</neighbor>
</country>
</code></pre>
<p>I want the output to be:-</p>
<pre><code><country name="Liechtenstein">
<year>2008</year>
<gdppc>141100</gdppc>
<neighbor name="Austria" direction="E">345</neighbor>
</country>
</code></pre>
<p>I basically have the flexibility of a constant list containing tag names which I can iterate over to find the text. Below is the list:
<code>a = ('rank','year','a','b','gdppc','neighbor')</code></p>
<p>Please help !</p>
| -1 | 2016-09-26T22:06:21Z | 39,715,599 | <p>The below code worked :)</p>
<pre><code>def remove_empty_elements(self, xml_input):
    tree = etree.fromstring(xml_input)
    # matches elements whose text is a single space (i.e. "empty" in this feed)
    for found in tree.xpath("//*[text()=' ']"):
        print("deleted " + str(found))
        found.getparent().remove(found)
    # return the cleaned string, as the question asked
    return etree.tostring(tree).decode("utf-8")
</code></pre>
| -1 | 2016-09-27T03:42:07Z | [
"python",
"xml",
"lxml",
"python-3.5"
]
|
linear interpolation -- make grid | 39,713,164 | <p>I want to interpolate between different models. To make things easier, my data is shown below:</p>
<p><a href="http://i.stack.imgur.com/8Obef.png" rel="nofollow"><img src="http://i.stack.imgur.com/8Obef.png" alt="enter image description here"></a></p>
<p>I have 10 different simulations (which I will call <code>z</code>). For each <code>z</code> I have an <code>array x</code> and an <code>array y</code> (where for a given <code>z</code>, <code>len(x)=len(y)</code>).
For example: </p>
<p>for <code>z=1</code>: <code>x.shape=(1200,)</code> and <code>y.shape=(1200,)</code></p>
<p>for <code>z=2</code>: <code>x.shape=(1250,)</code> and <code>y.shape=(1250,)</code></p>
<p>for <code>z=3</code>: <code>x.shape=(1236,)</code> and <code>y.shape=(1236,)</code></p>
<p>and so on ...</p>
<p>I want to interpolate so that for a given <code>z</code> and <code>x</code>, I get <code>y</code>. For example, for <code>z=2.5</code> and <code>x=10**9</code>, the code outputs <code>y</code>. I am assuming that:</p>
<p><code>y = a*x + b*z + c</code> where of course I don't know <code>a</code>, <code>b</code>, and <code>c</code>.</p>
<p>My question is that how do I store the data in a grid? I am confused since for a different <code>z</code> the size of <code>x</code> and <code>y</code> differs. How is it possible to build a grid?</p>
<h2>UPDATE</h2>
<p>I was able to partially solve my problem. What I did first is that I interpolated between <code>x</code> and <code>y</code> using <code>interp1d</code>. It worked perfectly fine. I then <em>created</em> a new grid of <code>x</code> and <code>y</code> values. Briefly the method is:</p>
<pre><code>f = interp1d(x, y, kind='linear')
new_x = np.linspace(10**7, 4*10**9, 10000)
new_y = f(new_x)
</code></pre>
<p>I then interpolated <code>x</code>, <code>y</code>, and <code>z</code>:</p>
<pre><code>ff = LinearNDInterpolator( (x, z), y)
</code></pre>
<p>To test whether the method work, here's a plot with <code>z=3</code>.</p>
<p><a href="http://i.stack.imgur.com/Sc6dt.png" rel="nofollow"><img src="http://i.stack.imgur.com/Sc6dt.png" alt="enter image description here"></a></p>
<p>The plot looks good till <code>x=10**8</code>. Indeed, the line deviates from the original model. Here's a plot when I further zoom in:</p>
<p><a href="http://i.stack.imgur.com/utLn9.png" rel="nofollow"><img src="http://i.stack.imgur.com/utLn9.png" alt="enter image description here"></a></p>
<p>The interpolation obviously is not good when <code>x > 10**8</code>. How can I fix it?</p>
| 1 | 2016-09-26T22:07:58Z | 39,722,817 | <p>It seems that in your problem the curves y(x) are well behaving, so you could probably just interpolate y(x) for the given values of z first and then interpolate between the obtained y-values afterwards.</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import Slider
import random
#####
# Generate some data
#####
generate = lambda x, z: 1./(x+1.)+(z*x/75.+z/25.)
def f(z):
    #create an array of values between zero and ten of random length
    x = np.linspace(0,10., num=random.randint(42,145))
    #generate corresponding y values
    y = generate(x, z)
    return np.array([x,y])
Z = [1, 2, 3, 3.6476, 4, 5.1]
A = [f(z) for z in Z]
#now A contains the dataset of [x,y] pairs for each z value
#####
# Interpolation
#####
def do_interpolation(x,z):
    #assume Z being sorted in ascending order
    #look for indices of z values closest to given z
    ig = np.searchsorted(Z, z)
    il = ig-1
    #interpolate y(x) for those z values
    yg = np.interp(x, A[ig][0,:], A[ig][1,:])
    yl = np.interp(x, A[il][0,:], A[il][1,:])
    #linearly interpolate between yg and yl
    return yl + (yg-yl)*float(z-Z[il])/(Z[ig] - Z[il])
# do_interpolation(x,z) will now provide the interpolated data
print do_interpolation( np.linspace(0, 10), 2.5)
#####
# Plotting, use Slider to change the value of z.
#####
fig=plt.figure()
fig.subplots_adjust(bottom=0.2)
ax=fig.add_subplot(111)
for i in range(len(Z)):
    ax.plot(A[i][0,:] , A[i][1,:], label="{z}".format(z=Z[i]) )
l, = ax.plot(np.linspace(0, 10) , do_interpolation( np.linspace(0, 10), 2.5), label="{z}".format(z="interpol"), linewidth=2., color="k" )
axn1 = plt.axes([0.25, 0.1, 0.65, 0.03], axisbg='#e4e4e4')
sn1 = Slider(axn1, 'z', Z[0], Z[-1], valinit=2.5)
def update(val):
    l.set_data(np.linspace(0, 10), do_interpolation( np.linspace(0, 10), val))
    plt.draw()
sn1.on_changed(update)
ax.legend()
plt.show()
</code></pre>
| 0 | 2016-09-27T10:55:07Z | [
"python",
"interpolation",
"linear-interpolation"
]
|
linear interpolation -- make grid | 39,713,164 | <p>I want to interpolate between different models. To make things easier, my data is shown below:</p>
<p><a href="http://i.stack.imgur.com/8Obef.png" rel="nofollow"><img src="http://i.stack.imgur.com/8Obef.png" alt="enter image description here"></a></p>
<p>I have 10 different simulations (which I will call <code>z</code>). For each <code>z</code> I have an <code>array x</code> and an <code>array y</code> (where for a given <code>z</code>, <code>len(x)=len(y)</code>).
For example: </p>
<p>for <code>z=1</code>: <code>x.shape=(1200,)</code> and <code>y.shape=(1200,)</code></p>
<p>for <code>z=2</code>: <code>x.shape=(1250,)</code> and <code>y.shape=(1250,)</code></p>
<p>for <code>z=3</code>: <code>x.shape=(1236,)</code> and <code>y.shape=(1236,)</code></p>
<p>and so on ...</p>
<p>I want to interpolate so that for a given <code>z</code> and <code>x</code>, I get <code>y</code>. For example, for <code>z=2.5</code> and <code>x=10**9</code>, the code outputs <code>y</code>. I am assuming that:</p>
<p><code>y = a*x + b*z + c</code> where of course I don't know <code>a</code>, <code>b</code>, and <code>c</code>.</p>
<p>My question is that how do I store the data in a grid? I am confused since for a different <code>z</code> the size of <code>x</code> and <code>y</code> differs. How is it possible to build a grid?</p>
<h2>UPDATE</h2>
<p>I was able to partially solve my problem. What I did first is that I interpolated between <code>x</code> and <code>y</code> using <code>interp1d</code>. It worked perfectly fine. I then <em>created</em> a new grid of <code>x</code> and <code>y</code> values. Briefly the method is:</p>
<pre><code>f = interp1d(x, y, kind='linear')
new_x = np.linspace(10**7, 4*10**9, 10000)
new_y = f(new_x)
</code></pre>
<p>I then interpolated <code>x</code>, <code>y</code>, and <code>z</code>:</p>
<pre><code>ff = LinearNDInterpolator( (x, z), y)
</code></pre>
<p>To test whether the method work, here's a plot with <code>z=3</code>.</p>
<p><a href="http://i.stack.imgur.com/Sc6dt.png" rel="nofollow"><img src="http://i.stack.imgur.com/Sc6dt.png" alt="enter image description here"></a></p>
<p>The plot looks good till <code>x=10**8</code>. Indeed, the line deviates from the original model. Here's a plot when I further zoom in:</p>
<p><a href="http://i.stack.imgur.com/utLn9.png" rel="nofollow"><img src="http://i.stack.imgur.com/utLn9.png" alt="enter image description here"></a></p>
<p>The interpolation obviously is not good when <code>x > 10**8</code>. How can I fix it?</p>
| 1 | 2016-09-26T22:07:58Z | 39,882,149 | <p>What you're doing seems a bit weird to me, at least you seem to use a single set of <code>y</code> values to do the interpolation. What I suggest is not performing two interpolations one after the other, but considering your <code>y(z,x)</code> function as the result of a pure 2d interpolation problem.</p>
<p>So as I noted in a comment, I suggest using <code>scipy.interpolate.LinearNDInterpolator</code>, the same object that <code>griddata</code> uses under the hood for bilinear interpolation. As we've also discussed in comments, you need to have a single interpolator that you can query multiple times afterwards, so we have to use the lower-level interpolator object, as that is callable.</p>
<p>Here's a full example of what I mean, complete with dummy data and plotting:</p>
<pre><code>import numpy as np
import scipy.interpolate as interp
import matplotlib.pyplot as plt
# create dummy data
zlist = range(4) # z values
# one pair of arrays for each z value in a list:
xlist = [np.linspace(-1,1,41),
         np.linspace(-1,1,61),
         np.linspace(-1,1,55),
         np.linspace(-1,1,51)]
funlist = [lambda x:0.1*np.ones_like(x),
           lambda x:0.2*np.cos(np.pi*x)+0.4,
           lambda x:np.exp(-2*x**2)+0.5,
           lambda x:-0.7*np.abs(x)+1.7]
ylist = [f(x) for f,x in zip(funlist,xlist)]
# create contiguous 1d arrays for interpolation
all_x = np.concatenate(xlist)
all_y = np.concatenate(ylist)
all_z = np.concatenate([np.ones_like(x)*z for x,z in zip(xlist,zlist)])
# create a single linear interpolator object
yfun = interp.LinearNDInterpolator((all_z,all_x),all_y)
# generate three interpolated sets: one with z=2 to reproduce existing data,
# two with z=1.5 and z=2.5 respectively to see what happens
xplot = np.linspace(-1,1,30)
z = 2
y_repro = yfun(z,xplot)
z = 1.5
y_interp1 = yfun(z,xplot)
z = 2.5
y_interp2 = yfun(z,xplot)
# plot the raw data (markers) and the two interpolators (lines)
fig,ax = plt.subplots()
for x,y,z,mark in zip(xlist,ylist,zlist,['s','o','v','<','^','*']):
    ax.plot(x,y,'--',marker=mark,label='z={}'.format(z))
ax.plot(xplot,y_repro,'-',label='z=2 interp')
ax.plot(xplot,y_interp1,'-',label='z=1.5 interp')
ax.plot(xplot,y_interp2,'-',label='z=2.5 interp')
ax.set_xlabel('x')
ax.set_ylabel('y')
# reduce plot size and put legend outside for prettiness, see also http://stackoverflow.com/a/4701285/5067311
box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
</code></pre>
<p><a href="http://i.stack.imgur.com/90BJO.png" rel="nofollow"><img src="http://i.stack.imgur.com/90BJO.png" alt="result"></a></p>
<p>You didn't specify how your series of <code>(x,y)</code> array pairs are stored; I used a list of numpy <code>ndarray</code>s. As you see, I flattened the list of 1d arrays into a single set of 1d arrays: <code>all_x</code>, <code>all_y</code>, <code>all_z</code>. These can be used as scattered <code>y(z,x)</code> data from which you can construct the interpolator object. As you can see in the result, for <code>z=2</code> it reproduces the input points, and for non-integer <code>z</code> it interpolates between the relevant <code>y(x)</code> curves.</p>
<p>This method should be applicable to your dataset. One note, however: you have huge numbers on a logarithmic scale on your <code>x</code> axis. This alone could lead to numeric instabilities. I suggest that you also try performing the interpolation using <code>log(x)</code>, it might behave better (this is just a vague guess).</p>
| 0 | 2016-10-05T19:27:39Z | [
"python",
"interpolation",
"linear-interpolation"
]
|
Python 3 - How to randomize what text displays | 39,713,257 | <p>I am a beginner at python and I am trying to figure out how I can randomly display text. For example, 60% chance of "Hello", 40% chance of saying "Goodbye", etc. Right now, for fun, I am trying to create sort of a bottle flip game. If you don't know what it is, its basically when you flip a half empty water bottle and try to land it. This is what I have: (This is more than likely completely wrong.)</p>
<pre><code>import random
number = random.randrange(10)
if number == "1":
print ("You have landed the bottle!")
elif number == "2":
print ("You have landed the bottle!")
elif number == "3":
print ("You have landed the bottle!")
elif number == "4":
print ("You have landed the bottle!")
elif number == "5":
print ("The bottle did not land, better luck next time.")
elif number == "6":
print ("The bottle did not land, better luck next time.")
elif number == "7":
print ("The bottle did not land, better luck next time.")
elif number == "8":
print ("The bottle did not land, better luck next time.")
elif number == "9":
print ("The bottle did not land, better luck next time.")
elif number == "10":
print ("The bottle landed on the cap!")
</code></pre>
| -2 | 2016-09-26T22:15:23Z | 39,713,302 | <p>You have the right idea, basically, but you can greatly simplify your code.</p>
<pre><code>if number < 5:
    print ("You have landed the bottle!")
elif number < 10:
    print ("The bottle did not land, better luck next time.")
else:
    print ("The bottle landed on the cap!")
</code></pre>
<p>You can change the values in your call to <code>randrange</code> and in the if statements above to get just about any weighting you'd like. (One note: <code>random.randrange(10)</code> returns 0 through 9, so for the final <code>else</code> branch to ever fire you would widen the range, e.g. <code>random.randrange(11)</code>.)</p>
<p>Note that in your original question you were comparing numbers to strings. I changed that here. Comparing numbers to strings (i.e. <code>3 == "3"</code>) will always be <code>False</code>.</p>
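<p>And if you later want arbitrary percentages like the 60/40 "Hello"/"Goodbye" example from the question, a small weighted picker generalizes the idea (a sketch; the messages and weights are just placeholders):</p>
<pre><code>import random

def pick(outcomes):
    # outcomes: list of (message, weight) pairs
    r = random.uniform(0, sum(w for _, w in outcomes))
    upto = 0
    for message, weight in outcomes:
        upto += weight
        if upto >= r:
            return message

print(pick([("Hello", 60), ("Goodbye", 40)]))  # ~60/40 split
</code></pre>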
| 0 | 2016-09-26T22:19:13Z | [
"python",
"python-3.x",
"random"
]
|
Python 3 - How to randomize what text displays | 39,713,257 | <p>I am a beginner at python and I am trying to figure out how I can randomly display text. For example, 60% chance of "Hello", 40% chance of saying "Goodbye", etc. Right now, for fun, I am trying to create sort of a bottle flip game. If you don't know what it is, its basically when you flip a half empty water bottle and try to land it. This is what I have: (This is more than likely completely wrong.)</p>
<pre><code>import random
number = random.randrange(10)
if number == "1":
print ("You have landed the bottle!")
elif number == "2":
print ("You have landed the bottle!")
elif number == "3":
print ("You have landed the bottle!")
elif number == "4":
print ("You have landed the bottle!")
elif number == "5":
print ("The bottle did not land, better luck next time.")
elif number == "6":
print ("The bottle did not land, better luck next time.")
elif number == "7":
print ("The bottle did not land, better luck next time.")
elif number == "8":
print ("The bottle did not land, better luck next time.")
elif number == "9":
print ("The bottle did not land, better luck next time.")
elif number == "10":
print ("The bottle landed on the cap!")
</code></pre>
| -2 | 2016-09-26T22:15:23Z | 39,713,375 | <p>You could also use a list of the possible solutions.</p>
<pre><code>import random
# 3 in 10 land, 6 in 10 miss, 1 in 10 on the cap -- adjust the counts to taste
possibilities = ['landed'] * 3 + ['missed'] * 6 + ['landed on cap']
print(random.choice(possibilities))
</code></pre>
| 0 | 2016-09-26T22:27:46Z | [
"python",
"python-3.x",
"random"
]
|
Possible to have same key pair more than once in a dictonary | 39,713,338 | <p>I'm creating a blackjack game with a deck of cards like so:<br>
<code>cards = {'ace':[1,11],'2':2,'3':3}...</code> and so on. </p>
<p>I want to have <code>3</code> decks of cards, so is it possible to have more than one ace, etc.? If so, how? I still want each card to have the keypair of its value, but I don't care if it's in a dictionary. </p>
| 0 | 2016-09-26T22:23:25Z | 39,713,537 | <p>You could use two separate data structures: One to define <em>unchanging</em> information such as card names and values (as you have already done), and another to keep track of <em>changing</em> information, such as how many of each card remains in the deck.</p>
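<p>A minimal sketch of that split (the card names and counts below are assumptions, not a full deck):</p>
<pre><code>from collections import Counter

# unchanging: what each card is worth
card_values = {'ace': [1, 11], '2': 2, '3': 3}  # ... and so on

# changing: how many copies remain across 3 decks (4 suits each)
remaining = Counter({name: 3 * 4 for name in card_values})

def draw(card):
    if remaining[card] > 0:
        remaining[card] -= 1
        return card_values[card]
    raise ValueError("no copies of %s left" % card)
</code></pre>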
| 1 | 2016-09-26T22:46:00Z | [
"python",
"python-3.x",
"dictionary"
]
|
Possible to have same key pair more than once in a dictonary | 39,713,338 | <p>I'm creating a blackjack game with a deck of cards like so:<br>
<code>cards = {'ace':[1,11],'2':2,'3':3}...</code> and so on. </p>
<p>I want to have <code>3</code> decks of cards, so is it possible to have more than one ace, etc.? If so, how? I still want each card to have the keypair of its value, but I don't care if it's in a dictionary. </p>
| 0 | 2016-09-26T22:23:25Z | 39,713,565 | <p>Using a dictionary type in Python will not allow you to have duplicate keys (<a href="https://docs.python.org/3/tutorial/datastructures.html#dictionaries" rel="nofollow">see documentation</a>). So it would not be possible to have more than one 'ace' key in any given dictionary.</p>
<p>You didn't give much context as to how you would use the dictionary in your game. Is it used to store all of the cards in the game and to determine the order in which cards are drawn. Or is it simply used to lookup the value of a card that is stored somewhere else in your programme?</p>
<p>If the former is the case then a dictionary is probably not the right data structure to be using as order is not guaranteed (again, <a href="https://docs.python.org/3/tutorial/datastructures.html#dictionaries" rel="nofollow">see documentation</a>). Consider using a list, where the order of the elements is maintained and duplicate entries are allowed. You could consider creating a Card class which would have name, value, and deck properties, and store instances of this class in the aforementioned list.</p>
<p>If the latter is the case then it doesn't matter how many decks you have, the name-to-value mappings will always remain the same. In this case you don't need more than one copy of the dictionary.</p>
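<p>A rough sketch of that idea (the <code>Card</code> class and attribute names are illustrative, not a fixed API):</p>
<pre><code>import random

class Card(object):
    def __init__(self, name, value, deck):
        self.name = name    # e.g. 'ace'
        self.value = value  # looked up once from the shared mapping
        self.deck = deck    # which of the 3 decks it came from

values = {'ace': [1, 11], '2': 2, '3': 3}  # ... and so on

shoe = [Card(name, value, deck)
        for deck in range(3)
        for name, value in values.items()]
random.shuffle(shoe)  # duplicates allowed, draw order preserved
</code></pre>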
| 0 | 2016-09-26T22:49:48Z | [
"python",
"python-3.x",
"dictionary"
]
|
Difficulty with document batch import, pymongo | 39,713,374 | <p>I'm having a much more difficult time than I thought I would importing multiple documents from Mongo into RAM in batch. I am writing an application to communicate with a MongoDB via <code>pymongo</code> that currently has 2GBs, but in the near future could grow to over 1TB. Because of this, batch reading a limited number of records into RAM at a time is important for scalability. </p>
<p>Based on <a href="http://stackoverflow.com/questions/9786736/how-to-read-through-collection-in-chunks-by-1000">this post</a> and <a href="https://api.mongodb.com/python/2.7.2/api/pymongo/cursor.html" rel="nofollow">this documentation</a> I thought this would be about as easy as: </p>
<pre><code>HOST = MongoClient(MONGO_CONN)
DB_CONN = HOST.database_name
collection = DB_CONN.collection_name
cursor = collection.find()
cursor.batch_size(1000)
next_1K_records_in_RAM = cursor.next()
</code></pre>
<p>This isn't working for me, however. Even though I have a Mongo collection populated with >200K BSON objects, this reads them in one at a time as single dictionaries, e.g. <code>{_id : ID1, ...}</code> instead of what I'm looking for, which is an array of dictionaries representing multiple documents in my collections, e.g. <code>[{_id : ID1, ...}, {_id : ID2, ...}, ..., {_id: ID1000, ...}]</code>. </p>
<p>I wouldn't expect this to matter, but I'm on python 3.5 instead of 2.7. </p>
<p>As this example references a secure, remote data source this isn't a reproducible example. Apologies for that. If you have a suggestion for how the question can be improved please let me know. </p>
| 0 | 2016-09-26T22:27:43Z | 39,713,962 | <ul>
<li>Python version is irrelevant here, nothing to do with your output.</li>
<li>Batch_size defines only how many documents mongoDB returns in a single
trip to the DB (under some limitations: <a href="http://api.mongodb.com/python/current/api/pymongo/cursor.html" rel="nofollow">see here</a>)</li>
<li>collection.find always returns an iterator/cursor or None (the latter if no documents are found). Batching does its job transparently.</li>
<li><p>To examine returned documents you have to iterate through the cursor, i.e.</p>
<pre><code>for document in cursor:
    print(document)
</code></pre>
<p>or if you want a list of the documents: <code>list(cursor)</code></p>
<ul>
<li>Remember to do a <code>cursor.rewind()</code> if you need to revisit the documents</li>
</ul></li>
</ul>
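<p>So if the goal is to hold only ~1000 documents in RAM at a time, a sketch along these lines (with a hypothetical <code>process()</code> handler) keeps the memory footprint bounded while the cursor streams:</p>
<pre><code>cursor = collection.find().batch_size(1000)  # network round-trips of ~1000 docs

chunk = []
for document in cursor:
    chunk.append(document)
    if len(chunk) == 1000:
        process(chunk)  # hypothetical: whatever you do with 1000 docs
        chunk = []
if chunk:
    process(chunk)      # the final partial chunk
</code></pre>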
| 1 | 2016-09-26T23:41:27Z | [
"python",
"mongodb",
"python-3.x",
"pymongo"
]
|
Convert range to timestamp in Pandas | 39,713,381 | <p>I have a column in a pandas data frame that goes from 0 to 172800000 in steps of 10. I would like to convert into datetime stamp with a specified date, beginning at midnight of that day. </p>
<p>So, suppose, </p>
<pre><code>time = np.arange(0,172800000, 10)
</code></pre>
<p>I would like to convert this in the following format: </p>
<pre><code>YYYY-MM-DD: HH:MM:SS.XXX
</code></pre>
<p>The starting date should be <strong>2016-09-20</strong>. </p>
<p>Here's what I have done: </p>
<pre><code># Create a dummy frame as an example:
test = pd.DataFrame()
time = np.arange(0, 172800000, 10)
test['TIME'] = time
data = np.random.randint(0, 1000, size=len(time))
test['DATA'] = data
# Convert time to datetime:
test['TIME'] = pd.to_datetime(test['TIME'], unit='ms')
</code></pre>
<p>If I check the head of the data frame, I get the following: </p>
<pre><code> TIME DATA
0 1970-01-01 00:00:00.000 681
1 1970-01-01 00:00:00.010 986
2 1970-01-01 00:00:00.020 957
3 1970-01-01 00:00:00.030 422
4 1970-01-01 00:00:00.040 319
</code></pre>
<p>How do I get the year, month, and day to start on 2016, 09, 20 and not 1970?</p>
| 2 | 2016-09-26T22:28:56Z | 39,713,580 | <p>Try:</p>
<pre><code>test['TIME'] = pd.to_datetime('2016-09-20') + pd.to_timedelta(time, 'ms')
</code></pre>
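<p>A quick check on the same test frame (the values shown are what the first rows should look like):</p>
<pre><code>print(test['TIME'].head(3))
# 0   2016-09-20 00:00:00.000
# 1   2016-09-20 00:00:00.010
# 2   2016-09-20 00:00:00.020
</code></pre>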
| 3 | 2016-09-26T22:52:00Z | [
"python",
"pandas"
]
|
Convert range to timestamp in Pandas | 39,713,381 | <p>I have a column in a pandas data frame that goes from 0 to 172800000 in steps of 10. I would like to convert into datetime stamp with a specified date, beginning at midnight of that day. </p>
<p>So, suppose, </p>
<pre><code>time = np.arange(0,172800000, 10)
</code></pre>
<p>I would like to convert this in the following format: </p>
<pre><code>YYYY-MM-DD: HH:MM:SS.XXX
</code></pre>
<p>The starting date should be <strong>2016-09-20</strong>. </p>
<p>Here's what I have done: </p>
<pre><code># Create a dummy frame as an example:
test = pd.DataFrame()
time = np.arange(0, 172800000, 10)
test['TIME'] = time
data = np.random.randint(0, 1000, size=len(time))
test['DATA'] = data
# Convert time to datetime:
test['TIME'] = pd.to_datetime(test['TIME'], unit='ms')
</code></pre>
<p>If I check the head of the data frame, I get the following: </p>
<pre><code> TIME DATA
0 1970-01-01 00:00:00.000 681
1 1970-01-01 00:00:00.010 986
2 1970-01-01 00:00:00.020 957
3 1970-01-01 00:00:00.030 422
4 1970-01-01 00:00:00.040 319
</code></pre>
<p>How do I get the year, month, and day to start on 2016, 09, 20 and not 1970?</p>
| 2 | 2016-09-26T22:28:56Z | 39,713,581 | <p>This is the raison d'être for <code>pandas.date_range()</code>:</p>
<pre><code>import pandas as pd
test = pd.DataFrame({'TIME': pd.date_range(start='2016-09-20',
                                           freq='10ms', periods=20)})
print(test)
</code></pre>
<p>Output:</p>
<pre class="lang-none prettyprint-override"><code> TIME
0 2016-09-20 00:00:00.000
1 2016-09-20 00:00:00.010
2 2016-09-20 00:00:00.020
3 2016-09-20 00:00:00.030
4 2016-09-20 00:00:00.040
5 2016-09-20 00:00:00.050
6 2016-09-20 00:00:00.060
7 2016-09-20 00:00:00.070
8 2016-09-20 00:00:00.080
9 2016-09-20 00:00:00.090
10 2016-09-20 00:00:00.100
11 2016-09-20 00:00:00.110
12 2016-09-20 00:00:00.120
13 2016-09-20 00:00:00.130
14 2016-09-20 00:00:00.140
15 2016-09-20 00:00:00.150
16 2016-09-20 00:00:00.160
17 2016-09-20 00:00:00.170
18 2016-09-20 00:00:00.180
19 2016-09-20 00:00:00.190
</code></pre>
<p>(For the full range, substitute <code>periods=17280000</code>, the length of your <code>time</code> array (172800000/10 steps), for <code>periods=20</code>.)</p>
| 3 | 2016-09-26T22:52:08Z | [
"python",
"pandas"
]
|
Extracting quotations/citations with nltk (not regex) | 39,713,487 | <p>The input list of sentences:</p>
<pre><code>sentences = [
"""Well, I've tried to say "How Doth the Little Busy Bee," but it all came different!""",
"""Alice replied in a very melancholy voice. She continued, 'I'll try again.'"""
]
</code></pre>
<p>The desired output:</p>
<pre><code>How Doth the Little Busy Bee,
I'll try again.
</code></pre>
<p>Is there a way to extract the citations (can appear in both single and double quotes) with <code>nltk</code> with built-in or third-party tokenizers?</p>
<hr>
<p>I've tried using the <a href="http://www.nltk.org/_modules/nltk/tokenize/sexpr.html" rel="nofollow"><code>SExprTokenizer</code> tokenizer</a> providing the single and double quotes as <code>parens</code> values but the result was far from the desired, e.g.:</p>
<pre><code>In [1]: from nltk import SExprTokenizer
...:
...:
...: sentences = [
...: """Well, I've tried to say "How Doth the Little Busy Bee," but it all came different!""",
...: """Alice replied in a very melancholy voice. She continued, 'I'll try again.'"""
...: ]
...:
...: tokenizer = SExprTokenizer(parens='""', strict=False)
...: for sentence in sentences:
...: for item in tokenizer.tokenize(sentence):
...: print(item)
...: print("----")
...:
Well,
I've
tried
to
say
"
How
Doth
the
Little
Busy
Bee,
"
but it all came different!
----
Alice replied in a very melancholy voice. She continued, 'I'll try again.'
</code></pre>
<hr>
<p>There were similar threads like <a href="http://stackoverflow.com/questions/37936461/how-to-extract-quotations-from-text-using-nltk">this</a> and <a href="http://stackoverflow.com/questions/171480/regex-grabbing-values-between-quotation-marks">this</a>, but all of them suggest a regex-based approach, but, I'm curious if this can be solved with <code>nltk</code> only - sounds like a common task in Natural Language Processing.</p>
| 1 | 2016-09-26T22:39:34Z | 39,730,272 | <p>Well, under the hood, <code>SExprTokenizer</code> is a regex-based approach as well, as can be seen from the source code you linked to.<br>
What also can be seen from the source is that the authors apparently didn't consider that the opening and closing "paren" are represented with the same character.
The depth of the nesting is increased and decreased in the same iteration, so the quote seen by the tokenizer is the empty string.</p>
<p>Identifying quotes is not that common in NLP, I think.
People use quotes in many different ways (especially if you deal with different languages...), so it's quite hard to get it right in a robust approach.
For many NLP applications quoting is just ignored, I'd say...</p>
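<p>For what it's worth, a quick regex sketch shows both sides of this: curly quotes are unambiguous delimiters, while the straight-quote case is exactly where things break down, since an apostrophe is indistinguishable from a closing single quote:</p>
<pre><code>import re

s = u'Well, I\u2019ve tried to say \u201cHow Doth the Little Busy Bee,\u201d but it all came different!'
print(re.findall(u'\u201c(.*?)\u201d', s))
# [u'How Doth the Little Busy Bee,']  -- easy

t = u"She continued, 'I'll try again.'"
print(re.findall(u"'(.*?)'", t))
# [u'I', u'll try again.']  -- the apostrophe in "I'll" is taken as a closing quote
</code></pre>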
| 1 | 2016-09-27T16:51:53Z | [
"python",
"nltk",
"tokenize"
]
|
Read/Write TCP options field | 39,713,540 | <p>I want to read and write custom data to TCP options field using Scapy. I know how to use TCP options field in Scapy in "normal" way as dictionary, but is it possible to write to it byte per byte?</p>
| 0 | 2016-09-26T22:46:17Z | 40,023,525 | <p>You cannot directly write the TCP options field byte per byte; however, you can either:</p>
<ul>
<li>write your entire TCP segment byte per byte: <code>TCP("\x01...\x0n")</code></li>
<li>add an option to Scapy's code manually in scapy/layers/inet.py <code>TCPOptions</code> structure</li>
</ul>
<p>These are workarounds; a definitive solution would be to implement a byte-per-byte TCP options field and, of course, contribute it on Scapy's GitHub.</p>
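<p>A sketch of the first route, building the whole segment from raw bytes so the option bytes are fully under your control (the field values here are arbitrary; <code>dataofs=6</code> means a 24-byte header, i.e. 4 bytes of options after the fixed 20):</p>
<pre><code>from scapy.all import TCP

raw = ("\x00\x50\x00\x50"   # sport=80, dport=80
       "\x00\x00\x00\x00"   # seq
       "\x00\x00\x00\x00"   # ack
       "\x60\x02\x20\x00"   # dataofs=6, flags=SYN, window
       "\x00\x00\x00\x00"   # checksum (left 0), urgptr
       "\x01\x01\x01\x01")  # 4 option bytes: NOP, NOP, NOP, NOP

seg = TCP(raw)   # scapy dissects the raw bytes, options included
seg.show()
</code></pre>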
| 1 | 2016-10-13T14:15:39Z | [
"python",
"tcp",
"scapy"
]
|
Issues with iteration in a recursive function | 39,713,568 | <p>The code below is used to find the shortest path based on the constraints (of maxTotalDist and maxDistOutdoors) given. I have got it <em>mostly</em> working, but the problem I am having is that even though I am able to find a path (on some given problems). I'm not able to <em>confirm</em> that I've found it. </p>
<p>For example, if a path is found, the "visited" list, in the code, should be the full path that I obtained (the path is then tested on whether or not it is good enough to be the "shortest path"). But, any print statements I try to implement to see which nodes I'm hitting aren't being printed. (i.e. Trying to find the path from Node 'A' to Node 'C', Node A has children ['b', 'c']. If I make the sample <em>if</em> statement (if visited[0] == 'c'), the print statement will not go off).</p>
<p>Anyway, I know this is probably really badly written, but any help would contribute. Thanks.</p>
<pre><code>def bruteForceSearch1(digraph, start, end, maxTotalDist, maxDistOutdoors, visited = []):
    if not (digraph.hasNode(start) and digraph.hasNode(end)):
        raise ValueError('Start or End not in graph')
    path = [str(start)]
    if start == end:
        return path
    shortest = None
    #This is the iterator talked about in the question
    for node in digraph.childrenOf(start):
        if (str(node) not in visited):
            visited = visited + [str(node)]
            #Sample print statement
            if visited[0] == 'nodeName':
                print "Node is part of search"
            newPath = bruteForceSearch1(digraph, node, end, maxTotalDist, maxDistOutdoors, visited)
            if newPath == None:
                continue
            if shortest != None:
                totalShort, outdoorShort = digraph.getDistances(shortest)
            total, outdoor = digraph.getDistances(path + newPath)
            if (total <= maxTotalDist) and (outdoor <= maxDistOutdoors):
                if (shortest == None):
                    shortest = newPath
                else:
                    if (len(newPath) < len(shortest)):
                        shortest = newPath
                    elif (len(newPath) == len(shortest)) and (total <= totalShort) and (outdoor <= outdoorShort):
                        shortest = newPath
    if shortest != None:
        path = path + shortest
    else:
        path = None
    return path
</code></pre>
| 0 | 2016-09-26T22:50:05Z | 39,800,307 | <p>All of the recursive calls will see the same value as <code>visited[0]</code>. It will always be the first child of the original <code>start</code> value. Since you're appending all other values to the end of the list, nothing will ever replace that child as the first value in <code>visited</code>.</p>
<p>I think you probably want to check your specific name against <code>node</code> instead of against <code>visited[0]</code>.</p>
<p>There may be some other issues with your code. For instance, you'll never consider a path <code>a->c->b</code> if the path <code>a->b</code> exists (since <code>b</code> will be in <code>visited</code> after <code>a->b</code> is checked). In some kinds of graphs (where the triangle inequality doesn't hold), the first path might be shorter than the second. If it's not possible in your graph, you could be fine, but it's not something you should assume for all graphs.</p>
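<p>If that matters for your graph, a sketch of per-path tracking (assuming the same <code>hasNode</code>/<code>childrenOf</code> interface from the question; the distance constraints are omitted for brevity): carry the current path down the recursion instead of one shared <code>visited</code> list, so a node blocked on one branch can still be explored on another:</p>
<pre><code>def search(digraph, start, end, path=()):
    path = path + (str(start),)       # a fresh tuple per call -- nothing shared
    if start == end:
        return list(path)
    best = None
    for node in digraph.childrenOf(start):
        if str(node) not in path:     # only forbid cycles within THIS path
            candidate = search(digraph, node, end, path)
            if candidate and (best is None or len(candidate) < len(best)):
                best = candidate
    return best
</code></pre>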
| 0 | 2016-09-30T21:25:10Z | [
"python",
"recursion",
"iteration",
"graph-theory"
]
|
Neo4j Transferring All Data from Postgres | 39,713,636 | <p>I'm attempting to transfer all data over to Neo4j, and am wondering if it would be alright to name all properties on nodes the same as in Postgres exactly. E.g id will be id, name will be name, and so on. Are there any conflicts with doing something like this?</p>
| 0 | 2016-09-26T22:59:21Z | 39,718,452 | <p>No, especially if you use one of the clients to do the migration, as they will automatically escape anything that needs to be escaped; there's nothing problematic I've come across.</p>
| 0 | 2016-09-27T07:19:02Z | [
"python",
"neo4j",
"neo4jclient",
"neo4jrestclient"
]
|
Assigning indicators based on observation quantile | 39,713,678 | <p>I am working with a pandas DataFrame. I would like to assign a column indicator variable to 1 when a particular condition is met. I compute quantiles for particular groups. If the value is outside the quantile, I want to assign the column indicator variable to 1. For example, the following code prints the quantiles for each group:</p>
<pre><code>df[df['LENGTH'] > 1].groupby(['CLIMATE', 'TEMP'])['LENGTH'].quantile(.95)
</code></pre>
<p>Now for all observations in my dataframe which are larger than the grouped value I would like to set </p>
<pre><code>df['INDICATOR'] = 1
</code></pre>
<p>I tried using the following if statement:</p>
<pre><code>if df.groupby(['CLIMATE','BIN'])['LENGTH'] > df[df['LENGTH'] > 1].groupby(['CLIMATE','BIN'])['LENGTH'].quantile(.95):
    df['INDICATOR'] = 1
</code></pre>
<p>This gives me the error: "ValueError: operands could not be broadcast together with shapes (269,) (269,2)". Any help would be appreciated! </p>
| 2 | 2016-09-26T23:03:59Z | 39,713,787 | <p>you want to use <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#transformation" rel="nofollow"><code>transform</code></a> after your <code>groupby</code> to get an equivalently sized array. <code>gt</code> is greater than. <code>mul</code> is multiply. I multiply by <code>1</code> to get the boolean results from <code>gt</code> to <code>0</code> or <code>1</code>. </p>
<p>You can see other examples here
<a class='doc-link' href="http://stackoverflow.com/documentation/pandas/1822/grouping-data/14011/using-transform-to-get-group-level-statistics-while-preserving-the-original-data#t=201609262324317180894">using transform to get group-level statistics while preserving the original dataframe</a></p>
<p>Consider the dataframe <code>df</code></p>
<pre><code>df = pd.DataFrame(dict(labels=np.random.choice(list('abcde'), 100),
                       A=np.random.randn(100)))
</code></pre>
<p>I'd get the indicator like this</p>
<pre><code>df.A.gt(df.groupby('labels').A.transform(pd.Series.quantile, q=.95)).mul(1)
</code></pre>
<hr>
<p>In your case, I'd do</p>
<pre><code>df['INDICATOR'] = df['LENGTH'].gt(df.groupby(['CLIMATE','BIN'])['LENGTH'] \
                                  .transform(pd.Series.quantile, q=.95)).mul(1)
</code></pre>
| 3 | 2016-09-26T23:17:22Z | [
"python",
"pandas",
"numpy",
"dataframe"
]
|
Using xargs for parallel Python scripts | 39,713,719 | <p>I currently have a bash script, script.sh, with two nested loops. The first enumerates possible values for a, and the second enumerates possible values for b, like</p>
<pre><code>#!/bin/sh
for a in {1..10}
do
    for b in {1..10}
    do
        nohup python script.py $a $b &
    done
done
</code></pre>
<p>So this spawns off 100 Python processes running script.py, one for each (a,b) pair. However, my machine only has 5 cores, so I want to cap the number of processes at 5 to avoid thrashing/wasteful switching. The goal is that I am always running 5 processes until all 100 processes are done.</p>
<p>xargs seems to be one way to do this, but I don't know how to pass these arguments to xargs. I've checked other similar questions but don't understand the surrounding bash jargon well enough to know what's happening. For example, I tried</p>
<pre><code>seq 1 | xargs -i --max-procs=5 bash script.sh
</code></pre>
<p>but this doesn't seem to do anything - script.sh runs as before and still spawns off 100 processes.</p>
<p>I assume I'm misunderstanding how xargs works.</p>
<p>Thanks!</p>
| 1 | 2016-09-26T23:08:42Z | 39,713,884 | <p>This would actually look more like:</p>
<pre><code>#!/bin/bash
for a in {1..10}; do
    for b in {1..10}; do
        printf '%s\0' "$a" "$b"
    done
done | xargs -0 -x -n 2 -P 5 python script.py
</code></pre>
<p>Note that there's no <code>nohup</code>, nor any <code>&</code> -- to track the number of concurrent invocations, <code>xargs</code> needs to be directly executing the Python script, and that process can't exit until it's complete.</p>
<p>The non-standard (but widely available) <code>-0</code> extension requires input to be in NUL-delimited form (as created with <code>printf '%s\0'</code>); this ensures correct behavior with arguments having spaces, quotes, backslashes, etc.</p>
<p>The likewise non-standard <code>-P 5</code> sets the maximum number of processes (in a way slightly more portable than <code>--max-procs=5</code>, which is supported on GNU but not modern BSD xargs).</p>
<p>The <code>-n 2</code> indicates that each instance of the Python script receives only two arguments, thus starting one per pair of inputs.</p>
<p>The <code>-x</code> (used in conjunction with <code>-n 2</code>) indicates that if a single Python instance can't be given two arguments (for instance, if the arguments are so long that both can't fit on a single command line), this should be treated as a failure, rather than invoking a Python instance with only one argument.</p>
| 4 | 2016-09-26T23:31:19Z | [
"python",
"bash",
"parallel-processing",
"xargs"
]
|
Using xargs for parallel Python scripts | 39,713,719 | <p>I currently have a bash script, script.sh, with two nested loops. The first enumerates possible values for a, and the second enumerates possible values for b, like</p>
<pre><code>#!/bin/sh
for a in {1..10}
do
    for b in {1..10}
    do
        nohup python script.py $a $b &
    done
done
</code></pre>
<p>So this spawns off 100 Python processes running script.py, one for each (a,b) pair. However, my machine only has 5 cores, so I want to cap the number of processes at 5 to avoid thrashing/wasteful switching. The goal is that I am always running 5 processes until all 100 processes are done.</p>
<p>xargs seems to be one way to do this, but I don't know how to pass these arguments to xargs. I've checked other similar questions but don't understand the surrounding bash jargon well enough to know what's happening. For example, I tried</p>
<pre><code>seq 1 | xargs -i --max-procs=5 bash script.sh
</code></pre>
<p>but this doesn't seem to do anything - script.sh runs as before and still spawns off 100 processes.</p>
<p>I assume I'm misunderstanding how xargs works.</p>
<p>Thanks!</p>
| 1 | 2016-09-26T23:08:42Z | 39,713,996 | <p>If you use bash, then the following should work:</p>
<pre><code>#!/bin/bash
for a in {1..10}
do
    for b in {1..10}
    do
        # an if/else here would skip the current $a,$b pair whenever it had
        # to wait, so instead loop until a slot frees up, then always launch
        while [ "$(jobs -r | wc -l)" -ge 5 ]   # already 5 background jobs?
        do
            wait -n   # wait for any background job to terminate
        done
        nohup python script.py $a $b &
    done
done
wait   # let the final batch finish
</code></pre>
| 1 | 2016-09-26T23:45:07Z | [
"python",
"bash",
"parallel-processing",
"xargs"
]
|
Using xargs for parallel Python scripts | 39,713,719 | <p>I currently have a bash script, script.sh, with two nested loops. The first enumerates possible values for a, and the second enumerates possible values for b, like</p>
<pre><code>#!/bin/sh
for a in {1..10}
do
    for b in {1..10}
    do
        nohup python script.py $a $b &
    done
done
</code></pre>
<p>So this spawns off 100 Python processes running script.py, one for each (a,b) pair. However, my machine only has 5 cores, so I want to cap the number of processes at 5 to avoid thrashing/wasteful switching. The goal is that I am always running 5 processes until all 100 processes are done.</p>
<p>xargs seems to be one way to do this, but I don't know how to pass these arguments to xargs. I've checked other similar questions but don't understand the surrounding bash jargon well enough to know what's happening. For example, I tried</p>
<pre><code>seq 1 | xargs -i --max-procs=5 bash script.sh
</code></pre>
<p>but this doesn't seem to do anything - script.sh runs as before and still spawns off 100 processes.</p>
<p>I assume I'm misunderstanding how xargs works.</p>
<p>Thanks!</p>
| 1 | 2016-09-26T23:08:42Z | 39,717,822 | <p>GNU Parallel is made for exactly these kinds of jobs:</p>
<pre><code>parallel python script.py ::: {1..10} ::: {1..10}
</code></pre>
<p>If you need $a and $b placed differently you can use {1} and {2} to refer to the two input sources:</p>
<pre><code>parallel python script.py --option-a {1} --option-b {2} ::: {1..10} ::: {1..10}
</code></pre>
<p>GNU Parallel is a general parallelizer and makes it easy to run jobs in parallel on the same machine or on multiple machines you have ssh access to. It can often replace a <code>for</code> loop.</p>
<p>If you have 32 different jobs you want to run on 4 CPUs, a straightforward way to parallelize is to run 8 jobs on each CPU:</p>
<p><img src="http://i.stack.imgur.com/uH0Dh.png" alt="Simple scheduling"></p>
<p>GNU Parallel instead spawns a new process when one finishes - keeping the CPUs active and thus saving time:</p>
<p><img src="http://i.stack.imgur.com/17FsG.png" alt="GNU Parallel scheduling"></p>
<p><strong>Installation</strong></p>
<p>If GNU Parallel is not packaged for your distribution, you can do a personal installation, which does not require root access. It can be done in 10 seconds by doing this:</p>
<pre><code>(wget -O - pi.dk/3 || curl pi.dk/3/ || fetch -o - http://pi.dk/3) | bash
</code></pre>
<p>For other installation options see <a href="http://git.savannah.gnu.org/cgit/parallel.git/tree/README" rel="nofollow">http://git.savannah.gnu.org/cgit/parallel.git/tree/README</a></p>
<p><strong>Learn more</strong></p>
<p>See more examples: <a href="http://www.gnu.org/software/parallel/man.html" rel="nofollow">http://www.gnu.org/software/parallel/man.html</a></p>
<p>Watch the intro videos: <a href="https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1" rel="nofollow">https://www.youtube.com/playlist?list=PL284C9FF2488BC6D1</a></p>
<p>Walk through the tutorial: <a href="http://www.gnu.org/software/parallel/parallel_tutorial.html" rel="nofollow">http://www.gnu.org/software/parallel/parallel_tutorial.html</a></p>
<p>Sign up for the email list to get support: <a href="https://lists.gnu.org/mailman/listinfo/parallel" rel="nofollow">https://lists.gnu.org/mailman/listinfo/parallel</a></p>
| 1 | 2016-09-27T06:47:07Z | [
"python",
"bash",
"parallel-processing",
"xargs"
]
|
How I can get current date in xml in odoo? | 39,713,720 | <p>I am adding group by filter of past due in accounting tab in odoo. And want to get context <strong>due_date < current date</strong>, but i am not getting current date anywhere, I don't know how i can get it, anybody can tell me that how to get current date in odoo? </p>
<p><strong>here is my group by filter</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><xpath expr="//filter[@string='Due Month']" position="after
<filter string="Past Due" context="{'group_by':'date_due < current date'}"/>
</xpath></code></pre>
</div>
</div>
</p>
<p><strong>and here is my other code in which i did it with computed field but don't how i can get current date</strong></p>
<pre><code>@api.depends('date_due')
@api.multi
def _compute_due_date(self):
    for record in self:
        record.past_due = record.date_due < record.date.today().strftime('%Y-%m-%d')
</code></pre>
| 1 | 2016-09-26T23:08:47Z | 39,713,941 | <pre><code><xpath expr="//filter[@string='Due Month']" position="after">
    <filter string="Past Due" name="past_due_filter" domain="[('date_due','&lt;',current_date)]" />
</xpath>
</code></pre>
| 2 | 2016-09-26T23:38:55Z | [
"python",
"xml",
"openerp",
"odoo-8"
]
|
How I can get current date in xml in odoo? | 39,713,720 | <p>I am adding group by filter of past due in accounting tab in odoo. And want to get context <strong>due_date < current date</strong>, but i am not getting current date anywhere, I don't know how i can get it, anybody can tell me that how to get current date in odoo? </p>
<p><strong>here is my group by filter</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><xpath expr="//filter[@string='Due Month']" position="after
<filter string="Past Due" context="{'group_by':'date_due < current date'}"/>
</xpath></code></pre>
</div>
</div>
</p>
<p><strong>and here is my other code in which i did it with computed field but don't how i can get current date</strong></p>
<pre><code>@api.depends('date_due')
@api.multi
def _compute_due_date(self):
    for record in self:
        record.past_due = record.date_due < record.date.today().strftime('%Y-%m-%d')
</code></pre>
| 1 | 2016-09-26T23:08:47Z | 39,717,919 | <p>you can use "context_today" or <code>time</code> module, examples:</p>
<pre><code><filter name="today" string="Today" domain="[('date','=',time.strftime('%%Y-%%m-%%d'))]"/>
<filter name="last_24h" string="Last 24h" domain="[('start_date','&gt;', (context_today() - datetime.timedelta(days=1)).strftime('%%Y-%%m-%%d') )]"/>
</code></pre>
| 2 | 2016-09-27T06:53:02Z | [
"python",
"xml",
"openerp",
"odoo-8"
]
|
Python - decode or regex? | 39,713,773 | <p>I have this <code>dict</code> being scraped from the web, but it comes with this <code>unicode</code> issue:</p>
<pre><code>{'track': [u'\u201cAnxiety\u201d',
u'\u201cLockjaw\u201d [ft. Kodak Black]',
u'\u201cMelanin Drop\u201d',
u'\u201cDreams\u201d',
u'\u201cIntern\u201d',
u'\u201cYou Don\u2019t Think You Like People Like Me\u201d',
u'\u201cFirst Day Out tha Feds\u201d',
u'\u201cFemale Vampire\u201d',
u'\u201cGirlfriend\u201d',
u'\u201cOpposite House\u201d',
u'\u201cGirls @\u201d [ft. Chance the Rapper]',
u'\u201cI Am a Nightmare\u201d']}
</code></pre>
<p>which is the best way of stripping out these characters, using <code>regex</code>, or is there some <code>decode</code> method?</p>
<p>and how?</p>
| 2 | 2016-09-26T23:15:06Z | 39,713,902 | <p>Those are curly quotes (“ and ”). If you just want to get rid of them at the beginning or end of the string, it is easiest to <code>strip</code> them.</p>
<pre><code>>>> u'\u201cAnxiety\u201d'.strip(u'\u201c\u201d')
u'Anxiety'
</code></pre>
<p>If you want to get rid of them anywhere in the string, <code>replace</code> them:</p>
<pre><code>>>> u'\u201cAnxiety\u201d'.replace(u'\u201c', '').replace(u'\u201d', '')
u'Anxiety'
</code></pre>
| 4 | 2016-09-26T23:32:42Z | [
"python",
"regex",
"unicode"
]
|
Python - decode or regex? | 39,713,773 | <p>I have this <code>dict</code> being scraped from the web, but it comes with this <code>unicode</code> issue:</p>
<pre><code>{'track': [u'\u201cAnxiety\u201d',
u'\u201cLockjaw\u201d [ft. Kodak Black]',
u'\u201cMelanin Drop\u201d',
u'\u201cDreams\u201d',
u'\u201cIntern\u201d',
u'\u201cYou Don\u2019t Think You Like People Like Me\u201d',
u'\u201cFirst Day Out tha Feds\u201d',
u'\u201cFemale Vampire\u201d',
u'\u201cGirlfriend\u201d',
u'\u201cOpposite House\u201d',
u'\u201cGirls @\u201d [ft. Chance the Rapper]',
u'\u201cI Am a Nightmare\u201d']}
</code></pre>
<p>which is the best way of stripping out these characters, using <code>regex</code>, or is there some <code>decode</code> method?</p>
<p>and how?</p>
| 2 | 2016-09-26T23:15:06Z | 39,713,915 | <pre><code># note the u'' prefixes, so the Python 2 literals match the unicode strings
# (also better to avoid naming a variable 'dict', which shadows the built-in)
d['track'] = list(map(lambda x: x.replace(u'\u201c', u'').replace(u'\u201d', u''), d['track']))
</code></pre>
| 0 | 2016-09-26T23:34:32Z | [
"python",
"regex",
"unicode"
]
|
Need some clarification on Kruskals and Union-Find | 39,713,798 | <p>Please help me fill any gaps in my knowledge(teaching myself):</p>
<p>So far I understand that given a graph of N vertices, and edges we want to form a MST that will have N-1 Edges</p>
<ol>
<li><p>We order the edges by their weight</p></li>
<li><p>We create a set of subsets where each vertice is given its own subset. So if we have {A,B,C,D} as our initial set of vertices, we now have {{A}, {B}, {C}, {D}}</p></li>
<li><p>We also create a set A that will hold the answer</p></li>
<li><p>We go down the list of ordered edges. We look at it's vertices, so V1 and V2. If they are in seperate subsets, we can join the two subsets, and add the edge into the set A that holds our edges. If they are in the same subset, we go to the next option (because its a cycle)</p></li>
<li><p>We continue this pattern until we reach the end of the Edge's list or we reach the Number of vertices - 1 for the length of our set A.</p></li>
</ol>
<p>If the above assertions are true, my following questions regard the implementation:</p>
<p>If we use a list[] to hold the subsets of the set that contains the vertice:</p>
<p>subsets = [[1][2][3][4][5][6][7]]</p>
<p>and each edge is composed of needing to look for two subsets
so we need to find (6,7)</p>
<p>the result would be</p>
<p>my_path = [(6,7)] #holds all paths
subsets = [[1][2][3][4][5][6,7]]</p>
<p>wouldn't finding the subset in subsets take too long for this to be O(n log(n))?</p>
<p>Is there a better approach, or am I doing this correctly?</p>
| 0 | 2016-09-26T23:18:50Z | 39,714,030 | <blockquote>
<p>wouldn't finding the subset in subsets take too long for this to be O(n log(n))?</p>
</blockquote>
<p>Yes</p>
<blockquote>
<p>Is there a better approach, or am I doing this correctly?</p>
</blockquote>
<p>The better approach is using the <a href="https://en.wikipedia.org/wiki/Disjoint-set_data_structure#Disjoint-set_forests" rel="nofollow">Disjoint Set Forest</a> data structure with the <em>Union by Rank</em> technique. Applying this technique yields a worst-case running time of O(log n) for the Union or Find operation.</p>
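<p>For reference, a minimal sketch of that structure (standard union by rank plus path compression; the names are illustrative):</p>
<pre><code>class DisjointSet(object):
    def __init__(self, vertices):
        self.parent = {v: v for v in vertices}
        self.rank = {v: 0 for v in vertices}

    def find(self, v):
        if self.parent[v] != v:
            self.parent[v] = self.find(self.parent[v])  # path compression
        return self.parent[v]

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False                 # same set: this edge would close a cycle
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra             # hang the shallower tree under the deeper
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
        return True
</code></pre>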
| 0 | 2016-09-26T23:50:07Z | [
"python",
"graph",
"kruskals-algorithm"
]
|
Need some clarification on Kruskals and Union-Find | 39,713,798 | <p>Please help me fill any gaps in my knowledge(teaching myself):</p>
<p>So far I understand that given a graph of N vertices, and edges we want to form a MST that will have N-1 Edges</p>
<ol>
<li><p>We order the edges by their weight</p></li>
<li><p>We create a set of subsets where each vertice is given its own subset. So if we have {A,B,C,D} as our initial set of vertices, we now have {{A}, {B}, {C}, {D}}</p></li>
<li><p>We also create a set A that will hold the answer</p></li>
<li><p>We go down the list of ordered edges. We look at it's vertices, so V1 and V2. If they are in seperate subsets, we can join the two subsets, and add the edge into the set A that holds our edges. If they are in the same subset, we go to the next option (because its a cycle)</p></li>
<li><p>We continue this pattern until we reach the end of the Edge's list or we reach the Number of vertices - 1 for the length of our set A.</p></li>
</ol>
<p>If the above assertions are true, my following questions regard the implementation:</p>
<p>If we use a list[] to hold the subsets of the set that contains the vertice:</p>
<p>subsets = [[1][2][3][4][5][6][7]]</p>
<p>and each edge is composed of needing to look for two subsets
so we need to find (6,7)</p>
<p>the result would be</p>
<p>my_path = [(6,7)] #holds all paths
subsets = [[1][2][3][4][5][6,7]]</p>
<p>wouldn't finding the subset in subsets take too long for this to be O(n log(n))?</p>
<p>Is there a better approach, or am I doing this correctly?</p>
| 0 | 2016-09-26T23:18:50Z | 39,714,032 | <p>Actually the running time of the algorithm is O(E log(V)).</p>
<p>The key to its performance lies in your point 4, more specifically, in the implementation of determining, for a light edge e = (a, b), whether 'a' and 'b' belong to the same set and, if not, performing the union of their respective sets.</p>
<p>For more clarification on the topic I recommend the book "Introduction to Algorithms", from MIT Press, ISBN 0-262-03293-7, p. 561 (for the general topic of MSTs) and p. 568 (for Kruskal's algorithm).
As it states, and I quote: </p>
<blockquote>
<p>"The running time of Kruskalâs algorithm for a graph G = (V, E) depends
on the implementation of the disjoint-set data structure. We shall assume the
disjoint-set-forest implementation of Section 21.3 with the union-by-rank and
path-compression heuristics, since it is the asymptotically fastest implementation
known."</p>
</blockquote>
<p>A few lines later, with some simple time-complexity calculations, it proves the running time to be O(E log(V)).</p>
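<p>To illustrate where the bound comes from, here is a minimal sketch of Kruskal's main loop (the rank heuristic is deliberately omitted to keep it short, so this is not the asymptotically fastest version the book describes):</p>
<pre><code>def kruskal(vertices, edges):
    """edges is a list of (weight, u, v) tuples."""
    parent = {v: v for v in vertices}

    def find(v):
        # simplified find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    mst = []
    for weight, u, v in sorted(edges):   # sorting: O(E log E) = O(E log V)
        ru, rv = find(u), find(v)
        if ru != rv:                     # different sets, so no cycle
            parent[ru] = rv              # union the two sets
            mst.append((u, v, weight))
            if len(mst) == len(vertices) - 1:
                break                    # the MST is complete
    return mst
</code></pre>
<p>The sort dominates; with the union-by-rank and path-compression heuristics the E find/union calls are effectively constant-time, giving O(E log(V)) overall.</p>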
| 0 | 2016-09-26T23:50:24Z | [
"python",
"graph",
"kruskals-algorithm"
]
|
Python class init function | 39,713,891 | <p>When I init a class in python, is it only called once? For example, if I write an if statement in the init area. </p>
<pre><code>class hi():
    def __init__(self):
# bla bla bla
</code></pre>
<p>Does it only run once?</p>
| 0 | 2016-09-26T23:31:50Z | 39,713,966 | <p><code>__init__</code> is the constructor of the class. It is called once for every instance of the class you create.</p>
<pre><code>class A(object):
def __init__(self):
print('__init__ called')
a1 = A() # prints: __init__ called
a2 = A() # prints: __init__ called
</code></pre>
<p>BTW, this is not something specific to pygame, but python in general.</p>
<p>The constructor always takes at least one argument: <code>self</code>. It can take additional arguments. <code>self</code> is the instance which is being constructed and you can write initialization of it in the constructor.</p>
<pre><code>class A(object):
def __init__(self, value):
self.value = value
a1 = A(17)
print (a1.value) # prints: 17
</code></pre>
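<p>Since the question mentions an if statement inside <code>__init__</code>: such a statement runs exactly once per instance, at construction time; nothing loops over it. A small illustration (the names here are just for demonstration):</p>
<pre><code>class B(object):
    def __init__(self, value):
        if value > 10:       # evaluated once, when the instance is created
            self.big = True
        else:
            self.big = False

b1 = B(17)
print(b1.big)  # prints: True
b2 = B(3)
print(b2.big)  # prints: False
</code></pre>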
| 1 | 2016-09-26T23:41:59Z | [
"python",
"class"
]
|
Problems with pd.read_csv | 39,713,900 | <p>I have Anaconda 3 on Windows 10. I am using pd.read_csv() to load csv files but I get error messages. To begin with I tried <code>df = pd.read_csv('C:\direct_marketing.csv')</code> which worked and the file was imported.</p>
<p>Then I tried <code>df = pd.read_csv('C:\tutorial.csv')</code> and I received the following error message:</p>
<pre><code>Traceback (most recent call last):
File "<ipython-input-3-ce208cc2684f>", line 1, in <module>
df = pd.read_csv('C:\tutorial.csv')
File "C:\Users\Alexandros_7\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 562, in parser_f
return _read(filepath_or_buffer, kwds)
File "C:\Users\Alexandros_7\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 315, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "C:\Users\Alexandros_7\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 645, in __init__
self._make_engine(self.engine)
File "C:\Users\Alexandros_7\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 799, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "C:\Users\Alexandros_7\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 1213, in __init__
self._reader = _parser.TextReader(src, **kwds)
File "pandas\parser.pyx", line 358, in pandas.parser.TextReader.__cinit__ (pandas\parser.c:3427)
File "pandas\parser.pyx", line 628, in pandas.parser.TextReader._setup_parser_source (pandas\parser.c:6861)
OSError: File b'C:\tutorial.csv' does not exist
</code></pre>
<p>Then I moved the file to a new folder, renamed it, and again used read_csv() to import it: </p>
<pre><code>df = pd.read_csv('C:\Users\test.csv')
</code></pre>
<p>This time I received a different error message:</p>
<pre><code> File "<ipython-input-5-03c6d380c174>", line 1
df = pd.read_csv('C:\Users\test.csv')
^
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape
</code></pre>
<p>Could you help me understand what is going on and how to handle this situation?</p>
<p>Thanks a lot!</p>
| 0 | 2016-09-26T23:32:33Z | 39,713,944 | <p>Try escaping the backslashes:</p>
<pre><code>df = pd.read_csv('C:\\Users\\test.csv')
</code></pre>
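<p>Alternatively, a raw string keeps the backslashes from being interpreted as escape sequences, and forward slashes also work on Windows:</p>
<pre><code>df = pd.read_csv(r'C:\Users\test.csv')  # raw string: no escape processing
df = pd.read_csv('C:/Users/test.csv')   # forward slashes are fine on Windows
</code></pre>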
| 2 | 2016-09-26T23:39:24Z | [
"python",
"csv"
]
|
Problems with pd.read_csv | 39,713,900 | <p>I have Anaconda 3 on Windows 10. I am using pd.read_csv() to load csv files but I get error messages. To begin with I tried <code>df = pd.read_csv('C:\direct_marketing.csv')</code> which worked and the file was imported.</p>
<p>Then I tried <code>df = pd.read_csv('C:\tutorial.csv')</code> and I received the following error message:</p>
<pre><code>Traceback (most recent call last):
File "<ipython-input-3-ce208cc2684f>", line 1, in <module>
df = pd.read_csv('C:\tutorial.csv')
File "C:\Users\Alexandros_7\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 562, in parser_f
return _read(filepath_or_buffer, kwds)
File "C:\Users\Alexandros_7\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 315, in _read
parser = TextFileReader(filepath_or_buffer, **kwds)
File "C:\Users\Alexandros_7\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 645, in __init__
self._make_engine(self.engine)
File "C:\Users\Alexandros_7\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 799, in _make_engine
self._engine = CParserWrapper(self.f, **self.options)
File "C:\Users\Alexandros_7\Anaconda3\lib\site-packages\pandas\io\parsers.py", line 1213, in __init__
self._reader = _parser.TextReader(src, **kwds)
File "pandas\parser.pyx", line 358, in pandas.parser.TextReader.__cinit__ (pandas\parser.c:3427)
File "pandas\parser.pyx", line 628, in pandas.parser.TextReader._setup_parser_source (pandas\parser.c:6861)
OSError: File b'C:\tutorial.csv' does not exist
</code></pre>
<p>Then I moved the file to a new folder, renamed it, and again used read_csv() to import it: </p>
<pre><code>df = pd.read_csv('C:\Users\test.csv')
</code></pre>
<p>This time I received a different error message:</p>
<pre><code> File "<ipython-input-5-03c6d380c174>", line 1
df = pd.read_csv('C:\Users\test.csv')
^
SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape
</code></pre>
<p>Could you help me understand what is going on and how to handle this situation?</p>
<p>Thanks a lot!</p>
| 0 | 2016-09-26T23:32:33Z | 39,713,956 | <p>try use two back-slash '\' instead of '\'. It might have take it as a escape sign.. ?</p>
| 1 | 2016-09-26T23:40:27Z | [
"python",
"csv"
]
|
Convert int to list in python | 39,713,904 | <p>Take a list, say for example this one:</p>
<pre><code>a = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
</code></pre>
<p>and write a program that prints out all the elements of the list that are less than 5, as the list [1,1,2,3,5].
Currently it prints as </p>
<pre><code>1
1
2
3
5
</code></pre>
<p>My code </p>
<pre><code>a = [1,1,2,3,5,8,13,21,34,55,89]
count = 0
for i in a:
if i<=5:
count+=1
print(i)
</code></pre>
| 0 | 2016-09-26T23:32:51Z | 39,713,932 | <p>If you want all to print out all the element that great and <strong>equal to</strong> 5 then you are doing it right. But if you want to print out less then 5 you want:</p>
<pre><code>for i in a:
if i < 5:
count += 1
print(i)
</code></pre>
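<p>And if the goal is to print the matching elements as a single list rather than one per line, one simple option (the variable name is just illustrative) is to collect them first:</p>
<pre><code>matches = []
for i in a:
    if i < 5:
        matches.append(i)
print(matches)
</code></pre>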
| 0 | 2016-09-26T23:37:58Z | [
"python"
]
|
Convert int to list in python | 39,713,904 | <p>Take a list, say for example this one:</p>
<pre><code>a = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
</code></pre>
<p>and write a program that prints out all the elements of the list that are less than 5, as the list [1,1,2,3,5].
Currently it prints as </p>
<pre><code>1
1
2
3
5
</code></pre>
<p>My code </p>
<pre><code>a = [1,1,2,3,5,8,13,21,34,55,89]
count = 0
for i in a:
if i<=5:
count+=1
print(i)
</code></pre>
| 0 | 2016-09-26T23:32:51Z | 39,713,976 | <p>To have it print out as a list, keep it as a list. You can use list comprehension to accomplish this:</p>
<pre><code>>>> a = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
>>> [i for i in a if i<=5]
[1, 1, 2, 3, 5]
</code></pre>
<p>If we use <code>print</code>, it still looks the same:</p>
<pre><code>>>> print([i for i in a if i<=5])
[1, 1, 2, 3, 5]
</code></pre>
| 1 | 2016-09-26T23:43:28Z | [
"python"
]
|
Convert int to list in python | 39,713,904 | <p>Take a list, say for example this one:</p>
<pre><code>a = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
</code></pre>
<p>and write a program that prints out all the elements of the list that are less than 5, as the list [1,1,2,3,5].
Currently it prints as </p>
<pre><code>1
1
2
3
5
</code></pre>
<p>My code </p>
<pre><code>a = [1,1,2,3,5,8,13,21,34,55,89]
count = 0
for i in a:
if i<=5:
count+=1
print(i)
</code></pre>
| 0 | 2016-09-26T23:32:51Z | 39,714,031 | <p>You should have the if condition be more strict if you want only element smaller than 5. It should be <code>if i<5:</code> instead of <code>i<=5</code>.</p>
<p>If you want to store the elements in a new list, see the example below.</p>
<pre><code>a = [1,1,2,3,5,8,13,21,34,55,89]
new_list=[]
for i in a:
if i<5:
new_list.append(i)
print(new_list)
</code></pre>
| 0 | 2016-09-26T23:50:08Z | [
"python"
]
|