title: string (length 10 to 172)
question_id: int64 (469 to 40.1M)
question_body: string (length 22 to 48.2k)
question_score: int64 (-44 to 5.52k)
question_date: string (length 20)
answer_id: int64 (497 to 40.1M)
answer_body: string (length 18 to 33.9k)
answer_score: int64 (-38 to 8.38k)
answer_date: string (length 20)
tags: list
Getting unbound method error?
39,762,188
<p>I tried to run this code below </p> <pre><code>class TestStaticMethod: def foo(): print 'calling static method foo()' foo = staticmethod(foo) class TestClassMethod: def foo(cls): print 'calling class method foo()' print 'foo() is part of class: ', cls.__name__ foo = classmethod(foo) </code></pre> <p>After I ran this with the code below </p> <pre><code>tsm = TestStaticMethod() TestStaticMethod.foo() </code></pre> <pre><code>Traceback (most recent call last): File "&lt;pyshell#35&gt;", line 1, in &lt;module&gt; TestStaticMethod.foo() TypeError: unbound method foo() must be called with TestStaticMethod instance as first argument (got nothing instead) </code></pre> <pre><code>tsm.foo() </code></pre> <pre><code>Traceback (most recent call last): File "&lt;pyshell#36&gt;", line 1, in &lt;module&gt; ts.foo() TypeError: foo() takes no arguments (1 given) </code></pre> <p>I really don't get why I'm getting the unbound method. Can anyone help me? </p>
0
2016-09-29T05:37:56Z
39,763,418
<p>You should not indent</p> <blockquote> <p>foo = staticmethod(function_name)</p> </blockquote> <p>inside the function (<code>foo</code>) itself.</p> <p>Instead, give this a try:</p> <pre><code>class TestStaticMethod: def foo(): print 'calling static method foo()' foo = staticmethod(foo) </code></pre> <p>or</p> <pre><code>class TestStaticMethod: @staticmethod def foo(): print 'calling static method foo()' </code></pre> <p>Both of the above solutions will work.</p>
0
2016-09-29T06:55:42Z
[ "python" ]
How insert into a new table from another table cross joined with another table
39,762,285
<p>I'm using Python and SQLite to manipulate a database. </p> <p>I have a SQLite table <code>Movies</code> in database <code>Data</code> that looks like this:</p> <pre><code>| ID | Country +----------------+------------- | 1 | USA, Germany, Mexico | 2 | Brazil, Peru | 3 | Peru </code></pre> <p>I have a table <code>Countries</code> in the same database that looks like this</p> <pre><code>| ID | Country +----------------+------------- | 1 | USA | 1 | Germany | 1 | Mexico | 2 | Brazil | 2 | Peru | 3 | Peru </code></pre> <p>I want to insert from database <code>Data</code> all movies from Peru into a new database <code>PeruData</code> that looks like this</p> <pre><code>| ID | Country +----------------+------------- | 2 | Peru | 3 | Peru </code></pre> <p>I'm new to SQL and having trouble programming the right query. </p> <p>Here's my attempt:</p> <pre><code>con = sqlite3.connect("PeruData.db") cur = con.cursor() cur.execute("CREATE TABLE Movies (ID, Country);") cur.execute("ATTACH DATABASE 'Data.db' AS other;") cur.execute("\ INSERT INTO Movies \ (ID, Country) \ SELECT ID, Country FROM other.Movies CROSS JOIN other.Countries\ WHERE other.Movies.ID = other.Countries.ID AND other.Countries.Country = 'Peru'\ con.commit() con.close() </code></pre> <p>Clearly, I'm doing something wrong because I get the error</p> <blockquote> <p>sqlite3.OperationalError: no such table: other.Countries</p> </blockquote>
0
2016-09-29T05:44:50Z
39,763,337
<p>Here's a workaround which successfully got the result you wanted.</p> <hr> <p>Instead of having to write <code>con = sqlite3.connect("data.db")</code> and then having to write <code>con.commit()</code> and <code>con.close()</code>, you can shorten your code by writing it like this:</p> <pre><code>with sqlite3.connect("Data.db") as connection: c = connection.cursor() </code></pre> <p>This way the commit is handled for you each time you're working with a database (note that the connection itself is not closed automatically). Just a nifty shortcut I learned. Now onto your code.</p> <p>Personally, I'm unfamiliar with the SQL statement <strong>ATTACH DATABASE</strong>. I would incorporate your new database at the <em>end</em> of your program instead; that way you can avoid any conflicts that you may not know how to handle (such as the OperationalError you are getting). So first I would get the desired result <em>and then</em> insert it into your new table. Your third execution statement can be rewritten like so:</p> <pre><code>c.execute("""SELECT DISTINCT Movies.ID, Countries.Country FROM Movies CROSS JOIN Countries WHERE Movies.ID = Countries.ID AND Countries.Country = 'Peru' """) </code></pre> <p>This does the job, but you need to use <strong>fetchall()</strong> to return your result set as a list of tuples, which can then be inserted into your new table. So you'd type this:</p> <pre><code>rows = c.fetchall() </code></pre> <p>Now you can open a new connection by creating the "PeruData.db" database, creating the table, and inserting the values.</p> <pre><code>with sqlite3.connect("PeruData.db") as connection: c = connection.cursor() c.execute("CREATE TABLE Movies (ID INT, Country TEXT)") c.executemany("INSERT INTO Movies VALUES(?, ?)", rows) </code></pre> <p>That's it. Hope I was able to answer your question!</p>
0
2016-09-29T06:51:25Z
[ "python", "sqlite" ]
How insert into a new table from another table cross joined with another table
39,762,285
<p>I'm using Python and SQLite to manipulate a database. </p> <p>I have a SQLite table <code>Movies</code> in database <code>Data</code> that looks like this:</p> <pre><code>| ID | Country +----------------+------------- | 1 | USA, Germany, Mexico | 2 | Brazil, Peru | 3 | Peru </code></pre> <p>I have a table <code>Countries</code> in the same database that looks like this</p> <pre><code>| ID | Country +----------------+------------- | 1 | USA | 1 | Germany | 1 | Mexico | 2 | Brazil | 2 | Peru | 3 | Peru </code></pre> <p>I want to insert from database <code>Data</code> all movies from Peru into a new database <code>PeruData</code> that looks like this</p> <pre><code>| ID | Country +----------------+------------- | 2 | Peru | 3 | Peru </code></pre> <p>I'm new to SQL and having trouble programming the right query. </p> <p>Here's my attempt:</p> <pre><code>con = sqlite3.connect("PeruData.db") cur = con.cursor() cur.execute("CREATE TABLE Movies (ID, Country);") cur.execute("ATTACH DATABASE 'Data.db' AS other;") cur.execute("\ INSERT INTO Movies \ (ID, Country) \ SELECT ID, Country FROM other.Movies CROSS JOIN other.Countries\ WHERE other.Movies.ID = other.Countries.ID AND other.Countries.Country = 'Peru'\ con.commit() con.close() </code></pre> <p>Clearly, I'm doing something wrong because I get the error</p> <blockquote> <p>sqlite3.OperationalError: no such table: other.Countries</p> </blockquote>
0
2016-09-29T05:44:50Z
39,765,291
<p>The current error is probably caused by a typo or other minor problem. I could create the databases described here and successfully do the insert after fixing minor errors: a missing backslash at the end of a line and adding qualifiers for the selected columns.</p> <p>But I would also advise you to use aliases for the tables in multi-table selects. The code that works in my test is:</p> <pre><code>cur.execute("\ INSERT INTO Movies \ (ID, Country) \ SELECT m.ID, c.Country\ FROM other.Movies m CROSS JOIN other.Countries c \ WHERE m.ID = c.ID AND c.Country = 'Peru'") </code></pre>
0
2016-09-29T08:33:41Z
[ "python", "sqlite" ]
Working of Cache-mem using Queue in Python
39,762,452
<p>I wrote some code to simulate the working of a cache memory. In this model, I try to implement the FIFO algorithm, which deletes the oldest element (data, value, whatever). I wrote a special function which gives me a list <strong>o</strong> of numbers (these numbers are addresses in memory).</p> <pre><code>q=Queue.Queue(800)# Cach - memory. This is queue which is more likely help me to simulate FIFO-algorhythm QW=[] # External memory l=raw_input("Enter a desire operation:")#I enter my operation. for i in range(len(o)): time.sleep(0.4) u = time.time() k=o.pop(0) #o - is the list with numbers (these numbers are addresses in memory). Here is i get each address through pop. while l=='read': #If operation is "read" then i need to get my adress from q (cache-mem) or from QW (Is the external memory) and put it in q - (is the Cache-memory). if k not in q: if j in QW and k==j: q.put(j) else: q.get(k) while l=='record':#If operation is "record" then i need to write (append to QW) an address in QW or q, but only if the same address have existed already in QW or q. if k not in q: QW.append(k) print QW else: q.put(k) print q.get() </code></pre> <p>But I get the error <code>TypeError: argument of type 'instance' is not iterable</code> at the line</p> <pre><code>if k not in q </code></pre> <p>Please help me solve this problem!</p>
0
2016-09-29T05:57:08Z
39,762,937
<p>You can't do a membership test like <code>k not in q</code> on a <code>Queue.Queue</code>. Use <code>collections.deque</code> instead, which is very similar but iterable and has fast <code>append()</code> and <code>popleft()</code>. Change the first line of your code to <code>q=collections.deque([],800)</code>, and in place of <code>q.put</code> and <code>q.get</code> use <code>q.append</code> and <code>q.popleft()</code>.</p>
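<p>For illustration only, here is a minimal sketch of that change (the <code>read</code>/<code>record</code> helpers and variable names are made up for this example, not your full simulation loop). With <code>maxlen</code> set, the deque drops its oldest entry automatically once it is full, which gives the FIFO eviction you want, and <code>in</code> works because a deque is iterable:</p> <pre><code>import collections

cache = collections.deque(maxlen=800)   # the simulated cache memory (FIFO)
external = []                           # the simulated external memory

def read(address):
    # illustrative only: membership tests work on a deque
    if address not in cache:
        if address in external:
            cache.append(address)       # oldest cached address is evicted automatically

def record(address):
    if address not in cache:
        external.append(address)
    else:
        cache.append(address)
</code></pre>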
0
2016-09-29T06:29:37Z
[ "python", "list", "caching", "queue" ]
How to run django server with ACTIVATED virtualenv using batch file (.bat)
39,762,490
<p>I found this post to be useful on <a href="http://stackoverflow.com/questions/3027160/how-to-code-a-batch-file-to-automate-django-web-server-start">how to code a batch file to automate django web server start</a>.</p> <p>But the problem is, there is <strong>no virtualenv</strong> activated, How can i activate it before the manage.py runserver inside the script?</p> <h3>I would like to run this server with virtualenv activated via batch file.</h3>
0
2016-09-29T06:00:01Z
39,762,750
<p>try <code>\path\to\env\Scripts\activate</code></p> <p>and look at <a href="https://virtualenv.pypa.io/en/stable/userguide/#activate-script" rel="nofollow">virtualenv docs</a></p>
1
2016-09-29T06:17:50Z
[ "python", "django", "windows", "batch-file" ]
How to run django server with ACTIVATED virtualenv using batch file (.bat)
39,762,490
<p>I found this post to be useful on <a href="http://stackoverflow.com/questions/3027160/how-to-code-a-batch-file-to-automate-django-web-server-start">how to code a batch file to automate django web server start</a>.</p> <p>But the problem is, there is <strong>no virtualenv</strong> activated, How can i activate it before the manage.py runserver inside the script?</p> <h3>I would like to run this server with virtualenv activated via batch file.</h3>
0
2016-09-29T06:00:01Z
39,764,400
<p>Call the <code>activate.bat</code> script in your batch file, before you run <code>manage.py</code>,</p> <pre><code>CALL \path\to\env\Scripts\activate.bat python manage.py runserver </code></pre>
1
2016-09-29T07:50:14Z
[ "python", "django", "windows", "batch-file" ]
How to run django server with ACTIVATED virtualenv using batch file (.bat)
39,762,490
<p>I found this post to be useful on <a href="http://stackoverflow.com/questions/3027160/how-to-code-a-batch-file-to-automate-django-web-server-start">how to code a batch file to automate django web server start</a>.</p> <p>But the problem is, there is <strong>no virtualenv</strong> activated, How can i activate it before the manage.py runserver inside the script?</p> <h3>I would like to run this server with virtualenv activated via batch file.</h3>
0
2016-09-29T06:00:01Z
39,908,152
<p>Found my solution by writing this batch file:</p> <pre><code>@echo off cmd /k "cd /d C:\Users\[user]\path\to\your\env\scripts &amp; activate &amp; cd /d C:\Users\[user]\path\to\your\env\[projectname] &amp; python manage.py runserver" </code></pre>
0
2016-10-07T01:49:54Z
[ "python", "django", "windows", "batch-file" ]
Numerically order a list based on a pattern
39,762,521
<p>I have a list of file names in the form:</p> <pre><code>['comm_1_1.txt', 'comm_1_10.txt', 'comm_1_11.txt', 'comm_1_4.txt', 'comm_1_5.txt', 'comm_1_6.txt'] </code></pre> <p>I wonder how to sort this list numerically to obtain the output:</p> <pre><code>['comm_1_1.txt', 'comm_1_4.txt', 'comm_1_5.txt', 'comm_1_6.txt', 'comm_1_10.txt', 'comm_1_11.txt'] </code></pre>
2
2016-09-29T06:02:56Z
39,762,687
<p>You should split needed numbers and convert them to <code>int</code></p> <pre><code>ss = ['comm_1_1.txt', 'comm_1_10.txt', 'comm_1_11.txt', 'comm_1_4.txt', 'comm_1_5.txt', 'comm_1_6.txt'] def numeric(i): return tuple(map(int, i.replace('.txt', '').split('_')[1:])) sorted(ss, key=numeric) # ['comm_1_1.txt', 'comm_1_4.txt', 'comm_1_5.txt', 'comm_1_6.txt', 'comm_1_10.txt', 'comm_1_11.txt'] </code></pre>
3
2016-09-29T06:14:15Z
[ "python", "sorting" ]
Numerically order a list based on a pattern
39,762,521
<p>I have a list of file names in the form:</p> <pre><code>['comm_1_1.txt', 'comm_1_10.txt', 'comm_1_11.txt', 'comm_1_4.txt', 'comm_1_5.txt', 'comm_1_6.txt'] </code></pre> <p>I wonder how to sort this list numerically to obtain the output:</p> <pre><code>['comm_1_1.txt', 'comm_1_4.txt', 'comm_1_5.txt', 'comm_1_6.txt', 'comm_1_10.txt', 'comm_1_11.txt'] </code></pre>
2
2016-09-29T06:02:56Z
39,762,770
<p>I don't really think its a best answer, but you can try it out. </p> <pre><code>l = ['comm_1_1.txt', 'comm_1_10.txt', 'comm_1_11.txt', 'comm_1_4.txt', 'comm_1_5.txt', 'comm_1_6.txt'] d = {} for i in l: filen = i.split('.') key = filen[0].split('_') d[int(key[2])] = i for key in sorted(d): print(d[key]) </code></pre>
1
2016-09-29T06:19:27Z
[ "python", "sorting" ]
Numerically order a list based on a pattern
39,762,521
<p>I have a list of file names in the form:</p> <pre><code>['comm_1_1.txt', 'comm_1_10.txt', 'comm_1_11.txt', 'comm_1_4.txt', 'comm_1_5.txt', 'comm_1_6.txt'] </code></pre> <p>I wonder how to sort this list numerically to obtain the output:</p> <pre><code>['comm_1_1.txt', 'comm_1_4.txt', 'comm_1_5.txt', 'comm_1_6.txt', 'comm_1_10.txt', 'comm_1_11.txt'] </code></pre>
2
2016-09-29T06:02:56Z
39,762,781
<p>One technique used for this kind of "human sorting" is to split keys to tuples and convert numeric parts to actual numbers:</p> <pre><code>ss = ['comm_1_1.txt', 'comm_1_10.txt', 'comm_1_11.txt', 'comm_1_4.txt', 'comm_1_5.txt', 'comm_1_6.txt'] print(sorted(ss, key=lambda x : map((lambda v: int(v) if "0" &lt;= v[0] &lt;= "9" else v), re.findall("[0-9]+|[^0-9]+", x)))) </code></pre> <p>or, more readable</p> <pre><code>def sortval(x): if "0" &lt;= x &lt;= "9": return int(x) else: return x def human_sort_key(x): return map(sortval, re.findall("[0-9]+|[^0-9]+", x)) print sorted(ss, key=human_sort_key) </code></pre> <p>the idea is to split between numeric and non-numeric parts and placing the parts in a list after converting the numeric parts to actual numbers (so that <code>10</code> comes after <code>2</code>).</p> <p>Lexicographically sorting the lists gives the expected result.</p>
2
2016-09-29T06:20:17Z
[ "python", "sorting" ]
NameError: name 'addition' is not defined
39,762,550
<p>I am getting <code>NameError: name 'addition' is not defined</code> while running following code</p> <pre><code>class Arithmetic: def __init__(self, a, b): self.a = a self.b = b def addition(self): c = a + b print"%d" %c def subtraction(self): c=a-b print "%d" % c add = addition(5, 4) add.addition() </code></pre>
-3
2016-09-29T06:04:49Z
39,762,630
<p>You first have to create an object of the class, and then you can access its methods through that object. Since your <code>__init__</code> takes the two operands, pass them when you create the instance. Try this:</p> <pre><code>a = Arithmetic(5, 4) a.addition() </code></pre> <p>Note that for this to work, <code>addition</code> must also use <code>self.a + self.b</code> rather than the bare names <code>a</code> and <code>b</code>, as the other answers show.</p>
0
2016-09-29T06:10:24Z
[ "python" ]
NameError: name 'addition' is not defined
39,762,550
<p>I am getting <code>NameError: name 'addition' is not defined</code> while running following code</p> <pre><code>class Arithmetic: def __init__(self, a, b): self.a = a self.b = b def addition(self): c = a + b print"%d" %c def subtraction(self): c=a-b print "%d" % c add = addition(5, 4) add.addition() </code></pre>
-3
2016-09-29T06:04:49Z
39,762,813
<p>Check out this piece of code:</p> <pre><code>class Arithmetic(): def init(self, a, b): self.a = a self.b = b def addition(self): c = self.a + self.b print"addition %d" %c def subtraction(self): c = self.a - self.b print"substraction %d" %c obj = Arithmetic() obj.init(5, 4) obj.addition() obj.subtraction() </code></pre>
0
2016-09-29T06:21:57Z
[ "python" ]
NameError: name 'addition' is not defined
39,762,550
<p>I am getting <code>NameError: name 'addition' is not defined</code> while running following code</p> <pre><code>class Arithmetic: def __init__(self, a, b): self.a = a self.b = b def addition(self): c = a + b print"%d" %c def subtraction(self): c=a-b print "%d" % c add = addition(5, 4) add.addition() </code></pre>
-3
2016-09-29T06:04:49Z
39,763,813
<p>If you want to use your 'addition' method, you first need to instantiate an Arithmetic() object and use dot notation to call their functions. Make sure you properly indent your code because not only is it breaking a lot of PEP 8 rules but it just looks plain messy. In your first definition, don't forget you have to type <strong>__init__</strong> not <strong>init</strong>. Here's the code which should be applied:</p> <pre><code>class Arithmetic(object): def __init__(self, a, b): self.a = a self.b = b def addition(self): c = self.a + self.b print c def subtraction(self): c = self.a - self.b print c a = Arithmetic(5, 4) a.addition() a.subtraction() </code></pre>
1
2016-09-29T07:17:29Z
[ "python" ]
Export BeautifulSoup scraping results to CSV; scrape + include image values in column
39,762,565
<p>For this project, I am scraping data from a database and attempting to export this data to a spreadsheet for further analysis.</p> <p>While my code seems mostly to work well, when it comes to the last bit--exporting to CSV--I am having no luck. This question has been asked a few times, however it seems the answers were geared towards different approaches, and I didn't have any luck adapting their answers. </p> <p>My code is below:</p> <pre><code>from bs4 import BeautifulSoup import requests import re url1 = "http://www.elections.ca/WPAPPS/WPR/EN/NC?province=-1&amp;distyear=2013&amp;district=-1&amp;party=-1&amp;pageno=" url2 = "&amp;totalpages=55&amp;totalcount=1368&amp;secondaryaction=prev25" date1 = [] date2 = [] date3 = [] party=[] riding=[] candidate=[] winning=[] number=[] for i in range(1, 56): r = requests.get(url1 + str(i) + url2) data = r.text cat = BeautifulSoup(data) links = [] for link in cat.find_all('a', href=re.compile('selectedid=')): links.append("http://www.elections.ca" + link.get('href')) for link in links: r = requests.get(link) data = r.text cat = BeautifulSoup(data) date1.append(cat.find_all('span')[2].contents) date2.append(cat.find_all('span')[3].contents) date3.append(cat.find_all('span')[5].contents) party.append(re.sub("[\n\r/]", "", cat.find("legend").contents[2]).strip()) riding.append(re.sub("[\n\r/]", "", cat.find_all('div', class_="group")[2].contents[2]).strip()) cs= cat.find_all("table")[0].find_all("td", headers="name/1") elected=[] for c in cs: elected.append(c.contents[0].strip()) number.append(len(elected)) candidate.append(elected) winning.append(cs[0].contents[0].strip()) import csv file = "" for i in range(0,len(date1)): file = [file,date1[i],date2[i],date3[i],party[i],riding[i],"\n"] with open ('filename.csv','rb') as file: writer=csv.writer(file) for row in file: writer.writerow(row) </code></pre> <p>Really--any tips would be GREATLY appreciated. Thanks a lot.</p> <p>*PART 2: Another question: I previously thought that finding the winning candidate in the table could be simplified by just always selecting the first name that appears in the table, as I thought the "winners" always appeared first. However, this is not the case. Whether or not a candidate was elected is stored in the form of a picture in the first column. How would I scrape this and store it in a spreadsheet? It's located under &lt; td headers > as:</p> <pre><code>&lt; img src="/WPAPPS/WPR/Content/Images/selected_box.gif" alt="contestant won this nomination contest" &gt; </code></pre> <p>I had an idea for attempting some sort of Boolean sorting measure, but I am unsure of how to implement. Thanks a lot.* UPDATE: This question is now a separate post <a href="http://stackoverflow.com/questions/39771874/scraping-add-data-stored-as-a-picture-to-csv-file-in-python-3-5">here</a>.</p>
0
2016-09-29T06:05:48Z
39,763,317
<p>The following should correctly export your data to a CSV file:</p> <pre><code>from bs4 import BeautifulSoup import requests import re import csv url = "http://www.elections.ca/WPAPPS/WPR/EN/NC?province=-1&amp;distyear=2013&amp;district=-1&amp;party=-1&amp;pageno={}&amp;totalpages=55&amp;totalcount=1368&amp;secondaryaction=prev25" rows = [] for i in range(1, 56): print(i) r = requests.get(url.format(i)) data = r.text cat = BeautifulSoup(data, "html.parser") links = [] for link in cat.find_all('a', href=re.compile('selectedid=')): links.append("http://www.elections.ca" + link.get('href')) for link in links: r = requests.get(link) data = r.text cat = BeautifulSoup(data, "html.parser") lspans = cat.find_all('span') cs = cat.find_all("table")[0].find_all("td", headers="name/1") elected = [] for c in cs: elected.append(c.contents[0].strip()) rows.append([ lspans[2].contents[0], lspans[3].contents[0], lspans[5].contents[0], re.sub("[\n\r/]", "", cat.find("legend").contents[2]).strip(), re.sub("[\n\r/]", "", cat.find_all('div', class_="group")[2].contents[2]).strip().encode('latin-1'), len(elected), cs[0].contents[0].strip().encode('latin-1') ]) with open('filename.csv', 'w', newline='') as f_output: csv_output = csv.writer(f_output) csv_output.writerows(rows) </code></pre> <p>Giving you the following kind of output in your CSV file:</p> <pre><code>"September 17, 2016","September 13, 2016","September 17, 2016",Liberal,Medicine Hat--Cardston--Warner,1,Stanley Sakamoto "June 25, 2016","May 12, 2016","June 25, 2016",Conservative,Medicine Hat--Cardston--Warner,6,Brian Benoit "September 28, 2015","September 28, 2015","September 28, 2015",Liberal,Cowichan--Malahat--Langford,1,Luke Krayenhoff </code></pre> <p>There is no need to build up lots of separate lists for each column of your data, it is easier just to build a list of <code>rows</code> directly. This can then easily be written to a CSV in one go (or written a row at a time as your are gathering the data).</p>
1
2016-09-29T06:50:17Z
[ "python", "html", "python-3.x", "csv", "beautifulsoup" ]
Dataframe returning None value
39,762,576
<p>I was returning a dataframe of characters from GOT such that they were alive and predicted to die, but only if they have some house name. (important person). I was expecting it to skip NaN's, but it returned them as well. I've attached screenshot of output. Please help.</p> <p>PS I haven't attached any spoilers so you may go ahead.</p> <pre><code>import pandas df=pandas.read_csv('character-predictions.csv') a=df[((df['actual']==1) &amp; (df['pred']==0)) &amp; (df['house'] !=None)] b=a[['name', 'house']] </code></pre> <p><a href="http://i.stack.imgur.com/xhUEb.png" rel="nofollow"><img src="http://i.stack.imgur.com/xhUEb.png" alt="enter image description here"></a></p>
3
2016-09-29T06:06:25Z
39,762,595
<p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.notnull.html" rel="nofollow"><code>notnull</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow"><code>ix</code></a> for selecting columns:</p> <pre><code>b = df.ix[((df['actual']==1) &amp; (df['pred']==0)) &amp; (df['house'].notnull()), ['name', 'house']] </code></pre> <p>Sample:</p> <pre><code>df = pd.DataFrame({'house':[None,'a','b'], 'pred':[0,0,5], 'actual':[1,1,5], 'name':['J','B','C']}) print (df) actual house name pred 0 1 None J 0 1 1 a B 0 2 5 b C 5 b = df.ix[((df['actual']==1) &amp; (df['pred']==0)) &amp; (df['house'].notnull()), ['name', 'house']] print (b) name house 1 B a </code></pre> <p>You can also check <a href="http://pandas.pydata.org/pandas-docs/stable/missing_data.html#values-considered-missing" rel="nofollow">pandas documentation</a>:</p> <p><strong>Warning</strong></p> <blockquote> <p>One has to be mindful that in python (and numpy), the nan's don’t compare equal, but None's do. Note that Pandas/numpy uses the fact that np.nan != np.nan, and treats None like np.nan.</p> </blockquote> <pre><code>In [11]: None == None Out[11]: True In [12]: np.nan == np.nan Out[12]: False </code></pre> <blockquote> <p>So as compared to above, a scalar equality comparison versus a None/np.nan doesn’t provide useful information.</p> </blockquote> <pre><code>In [13]: df2['one'] == np.nan Out[13]: a False b False c False d False e False f False g False h False Name: one, dtype: bool </code></pre>
2
2016-09-29T06:07:49Z
[ "python", "pandas", "indexing", "multiple-columns", null ]
Python Issue after Windows 10 Update
39,762,666
<p>I'm going to make this question as succinct and to the point as possible. I have a jupyter notebook that worked perfectly yesterday. Today, my windows 10 machine demanded an update and after updating, the jupyter notebook now cannot run. I can import libraries and define functions, but when I actually go about using the libraries and functions for computations, the [*] denoting python is busy never goes away, meaning the code is stuck (or is super slow). Even after 20 minutes the first code cell is not carried out. Yesterday, this notebook would run all 40 (roughly) cells in seconds. I have no idea why this happened, where to start trouble shooting from, or who to turn to for support. Has anyone else encountered this issue?</p> <p>Windows: 10 (Version: 1607, OS Build: 14393) Python: Anaconda (Python 3.5) Architecture: 64 bit Jupyter Notebook File + Resorces: <a href="https://github.com/diggetybo/ICA-Attachments" rel="nofollow">https://github.com/diggetybo/ICA-Attachments</a></p>
0
2016-09-29T06:12:38Z
39,763,786
<p>Maybe the windows update meant well, but it created a lot of upheaval on my system. I had to reconfigure a lot of settings ranging from privacy to monitor display resolution. I believe some registry entries are different, although I typically don't look registry entries on a day to day basis. Some registry switcharoo is my best guess as to why Python suffered from the update.</p> <p>Good news is I got it functional again by reinstalling anaconda. It was a time consuming and desperate approach to the problem. At someone's leisure, any technical explanations as to why this might have occurred could still be worth posting. I'm hoping I won't have to do this every windows update. </p> <p>I'd like to exempt myself from user error, but I'm not sure whether others have this problem or not. Please share any similar experiences in the comments.</p> <p>Thanks for reading.</p>
0
2016-09-29T07:16:03Z
[ "python", "windows", "jupyter-notebook" ]
Parse dictionary looking for a specific string
39,762,673
<p>I'm writing a python script to make an API call to our Cisco ASA firewall and the information returned from the firewall is placed into a dictionary. I then need to parse through this dictionary looking for a specific string. The problem is, there's 1 key and what looks to be one large value. I've inserted an example of the output I receive from the firewall. </p> <blockquote> <p>{u'response': [u'object-group network ng-enc-incoming-ftp-outside\nnetwork-object 1.1.1.1 255.255.255.128\n network-object host 2.2.2.2\n network-object host 3.3.3.3\n network-object host 4.4.4.4\n network-object host 5.5.5.5]}</p> </blockquote> <p>Ideally, I would like to search this output looking for a specific IP and if that IP was present, present a message indicating as much. I can't seem to find a good way to parse through a really long value looking for a specific text string. </p>
0
2016-09-29T06:13:11Z
39,762,762
<p>If you only want to know whether or not a specific IP is in the response string, you can use the <code>in</code> operator:</p> <pre><code>if '2.2.2.2' in resp_dict['response'][0]: print('Found') </code></pre> <p>Or generalized:</p> <pre><code>ip = '2.2.2.2' if ip in resp_dict['response'][0]: print('{} found'.format(ip)) </code></pre>
2
2016-09-29T06:18:57Z
[ "python", "dictionary", "networking" ]
Parse dictionary looking for a specific string
39,762,673
<p>I'm writing a python script to make an API call to our Cisco ASA firewall and the information returned from the firewall is placed into a dictionary. I then need to parse through this dictionary looking for a specific string. The problem is, there's 1 key and what looks to be one large value. I've inserted an example of the output I receive from the firewall. </p> <blockquote> <p>{u'response': [u'object-group network ng-enc-incoming-ftp-outside\nnetwork-object 1.1.1.1 255.255.255.128\n network-object host 2.2.2.2\n network-object host 3.3.3.3\n network-object host 4.4.4.4\n network-object host 5.5.5.5]}</p> </blockquote> <p>Ideally, I would like to search this output looking for a specific IP and if that IP was present, present a message indicating as much. I can't seem to find a good way to parse through a really long value looking for a specific text string. </p>
0
2016-09-29T06:13:11Z
39,762,858
<p>Extract all the IPs first; then you can search for a particular IP address in the list or loop through all of them. Alternatively, you can search directly in the string as @DeepSpace says.</p> <pre><code>import re d = {u'response': [u'object-group network ng-enc-incoming-ftp-outside\n network-object 1.1.1.1 255.255.255.128\n network-object host 2.2.2.2\n network-object host 3.3.3.3\n network-object host 4.4.4.4\n network-object host 5.5.5.5']} ip = re.findall(r'(\d+\.\d+\.\d+\.\d+)', d['response'][0]) &gt;&gt;&gt; [u'1.1.1.1', u'255.255.255.128', u'2.2.2.2', u'3.3.3.3', u'4.4.4.4', u'5.5.5.5'] '1.1.1.1' in ip &gt;&gt;&gt; True '1.1.1.2' in ip &gt;&gt;&gt; False </code></pre>
0
2016-09-29T06:25:10Z
[ "python", "dictionary", "networking" ]
Copy file from URI format path to local path
39,763,063
<p>I am trying to copy a file which is on a server and all I've got is it's URI format path.<br> I've been trying to implement copying in C# .NET 4.5, but seems like CopyFile is not good with handling URI formats.<br> So I've used IronPython with shutil, but seems like it is also not good with URI format paths. </p> <p>How do I get that file local?</p> <pre><code>private string CopyFile(string from, string to, string pythonLibDir, string date) { var dateTime = DateTime.Today; if (dateTime.ToString("yy-MM-dd") == date) { return ""; } var pyEngine = Python.CreateEngine(); var paths = pyEngine.GetSearchPaths(); paths.Add(pythonLibDir); pyEngine.SetSearchPaths(paths); pyEngine.Execute("import shutil\n" + "shutil.copyfile('" + from + "', '" + to + "')"); return dateTime.ToString("yy-MM-dd"); } </code></pre> <p>I take all paths from xml config file.</p>
0
2016-09-29T06:36:51Z
39,763,131
<p>You can use a <code>WebClient</code> and download the file into a particular folder:</p> <pre><code>using (WebClient wc = new WebClient()) wc.DownloadFile("http://sitec.com/web/myfile.jpg", @"c:\images\xyz.jpg"); </code></pre> <p>Or you can use <code>HttpWebRequest</code> in case you just want to read the content of a file from a server:</p> <pre><code>var http = (HttpWebRequest)WebRequest.Create("http://sitetocheck.com"); var response = http.GetResponse(); var stream = response.GetResponseStream(); var sr = new StreamReader(stream); var content = sr.ReadToEnd(); </code></pre>
1
2016-09-29T06:40:35Z
[ "c#", "python", ".net", "ironpython" ]
Copy file from URI format path to local path
39,763,063
<p>I am trying to copy a file which is on a server and all I've got is it's URI format path.<br> I've been trying to implement copying in C# .NET 4.5, but seems like CopyFile is not good with handling URI formats.<br> So I've used IronPython with shutil, but seems like it is also not good with URI format paths. </p> <p>How do I get that file local?</p> <pre><code>private string CopyFile(string from, string to, string pythonLibDir, string date) { var dateTime = DateTime.Today; if (dateTime.ToString("yy-MM-dd") == date) { return ""; } var pyEngine = Python.CreateEngine(); var paths = pyEngine.GetSearchPaths(); paths.Add(pythonLibDir); pyEngine.SetSearchPaths(paths); pyEngine.Execute("import shutil\n" + "shutil.copyfile('" + from + "', '" + to + "')"); return dateTime.ToString("yy-MM-dd"); } </code></pre> <p>I take all paths from xml config file.</p>
0
2016-09-29T06:36:51Z
39,763,140
<p>With Python</p> <pre><code>import urllib urllib.urlretrieve("http://www.myserver.com/myfile", "myfile.txt") </code></pre> <p><a href="https://docs.python.org/2/library/urllib.html#urllib.urlretrieve" rel="nofollow"><code>urlretrieve</code></a></p> <blockquote> <p>Copy a network object denoted by a URL to a local file, if necessary. If the URL points to a local file, or a valid cached copy of the object exists, the object is not copied. </p> </blockquote>
1
2016-09-29T06:41:02Z
[ "c#", "python", ".net", "ironpython" ]
How to extract subjects in a sentence and their respective dependent phrases?
39,763,091
<p>I am trying to work on subject extraction in a sentence, so that I can get the sentiments in accordance with the subject. I am using <code>nltk</code> in python2.7 for this purpose. Take the following sentence as an example:</p> <p><code>Donald Trump is the worst president of USA, but Hillary is better than him</code></p> <p>He we can see that <code>Donald Trump</code> and <code>Hillary</code> are the two subjects, and sentiments related to <code>Donald Trump</code> is negative but related to <code>Hillary</code> are positive. Till now, I am able to break this sentence into chunks of noun phrases, and I am able to get the following:</p> <pre><code>(S (NP Donald/NNP Trump/NNP) is/VBZ (NP the/DT worst/JJS president/NN) in/IN (NP USA,/NNP) but/CC (NP Hillary/NNP) is/VBZ better/JJR than/IN (NP him/PRP)) </code></pre> <p>Now, how do I approach in finding the subjects from these noun phrases? Then how do I group the phrases meant for both the subjects together? Once I have the <strong>phrases meant for both the subjects separately</strong>, I can perform sentiment analysis on both of them separately.</p> <p><strong>EDIT</strong></p> <p>I looked into the library mentioned by @Krzysiek (<code>spacy</code>), and it gave me dependency trees as well in the sentences. </p> <p>Here is the code:</p> <pre><code>from spacy.en import English parser = English() example = u"Donald Trump is the worst president of USA, but Hillary is better than him" parsedEx = parser(example) # shown as: original token, dependency tag, head word, left dependents, right dependents for token in parsedEx: print(token.orth_, token.dep_, token.head.orth_, [t.orth_ for t in token.lefts], [t.orth_ for t in token.rights]) </code></pre> <p>Here are the dependency trees:</p> <pre><code>(u'Donald', u'compound', u'Trump', [], []) (u'Trump', u'nsubj', u'is', [u'Donald'], []) (u'is', u'ROOT', u'is', [u'Trump'], [u'president', u',', u'but', u'is']) (u'the', u'det', u'president', [], []) (u'worst', u'amod', u'president', [], []) (u'president', u'attr', u'is', [u'the', u'worst'], [u'of']) (u'of', u'prep', u'president', [], [u'USA']) (u'USA', u'pobj', u'of', [], []) (u',', u'punct', u'is', [], []) (u'but', u'cc', u'is', [], []) (u'Hillary', u'nsubj', u'is', [], []) (u'is', u'conj', u'is', [u'Hillary'], [u'better']) (u'better', u'acomp', u'is', [], [u'than']) (u'than', u'prep', u'better', [], [u'him']) (u'him', u'pobj', u'than', [], []) </code></pre> <p>This gives in depth insights into the dependencies of the different tokens of the sentences. Here is the <a href="http://www.mathcs.emory.edu/~choi/doc/clear-dependency-2012.pdf" rel="nofollow">link</a> to the paper which describes the dependencies between different pairs. How can I use this tree to attach the contextual words for different subjects to them?</p>
4
2016-09-29T06:38:17Z
39,764,285
<p>I was recently just solving very similar problem - I needed to extract subject(s), action, object(s). And I open sourced my work so you can check this library: <a href="https://github.com/krzysiekfonal/textpipeliner" rel="nofollow">https://github.com/krzysiekfonal/textpipeliner</a></p> <p>This based on spacy(opponent to nltk) but it also based on sentence tree.</p> <p>So for instance let's get this doc embedded in spacy as example:</p> <pre><code>import spacy nlp = spacy.load("en") doc = nlp(u"The Empire of Japan aimed to dominate Asia and the " \ "Pacific and was already at war with the Republic of China " \ "in 1937, but the world war is generally said to have begun on " \ "1 September 1939 with the invasion of Poland by Germany and " \ "subsequent declarations of war on Germany by France and the United Kingdom. " \ "From late 1939 to early 1941, in a series of campaigns and treaties, Germany conquered " \ "or controlled much of continental Europe, and formed the Axis alliance with Italy and Japan. " \ "Under the Molotov-Ribbentrop Pact of August 1939, Germany and the Soviet Union partitioned and " \ "annexed territories of their European neighbours, Poland, Finland, Romania and the Baltic states. " \ "The war continued primarily between the European Axis powers and the coalition of the United Kingdom " \ "and the British Commonwealth, with campaigns including the North Africa and East Africa campaigns, " \ "the aerial Battle of Britain, the Blitz bombing campaign, the Balkan Campaign as well as the " \ "long-running Battle of the Atlantic. In June 1941, the European Axis powers launched an invasion " \ "of the Soviet Union, opening the largest land theatre of war in history, which trapped the major part " \ "of the Axis' military forces into a war of attrition. In December 1941, Japan attacked " \ "the United States and European territories in the Pacific Ocean, and quickly conquered much of " \ "the Western Pacific.") </code></pre> <p>You can now create a simple pipes structure(more about pipes in readme of this project):</p> <pre><code>pipes_structure = [SequencePipe([FindTokensPipe("VERB/nsubj/*"), NamedEntityFilterPipe(), NamedEntityExtractorPipe()]), FindTokensPipe("VERB"), AnyPipe([SequencePipe([FindTokensPipe("VBD/dobj/NNP"), AggregatePipe([NamedEntityFilterPipe("GPE"), NamedEntityFilterPipe("PERSON")]), NamedEntityExtractorPipe()]), SequencePipe([FindTokensPipe("VBD/**/*/pobj/NNP"), AggregatePipe([NamedEntityFilterPipe("LOC"), NamedEntityFilterPipe("PERSON")]), NamedEntityExtractorPipe()])])] engine = PipelineEngine(pipes_structure, doc, [0,1,2]) engine.process() </code></pre> <p>And in the result you will get:</p> <pre><code>&gt;&gt;&gt;[([Germany], [conquered], [Europe]), ([Japan], [attacked], [the, United, States])] </code></pre> <p>Actually it based strongly (the finding pipes) on another library - grammaregex. You can read about it from a post: <a href="https://medium.com/@krzysiek89dev/grammaregex-library-regex-like-for-text-mining-49e5706c9c6d#.zgx7odhsc" rel="nofollow">https://medium.com/@krzysiek89dev/grammaregex-library-regex-like-for-text-mining-49e5706c9c6d#.zgx7odhsc</a></p> <p><strong>EDITED</strong></p> <p>Actually the example I presented in readme discards adj, but all you need is to adjust pipe structure passed to engine according to your needs. 
For instance for your sample sentences I can propose such structure/solution which give you tuple of 3 elements(subj, verb, adj) per every sentence:</p> <pre><code>import spacy from textpipeliner import PipelineEngine from textpipeliner.pipes import * pipes_structure = [SequencePipe([FindTokensPipe("VERB/nsubj/NNP"), NamedEntityFilterPipe(), NamedEntityExtractorPipe()]), AggregatePipe([FindTokensPipe("VERB"), FindTokensPipe("VERB/xcomp/VERB/aux/*"), FindTokensPipe("VERB/xcomp/VERB")]), AnyPipe([FindTokensPipe("VERB/[acomp,amod]/ADJ"), AggregatePipe([FindTokensPipe("VERB/[dobj,attr]/NOUN/det/DET"), FindTokensPipe("VERB/[dobj,attr]/NOUN/[acomp,amod]/ADJ")])]) ] engine = PipelineEngine(pipes_structure, doc, [0,1,2]) engine.process() </code></pre> <p>It will give you result:</p> <pre><code>[([Donald, Trump], [is], [the, worst])] </code></pre> <p>A little bit complexity is in the fact you have compound sentence and the lib produce one tuple per sentence - I'll soon add possibility(I need it too for my project) to pass a list of pipe structures to engine to allow produce more tuples per sentence. But for now you can solve it just by creating second engine for compounded sents which structure will differ only of VERB/conj/VERB instead of VERB(those regex starts always from ROOT, so VERB/conj/VERB lead you to just second verb in compound sentence):</p> <pre><code>pipes_structure_comp = [SequencePipe([FindTokensPipe("VERB/conj/VERB/nsubj/NNP"), NamedEntityFilterPipe(), NamedEntityExtractorPipe()]), AggregatePipe([FindTokensPipe("VERB/conj/VERB"), FindTokensPipe("VERB/conj/VERB/xcomp/VERB/aux/*"), FindTokensPipe("VERB/conj/VERB/xcomp/VERB")]), AnyPipe([FindTokensPipe("VERB/conj/VERB/[acomp,amod]/ADJ"), AggregatePipe([FindTokensPipe("VERB/conj/VERB/[dobj,attr]/NOUN/det/DET"), FindTokensPipe("VERB/conj/VERB/[dobj,attr]/NOUN/[acomp,amod]/ADJ")])]) ] engine2 = PipelineEngine(pipes_structure_comp, doc, [0,1,2]) </code></pre> <p>And now after you run both engines you will get expected result :)</p> <pre><code>engine.process() engine2.process() [([Donald, Trump], [is], [the, worst])] [([Hillary], [is], [better])] </code></pre> <p>This is what you need I think. Of course I just quickly created a pipe structure for given example sentence and it won't work for every case, but I saw a lot of sentence structures and it will already fulfil quite nice percentage, but then you can just add more FindTokensPipe etc for cases which won't work currently and I'm sure after a few adjustment you will cover really good number of possible sentences(english is not too complex so...:)</p>
3
2016-09-29T07:42:41Z
[ "python", "nlp", "nltk", "spacy" ]
How to extract subjects in a sentence and their respective dependent phrases?
39,763,091
<p>I am trying to work on subject extraction in a sentence, so that I can get the sentiments in accordance with the subject. I am using <code>nltk</code> in python2.7 for this purpose. Take the following sentence as an example:</p> <p><code>Donald Trump is the worst president of USA, but Hillary is better than him</code></p> <p>He we can see that <code>Donald Trump</code> and <code>Hillary</code> are the two subjects, and sentiments related to <code>Donald Trump</code> is negative but related to <code>Hillary</code> are positive. Till now, I am able to break this sentence into chunks of noun phrases, and I am able to get the following:</p> <pre><code>(S (NP Donald/NNP Trump/NNP) is/VBZ (NP the/DT worst/JJS president/NN) in/IN (NP USA,/NNP) but/CC (NP Hillary/NNP) is/VBZ better/JJR than/IN (NP him/PRP)) </code></pre> <p>Now, how do I approach in finding the subjects from these noun phrases? Then how do I group the phrases meant for both the subjects together? Once I have the <strong>phrases meant for both the subjects separately</strong>, I can perform sentiment analysis on both of them separately.</p> <p><strong>EDIT</strong></p> <p>I looked into the library mentioned by @Krzysiek (<code>spacy</code>), and it gave me dependency trees as well in the sentences. </p> <p>Here is the code:</p> <pre><code>from spacy.en import English parser = English() example = u"Donald Trump is the worst president of USA, but Hillary is better than him" parsedEx = parser(example) # shown as: original token, dependency tag, head word, left dependents, right dependents for token in parsedEx: print(token.orth_, token.dep_, token.head.orth_, [t.orth_ for t in token.lefts], [t.orth_ for t in token.rights]) </code></pre> <p>Here are the dependency trees:</p> <pre><code>(u'Donald', u'compound', u'Trump', [], []) (u'Trump', u'nsubj', u'is', [u'Donald'], []) (u'is', u'ROOT', u'is', [u'Trump'], [u'president', u',', u'but', u'is']) (u'the', u'det', u'president', [], []) (u'worst', u'amod', u'president', [], []) (u'president', u'attr', u'is', [u'the', u'worst'], [u'of']) (u'of', u'prep', u'president', [], [u'USA']) (u'USA', u'pobj', u'of', [], []) (u',', u'punct', u'is', [], []) (u'but', u'cc', u'is', [], []) (u'Hillary', u'nsubj', u'is', [], []) (u'is', u'conj', u'is', [u'Hillary'], [u'better']) (u'better', u'acomp', u'is', [], [u'than']) (u'than', u'prep', u'better', [], [u'him']) (u'him', u'pobj', u'than', [], []) </code></pre> <p>This gives in depth insights into the dependencies of the different tokens of the sentences. Here is the <a href="http://www.mathcs.emory.edu/~choi/doc/clear-dependency-2012.pdf" rel="nofollow">link</a> to the paper which describes the dependencies between different pairs. How can I use this tree to attach the contextual words for different subjects to them?</p>
4
2016-09-29T06:38:17Z
40,014,532
<p>I was going through spacy library more, and I finally figured out the solution through dependency management. Thanks to <a href="https://github.com/NSchrading/intro-spacy-nlp/blob/master/subject_object_extraction.py" rel="nofollow">this</a> repo, I figured out how to include adjectives as well in my subjective verb object (making it SVAO's), as well as taking out compound subjects in the query. Here goes my solution:</p> <pre><code>from nltk.stem.wordnet import WordNetLemmatizer from spacy.en import English SUBJECTS = ["nsubj", "nsubjpass", "csubj", "csubjpass", "agent", "expl"] OBJECTS = ["dobj", "dative", "attr", "oprd"] ADJECTIVES = ["acomp", "advcl", "advmod", "amod", "appos", "nn", "nmod", "ccomp", "complm", "hmod", "infmod", "xcomp", "rcmod", "poss"," possessive"] COMPOUNDS = ["compound"] PREPOSITIONS = ["prep"] def getSubsFromConjunctions(subs): moreSubs = [] for sub in subs: # rights is a generator rights = list(sub.rights) rightDeps = {tok.lower_ for tok in rights} if "and" in rightDeps: moreSubs.extend([tok for tok in rights if tok.dep_ in SUBJECTS or tok.pos_ == "NOUN"]) if len(moreSubs) &gt; 0: moreSubs.extend(getSubsFromConjunctions(moreSubs)) return moreSubs def getObjsFromConjunctions(objs): moreObjs = [] for obj in objs: # rights is a generator rights = list(obj.rights) rightDeps = {tok.lower_ for tok in rights} if "and" in rightDeps: moreObjs.extend([tok for tok in rights if tok.dep_ in OBJECTS or tok.pos_ == "NOUN"]) if len(moreObjs) &gt; 0: moreObjs.extend(getObjsFromConjunctions(moreObjs)) return moreObjs def getVerbsFromConjunctions(verbs): moreVerbs = [] for verb in verbs: rightDeps = {tok.lower_ for tok in verb.rights} if "and" in rightDeps: moreVerbs.extend([tok for tok in verb.rights if tok.pos_ == "VERB"]) if len(moreVerbs) &gt; 0: moreVerbs.extend(getVerbsFromConjunctions(moreVerbs)) return moreVerbs def findSubs(tok): head = tok.head while head.pos_ != "VERB" and head.pos_ != "NOUN" and head.head != head: head = head.head if head.pos_ == "VERB": subs = [tok for tok in head.lefts if tok.dep_ == "SUB"] if len(subs) &gt; 0: verbNegated = isNegated(head) subs.extend(getSubsFromConjunctions(subs)) return subs, verbNegated elif head.head != head: return findSubs(head) elif head.pos_ == "NOUN": return [head], isNegated(tok) return [], False def isNegated(tok): negations = {"no", "not", "n't", "never", "none"} for dep in list(tok.lefts) + list(tok.rights): if dep.lower_ in negations: return True return False def findSVs(tokens): svs = [] verbs = [tok for tok in tokens if tok.pos_ == "VERB"] for v in verbs: subs, verbNegated = getAllSubs(v) if len(subs) &gt; 0: for sub in subs: svs.append((sub.orth_, "!" 
+ v.orth_ if verbNegated else v.orth_)) return svs def getObjsFromPrepositions(deps): objs = [] for dep in deps: if dep.pos_ == "ADP" and dep.dep_ == "prep": objs.extend([tok for tok in dep.rights if tok.dep_ in OBJECTS or (tok.pos_ == "PRON" and tok.lower_ == "me")]) return objs def getAdjectives(toks): toks_with_adjectives = [] for tok in toks: adjs = [left for left in tok.lefts if left.dep_ in ADJECTIVES] adjs.append(tok) adjs.extend([right for right in tok.rights if tok.dep_ in ADJECTIVES]) tok_with_adj = " ".join([adj.lower_ for adj in adjs]) toks_with_adjectives.extend(adjs) return toks_with_adjectives def getObjsFromAttrs(deps): for dep in deps: if dep.pos_ == "NOUN" and dep.dep_ == "attr": verbs = [tok for tok in dep.rights if tok.pos_ == "VERB"] if len(verbs) &gt; 0: for v in verbs: rights = list(v.rights) objs = [tok for tok in rights if tok.dep_ in OBJECTS] objs.extend(getObjsFromPrepositions(rights)) if len(objs) &gt; 0: return v, objs return None, None def getObjFromXComp(deps): for dep in deps: if dep.pos_ == "VERB" and dep.dep_ == "xcomp": v = dep rights = list(v.rights) objs = [tok for tok in rights if tok.dep_ in OBJECTS] objs.extend(getObjsFromPrepositions(rights)) if len(objs) &gt; 0: return v, objs return None, None def getAllSubs(v): verbNegated = isNegated(v) subs = [tok for tok in v.lefts if tok.dep_ in SUBJECTS and tok.pos_ != "DET"] if len(subs) &gt; 0: subs.extend(getSubsFromConjunctions(subs)) else: foundSubs, verbNegated = findSubs(v) subs.extend(foundSubs) return subs, verbNegated def getAllObjs(v): # rights is a generator rights = list(v.rights) objs = [tok for tok in rights if tok.dep_ in OBJECTS] objs.extend(getObjsFromPrepositions(rights)) potentialNewVerb, potentialNewObjs = getObjFromXComp(rights) if potentialNewVerb is not None and potentialNewObjs is not None and len(potentialNewObjs) &gt; 0: objs.extend(potentialNewObjs) v = potentialNewVerb if len(objs) &gt; 0: objs.extend(getObjsFromConjunctions(objs)) return v, objs def getAllObjsWithAdjectives(v): # rights is a generator rights = list(v.rights) objs = [tok for tok in rights if tok.dep_ in OBJECTS] if len(objs)== 0: objs = [tok for tok in rights if tok.dep_ in ADJECTIVES] objs.extend(getObjsFromPrepositions(rights)) potentialNewVerb, potentialNewObjs = getObjFromXComp(rights) if potentialNewVerb is not None and potentialNewObjs is not None and len(potentialNewObjs) &gt; 0: objs.extend(potentialNewObjs) v = potentialNewVerb if len(objs) &gt; 0: objs.extend(getObjsFromConjunctions(objs)) return v, objs def findSVOs(tokens): svos = [] verbs = [tok for tok in tokens if tok.pos_ == "VERB" and tok.dep_ != "aux"] for v in verbs: subs, verbNegated = getAllSubs(v) # hopefully there are subs, if not, don't examine this verb any longer if len(subs) &gt; 0: v, objs = getAllObjs(v) for sub in subs: for obj in objs: objNegated = isNegated(obj) svos.append((sub.lower_, "!" + v.lower_ if verbNegated or objNegated else v.lower_, obj.lower_)) return svos def findSVAOs(tokens): svos = [] verbs = [tok for tok in tokens if tok.pos_ == "VERB" and tok.dep_ != "aux"] for v in verbs: subs, verbNegated = getAllSubs(v) # hopefully there are subs, if not, don't examine this verb any longer if len(subs) &gt; 0: v, objs = getAllObjsWithAdjectives(v) for sub in subs: for obj in objs: objNegated = isNegated(obj) obj_desc_tokens = generate_left_right_adjectives(obj) sub_compound = generate_sub_compound(sub) svos.append((" ".join(tok.lower_ for tok in sub_compound), "!" 
+ v.lower_ if verbNegated or objNegated else v.lower_, " ".join(tok.lower_ for tok in obj_desc_tokens))) return svos def generate_sub_compound(sub): sub_compunds = [] for tok in sub.lefts: if tok.dep_ in COMPOUNDS: sub_compunds.extend(generate_sub_compound(tok)) sub_compunds.append(sub) for tok in sub.rights: if tok.dep_ in COMPOUNDS: sub_compunds.extend(generate_sub_compound(tok)) return sub_compunds def generate_left_right_adjectives(obj): obj_desc_tokens = [] for tok in obj.lefts: if tok.dep_ in ADJECTIVES: obj_desc_tokens.extend(generate_left_right_adjectives(tok)) obj_desc_tokens.append(obj) for tok in obj.rights: if tok.dep_ in ADJECTIVES: obj_desc_tokens.extend(generate_left_right_adjectives(tok)) return obj_desc_tokens </code></pre> <p>Now when you pass query such as:</p> <pre><code>from spacy.en import English parser = English() sentence = u""" Donald Trump is the worst president of USA, but Hillary is better than him """ parse = parser(sentence) print(findSVAOs(parse)) </code></pre> <p>You will get the following:</p> <pre><code>[(u'donald trump', u'is', u'worst president'), (u'hillary', u'is', u'better')] </code></pre> <p>Thank you @Krzysiek for your solution too, I actually was unable to go deep into your library to modify it. I rather tried modifying the above mentioned link to solve my problem.</p>
0
2016-10-13T07:12:48Z
[ "python", "nlp", "nltk", "spacy" ]
How to find the base case for a recursive function that changes a string of characters in a certain way
39,763,109
<p>Lately I have been working with recursion as, according to my professor, it represents pure functional programming approach as neither changes on variables nor side effects take place. Through my previous two questions <a href="http://stackoverflow.com/questions/39704084/how-to-write-a-recursive-function-that-takes-a-list-and-return-the-same-list-wit?noredirect=1#comment66726638_39704084">HERE</a> and <a href="http://stackoverflow.com/questions/39641229/how-to-find-a-given-element-in-nested-lists/39648238?noredirect=1#comment66624227_39648238">HERE</a> I have come to realise that its not the recursive definition per say is my problem, I understand how a recursive definition work and I have tried solving many mathematics related questions using the recursive definition and managed to solve them on first try. Because in mathematics you always have a crystal clear base case such as <code>0!</code> is 1 etc. However when it comes to working with <code>string</code> it seems to be always the case that i have no idea how constitute my base case in form of :</p> <pre><code>if (something): return something else: invoke the recursive function </code></pre> <p>for example give a <code>list</code> of <code>string</code>, or <code>char</code> use a recursive definition to remove the vowels or alphanumeric <code>char</code> etc. As mention earlier its functional programming so no side effects no variable changes are permitted. Which raises the question, such questions are not mathematical how can one come up with base case? </p> <p>Thanks everybody in advance for helping me to figure out my misery </p>
1
2016-09-29T06:39:05Z
39,763,357
<p>Well, you're iterating over a list of characters, so your base case can be an empty string. Here's a quick example of a recursive function that removes vowels from a string:</p> <pre><code>def strip_vowels(str): if not str: return '' if str[0] in ['a', 'e', 'i', 'o', 'u']: return strip_vowels(str[1:]) else: return str[0] + strip_vowels(str[1:]) </code></pre>
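<p>A quick demonstration of the function above, showing how the recursion bottoms out at the empty-string base case:</p> <pre><code>&gt;&gt;&gt; strip_vowels('recursion')
'rcrsn'
&gt;&gt;&gt; strip_vowels('')
''
</code></pre>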
3
2016-09-29T06:52:31Z
[ "python", "recursion", "functional-programming", "discrete-mathematics" ]
How to add additional fields on the base of inputs in django rest framework mongoengine
39,763,119
<p>I am developing an <code>API</code>, using <code>django-rest-framework-mongoengine</code> with <code>MongoDb</code>, I want to append additional fields to the request from the <code>serializer</code> on the base of user inputs, for example If user enters <code>keyword</code>=<code>@rohit49khatri</code>, I want to append two more fields to the request by manipulating <code>keyword</code>, like <code>type</code>=<code>username</code>, <code>username</code>=<code>rohit49khatri</code></p> <p>Here's my code:</p> <p><strong>Serializer</strong></p> <pre class="lang-py prettyprint-override"><code>class SocialFeedCreateSerializer(DocumentSerializer): type = 'username' class Meta: model = SocialFeedSearchTerm fields = [ 'keyword', 'type', ] read_only_fields = [ 'type' ] </code></pre> <p><strong>View</strong></p> <pre class="lang-py prettyprint-override"><code>class SocialFeedCreateAPIView(CreateAPIView): queryset = SocialFeed.objects.all() serializer_class = SocialFeedCreateSerializer def perform_create(self, serializer): print(self.request.POST.get('type')) </code></pre> <p>But when I print <code>type</code> parameter, It gives <code>None</code></p> <p>Please help me save some time. Thanks.</p>
0
2016-09-29T06:39:31Z
39,763,861
<p>for the additional question: how to get <code>type</code> parameter?</p> <pre><code># access it via `django rest framework request` self.request.data.get('type', None) # or via `django request` self.request.request.POST.get('type', None) </code></pre> <p>for the original question:</p> <p>situation 1) IMHO for you situation, <code>perform_create</code> can handle it:</p> <pre><code> def perform_create(self, serializer): foo = self.request.data.get('foo', None) bar = extract_bar_from_foo(foo) serializer.save( additional_foo='abc', additional_bar=bar, ) </code></pre> <p>situation 2) If you need to <strong>manipulate</strong> it before the data goes to serializer (so that the manipulated data will pass through serializer validation):</p> <pre><code>class SocialFeedCreateAPIView(CreateAPIView): queryset = SocialFeed.objects.all() serializer_class = SocialFeedCreateSerializer def create(self, request, *args, **kwargs): # you can check the original snipeet in rest_framework/mixin # original: serializer = self.get_serializer(data=request.data) request_data = self.get_create_data() if hasattr(self, 'get_create_data') else request.data serializer = self.get_serializer(data=request_data) serializer.is_valid(raise_exception=True) self.perform_create(serializer) headers = self.get_success_headers(serializer.data) return Response(serializer.data, status=status.HTTP_201_CREATED, headers=headers) def get_create_data(self): data = self.request.data.copy() # manipulte your data data['foo'] = 'foo' return data </code></pre> <p>situation 3) If you do need to <strong>manipulate</strong> the request: (here's just an example, you can try to find out another place to manipulate the request.)</p> <pre><code>class SocialFeedCreateAPIView(CreateAPIView): queryset = SocialFeed.objects.all() serializer_class = SocialFeedCreateSerializer def initial(self, request, *args, **kwargs): # or any other condition you want if self.request.method.lower() == 'post': data = self.request.data # manipulate it data['foo'] = 'foo' request._request.POST = data return super(SocialFeedCreateAPIView, self).initial(request, *args, **kwargs) </code></pre>
1
2016-09-29T07:19:54Z
[ "python", "django", "mongodb", "django-rest-framework", "mongoengine" ]
apply function on groups of k elements of a pandas Series
39,763,436
<p>I have a pandas Series:</p> <pre><code>0 1 1 5 2 20 3 -1 </code></pre> <p>Lets say I want to apply <code>mean()</code> on every two elements, so I get something like this:</p> <pre><code>0 3.0 1 9.5 </code></pre> <p>Is there an elegant way to do this?</p>
4
2016-09-29T06:57:02Z
39,763,472
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.groupby.html" rel="nofollow"><code>groupby</code></a> by <code>index</code> divide by <code>k=2</code>:</p> <pre><code>k = 2 print (s.index // k) Int64Index([0, 0, 1, 1], dtype='int64') print (s.groupby([s.index // k]).mean()) name 0 3.0 1 9.5 </code></pre>
3
2016-09-29T06:58:51Z
[ "python", "pandas", "group-by", "mean", "series" ]
apply function on groups of k elements of a pandas Series
39,763,436
<p>I have a pandas Series:</p> <pre><code>0 1 1 5 2 20 3 -1 </code></pre> <p>Lets say I want to apply <code>mean()</code> on every two elements, so I get something like this:</p> <pre><code>0 3.0 1 9.5 </code></pre> <p>Is there an elegant way to do this?</p>
4
2016-09-29T06:57:02Z
39,763,606
<p>If you are using this over large series and many times, you'll want to consider a fast approach. This solution uses all numpy functions and will be fast.</p> <p>Use <code>reshape</code> and construct new <code>pd.Series</code></p> <p>consider the <code>pd.Series</code> <code>s</code></p> <pre><code>s = pd.Series([1, 5, 20, -1]) </code></pre> <hr> <p><strong><em>generalized function</em></strong></p> <pre><code>def mean_k(s, k): pad = (k - s.shape[0] % k) % k nan = np.repeat(np.nan, pad) val = np.concatenate([s.values, nan]) return pd.Series(np.nanmean(val.reshape(-1, k), axis=1)) </code></pre> <p><strong><em>demonstration</em></strong></p> <pre><code>mean_k(s, 2) 0 3.0 1 9.5 dtype: float64 </code></pre> <hr> <pre><code>mean_k(s, 3) 0 8.666667 1 -1.000000 dtype: float64 </code></pre>
2
2016-09-29T07:05:44Z
[ "python", "pandas", "group-by", "mean", "series" ]
apply function on groups of k elements of a pandas Series
39,763,436
<p>I have a pandas Series:</p> <pre><code>0 1 1 5 2 20 3 -1 </code></pre> <p>Lets say I want to apply <code>mean()</code> on every two elements, so I get something like this:</p> <pre><code>0 3.0 1 9.5 </code></pre> <p>Is there an elegant way to do this?</p>
4
2016-09-29T06:57:02Z
39,763,641
<p>You can do this:</p> <pre><code>(s.iloc[::2].values + s.iloc[1::2])/2 </code></pre> <p>if you want you can also reset the index afterwards, so you have 0, 1 as the index, using:</p> <pre><code>((s.iloc[::2].values + s.iloc[1::2])/2).reset_index(drop=True) </code></pre>
1
2016-09-29T07:07:57Z
[ "python", "pandas", "group-by", "mean", "series" ]
Scapy: How to manipulate Host in http header?
39,763,486
<p>I wrote this piece of code to get the HTTP header and set Host:</p>

<pre><code>http_layer = packet.getlayer(http.HTTPRequest).fields
http_layer['Host'] = "newHostName"
return packet
</code></pre>

<p>After running the aforementioned code, the new host name is set correctly, but when I write the packet to a pcap file I still see the previous host in the HTTP fields. Is there a reliable way to manipulate <code>http_layer['Host']</code>? Any help would be appreciated. Regards.</p>
0
2016-09-29T06:59:29Z
39,806,000
<p>I eventually found the answer. The key point is that <code>scapy</code> first parses the <code>HTTP Request</code> and exposes a dict of its fields. So when we assign a new value to a field such as <code>Host</code>, we only change that already-parsed copy and not the original header data. This is the way to modify <code>Host</code> (or any other header field):</p>

<pre><code>str_headers = pkt['HTTP']['HTTP Request'].fields['Headers']
str_headers = str_headers.replace('Host: ' + pkt['HTTP']['HTTP Request'].fields['Host'], 'Host: ' + new_val)
pkt['HTTP']['HTTP Request'].fields['Headers'] = str_headers
return pkt
</code></pre>
0
2016-10-01T11:22:57Z
[ "python", "packet", "scapy" ]
rpy2, package 'outliers' functions not working
39,763,558
<p>I have a problem with almost all functions from R's <code>outliers</code> package. The only one that works correctly for me is <code>outlier</code>.</p>

<pre><code>list_ = ['chisq.out.test','cochran.test', 'dixon.test', 'grubbs.test', 'outlier', 'qcochran']

y = some data without brackets like 0.0, 0.0, 0.0, 0.48416666667,

for f in list_:
    try:
        code = """
        y=c({0});
        require(outliers);
        {1}(y);""".format(y, f)
</code></pre>

<p>I received the message: </p>

<blockquote>
  <p>could not find function "complete.cases"</p>
</blockquote>

<p>I also tried:</p>

<pre><code>y = FloatVector([0.0, 0.0, 0.0, 0.48416666667, 0.48716666667])
outliers = importr('outliers')
outliers.outlier(y) //works
outliers.cochran.test(y) //not working -&gt; syntax is different
</code></pre>

<p>Do you have any suggestions on how I can solve this? Thanks in advance.</p>
0
2016-09-29T07:03:06Z
39,768,032
<p>In R, the "dot" can be used in variable names. It cannot in Python.</p> <p><code>importr</code> is trying to help with this as described here: <a href="http://rpy2.readthedocs.io/en/version_2.8.x/robjects_rpackages.html" rel="nofollow">http://rpy2.readthedocs.io/en/version_2.8.x/robjects_rpackages.html</a></p>
0
2016-09-29T10:38:41Z
[ "python", "rpy2", "outliers" ]
imp.load_source loads wrong module
39,763,626
<p>I have two supposedly identical systems. On both systems, I run the same software, but on one of the two it doesn't function correctly.</p> <p>I'm trying to run function in a user-supplied <code>.py</code> file. I've reduced this to the following basic code that reproduces the error:</p> <pre><code>import imp with open("test_scripts/load_offsets.py") as fp: module = imp.load_source("load_offsets", "test_scripts", fp) dir(module) </code></pre> <p>On the first system, the output is correct:</p> <pre><code>['__builtins__', '__doc__', '__file__', '__name__', '__package__', 'os', 'test_load_offsets'] </code></pre> <p>I see one function called <code>test_load_offsets</code>, as expected.</p> <p>On the second system, I get the following output:</p> <pre><code>['__builtins__', '__doc__', '__file__', '__name__', '__package__', 'test_reset_position'] </code></pre> <p>Note that I see a different function now: <code>test_reset_position</code>. However, on both systems, the file <code>test_scripts/load_offsets.py</code> is identical! More importantly, there is no function <code>test_reset_position</code> in this file. </p> <p>There is, however, a function <code>test_reset_position</code> in a different file, namely <code>test_scripts/reset_position.py</code>. In addition, in the directory where I executed the code sample, there is a file <code>test_scriptsc</code> (this is the case on both systems). It contains, I think, byte code, but on the system where I see the problem I can recognise parts of the file <code>test_scripts/test_reset_position.py</code> in it. If I remove <code>test_scriptsc</code>, it works fine again, until load a different file and then the problem starts again.</p> <p>So, my conclusion is that on the system with the problem, the file <code>test_scriptsc</code> is not updated correctly. However, I do not understand why, nor do I understand why the problem is only on one of the two systems. The only difference I can find between the two systems is that the problematic system runs Python Python 2.7.12 on Ubuntu Linux 16.04.1 while the system without the problem runs Python 2.7.11+ on Ubuntu 16.04.</p> <p>Can someone help me find out what is going on here? I have no clue what to look for...</p>
0
2016-09-29T07:07:01Z
39,772,942
<p>The reason why it is failing is because I'm doing it wrong. The second argument to <code>load_source</code> should be the full path to the source file, not just to the directory containing it <a href="https://docs.python.org/2/library/imp.html#imp.load_source" rel="nofollow" title="Python 2.7.12 documentation">Python 2.7.12 documentation</a>. I'm not sure why I came up with my implementation, nor do I know why Python 2.7.11 was accepting it anyway, but the solution is to do it the right way:</p> <pre><code>with open("test_scripts/load_offsets.py") as fp: m = imp.load_module("load_offsets", fp, "test_scripts/load_offsets.py", ("py","r",imp.PY_SOURCE)) dir(m) </code></pre> <p>This correctly gives:</p> <pre><code>['__builtins__', '__doc__', '__file__', '__name__', '__package__', 'os', 'test_load_offsets'] </code></pre> <p>I think the file <code>test_scriptsc</code> was an attempt of byte compiling a <code>.py</code> file, but since I didn't give the full path, I ended up with something that was a sort-of-compiled directory. Apparently Python 2.7.11 handled this differently than Python 2.7.12.</p>
0
2016-09-29T14:22:16Z
[ "python", "python-2.7", "python-import" ]
Error in webdriver.Chrome() After updated my Ubuntu from 14.04 to 16.04
39,763,630
<p>I have recently updated my Ubuntu from 14.04 to 16.04. When I try to run <code>driver = webdriver.Chrome()</code> using Python Selenium, I get the following error:</p>

<pre><code>File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/chrome/webdriver.py", line 67, in __init__
    desired_capabilities=desired_capabilities)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 87, in __init__
    self.start_session(desired_capabilities, browser_profile)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 141, in start_session
    'desiredCapabilities': desired_capabilities,
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/webdriver.py", line 201, in execute
    self.error_handler.check_response(response)
  File "/usr/local/lib/python2.7/dist-packages/selenium/webdriver/remote/errorhandler.py", line 181, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: unknown error: unrecognized Blink revision: d65a1c2f313b54fb0b90bb8a1082e4f5ecba2dda
  (Driver info: chromedriver=2.10.267518,platform=Linux 4.4.0-21-generic x86_64)
</code></pre>

<p>Can anyone please help me out with a solution?</p>
0
2016-09-29T07:07:15Z
39,773,862
<p>That's an old version of chromedriver. First things first: download the latest chromedriver and add it to the system path.</p>
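<p>If you'd rather not touch the system path, you can also point Selenium at the downloaded binary directly (the path below is just an example):</p>

<pre><code>from selenium import webdriver

# executable_path is illustrative -- use wherever you unpacked the new chromedriver
driver = webdriver.Chrome(executable_path='/home/user/bin/chromedriver')
driver.get('http://example.com')
driver.quit()
</code></pre>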
0
2016-09-29T15:04:28Z
[ "python", "ubuntu-16.04" ]
in learning data mining with python ch5 error
39,763,709
<p><strong>code:</strong></p> <pre><code>import os import pandas as pd data_folder = os.path.join(os.path.expanduser("~"),"data","Ads") data_filename = os.path.join(data_folder,"ad.data") def convert_number(x): try: return float(x) except ValueError: return np.nan from collections import defaultdict converters = defaultdict(convert_number) converters[1558] = lambda x:1 if x.strip() == "ad." else 0 ads = pd.read_csv(data_filename,header=None,converters=converters) ads[:5] x = ads.drop(1558,axis=1).values y = ads[1558] from sklearn.decomposition import PCA pca = PCA(n_components=5) xd = pca.fit_transform(x) import numpy as np np.set_printoptions(precision=3,suppress=True) pca.explained_variance_ratio_ </code></pre> <p><strong>error:</strong></p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-10-f726f2ff6f29&gt; in &lt;module&gt;() 1 from sklearn.decomposition import PCA 2 pca = PCA(n_components=5) ----&gt; 3 xd = pca.fit_transform(x) 4 import numpy as np 5 np.set_printoptions(precision=3,suppress=True) /home/kongnian/anaconda3/lib/python3.5/site-packages/sklearn/decomposition/pca.py in fit_transform(self, X, y) 239 240 """ --&gt; 241 U, S, V = self._fit(X) 242 U = U[:, :self.n_components_] 243 /home/kongnian/anaconda3/lib/python3.5/site-packages/sklearn/decomposition/pca.py in _fit(self, X) 266 requested. 267 """ --&gt; 268 X = check_array(X) 269 n_samples, n_features = X.shape 270 X = as_float_array(X, copy=self.copy) /home/kongnian/anaconda3/lib/python3.5/site-packages/sklearn/utils/validation.py in check_array(array, accept_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, warn_on_dtype, estimator) 371 force_all_finite) 372 else: --&gt; 373 array = np.array(array, dtype=dtype, order=order, copy=copy) 374 375 if ensure_2d: ValueError: could not convert string to float: '?' </code></pre> <p><strong>dataset:</strong> <a href="http://archive.ics.uci.edu/ml/datasets/Internet+Advertisements" rel="nofollow">http://archive.ics.uci.edu/ml/datasets/Internet+Advertisements</a> download Advertisements dataset <strong>os information:</strong> Linux ubuntu 4.4.0-40-generic #60-Ubuntu SMP Fri Sep 23 16:45:45 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux</p>
0
2016-09-29T07:11:51Z
39,766,654
<p>The data source you linked to contains <code>?</code> symbols - I presume these are missing values. I suggest filtering them out at the CSV-reading stage, like so:</p>

<pre><code>ads = pd.read_csv(data_filename,header=None,converters=converters, na_values='?')
</code></pre>

<p>You may find more info on how to deal with missing values in the <a href="http://pandas.pydata.org/pandas-docs/stable/" rel="nofollow">pandas docs</a>.</p>
0
2016-09-29T09:34:01Z
[ "python" ]
Python random sampling in multiple indices
39,764,092
<p>I have a data frame according to below:</p> <pre> id_1 id_2 value 1 0 1 1 1 2 1 2 3 2 0 4 2 1 1 3 0 5 3 1 1 4 0 5 4 1 1 4 2 6 4 3 7 11 0 8 11 1 14 13 0 10 13 1 9 </pre> <p>I would like to take out a random sample of size n, without replacement, from this table based on <em>id_1</em>. This row needs to be unique with respect to the <em>id_1</em> column and can only occur once. </p> <p>End result something like:</p> <pre> id_1 id_2 value 1 1 2 2 0 4 4 3 7 13 0 10 </pre> <p>I have tried to do a group by and use the indices to take out a row through <em>random.sample</em> but it dosent go all the way. </p> <p>Can someone give me a pointer on how to make this work? Code for DF below!</p> <p>As always, thanks for time and input!</p> <p>/swepab</p> <pre><code>df = pd.DataFrame({'id_1' : [1,1,1,2,2,3,3,4,4,4,4,11,11,13,13], 'id_2' : [0,1,2,0,1,0,1,0,1,2,3,0,1,0,1], 'value_col' : [1,2,3,4,1,5,1,5,1,6,7,8,14,10,9]}) </code></pre>
1
2016-09-29T07:32:30Z
39,764,476
<p>This samples one random row per id:</p>

<pre><code>for id in sorted(set(df["id_1"])):
    print(df[df["id_1"] == id].sample(1))
</code></pre>

<p>PS:</p>

<p>the above solution translated using Python's list comprehension, returning a list of indices:</p>

<pre><code>idx = [df[df["id_1"] == val].sample(1).index[0] for val in sorted(set(df["id_1"]))]
</code></pre>
1
2016-09-29T07:54:41Z
[ "python", "sampling" ]
Python random sampling in multiple indices
39,764,092
<p>I have a data frame according to below:</p> <pre> id_1 id_2 value 1 0 1 1 1 2 1 2 3 2 0 4 2 1 1 3 0 5 3 1 1 4 0 5 4 1 1 4 2 6 4 3 7 11 0 8 11 1 14 13 0 10 13 1 9 </pre> <p>I would like to take out a random sample of size n, without replacement, from this table based on <em>id_1</em>. This row needs to be unique with respect to the <em>id_1</em> column and can only occur once. </p> <p>End result something like:</p> <pre> id_1 id_2 value 1 1 2 2 0 4 4 3 7 13 0 10 </pre> <p>I have tried to do a group by and use the indices to take out a row through <em>random.sample</em> but it dosent go all the way. </p> <p>Can someone give me a pointer on how to make this work? Code for DF below!</p> <p>As always, thanks for time and input!</p> <p>/swepab</p> <pre><code>df = pd.DataFrame({'id_1' : [1,1,1,2,2,3,3,4,4,4,4,11,11,13,13], 'id_2' : [0,1,2,0,1,0,1,0,1,2,3,0,1,0,1], 'value_col' : [1,2,3,4,1,5,1,5,1,6,7,8,14,10,9]}) </code></pre>
1
2016-09-29T07:32:30Z
39,764,555
<p>You can do this using vectorized functions (not loops):</p>

<pre><code>import numpy as np

uniqued = df.id_1.reindex(np.random.permutation(df.index)).drop_duplicates()
df.ix[np.random.choice(uniqued.index, 1, replace=False)]
</code></pre>

<p><code>uniqued</code> is created by a random shuffle + choice of a unique element by <code>id_1</code>. Then, a random sample (without replacement) is generated on it.</p>
1
2016-09-29T07:58:02Z
[ "python", "sampling" ]
Django grappelli change column width to full screen in admin interface
39,764,109
<p>I'm using django for the first time. I've experience with python but not with web development. Now I'm trying to design an admin page with grappelli. Only grappelli doesn't show the tables full screen (column width too small) and it looks horrible. Only a third of my screen is used. Is there a way to set the column width keeping the users screen size in mind. It look somewhat like this only worse. I can' t post any of the real data since it' s for scientific purposes. I tried to fins the answer as well but couldn't find any that work for me. I'm using django 1.10, python 2.7 and grappelli 2.8.2. The reason I'm using grappelli is because of the drop down filter. If anyone knows how to make a drop down filter in django that' s also fine by me. </p> <p>Grapelli interface:</p> <p><a href="http://i.stack.imgur.com/pD0PB.png" rel="nofollow"><img src="http://i.stack.imgur.com/pD0PB.png" alt="enter image description here"></a></p>
-1
2016-09-29T07:33:18Z
39,764,151
<p>Try <a href="http://stackoverflow.com/questions/12309788/how-to-fix-set-column-width-in-a-django-modeladmin-change-list-table-when-a-list">this one</a>, or <a href="https://www.sitepoint.com/community/t/changing-widths-of-django-admin-forms/34855/2" rel="nofollow">this one</a>. They are describing how to change the width of your columns in django admin.</p> <hr> <p>EDIT</p> <p>Install your virtual environment only for your project. if you do not have installed virtualenv in your main pip execute:</p> <p><code>pip install virtualenv</code></p> <p>then create your virtual environment just for your project with:</p> <ol> <li><code>virtualenv env</code></li> <li>Activate just created env with: <code>source env/bin/activate</code></li> <li>Install your dependencies like django or grappelli <code>pip install django</code> <code>pip install django-grappelli</code></li> <li>Now you can find your files inside newly created env</li> </ol> <hr> <p>EDIT 2</p> <p>If you want to add your custom styles to your template, there is a block which basically looks like this:</p> <pre><code>{% block stylesheets %} {{ block.super }} {{ media.css }} {% endblock %} </code></pre> <p>So if you want to add your custom css, just add path to your <code>.css</code> file. Or simply make custom css like this:</p> <pre><code>{% block stylesheets %} {{ block.super }} {{ media.css }} &lt;style&gt; .column-your_columns_name { width: 500px; } {% endblock %} </code></pre> <p>note that, every column can be accessed by class. Each column class is always named <code>column-fieldname</code>. So for example, if you have a column named <code>price</code> (column names are taken from models) you can access it in your styles by:</p> <p><code>.column-price{}</code></p>
0
2016-09-29T07:35:25Z
[ "python", "django", "dropdown", "django-grappelli" ]
Summing the values of one element of a dictionary based upon the values of another element
39,764,110
<p>Using Python, I have a list of two-element dictionaries which I would like to sum all the values of one element based upon the values of another element. ie.</p> <pre><code>[{'elev': 0.0, 'area': 3.52355755017894}, {'elev': 0.0, 'area': 3.5235575501288667}] </code></pre> <p>This is the format (although there are much more entries than this), and for each different <code>elev</code> I would like to have the sum of all the <code>area</code> values that correspond to it. For the <code>elev</code> value of <code>0.0</code> I would like the sum of all values, same for <code>elev</code> of <code>0.1</code> etc</p>
0
2016-09-29T07:33:21Z
39,764,210
<p>Here's a short code sample that puts the relevant sums into a dictionary. It simply iterates over each dictionary in the input list, adding the <code>area</code> values to the appropriate <code>elev</code> key.</p> <pre><code>from collections import defaultdict summed_dict = defaultdict(float) for tup in input_list: summed_dict[tup['elev']] += tup['area'] </code></pre>
0
2016-09-29T07:38:55Z
[ "python", "dictionary" ]
Summing the values of one element of a dictionary based upon the values of another element
39,764,110
<p>Using Python, I have a list of two-element dictionaries which I would like to sum all the values of one element based upon the values of another element. ie.</p> <pre><code>[{'elev': 0.0, 'area': 3.52355755017894}, {'elev': 0.0, 'area': 3.5235575501288667}] </code></pre> <p>This is the format (although there are much more entries than this), and for each different <code>elev</code> I would like to have the sum of all the <code>area</code> values that correspond to it. For the <code>elev</code> value of <code>0.0</code> I would like the sum of all values, same for <code>elev</code> of <code>0.1</code> etc</p>
0
2016-09-29T07:33:21Z
39,764,249
<p>You can try:</p> <pre><code>&gt;&gt;&gt; l = [{'elev': 0.0, 'area': 3.52355755017894}, {'elev': 0.0, 'area': 3.5235575501288667}] &gt;&gt;&gt; added_dict = {} &gt;&gt;&gt; for i in l: if i['elev'] in added_dict: added_dict[i['elev']] += i['area'] else: added_dict[i['elev']] = i['area'] &gt;&gt;&gt; added_dict {0.0: 7.0471151003078063} </code></pre>
0
2016-09-29T07:40:42Z
[ "python", "dictionary" ]
Summing the values of one element of a dictionary based upon the values of another element
39,764,110
<p>Using Python, I have a list of two-element dictionaries which I would like to sum all the values of one element based upon the values of another element. ie.</p> <pre><code>[{'elev': 0.0, 'area': 3.52355755017894}, {'elev': 0.0, 'area': 3.5235575501288667}] </code></pre> <p>This is the format (although there are much more entries than this), and for each different <code>elev</code> I would like to have the sum of all the <code>area</code> values that correspond to it. For the <code>elev</code> value of <code>0.0</code> I would like the sum of all values, same for <code>elev</code> of <code>0.1</code> etc</p>
0
2016-09-29T07:33:21Z
39,764,331
<p>Using a <code>defaultdict</code>, you don't need to the <code>if/else</code> statement:</p> <pre><code>from collections import defaultdict mylist = [{'elev': 0.0, 'area': 3.52355755017894}, {'elev': 0.0, 'area': 3.5235575501288667}] sumdict = defaultdict(float) for d in mylist: sumdict[d['elev']] += d.get('area', 0.0) print dict(sumdict) </code></pre>
1
2016-09-29T07:46:09Z
[ "python", "dictionary" ]
Summing the values of one element of a dictionary based upon the values of another element
39,764,110
<p>Using Python, I have a list of two-element dictionaries which I would like to sum all the values of one element based upon the values of another element. ie.</p> <pre><code>[{'elev': 0.0, 'area': 3.52355755017894}, {'elev': 0.0, 'area': 3.5235575501288667}] </code></pre> <p>This is the format (although there are much more entries than this), and for each different <code>elev</code> I would like to have the sum of all the <code>area</code> values that correspond to it. For the <code>elev</code> value of <code>0.0</code> I would like the sum of all values, same for <code>elev</code> of <code>0.1</code> etc</p>
0
2016-09-29T07:33:21Z
39,764,420
<p>this is very easily achieved using pandas. Sample code:</p> <pre><code>import pandas as pd df = pd.DataFrame([{'elev': 0.0, 'area': 3.52355755017894}, {'elev': 0.0, 'area': 3.5235575501288667}]) </code></pre> <p>which gives the following dataframe:</p> <pre><code> area elev 0 3.523558 0.0 1 3.523558 0.0 </code></pre> <p>Then group by the elev columns and sum the area's:</p> <pre><code>desired_output = df.groupby('elev').sum() </code></pre> <p>which gives:</p> <pre><code> area elev 0.0 7.047115 </code></pre> <p>If you want you can then output this dataframe back to a dictionary in a useful format using:</p> <pre><code>desired_output.to_dict('index') </code></pre> <p>which returns</p> <pre><code>{0.0: {'area': 7.0471151003078063}} </code></pre>
2
2016-09-29T07:51:16Z
[ "python", "dictionary" ]
How to get all objects in a Django model(Databse table) based on a filter using AngularJs
39,764,294
<p>Hi I have a Django model as below:</p> <pre><code>from __future__ import unicode_literals from django.db import models from django.contrib.auth.models import User # Create your models here. class Journal(models.Model): name = models.CharField(max_length=120) created_by = models.ForeignKey(User, related_name='+') date_created = models.DateTimeField(auto_now=False, auto_now_add=True) date_modified = models.DateTimeField(auto_now=True, auto_now_add=False ) def __unicode__(self): return (self.name) </code></pre> <p>And my angularJs below:</p> <pre><code>var app = angular.module("journalApp", []); app.controller("myCtrl", function($scope) { }); </code></pre> <p>How can I make a query in AngularJs so that i get all the objects from my Django model based on the user who logged in? Any idea guys? Thanks in advance.</p>
0
2016-09-29T07:43:16Z
39,765,355
<p>To retrieve all Journal objects, you can create a view with <a href="http://www.django-rest-framework.org/" rel="nofollow">django REST framework</a>. On the Angular side, simply create a <code>Service</code> and use <a href="https://docs.angularjs.org/api/ngResource/service/$resource" rel="nofollow"><code>$resource</code></a> or <a href="https://docs.angularjs.org/api/ng/service/$http" rel="nofollow"><code>$http</code></a>, or go for the recommended <a href="https://github.com/mgonto/restangular" rel="nofollow"><code>restangular</code></a>, to retrieve the data from the Django REST API. <a href="http://www.django-rest-framework.org/" rel="nofollow">django REST framework</a> can also be used for authentication.</p>
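<p>As a rough sketch of the server side (names are illustrative, based on the <code>Journal</code> model in the question), the view can filter its queryset by the logged-in user, so the Angular client only needs to issue a GET request:</p>

<pre><code>from rest_framework import serializers, generics, permissions
from .models import Journal

class JournalSerializer(serializers.ModelSerializer):
    class Meta:
        model = Journal
        fields = ('id', 'name', 'date_created', 'date_modified')

class JournalListView(generics.ListAPIView):
    serializer_class = JournalSerializer
    permission_classes = (permissions.IsAuthenticated,)

    def get_queryset(self):
        # only return journals created by the currently logged-in user
        return Journal.objects.filter(created_by=self.request.user)
</code></pre>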
1
2016-09-29T08:36:22Z
[ "python", "angularjs", "django" ]
How to get all objects in a Django model(Databse table) based on a filter using AngularJs
39,764,294
<p>Hi I have a Django model as below:</p> <pre><code>from __future__ import unicode_literals from django.db import models from django.contrib.auth.models import User # Create your models here. class Journal(models.Model): name = models.CharField(max_length=120) created_by = models.ForeignKey(User, related_name='+') date_created = models.DateTimeField(auto_now=False, auto_now_add=True) date_modified = models.DateTimeField(auto_now=True, auto_now_add=False ) def __unicode__(self): return (self.name) </code></pre> <p>And my angularJs below:</p> <pre><code>var app = angular.module("journalApp", []); app.controller("myCtrl", function($scope) { }); </code></pre> <p>How can I make a query in AngularJs so that i get all the objects from my Django model based on the user who logged in? Any idea guys? Thanks in advance.</p>
0
2016-09-29T07:43:16Z
39,765,559
<p>Try the this method on server side</p> <pre><code>import json from django.http import JsonResponse from django.views.generic View from .models import Journal class JSONResponseMixin(object): """ A mixin that can be used to render a JSON response. """ def render_to_json_response(self, context, **response_kwargs): """ Returns a JSON response, transforming 'context' to make the payload. """ return JsonResponse( self.get_data(context), **response_kwargs ) def get_data(self, context): """ Returns an object that will be serialized as JSON by json.dumps(). """ return context class JournalView(JSONResponseMixin, View): def get(self, request, *args, **kwargs): response_data = {} response_data['data'] = Journal.objects.filter(created_by=self.request.user).values() return self.render_to_json_response(dict(response=response_data)) </code></pre> <p>Of course you need to add the view it to your urls.py</p> <p>Client side try this method</p> <pre><code>var app = angular.module("journalApp", []); app.controller("myCtrl", function($scope, $http) { var url = "/your-url-to-view"; # {% url 'init_data' %}} $scope.initData = function(){ $http.get(url).then(function(data){ $scope.data = data.data.response; }) } }); </code></pre>
1
2016-09-29T08:46:08Z
[ "python", "angularjs", "django" ]
Django Error - django.db.utils.DatabaseError: Data truncated for column 'applied' at row 1
39,764,318
<p>I am getting a weird issue when executing <code>python manage.py migrate</code>. </p> <p>Below is the error. </p> <blockquote> <p>django.db.utils.DatabaseError: Data truncated for column 'applied' at row 1</p> </blockquote> <p><a href="http://i.stack.imgur.com/fG00J.jpg" rel="nofollow"><img src="http://i.stack.imgur.com/fG00J.jpg" alt="enter image description here"></a></p> <p>Can anyone help me?</p> <p>Thanks</p> <p>Here is my models.py data</p> <pre><code># This is an auto-generated Django model module. # You'll have to do the following manually to clean this up: # * Rearrange models' order # * Make sure each model has one field with primary_key=True # * Make sure each ForeignKey has `on_delete` set to the desired behavior. # * Remove `managed = False` lines if you wish to allow Django to create, modify, and delete the table # Feel free to rename the models, but don't rename db_table values or field names. from __future__ import unicode_literals from django.db import models class Threads(models.Model): date = models.DateTimeField(db_column='Date', blank=True, null=True) # Field name made lowercase. supplier = models.CharField(db_column='Supplier', max_length=45, blank=True, null=True) # Field name made lowercase. activethreads = models.CharField(db_column='ActiveThreads', max_length=45, blank=True, null=True) # Field name made lowercase. threadscreated = models.CharField(db_column='ThreadsCreated', max_length=45, blank=True, null=True) # Field name made lowercase. ipaddress = models.CharField(db_column='IPAddress', max_length=45, blank=True, null=True) # Field name made lowercase. class Meta: managed = False db_table = 'Threads' class DjangoContentType(models.Model): name = models.CharField(max_length=100) app_label = models.CharField(max_length=100) model = models.CharField(max_length=100) class Meta: managed = False db_table = 'django_content_type' unique_together = (('app_label', 'model'),) class DjangoMigrations(models.Model): app = models.CharField(max_length=255) name = models.CharField(max_length=255) applied = models.DateTimeField() class Meta: managed = False db_table = 'django_migrations' </code></pre>
1
2016-09-29T07:45:23Z
39,766,959
<p>Solution of my problem is mentioned on the below URL, this is a bug with the latest version of Django. You need to change USE_TZ = False in settings.py</p> <p><a href="http://stackoverflow.com/questions/34716360/incorrect-datetime-value-when-setting-up-django-with-mysql">Incorrect datetime value when setting up Django with MySQL</a></p> <p>After doing the above change then you will encounter a different issue while running "python manage.py migrate" which will give you the below error</p> <blockquote> <p>TypeError: can't multiply sequence by non-int of type 'tuple'</p> </blockquote> <p>For this issue please refer Chris Barrett solution mentioned below</p> <p><a href="https://bitbucket.org/Manfre/django-mssql/issues/80/error-when-using-django-19" rel="nofollow">https://bitbucket.org/Manfre/django-mssql/issues/80/error-when-using-django-19</a></p> <p>You need to make the required changes in versions/3.5.2/lib/python3.5/site-packages/mysql/connector/django/operations.py (Check your setup) and make the changes</p> <pre><code>def bulk_insert_sql(self, fields, placeholder_rows): """ Format the SQL for bulk insert """ placeholder_rows_sql = (", ".join(row) for row in placeholder_rows) values_sql = ", ".join("(%s)" % sql for sql in placeholder_rows_sql) return "VALUES " + values_sql </code></pre>
1
2016-09-29T09:47:45Z
[ "python", "django", "python-3.5" ]
Invisible unicode characters loaded to DB in python
39,764,520
<p>There are many questions and fixes for this but none seems to work for me. My problem is I am reading a file with strings and loading each line into DB.</p> <p>In file it is looking like normal text,while in DB it is read as a unicode space. I tried replacing it with a space and similar options but none worked.</p> <p>For example in text file the string will be like:</p> <pre><code>The abrupt departure </code></pre> <p>After inserted in DB, there it is looking like:</p> <pre><code>The abrupt departure </code></pre> <p>When I am trying to run query for data in DB, it is looking like:</p> <pre><code>"The abrupt\xc2\xa0departure" </code></pre> <p>I tried the following:</p> <pre><code>if "\xc2\xa0" in str: str.replace('\xa0', ' ') str.replace('\xc2', ' ') print str </code></pre> <p>the above code is printing the string like:</p> <pre><code>The abrupt departure </code></pre> <p>but while inserting back to DB, it is still the same.</p> <p>Any help is appreciated.</p>
0
2016-09-29T07:56:37Z
39,764,638
<p>Try this:</p> <p>This will remove <code>Unicode</code> character</p> <pre><code>&gt;&gt;&gt; s = "The abrupt departure" &gt;&gt;&gt; s = s.decode('unicode_escape').encode('ascii','ignore') &gt;&gt;&gt; s 'The abrupt departure' </code></pre> <p>Or, You can try with replace as you have tried. But you forget to reassign to same variable. </p> <pre><code>&gt;&gt;&gt; s = "The abrupt departure" &gt;&gt;&gt; s = s.replace('\xc2', '').replace('\xa0','') &gt;&gt;&gt; s 'The abrupt departure' </code></pre>
1
2016-09-29T08:02:33Z
[ "python", "mysql", "string", "unicode", "replace" ]
Invisible unicode characters loaded to DB in python
39,764,520
<p>There are many questions and fixes for this but none seems to work for me. My problem is I am reading a file with strings and loading each line into DB.</p> <p>In file it is looking like normal text,while in DB it is read as a unicode space. I tried replacing it with a space and similar options but none worked.</p> <p>For example in text file the string will be like:</p> <pre><code>The abrupt departure </code></pre> <p>After inserted in DB, there it is looking like:</p> <pre><code>The abrupt departure </code></pre> <p>When I am trying to run query for data in DB, it is looking like:</p> <pre><code>"The abrupt\xc2\xa0departure" </code></pre> <p>I tried the following:</p> <pre><code>if "\xc2\xa0" in str: str.replace('\xa0', ' ') str.replace('\xc2', ' ') print str </code></pre> <p>the above code is printing the string like:</p> <pre><code>The abrupt departure </code></pre> <p>but while inserting back to DB, it is still the same.</p> <p>Any help is appreciated.</p>
0
2016-09-29T07:56:37Z
39,765,157
<p>The point is strings are immutable, you need to assign the return value from <code>replace</code>:</p> <pre><code> s = s.replace('\xa0', ' ') s = s.replace('\xc2', ' ') </code></pre> <p>Also, don't use <code>str</code> as a variable name.</p>
1
2016-09-29T08:26:49Z
[ "python", "mysql", "string", "unicode", "replace" ]
Invisible unicode characters loaded to DB in python
39,764,520
<p>There are many questions and fixes for this but none seems to work for me. My problem is I am reading a file with strings and loading each line into DB.</p> <p>In file it is looking like normal text,while in DB it is read as a unicode space. I tried replacing it with a space and similar options but none worked.</p> <p>For example in text file the string will be like:</p> <pre><code>The abrupt departure </code></pre> <p>After inserted in DB, there it is looking like:</p> <pre><code>The abrupt departure </code></pre> <p>When I am trying to run query for data in DB, it is looking like:</p> <pre><code>"The abrupt\xc2\xa0departure" </code></pre> <p>I tried the following:</p> <pre><code>if "\xc2\xa0" in str: str.replace('\xa0', ' ') str.replace('\xc2', ' ') print str </code></pre> <p>the above code is printing the string like:</p> <pre><code>The abrupt departure </code></pre> <p>but while inserting back to DB, it is still the same.</p> <p>Any help is appreciated.</p>
0
2016-09-29T07:56:37Z
39,800,679
<p><code>C2A0</code> is a "NO-BREAK SPACE". <code>'Â '</code> is what you see if your <code>CHARACTER SET</code> settings are inconsistent.</p>

<p>Doing a <code>replace()</code> is merely masking the problem, and not helping when a different funny character comes into your table.</p>

<p>Since you have not provided enough info to say what you have done correctly versus incorrectly, let me point you at two references:</p>

<ul>
<li><p><a href="http://mysql.rjweb.org/doc.php/charcoll#python" rel="nofollow">Things to check for in Python</a></p></li>
<li><p><a href="http://stackoverflow.com/a/38363567/1766831">The things you <em>should</em> do for utf8, and what may have gone wrong to get "Mojibake"</a></p></li>
</ul>
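<p>For completeness, a sketch of a consistent connection setup (assuming a MySQLdb-style driver and UTF-8 columns; the table and credentials are made up) that lets such characters round-trip instead of turning into mojibake:</p>

<pre><code>import MySQLdb

# charset/use_unicode make the client talk UTF-8 end to end
# (use 'utf8mb4' if you also need characters outside the BMP)
conn = MySQLdb.connect(host='localhost', user='me', passwd='secret',
                       db='mydb', charset='utf8', use_unicode=True)
cur = conn.cursor()
cur.execute("INSERT INTO docs (body) VALUES (%s)", (u"The abrupt\u00a0departure",))
conn.commit()
</code></pre>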
1
2016-09-30T22:01:35Z
[ "python", "mysql", "string", "unicode", "replace" ]
How to read stream of a subprocess in python?
39,764,581
<p>I create a process with <code>subprocess.Popen</code> and get its <code>stdout</code>. I want to read content from <code>stdout</code> and print it out in another thread. Since my purpose it to make a an interactive program later, I cannot use <code>subprocess.communicate</code>.</p> <p>My basic requirement is: Once subprocess output something, the thread should immediately print it out to the screen.</p> <p>Here is the code</p> <pre><code>import subprocess import thread import time def helper(s): print "start",time.gmtime() while True: print "here" print s.next() print "finished",time.gmtime() thread.start_new_thread(helper,(out_stream,)) def process_main(): global stdin_main global stdout_main p = subprocess.Popen("/bin/bash", shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, bufsize=0) stdin_main = p.stdin stdout_main = p.stdout th1 = thread_print_stream(stdout_main) stdin_main.write("ls -la\n") stdin_main.flush() time.sleep(30) p.terminate() process_main() </code></pre> <p>Time elapse between "start" and "finished" should be very fast. However it is 30 seconds which is exactly the same as time before process terminated. I cannot understand why the output is not instance. Or how can I make it instancely?</p>
-1
2016-09-29T07:59:20Z
39,766,920
<p>As cdarke mentioned. This program is suffered from buffering. However it is more like a bug in python 2.x. The logic is nothing wrong in the current program. </p> <p>To fix the issue, you have to reopen the stdout. Like the code below:</p> <pre><code>def thread_print_stream(out_stream): def helper(s): print "start",time.gmtime() for line in io.open(out_stream.fileno()): print line print "finished",time.gmtime() thread.start_new_thread(helper,(out_stream,)) </code></pre> <p>And add</p> <pre><code>stdin_main.close() stdout_main.close() </code></pre> <p>before subprocess terminated, to make sure no error rise.</p>
0
2016-09-29T09:45:44Z
[ "python", "multithreading", "subprocess" ]
"Densifying" very large sparse matrices by rearranging rows/columns
39,764,582
<p>I have a very large sparse matrix (240k*4.5k, ≤1% non-zero elements), which I would like to "densify" by rearranging its rows and columns in a way that the upper left region is enriched in non-zero elements as much as possible. (To make it more manageable and visually assessable.) I would prefer <code>scipy</code> and related tools to do this.</p> <ul> <li>A good suggestion was already made <a href="http://stackoverflow.com/questions/15155276/rearrange-sparse-arrays-by-swapping-rows-and-columns">here</a> for a solution to "manually" swap rows/columns of sparse matrices, but it does not cover the challenge of identifying which rows/columns to swap to get an optimal enrichment (dense block) in the upper left corner.</li> <li>Note that a simple sorting of rows/columns based on the number of non-zero elements does not solve the problem. (If I take <em>e.g.</em> the two rows with the most elements, there will not necessarily be any overlap between them in terms of where - <em>i.e.</em> in which columns - are the elements located.)</li> <li>I'm also curious about the optimal sparse matrix representation in <code>scipy.sparse</code> for this task.</li> </ul> <p>Any suggestions or specific implementation ideas are welcome.</p>
3
2016-09-29T07:59:22Z
39,791,339
<p>It looks like you can already swap rows preserving sparsity so the missing part is the algorithm to sort the rows. So you need a function that gives you a "leftness" score. A heuristic that could work is the following:</p> <ol> <li>first get a mask of the non-zero elements (you don't care about the actual values, only about its non-zeroness).</li> <li><p>Estimate the density distribution of the non-zero values along the column axis:</p> <pre><code>def density(row, window): padded = np.insert(row, 0, 0) cumsum = np.cumsum(padded) return (cumsum[window:] - cumsum[:-window]) / window </code></pre></li> <li><p>Calculate the leftness score as the column with the maximum left-penalised density (looking from the right):</p> <pre><code>def leftness_score(row): n = len(a) window = n / 10 # 10 is a tuneable hyper parameter smoothness = 1 # another parameter to play with d = density(row) penalization = np.exp(-smoothness * np.arange(n)) return n - (penalization * d).argmax() </code></pre></li> </ol> <p>This algorithm gives higher score to rows having a high density of values as long as the max value of this density is not too far to the right. Some ideas to take it further: improve the density estimation, play with different penalization functions (instead of neg exp), fit the parameters to some synthethic data reflecting your expected sorting, etc. </p>
1
2016-09-30T12:16:33Z
[ "python", "numpy", "matrix", "scipy", "scikit-learn" ]
"Densifying" very large sparse matrices by rearranging rows/columns
39,764,582
<p>I have a very large sparse matrix (240k*4.5k, ≤1% non-zero elements), which I would like to "densify" by rearranging its rows and columns in a way that the upper left region is enriched in non-zero elements as much as possible. (To make it more manageable and visually assessable.) I would prefer <code>scipy</code> and related tools to do this.</p> <ul> <li>A good suggestion was already made <a href="http://stackoverflow.com/questions/15155276/rearrange-sparse-arrays-by-swapping-rows-and-columns">here</a> for a solution to "manually" swap rows/columns of sparse matrices, but it does not cover the challenge of identifying which rows/columns to swap to get an optimal enrichment (dense block) in the upper left corner.</li> <li>Note that a simple sorting of rows/columns based on the number of non-zero elements does not solve the problem. (If I take <em>e.g.</em> the two rows with the most elements, there will not necessarily be any overlap between them in terms of where - <em>i.e.</em> in which columns - are the elements located.)</li> <li>I'm also curious about the optimal sparse matrix representation in <code>scipy.sparse</code> for this task.</li> </ul> <p>Any suggestions or specific implementation ideas are welcome.</p>
3
2016-09-29T07:59:22Z
39,957,830
<p>I finally ended up with the solution below:<br> - First of all, based on the <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.sparse.lil_matrix.html#scipy.sparse.lil_matrix" rel="nofollow">scipy documentation</a>, the LiL (linked list) format seems to be ideal for this sort of operations. (However I never did any actual comparison!)<br> - I've used the functions described <a href="http://stackoverflow.com/a/15162737/6869965">here</a> to swap rows and columns.<br> - Following the <a href="http://stackoverflow.com/a/39791339/6869965">suggestion of elyase</a>, I defined a 200*200 'window' in the 'upper left' corner of the matrix and implemented a 'window score' which was simply equal to the number of non-zero elements inside the window.<br> - To identify the columns to swap, I checked which column contains the least non-zero elements inside the window, and which column contains the most non-zero elements outside the window. In case of a tie, the number of non-zero elements in the whole column was the tie-breaker (if this was tied as well, I've chosen randomly).<br> - The method for swapping rows was identical.</p> <pre><code>import numpy as np import scipy.sparse import operator def swap_rows(mat, a, b): ''' See link in description''' def swap_cols(mat, a, b) : ''' See link in description''' def windowScore(lilmatrix,window): ''' Return no. of non-zero elements inside window. ''' a=lilmatrix.nonzero() return sum([1 for i,j in list(zip(a[0],a[1])) if i&lt;window and j&lt;window]) def colsToSwap(lilmatrix,window): ''' Determine columns to be swapped. In: lil_matrix, window (to what col_no is it considered "left") Out: (minColumnLeft,maxColumnRight) columns inside/outside of window w/ least/most NZ elements''' # Locate non-zero elements a=lilmatrix.nonzero() totalCols=lilmatrix.get_shape()[1] # Store no. of NZ elements for each column {in the window,in the whole table}, initialize with zeros colScoreWindow=np.zeros(totalCols) colScoreWhole=np.zeros(totalCols) ### Set colScoreWindow scores # Unique row indices rows_uniq={k for k in a[0] if k&lt;window} for k in rows_uniq: # List of tuples w/ location of each NZ element in current row gen=((row,col) for row,col in list(zip(a[0],a[1])) if row==k) for row,col in gen: # Increment no. of NZ elements in current column in colScoreWindow colScoreWindow[col]+=1 ### Set colScoreWhole scores # Unique row indices rows_uniq={k for k in a[0]} for k in rows_uniq: # List of tuples w/ location of each NZ element in current row gen=((row,col) for row,col in list(zip(a[0],a[1])) if row==k) for row,col in gen: # Increment no. of NZ elements in current column in colScoreWhole colScoreWhole[col]+=1 # Column inside of window w/ least NZ elements minColumnLeft=sorted(list(zip(np.arange(totalCols),colScoreWindow,colScoreWhole,np.random.rand(totalCols)))[:window], key=operator.itemgetter(1,2,3))[0][0] # Column outside of window w/ most NZ elements maxColumnRight=sorted(list(zip(np.arange(totalCols),colScoreWindow,colScoreWhole,np.random.rand(totalCols)))[window:], key=operator.itemgetter(1,2,3))[-1][0] return (minColumnLeft,maxColumnRight) def rowsToSwap(lilmatrix,window): ''' Same as colsToSwap, adjusted for rows.''' </code></pre> <p>After running a suitable number of iterations of <code>colsToSwap</code> and <code>rowsToSwap</code> and the actual swapping functions, the number of non-zero elements inside the window converges to a maximum. Note that the method is not optimized at all, and there is much room for improvement. 
For example, I suspect that reducing the number of sparse matrix type conversions and/or the <code>a=lilmatrix.nonzero()</code> call would significantly speed it up.</p>
0
2016-10-10T12:01:45Z
[ "python", "numpy", "matrix", "scipy", "scikit-learn" ]
Merging 2 dataframe using similar columns
39,764,652
<p>I have 2 dataframes listed as follows</p>

<p>df</p>

<pre><code> Type      Breed     Common Color    Other Color    Behaviour
 Golden    Big       Gold            White          Fun
 Corgi     Small     Brown           White          Crazy
 Bulldog   Medium    Black           Grey           Strong
</code></pre>

<p>df2</p>

<pre><code> Type              Breed     Behaviour     Bark Sound
 Pug               Small     Sleepy        Ak
 German Shepard    Big       Cool          Woof
 Puddle            Small     Aggressive    Ek
</code></pre>

<p>I want to merge the 2 dataframes on the columns <code>Type</code>, <code>Breed</code> and <code>Behaviour</code>.</p>

<p>Therefore, my desired output would be:</p>

<pre><code>Type              Breed     Behavior
Golden            Big       Fun
Corgi             Small     Crazy
Bulldog           Medium    Strong
Pug               Small     Sleepy
German Shepard    Big       Cool
Puddle            Small     Aggressive
</code></pre>
3
2016-09-29T08:03:04Z
39,764,707
<p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow"><code>concat</code></a>:</p> <pre><code>print (pd.concat([df1[['Type','Breed','Behaviour']], df2[['Type','Breed','Behaviour']]], ignore_index=True)) Type Breed Behaviour 0 Golden Big Fun 1 Corgi Small Crazy 2 Bulldog Medium Strong 3 Pug Small Sleepy 4 German Shepard Big Cool 5 Puddle Small Aggressive </code></pre> <p>More general is use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.intersection.html" rel="nofollow"><code>intersection</code></a> for columns of both <code>DataFrames</code>:</p> <pre><code>cols = df1.columns.intersection(df2.columns) print (cols) Index(['Type', 'Breed', 'Behaviour'], dtype='object') print (pd.concat([df1[cols], df2[cols]], ignore_index=True)) Type Breed Behaviour 0 Golden Big Fun 1 Corgi Small Crazy 2 Bulldog Medium Strong 3 Pug Small Sleepy 4 German Shepard Big Cool 5 Puddle Small Aggressive </code></pre> <p>More general if <code>df1</code> and <code>df2</code> have no <code>NaN</code> values use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html" rel="nofollow"><code>dropna</code></a> for removing columns with <code>NaN</code>:</p> <pre><code>print (pd.concat([df1 ,df2], ignore_index=True)) Bark Sound Behaviour Breed Common Color Other Color Type 0 NaN Fun Big Gold White Golden 1 NaN Crazy Small Brown White Corgi 2 NaN Strong Medium Black Grey Bulldog 3 Ak Sleepy Small NaN NaN Pug 4 Woof Cool Big NaN NaN German Shepard 5 Ek Aggressive Small NaN NaN Puddle print (pd.concat([df1 ,df2], ignore_index=True).dropna(1)) Behaviour Breed Type 0 Fun Big Golden 1 Crazy Small Corgi 2 Strong Medium Bulldog 3 Sleepy Small Pug 4 Cool Big German Shepard 5 Aggressive Small Puddle </code></pre>
4
2016-09-29T08:05:44Z
[ "python", "pandas", "multiple-columns", "intersection", "concat" ]
Merging 2 dataframe using similar columns
39,764,652
<p>I have 2 dataframes listed as follows</p>

<p>df</p>

<pre><code> Type      Breed     Common Color    Other Color    Behaviour
 Golden    Big       Gold            White          Fun
 Corgi     Small     Brown           White          Crazy
 Bulldog   Medium    Black           Grey           Strong
</code></pre>

<p>df2</p>

<pre><code> Type              Breed     Behaviour     Bark Sound
 Pug               Small     Sleepy        Ak
 German Shepard    Big       Cool          Woof
 Puddle            Small     Aggressive    Ek
</code></pre>

<p>I want to merge the 2 dataframes on the columns <code>Type</code>, <code>Breed</code> and <code>Behaviour</code>.</p>

<p>Therefore, my desired output would be:</p>

<pre><code>Type              Breed     Behavior
Golden            Big       Fun
Corgi             Small     Crazy
Bulldog           Medium    Strong
Pug               Small     Sleepy
German Shepard    Big       Cool
Puddle            Small     Aggressive
</code></pre>
3
2016-09-29T08:03:04Z
39,764,866
<p>using <code>join</code> dropping columns that don't overlap</p> <pre><code>df1.T.join(df2.T, lsuffix='_').dropna().T.reset_index(drop=True) </code></pre> <p><a href="http://i.stack.imgur.com/wb6AL.png" rel="nofollow"><img src="http://i.stack.imgur.com/wb6AL.png" alt="enter image description here"></a></p>
4
2016-09-29T08:13:25Z
[ "python", "pandas", "multiple-columns", "intersection", "concat" ]
Template doesnot exist error in Django
39,764,739
<p>Am developing an angular frontend and Django backend app.I don't know where i am going wrong but Django cant seem to locate the template and displays a template doesn't exist message.The project directory looks like this.The backend server is in the "django project" folder</p> <p><a href="http://i.stack.imgur.com/do8es.png" rel="nofollow"><img src="http://i.stack.imgur.com/do8es.png" alt="project directory"></a></p> <p>base.py(settings)</p> <pre><code>import environ project_root = environ.Path(__file__) - 3 env = environ.Env(DEBUG=(bool, False),) CURRENT_ENV = 'dev' # 'dev' is the default environment # read the .env file associated with the settings that're loaded env.read_env('./mysite/{}.env'.format(CURRENT_ENV)) #Database DATABASES = { 'default': env.db() } SECRET_KEY = env('SECRET_KEY') DEBUG = env('DEBUG') INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', # Django Packages 'rest_framework', 'mysite.applications.games', ] ROOT_URLCONF = 'mysite.urls' STATIC_URL = '/static/' STATICFILES_FINDERS = [ 'django.contrib.staticfiles.finders.FileSystemFinder', 'django.contrib.staticfiles.finders.AppDirectoriesFinder', ] STATICFILES_DIRS = [ env('FRONTEND_ROOT') ] TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [env('FRONTEND_ROOT')], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', ], }, }, ] </code></pre> <p>Environment variables file(dev.env)</p> <pre><code>DATABASE_URL=sqlite:///mysite.db DEBUG=True FRONTEND_ROOT= ('C:/downloads/mysite/frontend/') SECRET_KEY= '##########################' </code></pre> <p>urls.py</p> <pre><code>from django.contrib import admin from django.conf.urls import include, url from mysite.applications.api.v1.routes import api_router from django.views.generic import TemplateView urlpatterns = [ url(r'^admin/', admin.site.urls), # Web App Entry url(r'^$', TemplateView.as_view(template_name="/app/index.html"), name='index'), ] </code></pre>
0
2016-09-29T08:07:10Z
39,766,897
<p>I changed the template <code>DIRS</code> setting to point to 'C:/downloads/mysite/frontend/' and the template name to "/app/index.html" (app is a folder inside frontend).</p>
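<p>For reference, the two settings after the change described above look roughly like this (paths are the ones from this project; note that the template name is resolved relative to the <code>DIRS</code> entry, so it is written without a leading slash here):</p>

<pre><code># settings
TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': ['C:/downloads/mysite/frontend/'],
        'APP_DIRS': True,
        'OPTIONS': {
            # context_processors as before
        },
    },
]

# urls.py
url(r'^$', TemplateView.as_view(template_name="app/index.html"), name='index'),
</code></pre>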
0
2016-09-29T09:44:45Z
[ "python", "django" ]
Linear Programming with Anaconda
39,764,740
<p>I have installed Anaconda on my windows 10 and I am using it for Python. I have a class in Mathematical optimization and need a good package for basic LP. <strong>Is there a "pre-installed" package that is good for LP in Anaconda, that I can just import to my python file, or do I have to install packages?</strong> I the latter case, any suggestions on which packages that are good and available for Anaconda on Windows 10? I have heard that PulP is adequate, but also that it doesn't come "pre-installed" with the Anaconda. </p>
0
2016-09-29T08:07:18Z
39,764,865
<p>If you wish to install PuLP on top of Anaconda on Windows it looks like you need to run:</p> <blockquote> <p>pip install pulp</p> </blockquote> <p>See pulp <a href="https://pythonhosted.org/PuLP/main/installing_pulp_at_home.html" rel="nofollow">docs</a></p>
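<p>Once it is installed, a tiny LP is enough to check that everything works (the problem data here is made up):</p>

<pre><code>from pulp import LpProblem, LpMaximize, LpVariable, value

prob = LpProblem("toy_lp", LpMaximize)
x = LpVariable("x", lowBound=0)
y = LpVariable("y", lowBound=0)

prob += 3 * x + 2 * y          # objective
prob += x + y &lt;= 4             # constraint 1
prob += x + 3 * y &lt;= 6         # constraint 2

prob.solve()
print(value(x), value(y), value(prob.objective))
</code></pre>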
0
2016-09-29T08:13:18Z
[ "python", "python-3.x", "anaconda", "linear-programming" ]
Python: How to convert Numpy array item value type to array with this value?
39,764,773
<p>What is the best way to convert such an ndarray:</p>

<pre><code>[[1,2,3],
 [4,5,6]]
</code></pre>

<p>to this:</p>

<pre><code>[[[1],[2],[3]],
 [[4],[5],[6]]]
</code></pre>

<p>i.e. just wrap each value in an array?</p>
2
2016-09-29T08:09:03Z
39,764,823
<p>You can introduce a new axis with <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/arrays.indexing.html#numpy.newaxis" rel="nofollow"><code>np.newaxis/None</code></a> at the end, like so -</p> <pre><code>arr[...,None] </code></pre> <p>Sample run -</p> <pre><code>In [6]: arr = np.array([[1,2,3], [4,5,6]]) In [7]: arr[...,None] Out[7]: array([[[1], [2], [3]], [[4], [5], [6]]]) In [8]: arr[...,None].tolist() # To show it as a list for expected o/p format Out[8]: [[[1], [2], [3]], [[4], [5], [6]]] </code></pre>
3
2016-09-29T08:11:10Z
[ "python", "arrays", "numpy" ]
Python: How to convert Numpy array item value type to array with this value?
39,764,773
<p>What is the best way to convert such an ndarray:</p>

<pre><code>[[1,2,3],
 [4,5,6]]
</code></pre>

<p>to this:</p>

<pre><code>[[[1],[2],[3]],
 [[4],[5],[6]]]
</code></pre>

<p>i.e. just wrap each value in an array?</p>
2
2016-09-29T08:09:03Z
39,764,915
<p>You can do it by recursively exploring the list of lists:</p> <pre><code>def wrap_values(list_of_lists): if isinstance(list_of_lists, list): return [wrap_values(el) for _,el in enumerate(list_of_lists)] else: return [list_of_lists] xx = [[1,2,3], [4,5,6]] yy = wrap_values(xx) # [[[1], [2], [3]], [[4], [5], [6]]] </code></pre>
0
2016-09-29T08:15:37Z
[ "python", "arrays", "numpy" ]
error: (-215) _src.type() == CV_8UC1 in function equalizeHist when trying to equalize a float64 image
39,764,885
<p>I'm trying to equalize a one-channel image like so:</p>

<pre><code>img = cv2.equalizeHist(img)
</code></pre>

<p>But since it's a float64 image, I get the following error:</p>

<blockquote>
  <p>error: (-215) _src.type() == CV_8UC1 in function equalizeHist</p>
</blockquote>

<p>How do I go about this?</p>
0
2016-09-29T08:14:19Z
39,766,080
<p>The function <code>equalizeHist</code> performs histogram equalization of images and is only implemented for the CV_8UC1 type, i.e. a single-channel 8-bit unsigned integer type.</p>

<p>To convert your image to this type you can use the function <code>convertTo</code> with the target type (it must have the same number of channels).</p>

<p>Make sure that the source image has the right value range: typically floating-point images are interpreted as 0 = black and 1 = white with the gray range in between, while integer images are interpreted as 0 = black and maximum value = white (which would be 255 for an unsigned 8-bit type). So you'll often have to multiply your source image by 255 to fit the range. <code>convertTo</code> has a parameter to scale your values during conversion, which could give you a speed improvement compared to manual scaling.</p>
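<p>In Python/NumPy terms the same conversion is just a cast (this sketch assumes the float64 image holds values in the 0..1 range; scale differently if yours does not):</p>

<pre><code>import numpy as np
import cv2

# scale 0..1 floats to 0..255 and convert to the 8-bit type equalizeHist expects
img_u8 = np.clip(img * 255.0, 0, 255).astype(np.uint8)
img_eq = cv2.equalizeHist(img_u8)
</code></pre>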
0
2016-09-29T09:09:49Z
[ "python", "opencv", "numpy" ]
Aligning LaTeX ticklabels in Matplotlib
39,764,959
<p>I have a simple Matplotlib plot with LaTeX ticklabels. I'd like these to be centre-aligned so they all look even, but even with <code>va='center'</code> they appear to be at different vertical locations:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import matplotlib as mpl # Some Matplotlib settings so the font is consistent mpl.rc('font', **{'family': 'serif', 'serif': ['Computer Modern'], 'size': 20}) mpl.rc('text', usetex=True) theta = np.linspace(-np.pi, np.pi, 100) fig, ax = plt.subplots() ax.plot(theta, np.cos(theta)) ax.set_xticks(np.linspace(-np.pi, np.pi, 5)) ax.set_xticklabels((r'$-\pi$', r'$-\pi/2$', '0', r'$\pi/2$', r'$\pi$'), va='center') ax.tick_params(axis='x', which='major', pad=20) plt.show() </code></pre> <p><a href="http://i.stack.imgur.com/6UoRw.png" rel="nofollow"><img src="http://i.stack.imgur.com/6UoRw.png" alt="enter image description here"></a></p> <p>What can I do to align my xticklabels the way I want them?</p>
0
2016-09-29T08:17:34Z
39,765,819
<p>Try using the alignment keyword <code>va='baseline'</code> for the vertical alignment. This aligns the text on the baseline, meaning the bottom line of all letters without descender. Theoretically all πs should line up nicely. </p> <pre><code>ax.set_xticklabels((r'$-\pi$', r'$-\pi/2$', '0', r'$\pi/2$', r'$\pi$'), va='baseline') </code></pre> <p>This improved the plot for me, even though the labels still do not align perfectly. More information can be found <a href="https://github.com/matplotlib/matplotlib/issues/1734/" rel="nofollow">here</a>. <a href="http://i.stack.imgur.com/m6qof.png" rel="nofollow"><img src="http://i.stack.imgur.com/m6qof.png" alt="baseline aligned plot"></a></p> <h2>Edit:</h2> <blockquote> <p>This doesn't work for me, unfortunately. [...] I suspect this is a backend issue. - xnx</p> </blockquote> <p>The position of the labels change with the used backend. Here is a comparison between the output generated by svg, png and pdf backend, all using the baseline keyword. Note that the pdf and svg labels line up almost perfectly so the pdf labels are not clearly visible. </p> <p><a href="http://i.stack.imgur.com/huVJo.png" rel="nofollow"><img src="http://i.stack.imgur.com/huVJo.png" alt="Comparison of backends"></a></p> <p>So for all purposes outside of the interactive viewer, using the pdf or svg backend looks good. When using the png backend the labels are all over the place.</p>
0
2016-09-29T08:57:53Z
[ "python", "matplotlib" ]
mpi4py compilation error on a Suse system
39,764,986
<p>After I compile the mpi4py with the openmpi from the server I get a runtime error.</p> <pre><code> OS: SuSe GCC: 4.8.5 OpenMPI: 1.10.1 HDF5: 1.8.11 mpi4py: 2.0.0 Python: 2.7.9 </code></pre> <p><strong>Environment Settings:</strong> I use virtualenv (no admin permission of the server)</p> <pre><code>(ENV) username@servername:~/test&gt; echo $PATH /opt/local/tools/hdf5/hdf5-1.8.11_openmpi-1.10.1_gcc-4.8.5/bin:/opt/local/mpi/openmpi/openmpi-1.10.1_gcc-4.8.5/bin:/home/username/test/virtualenv-15.0.3/ENV/bin: [other libs ] :/opt/local/bin:/usr/lib64/mpi/gcc/openmpi/bin:/usr/local/bin:/usr/bin:/bin (ENV) username@servername:echo $LD_LIBRARY_PATH /opt/local/tools/hdf5/hdf5-1.8.11_openmpi-1.10.1_gcc-4.8.5/lib:/opt/local/mpi/openmpi/openmpi-1.10.1_gcc-4.8.5/lib (ENV) username@servername:~/test&gt; pip freeze cycler==0.10.0 Cython==0.24.1 dill==0.2.5 matplotlib==1.5.3 multiprocessing==2.6.2.1 numpy==1.11.1 pyfits==3.4 pyparsing==2.1.9 python-dateutil==2.5.3 pytz==2016.6.1 scipy==0.18.1 six==1.10.0 </code></pre> <p>Compile and install mpi4py:</p> <pre><code>(ENV) username@servername:~/test&gt; wget https://bitbucket.org/mpi4py/mpi4py/downloads/mpi4py-2.0.0.tar.gz (ENV) username@servername:~/test&gt; tar xzvf mpi4py-2.0.0.tar.gz (ENV) username@servername:~/test&gt; cd mpi4py-2.0.0/ (ENV) username@servername:~/test&gt;vim mpi.cfg </code></pre> <p>In mpi.cfg I added a section for my custom Open MPI:</p> <pre><code>[mpi] mpi_dir = /opt/local/mpi/openmpi/openmpi-1.10.1_gcc-4.8.5 mpicc = %(mpi_dir)s/bin/mpicc mpicxx = %(mpi_dir)s/bin/mpicxx library_dirs = %(mpi_dir)s/lib runtime_library_dirs = %(library_dirs)s </code></pre> <p>Compile </p> <pre><code>(ENV) username@servername:python setup.py build --mpi=mpi </code></pre> <p>Install</p> <pre><code>(ENV) username@servername:python setup.py install </code></pre> <p>First basic test (ok)</p> <pre><code>(ENV) username@servername: mpiexec -n 5 python -m mpi4py helloworld Hello, World! I am process 0 of 5 on servername. Hello, World! I am process 1 of 5 on servername. Hello, World! I am process 2 of 5 on servername. Hello, World! I am process 3 of 5 on servername. Hello, World! I am process 4 of 5 on servername. </code></pre> <p>Second basic test generates ERROR:</p> <pre><code>(ENV) username@servername: python &gt;&gt;&gt;from mpi4py import MPI -------------------------------------------------------------------------- Error obtaining unique transport key from ORTE orte_precondition_transports not present in the environment). Local host: servername -------------------------------------------------------------------------- -------------------------------------------------------------------------- It looks like MPI_INIT failed for some reason; your parallel process is likely to abort. There are many reasons that a parallel process can fail during MPI_INIT; some of which are due to configuration or environment problems. This failure appears to be an internal failure; here's some additional information (which may only be relevant to an Open MPI developer): PML add procs failed --&gt; Returned "Error" (-1) instead of "Success" (0) ------------------------------------------------------------------------- *** An error occurred in MPI_Init_thread *** on a NULL communicator *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort, *** and potentially your MPI job) [servername:165332] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed! 
(ENV) username@servername:~/test/mpi4py-2.0.0&gt; </code></pre> <p>Update: during the compilation of mpi4py I get this error </p> <pre><code>checking for library 'lmpe' ... /opt/local/mpi/openmpi/openmpi-1.10.1_gcc-4.8.5/bin/mpicc -pthread -fno-strict-aliasing -fmessage-length=0 -grecord-gcc-switches -fstack- protector -O2 -Wall -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous- unwind-tables -g -DNDEBUG -fmessage-length=0 -grecord-gcc-switches -fstack-protector -O2 -Wall -D_FORTIFY_SOURCE=2 -funwind-tables -fasynchronous-unwind tables -g -DOPENSSL_LOAD_CONF -fPIC -I/opt/local /mpi/openmpi/openmpi-1.10.1_gcc-4.8.5/include -c _configtest.c -o _configtest.o /opt/local/mpi/openmpi/openmpi-1.10.1_gcc-4.8.5/bin/mpicc -pthread _configtest.o -L/opt/local/mpi/openmpi/openmpi-1.10.1_gcc-4.8.5/lib -Wl,-R/opt/local/mpi/openmpi/openmpi-1.10.1_gcc-4.8.5/lib -llmpe -o _configtest /usr/lib64/gcc/x86_64-suse-linux/4.8/../../../../x86_64-suse-linux bin/ld: cannot find -llmpe collect2: error: ld returned 1 exit status failure. </code></pre>
0
2016-09-29T08:18:37Z
39,793,296
<p>See: <a href="https://bitbucket.org/mpi4py/mpi4py/issues/52/mpi4py-compilation-error" rel="nofollow">https://bitbucket.org/mpi4py/mpi4py/issues/52/mpi4py-compilation-error</a></p> <p>It seems the issue was not because of an mpi4py bug but from the psm transport layer from the OpenMPI</p> <p>In my case setting </p> <pre><code>export OMPI_MCA_mtl=^psm </code></pre> <p>solved the above runtime error.</p>
0
2016-09-30T13:59:27Z
[ "python", "openmpi", "mpi4py" ]
Python: writing a text file with a lot of variables
39,765,047
<p>I need to create (from Python code) a text file with some 50 variables in each line, separated by commas. I take the canonical way to be output.write ("{},{},{},{},{},{},{},{},{},{}, ... \n".format(v,v,v,v,... But that will be hard to read and difficult to maintain with so many variables. Any other suggestions? I have thought of using the csv module, after all what I am writing is (kind of) a csv file, but thought I'd ask around for other suggestions first.</p>
0
2016-09-29T08:22:01Z
39,765,304
<h2>Using lists</h2> <p>When you reach more than a handful of variables that are related to each other, it is common to use a <code>list</code> or a <code>dict</code>. If you create a list:</p> <pre><code>myrow = [] myrow.append(v1) ... </code></pre> <p>This also allows for easier looping over each value. Once you have done that you can easily concatenate it to a string (note the values must be converted to strings first):</p> <pre><code>f.write(','.join(str(v) for v in myrow)) </code></pre> <p>In case your row might contain any commas (or whatever you use as a delimiter) you must ensure escaping. In this case the csv module helps:</p> <pre><code>import csv with open('myfile.csv', 'w') as f: fw = csv.writer(f) fw.writerow(myrow) # where myrow is a list </code></pre> <h2>Using dicts</h2> <p>Some people prefer to add additional structure, e.g.:</p> <pre><code>myrow = {} myrow['speed'] = speed_value myrow['some_other_row'] = other_value import csv with open('myfile.csv', 'w') as f: fw = csv.DictWriter(f, fieldnames=['speed', 'some_other_row']) fw.writeheader() fw.writerow(myrow) # a csv.DictWriter maps the dict keys to columns </code></pre>
0
2016-09-29T08:34:01Z
[ "python", "csv", "file-writing" ]
Python: writing a text file with a lot of variables
39,765,047
<p>I need to create (from Python code) a text file with some 50 variables in each line, separated by commas. I take the canonical way to be output.write ("{},{},{},{},{},{},{},{},{},{}, ... \n".format(v,v,v,v,... But that will be hard to read and difficult to maintain with so many variables. Any other suggestions? I have thought of using the csv module, after all what I am writing is (kind of) a csv file, but thought I'd ask around for other suggestions first.</p>
0
2016-09-29T08:22:01Z
39,767,093
<p>If you can sort your variables, you could use locals().</p> <p>For example:</p> <p>I set three variables:</p> <pre><code>var1 = 'xx' var2 = 'yy' var3 = 'zz' </code></pre> <p>and I can sort them with sorted().</p> <pre><code>def sort(x): if len(x) != 4: return '99' else: return x[-1] sortVars = sorted(locals(), key=sort) </code></pre> <p>Then, combine them using a <strong><code>for</code></strong> loop.</p> <pre><code>result = '' for i in sortVars[:3]: result += locals()[i] print(result) </code></pre>
0
2016-09-29T09:54:46Z
[ "python", "csv", "file-writing" ]
Can I use np.arange with lists as my inputs?
39,765,093
<p>The relevant excerpt of my code is as follows:</p> <pre><code>import numpy as np def create_function(duration, start, stop): rates = np.linspace(start, stop, duration*1000) return rates def generate_spikes(duration, start, stop): rates = [create_function(duration, start, stop)] array = [np.arange(0, (duration*1000), 1)] start_value = [np.repeat(start, duration*1000)] double_array = [np.add(array,array)] times = np.arange(np.add(start_value,array), np.add(start_value,double_array), rates) return times/1000. </code></pre> <p>I know this is really inefficient coding (especially the start_value and double_array stuff), but it's all a product of trying to somehow use <code>arange</code> with lists as my inputs.</p> <p>I keep getting this error:</p> <pre><code>Type Error: int() argument must be a string, a bytes-like element, or a number, not 'list' </code></pre> <p>Essentially, an example of what I'm trying to do is this:</p> <p>I had two arrays <code>a = [1, 2, 3, 4]</code> and <code>b = [0.1, 0.2, 0.3, 0.4]</code>, I'd want to use <code>np.arange</code> to generate <code>[1.1, 1.2, 1.3, 2.2, 2.4, 2.6, 3.3, 3.6, 3.9, 4.4, 4.8, 5.2]</code>? (I'd be using a different step size for every element in the array.)</p> <p>Is this even possible? And if so, would I have to flatten my list?</p>
0
2016-09-29T08:24:10Z
39,765,303
<p>You can use <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>broadcasting</code></a> there for efficiency purposes -</p> <pre><code>(a + (b[:,None] * a)).ravel('F') </code></pre> <p>Sample run -</p> <pre><code>In [52]: a Out[52]: array([1, 2, 3, 4]) In [53]: b Out[53]: array([ 0.1, 0.2, 0.3, 0.4]) In [54]: (a + (b[:,None] * a)).ravel('F') Out[54]: array([ 1.1, 1.2, 1.3, 1.4, 2.2, 2.4, 2.6, 2.8, 3.3, 3.6, 3.9, 4.2, 4.4, 4.8, 5.2, 5.6]) </code></pre> <p>Looking at the expected output, it seems you are using just the first three elements off <code>b</code> for the computation. So, to achieve that target, we just slice the first three elements and do that computation, like so -</p> <pre><code>In [55]: (a + (b[:3,None] * a)).ravel('F') Out[55]: array([ 1.1, 1.2, 1.3, 2.2, 2.4, 2.6, 3.3, 3.6, 3.9, 4.4, 4.8, 5.2]) </code></pre>
1
2016-09-29T08:33:59Z
[ "python", "python-3.x", "numpy" ]
Altering packets on the fly with scapy as a MITM
39,765,107
<p>Assuming I managed to be in the middle of the communication between a client and a server (let's say that I open up a hotspot and cause the client to connect to the server only through my machine).</p> <p>How can I alter packets that my client sends and receives without interrupting my own communication with other services? There must be a way to route all of the packets the client both sends and is about to receive (before forwarding them to him) through my script.</p> <p>I think that the correct direction of going about accomplishing this is with <code>iptables</code> but not sure exactly what arguments would fit to make this work. I already have the following simple script:</p> <pre><code>hotspotd start #a script that runs dnsmasq as both a DNS and DHCP server, configures and starts a hotspot iptables -P FORWARD ACCEPT iptables --append FORWARD --in-interface wlan0 -j ACCEPT iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE #wlan0 is the interface on which the hotspot is. #eth0 is the interface that is connected to the internet </code></pre> <p>Now, this perfectly works for a passive MITM - I can see everything that the client sends and receives. But now I want to step it up and redirect every message he sends and receives through me.</p> <p>My eventual purpose is to get to a level where I could execute the following script:</p> <pre><code>from scapy.all import * from scapy_http.http import * def callback(pkt): #Process the packet here, see source and destination addresses, ports, data send(pkt) sniff(filter='port 666', prn=callback) #Assuming all relevant packets are redirected to port 666 </code></pre> <p>How do I accomplish redirecting every packet the client sends and is-about-to-receive?</p>
1
2016-09-29T08:24:47Z
39,800,766
<p>You can use <a href="http://www.netfilter.org/projects/libnetfilter_queue/doxygen/" rel="nofollow">NFQUEUE</a> which has <a href="https://pypi.python.org/pypi/NetfilterQueue" rel="nofollow">python bindings</a>.</p> <p>NFQUEUE is a userspace queue that is a valid iptables target. You can redirect some traffic to the NFQUQUE:</p> <p><code>iptables -I INPUT -d 192.168.0.0/24 -j NFQUEUE --queue-num 1</code></p> <p>Then access the packets from your code:</p> <pre><code>from netfilterqueue import NetfilterQueue def print_and_accept(pkt): print(pkt) pkt.accept() nfqueue = NetfilterQueue() nfqueue.bind(1, print_and_accept) try: nfqueue.run() except KeyboardInterrupt: print('') nfqueue.unbind() </code></pre> <p>Note the <code>pkt.accept()</code> call. This returns a <a href="http://www.netfilter.org/projects/libnetfilter_queue/doxygen/group__Queue.html#gae36aee5b74d0c88d2f8530e356f68b79" rel="nofollow">verdict</a> to the nfqueue, telling it that it should accept the packet - i.e. allow it to continue along its normal route in the kernel. To modify a packet, instead of <code>accept</code>ing it, you'd need to copy it, return a <code>drop</code> verdict, and finally resend it with the included modifications.</p>
1
2016-09-30T22:09:42Z
[ "python", "linux", "iptables", "scapy", "man-in-the-middle" ]
Google Task Queue REST pull returning 500 occasionally
39,765,136
<p>I have Python a process leasing tasks from the <a href="https://cloud.google.com/appengine/docs/python/taskqueue/rest/" rel="nofollow">Google TaskQueue REST API</a> every second in the unlimited loop:</p> <pre><code>credentials = GoogleCredentials.get_application_default() task_api = googleapiclient.discovery.build('taskqueue', 'v1beta2', credentials=credentials) while True: tasks = task_api.tasks().lease(...).execute() time.sleep(1) </code></pre> <p>The process sometimes run well for hours. But occasionally crashes often by one of HTTP error:</p> <ul> <li>500 Backend Error</li> <li>503 Backend Error</li> <li>500 An internal error happened in the backend</li> </ul> <p>The process is running on the Google Computing Engine server. It uses a service account key, specified by the GOOGLE_APPLICATION_CREDENTIALS env variable. Is this a Google Task Queue bug or do I miss something? E.g. do I need to reread the credentials before every lease request?</p>
1
2016-09-29T08:26:00Z
39,792,576
<p>Since @DalmTo has just answered in the comments, I'll sum up his answer and add the Python solution.</p> <p>The Google 5xx backend error is flood protection and Google recommends implementing <a href="https://developers.google.com/drive/v3/web/handle-errors#500_backend_error" rel="nofollow">exponential backoff</a>. Although the link points to the Google Drive API, the Google errors are system wide for all the APIs (GAE included). It rarely takes more than 6 retries for it to kick in and respond.</p> <p>After digging through the googleapiclient sources, I've found that the exponential backoff is already implemented in this library, so the solution is dead simple:</p> <pre><code>tasks = task_api.tasks().lease(...).execute(num_retries=6) </code></pre> <p>The source of <code>http.py::_should_retry_response()</code> shows that besides HTTP 5xx errors the request is also repeated when the JSON response contains a <code>userRateLimitExceeded</code> or <code>rateLimitExceeded</code> error.</p>
1
2016-09-30T13:21:02Z
[ "python", "google-app-engine", "google-compute-engine", "task-queue" ]
Display values that does not end with ".0" Python Pandas
39,765,264
<p>I have a float column that contains <code>NaN</code> values and float values. How do I keep only the values that do not end with <code>.0</code>?</p> <p>For example:</p> <pre><code>Col1 0.7 1.0 1.1 9.0 9.5 NaN </code></pre> <p>Desired result will be:</p> <pre><code>Col1 0.7 1.1 9.5 </code></pre>
3
2016-09-29T08:31:56Z
39,765,337
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow"><code>boolean indexing</code></a>:</p> <pre><code>#convert to string and compare last value print ((df.Col1.astype(str).str[-1] != '0') &amp; (df.Col1.notnull())) 0 True 1 False 2 True 3 False 4 True 5 False Name: Col1, dtype: bool print (df[(df.Col1.astype(str).str[-1] != '0') &amp; (df.Col1.notnull())]) Col1 0 0.7 2 1.1 4 9.5 </code></pre> <p>Another solution for comparing converted value to <code>ìnt</code>, but first need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.fillna.html" rel="nofollow"><code>fillna</code></a>:</p> <pre><code>s = df.Col1.fillna(1) print (df[s.astype(int) != s]) Col1 0 0.7 2 1.1 4 9.5 </code></pre> <p><strong>Timings</strong>:</p> <pre><code>#[30000 rows x 1 columns] df = pd.concat([df]*10000).reset_index(drop=True) def jez2(df): s = df.Col1.fillna(1) return (df[s.astype(int) != s]) In [179]: %timeit (df[(df.Col1.astype(str).str[-1] != '0') &amp; (df.Col1.notnull())]) 10 loops, best of 3: 80.2 ms per loop In [180]: %timeit (jez2(df)) 1000 loops, best of 3: 1.16 ms per loop In [181]: %timeit (df[df.Col1 // 1 != df.Col1].dropna()) 100 loops, best of 3: 3.04 ms per loop In [182]: %timeit (df[df['Col1'].mod(1) &gt; 0].dropna()) 100 loops, best of 3: 2.58 ms per loop </code></pre>
3
2016-09-29T08:35:23Z
[ "python", "pandas", "indexing", "condition", null ]
Display values that does not end with ".0" Python Pandas
39,765,264
<p>I have a float column that contains <code>NaN</code> values and float values. How do I keep only the values that do not end with <code>.0</code>?</p> <p>For example:</p> <pre><code>Col1 0.7 1.0 1.1 9.0 9.5 NaN </code></pre> <p>Desired result will be:</p> <pre><code>Col1 0.7 1.1 9.5 </code></pre>
3
2016-09-29T08:31:56Z
39,765,379
<p>use <code>//</code> division</p> <pre><code>df[df.Col1 // 1 != df.Col1].dropna() </code></pre> <p><a href="http://i.stack.imgur.com/ZhlRy.png" rel="nofollow"><img src="http://i.stack.imgur.com/ZhlRy.png" alt="enter image description here"></a></p>
3
2016-09-29T08:37:25Z
[ "python", "pandas", "indexing", "condition", null ]
Display values that does not end with ".0" Python Pandas
39,765,264
<p>I have a float column that contains <code>NaN</code> values and float values. How do I keep only the values that do not end with <code>.0</code>?</p> <p>For example:</p> <pre><code>Col1 0.7 1.0 1.1 9.0 9.5 NaN </code></pre> <p>Desired result will be:</p> <pre><code>Col1 0.7 1.1 9.5 </code></pre>
3
2016-09-29T08:31:56Z
39,765,388
<p>Another method is to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.mod.html#pandas.Series.mod" rel="nofollow"><code>mod(1)</code></a> to calculate the modulo with 1:</p> <pre><code>In [60]: df[df['Col1'].mod(1) &gt; 0].dropna() Out[60]: Col1 0 0.7 2 1.1 4 9.5 </code></pre> <p>here we see the effect of <code>mod</code>, whole numbers become <code>0</code> whilst fractional portions will remain:</p> <pre><code>In [62]: df['Col1'].mod(1) Out[62]: 0 0.7 1 0.0 2 0.1 3 0.0 4 0.5 5 NaN Name: Col1, dtype: float64 </code></pre>
2
2016-09-29T08:38:02Z
[ "python", "pandas", "indexing", "condition", null ]
Elasticsearch filtering for the same range brings back different amount of results
39,765,346
<p>I want to perform the following query</p> <pre><code>SELECT * FROM logs WHERE dst != "-" AND @timestamp &gt; "a date before" AND @timestamp &lt; "now" </code></pre> <p>I use python elasticsearch sdk, and formed two queries for testing</p> <pre><code>from elasticsearch import Elasticsearch from datetime import datetime, timedelta now = datetime.now() four_hours_before = now - timedelta(hours=4) es = Elasticsearch("http://es.domain.com:9200") query_bool_filter = { 'query': { 'bool': {' filter': { 'bool': { 'must_not': { 'term': { 'dst': '-' } }, 'must': { 'range': { '@timestamp': { 'gte': four_hours_before, 'lte': now } } } } } } } } </code></pre> <p>and a second query that uses the must_not separate from the filter</p> <pre><code>query_bool_and_filter = { 'query': { 'bool': { 'filter': { 'range': { '@timestamp': { 'gte': four_hours_before, 'lte': now } } }, 'must_not': { 'term': { 'dst': '-' } } } } } </code></pre> <p>When I execute the queries using search from the python sdk I and compare total field in the returning results and it's different like so:</p> <pre><code>res1 = es.search(index="myindex", body=query_bool_filter) res2 = es.search(index="myindex", body=query_bool_and_filter) res1.get('hits').get('total') #prints 43197 res2.get('hits').get('total') #prints 43215 </code></pre> <p>Why do I get different numbers since the range is the same?</p>
0
2016-09-29T08:35:56Z
39,765,809
<p>You can try <a href="https://www.elastic.co/guide/en/elasticsearch/guide/current/logging.html" rel="nofollow">logging</a> to see what is really happening with your elastic search queries.</p>
1
2016-09-29T08:57:32Z
[ "python", "elasticsearch" ]
Adding an extension in Sphinx (Python Documentation Generator) configuration file
39,765,520
<p>I want to use Sphinx as a documentation generator. When I try to run the <strong>make html</strong> command, I have the following error : </p> <p><code>Extension error: Could not import extension sphinxcontrib.httpdomain (exception: No module named sphinxcontrib.httpdomain) make: *** [html] Error 1</code></p> <p>I've found this web page explaining that I have to manually add the extension to the Sphinx configuration file <a href="https://pythonhosted.org/sphinxcontrib-httpdomain/#module-sphinxcontrib.httpdomain" rel="nofollow">https://pythonhosted.org/sphinxcontrib-httpdomain/#module-sphinxcontrib.httpdomain</a></p> <p>But I can't find this configuration file. </p> <p>Do you have any idea where I could find it ? I'm on Mac OS X</p>
0
2016-09-29T08:44:06Z
39,770,308
<p>The configuration is in the <code>source</code> folder of your Sphinx project. It is named <code>conf.py</code> and contains an <code>extensions</code> option which should look like this:</p> <pre><code># Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.intersphinx', 'sphinx.ext.todo', ... ] </code></pre>
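<p>You would then append the httpdomain entry to that list, after installing the package itself (presumably with something like <code>pip install sphinxcontrib-httpdomain</code>, since the "No module named" part of the error suggests it is not installed yet):</p> <pre><code>extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.intersphinx',
    'sphinx.ext.todo',
    'sphinxcontrib.httpdomain',   # added for the HTTP domain directives
]
</code></pre>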
0
2016-09-29T12:26:04Z
[ "python", "osx", "python-sphinx" ]
Variable not defined PYTHON 3.5.2
39,765,529
<pre><code>def GameMode():#creates a function with name of gamemode global wrong_answer_1 for keys,values in keywords.items():#checks for the right answer if keys == code:#if the keys equal to the code value keyword_c = values global keyword_c for keys,values in definition.items():#checks for the right answer if keys == code + 1:#if the keys equal the code add 1 definition_c = values#set s to be the values for keys,values in definition.items():#checks for the right answer if inc == keys:#if the keys equal the code add 1 wrong_answer_1 = values#set s to be the values for keys,value in definition.items():#For the keys in the dictionary if keys == inc2:#if the keys equal to a value global wrong_answer_2 wrong_answer_2 = value print(wrong_answer_2, "Hi") </code></pre> <p>I am trying to get my variables keyword_c, definition_c, wrong_answer_1 and wrong_answer_2 to be global so I can use them in another function but I can't seem to get it to work, I am a bit of a newb when it comes to python. I've tried using "global" as you can see above and I have tried to call variables and pass them but I don't fully understand it enough to be able to figure it out.</p> <pre><code>keyword_c = '' definition_c = '' wrong_answer_1 = '' wrong_answer_2 = '' </code></pre> <p>I have already predefined the variables as well to see if that did anything, they are there but its now just an empty string so my program is picking up the fact that the variable is being defined.</p> <pre><code>Traceback (most recent call last): File "D:\Program.py", line 67, in &lt;module&gt; GameMode()#calls the function File "D:\Program.py", line 55, in GameMode print(wrong_answer_2, "Hi") NameError: name 'wrong_answer_2' is not defined </code></pre> <p>here is the error I get if I remove the original line that sets it as a blank string</p>
-1
2016-09-29T08:44:20Z
39,765,684
<p>Why not create a class and store these as instance attributes:</p> <pre><code>self.keyword_c = '' self.definition_c = '' self.wrong_answer_1 = '' self.wrong_answer_2 = '' </code></pre> <p>That way the variables would be shared across all methods (as long as every method and variable lives inside the class), and your method would become:</p> <pre><code>def GameMode(self):#creates a function with name of gamemode for keys,values in keywords.items():#checks for the right answer if keys == code:#if the keys equal to the code value self.keyword_c = values #(...) </code></pre> <p>And actually, here is a mistake which produces an error (see the comments):</p> <pre><code>def GameMode():#creates a function with name of gamemode global wrong_answer_1 for keys,values in keywords.items():#checks for the right answer if keys == code:#if the keys equal to the code value keyword_c = values # keyword_c is assigned here, before the global statement below global keyword_c # declaring global after the name was already assigned in this scope is a syntax error </code></pre>
0
2016-09-29T08:52:32Z
[ "python", "variables", "global-scope", "python-3.5.2" ]
Variable not defined PYTHON 3.5.2
39,765,529
<pre><code>def GameMode():#creates a function with name of gamemode global wrong_answer_1 for keys,values in keywords.items():#checks for the right answer if keys == code:#if the keys equal to the code value keyword_c = values global keyword_c for keys,values in definition.items():#checks for the right answer if keys == code + 1:#if the keys equal the code add 1 definition_c = values#set s to be the values for keys,values in definition.items():#checks for the right answer if inc == keys:#if the keys equal the code add 1 wrong_answer_1 = values#set s to be the values for keys,value in definition.items():#For the keys in the dictionary if keys == inc2:#if the keys equal to a value global wrong_answer_2 wrong_answer_2 = value print(wrong_answer_2, "Hi") </code></pre> <p>I am trying to get my variables keyword_c, definition_c, wrong_answer_1 and wrong_answer_2 to be global so I can use them in another function but I can't seem to get it to work, I am a bit of a newb when it comes to python. I've tried using "global" as you can see above and I have tried to call variables and pass them but I don't fully understand it enough to be able to figure it out.</p> <pre><code>keyword_c = '' definition_c = '' wrong_answer_1 = '' wrong_answer_2 = '' </code></pre> <p>I have already predefined the variables as well to see if that did anything, they are there but its now just an empty string so my program is picking up the fact that the variable is being defined.</p> <pre><code>Traceback (most recent call last): File "D:\Program.py", line 67, in &lt;module&gt; GameMode()#calls the function File "D:\Program.py", line 55, in GameMode print(wrong_answer_2, "Hi") NameError: name 'wrong_answer_2' is not defined </code></pre> <p>here is the error I get if I remove the original line that sets it as a blank string</p>
-1
2016-09-29T08:44:20Z
39,765,846
<p>I am not sure what your function is supposed to be doing; it is hard to guess with all the syntax errors, but the following fixes the syntax errors, using a single global at the top of the function for everything which is not defined within the function.</p> <pre><code>def GameMode():#creates a function with name of gamemode global inc, inc2, code, wrong_answer_1, wrong_answer_2, keyword_c for keys,values in keywords.items():#checks for the right answer if keys == code:#if the keys equal to the code value keyword_c = values for keys,values in definition.items():#checks for the right answer if keys == code + 1:#if the keys equal the code add 1 definition_c = values#set s to be the values for keys,values in definition.items():#checks for the right answer if inc == keys:#if the keys equal the code add 1 wrong_answer_1 = values#set s to be the values for keys,value in definition.items():#For the keys in the dictionary if keys == inc2:#if the keys equal to a value wrong_answer_2 = value print(wrong_answer_2, "Hi") keywords = { 1: 'a', } definition = { 1: 'a', } code = 1 inc = 1 inc2 = 1 wrong_answer_1 = None wrong_answer_2 = None keyword_c = None GameMode() print(inc, inc2, code, wrong_answer_1, wrong_answer_2, keyword_c) </code></pre> <p>That will output:</p> <pre><code>a Hi 1 1 1 a a a </code></pre>
0
2016-09-29T08:59:39Z
[ "python", "variables", "global-scope", "python-3.5.2" ]
Form validation in another route Flask
39,765,548
<p>I have a flask wtform. I am able to validate the form in a route where it is instantiated. But I want to validate it from another route. Without using session variable, Is there any other way by which I can access the form object from the other route?</p> <p><strong>form_class.py</strong></p> <pre><code>class Fruit ( Form): Fruit = SelectField ( choices = [ ( 0,"Select Fruit"), ( 1,'Apple'), (2,'Grape'), (3,'Orange') ], coerce = int, id="Fruit", validators = [Required()]) </code></pre> <p><strong>views.py</strong>: </p> <pre><code>@app.route('/fruit', methods = ['GET', 'POST']) def fruit(): form = Fruit() if form.validate_on_submit(): return render_template("output.html") return render_template("name.html", form = form) </code></pre> <p><strong>name.html</strong></p> <pre><code>&lt;form action="" method="POST"&gt; {{ form.hidden_tag() }} {{form.Fruit}} {{form.Fruit.errors}} &lt;input type = "submit"&gt; &lt;/form&gt; </code></pre> <p>The above code works completely fine. But I want to do something like the below: </p> <pre><code>@app.route('/fruit') def fruit(): form = Fruit() '''if form.validate_on_submit(): return render_template("output.html")''' return render_template("name.html", form = form) @app.route('/fruit_submit', methods = ['GET', 'POST']) def fruit_submit(): print request.form if form.validate_on_submit(): return render_template("output.html") </code></pre> <p>I am getting the following error upon trying the above code.</p> <pre><code>NameError: global name 'form' is not defined </code></pre>
1
2016-09-29T08:45:22Z
39,857,330
<p>You have to call the Fruit class with request.form:</p> <pre><code>@app.route('/fruit_submit', methods = ['GET', 'POST']) def fruit_submit(): form = Fruit(request.form) print form if form.validate_on_submit(): return render_template("output.html") </code></pre>
0
2016-10-04T16:22:20Z
[ "python", "flask", "flask-wtforms" ]
Python to receive huge string as argument from rabbitmq
39,765,673
<p>On a Raspberry Pi 3 I run a RabbitMQ listener.py that receives a large string (JSON) consisting of 14000 key/value pairs. The listener.py script will grab this string and pass it along to another script (database.py) that will decode it back from JSON (to a Python dict object), parse it and store the values in a MariaDB database.</p> <p>The listener.py calls:</p> <pre><code>os.system("python %s %s" % (database.py, body)) </code></pre> <p>where "body" is the string received from RabbitMQ. </p> <p>I have a concern that the 14000-pair JSON might be too big to pass to database.py as a <code>sys argument</code>. Are there any other methods by which I could achieve my goal? I just heard about pickle, is it better for my purpose? </p>
0
2016-09-29T08:52:04Z
39,766,722
<p>A simple way would be to use multiprocessing.connection with its <a href="https://docs.python.org/3/library/multiprocessing.html?highlight=multiprocessing#module-multiprocessing.connection" rel="nofollow">Listener and Client</a>. These methods use pickle internally.</p>
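<p>A minimal sketch of that approach (the port number and auth key are arbitrary placeholders). In database.py, listen for a connection instead of reading <code>sys.argv</code>:</p> <pre><code>from multiprocessing.connection import Listener

listener = Listener(('localhost', 6000), authkey=b'secret')
conn = listener.accept()
body = conn.recv()        # arrives already unpickled, no argv size limit
conn.close()
listener.close()
# ... parse body and store it in MariaDB as before ...
</code></pre> <p>and in listener.py, send the object directly instead of shelling out with <code>os.system</code>:</p> <pre><code>from multiprocessing.connection import Client

conn = Client(('localhost', 6000), authkey=b'secret')
conn.send(body)           # pickled and streamed over the socket
conn.close()
</code></pre>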
1
2016-09-29T09:36:53Z
[ "python", "json", "rabbitmq", "pickle" ]
Blaze with Scikit Learn K-Means
39,765,738
<p>I am trying to fit Blaze data object to scikit kmeans function.</p> <pre><code>from blaze import * from sklearn.cluster import KMeans data_numeric = Data('data.csv') data_cluster = KMeans(n_clusters=5) data_cluster.fit(data_numeric) </code></pre> <p>Data Sample:</p> <pre><code>A B C 1 32 34 5 57 92 89 67 21 </code></pre> <p>Its throwing error :</p> <p><a href="http://i.stack.imgur.com/g3IBI.png."><img src="http://i.stack.imgur.com/g3IBI.png." alt="enter image description here"></a></p> <p>I have been able to do it with Pandas Dataframe. Any way to feed blaze object to this function ?</p>
10
2016-09-29T08:54:47Z
39,892,096
<p>I would suggest that you choose the number of clusters (K) to be much smaller than the number of training examples you have in your data set. It is not right to run the K-Means algorithm when the number of clusters you desire is greater than or equal to the number of training examples. The error occurs when you try to pass the blaze object with an undesirable shape, to the KMeans function. Please check : <a href="https://blaze.readthedocs.io/en/latest/csv.html" rel="nofollow">https://blaze.readthedocs.io/en/latest/csv.html</a></p>
1
2016-10-06T09:14:55Z
[ "python", "scikit-learn", "blaze" ]
Blaze with Scikit Learn K-Means
39,765,738
<p>I am trying to fit Blaze data object to scikit kmeans function.</p> <pre><code>from blaze import * from sklearn.cluster import KMeans data_numeric = Data('data.csv') data_cluster = KMeans(n_clusters=5) data_cluster.fit(data_numeric) </code></pre> <p>Data Sample:</p> <pre><code>A B C 1 32 34 5 57 92 89 67 21 </code></pre> <p>Its throwing error :</p> <p><a href="http://i.stack.imgur.com/g3IBI.png."><img src="http://i.stack.imgur.com/g3IBI.png." alt="enter image description here"></a></p> <p>I have been able to do it with Pandas Dataframe. Any way to feed blaze object to this function ?</p>
10
2016-09-29T08:54:47Z
39,920,176
<p>I think you need to convert your blaze data object into a numpy array before you fit. </p> <pre><code>from blaze import * import numpy from sklearn.cluster import KMeans data_numeric = numpy.array(data('data.csv')) data_cluster = KMeans(n_clusters=5) data_cluster.fit(data_numeric) </code></pre>
5
2016-10-07T14:53:30Z
[ "python", "scikit-learn", "blaze" ]
Blaze with Scikit Learn K-Means
39,765,738
<p>I am trying to fit Blaze data object to scikit kmeans function.</p> <pre><code>from blaze import * from sklearn.cluster import KMeans data_numeric = Data('data.csv') data_cluster = KMeans(n_clusters=5) data_cluster.fit(data_numeric) </code></pre> <p>Data Sample:</p> <pre><code>A B C 1 32 34 5 57 92 89 67 21 </code></pre> <p>Its throwing error :</p> <p><a href="http://i.stack.imgur.com/g3IBI.png."><img src="http://i.stack.imgur.com/g3IBI.png." alt="enter image description here"></a></p> <p>I have been able to do it with Pandas Dataframe. Any way to feed blaze object to this function ?</p>
10
2016-09-29T08:54:47Z
39,952,326
<p><code>sklearn.cluster.KMeans</code> doesn't support input data of type <code>blaze.interactive._Data</code>, which is the type of data_numeric in your code.</p> <p>You can use <code>data_cluster.fit(data_numeric.peek())</code> to fit the converted data_numeric, which has type <code>DataFrame</code> and is supported by <code>sklearn.cluster.KMeans</code>.</p>
2
2016-10-10T06:22:19Z
[ "python", "scikit-learn", "blaze" ]
Blaze with Scikit Learn K-Means
39,765,738
<p>I am trying to fit Blaze data object to scikit kmeans function.</p> <pre><code>from blaze import * from sklearn.cluster import KMeans data_numeric = Data('data.csv') data_cluster = KMeans(n_clusters=5) data_cluster.fit(data_numeric) </code></pre> <p>Data Sample:</p> <pre><code>A B C 1 32 34 5 57 92 89 67 21 </code></pre> <p>Its throwing error :</p> <p><a href="http://i.stack.imgur.com/g3IBI.png."><img src="http://i.stack.imgur.com/g3IBI.png." alt="enter image description here"></a></p> <p>I have been able to do it with Pandas Dataframe. Any way to feed blaze object to this function ?</p>
10
2016-09-29T08:54:47Z
39,991,875
<p>Yes, before you fit you need to convert the data into a numpy array; then it works fine. I think @aberger has already answered this.</p> <p>Thank you!</p>
0
2016-10-12T06:30:07Z
[ "python", "scikit-learn", "blaze" ]
"from" import works differently
39,765,779
<p>I have this code in my Python code (<code>settings.py</code> is located in the <code>PROJECT</code> dir):</p> <pre><code>import PROJECT.settings ... if PROJECT.settings.BASE_DIR: ... </code></pre> <p>which works fine. I'd say I could rewrite to this:</p> <pre><code>from PROJECT import settings ... if settings.BASE_DIR: ... </code></pre> <p>But that gives an <code>AttributeError: 'NoneType' object has no attribute 'BASE_DIR'</code></p> <p>Am I missing something here?</p>
0
2016-09-29T08:56:21Z
39,765,830
<p>The <code>from parent import name</code> format first looks for names in the <code>parent</code> module namespace (set in <code>__init__.py</code> or anything that added that name to the <code>parent</code> module).</p> <p>In your case, the <code>__init__.py</code> file in <code>PROJECT</code> has set <code>settings</code> to <code>None</code>. It is this name that is found before any contained modules.</p> <p>The <code>import parent.name</code> form will only look for modules in a package, not for names defined in the <code>parent</code> module.</p>
3
2016-09-29T08:58:57Z
[ "python" ]
Get the full name of a nested class
39,765,903
<p>Given a nested class <code>B</code>:</p> <pre><code>class A: class B: def __init__(self): pass ab = A.B() </code></pre> <p>How can I get the full name of the class for <code>ab</code>? I'd expect a result like <code>A.B</code>. </p>
0
2016-09-29T09:02:02Z
39,766,043
<p>You could get the fully qualified name, <a href="https://docs.python.org/3/library/stdtypes.html#definition.__qualname__" rel="nofollow"><code>__qualname__</code></a>, of its <a href="https://docs.python.org/3/library/stdtypes.html#instance.__class__" rel="nofollow"><code>__class__</code></a>:</p> <pre><code>&gt;&gt;&gt; ab.__class__.__qualname__ 'A.B' </code></pre> <p>Preferably by using <a href="https://docs.python.org/3/library/functions.html#type" rel="nofollow"><code>type</code></a> (which calls <code>__class__</code> on the instance):</p> <pre><code>&gt;&gt;&gt; type(ab).__qualname__ </code></pre>
2
2016-09-29T09:08:25Z
[ "python", "class", "python-3.x" ]
How to reduce Django class based view boilerplate
39,765,907
<p>I really hate boilerplate. However, I can't deny that code such as the following is a huge benefit. So my question, what does one do in Python to make up for the fact that it doesn't come with a macro (template) pre-processor?</p> <p>One idea would be to write a factory function, but I'll willingly admit that I don't know where to start. (Please note that this is Django with its declarative classes and interesting "magic" metaclassy stuff going on underneath, which I know enough to recognise and not enough to understand or debug if I break it)</p> <p>The other would be to turn this into a template and import it through a trivial pre-processor that implements something like <code>${var:-default}</code> in Bash. (see <a href="http://stackoverflow.com/questions/436198/what-is-an-alternative-to-execfile-in-python-3-0">What is an alternative to execfile in Python 3.0?</a> ), </p> <pre><code>with my_preprocessor("somefile.py") as f: code = compile(f.read(), "somefile.py", 'exec') exec(code) # in the current namespace </code></pre> <p>But there are lots of warnings about <code>exec</code> that I've seen over the years. The cited SO answer mentions line numbers for debugging as an issue. Then there is this, <a href="http://lucumr.pocoo.org/2011/2/1/exec-in-python/" rel="nofollow">http://lucumr.pocoo.org/2011/2/1/exec-in-python/</a> , warning of subtle problems including memory leaks. I suspect they won't apply to a code defining classes which are "never" deleted, but on the other hand I don't want the slightest risk of introducing obscure problems to a production setting.</p> <p>Any thoughts or pointers welcome. Is the best thing to do is to accept cut and paste boilerplate? There are unlikely to be more than twenty paste-modifies of any such template, usually less than ten.</p> <p>Example code. Lines marked #V are the only ones that would commonly be customized. The first two classes are used once only, by the third.</p> <pre><code>#--- this is boilerplate for a select-view ---- #--- just replace the string "User" by the relevant model and customize class UserSelectPopupTable( tables.Table): id = SelectorColumn( clickme='&lt;span class="glyphicon glyphicon-unchecked"&gt;&lt;/span&gt;' ) #V class Meta: model=User attrs={ 'class':'paleblue' } empty_text='Sorry, that search did not match anything.' fields=( 'name','address', ) #V sequence=('id','name','address',) #V class UserFilter2(django_filters.FilterSet): name = django_filters.CharFilter(lookup_expr='icontains') #V address = django_filters.CharFilter(lookup_expr='icontains') #V class Meta: model = User fields = ('name','address', ) #V (usually same as previous) class UserSelectPopup( FilterTableView ): model=User table_class=UserSelectPopupTable filterset_class=UserFilter2 template_name='redacted/select_in_popup.html' #--- end boilerplate </code></pre>
0
2016-09-29T09:02:19Z
39,768,951
<p>Python and Django are awesome.</p> <p>I read and re-read the (quite short) documentation of the 3-argument form of <code>type</code> that you use to dynamically create classes (<a href="https://docs.python.org/3/library/functions.html#type" rel="nofollow">https://docs.python.org/3/library/functions.html#type</a>). I wrote a trivial helper routine <code>Classfactory</code> to provide a better interface to <code>type</code>, and translated the class structure into function calls, which was mostly cut and paste! I arrived at the following (which I think also proves that you can write Javascript in Python ... the instinct to insert semicolons was strong)</p> <pre><code>def Classfactory( classname, inheritsfrom=(object,), **kwargs): inh = inheritsfrom if isinstance(inheritsfrom, tuple) else (inheritsfrom, ) return type( classname, inh, kwargs) ThisPopupFilter = Classfactory( 'ThisPopupFilter', django_filters.FilterSet, name = django_filters.CharFilter(lookup_expr='icontains') , address = django_filters.CharFilter(lookup_expr='icontains') , Meta = Classfactory( 'Meta', model = User, fields = ('name','address', ), ), ) ThisPopupTable = Classfactory( 'ThisPopupTable', tables.Table, id = SelectorColumn( clickme='&lt;span class="glyphicon glyphicon-unchecked"&gt;&lt;/span&gt;' ), Meta = Classfactory( 'Meta', # default inherit from object model=User, attrs={ 'class':'paleblue' }, empty_text='Sorry, that search did not match anything.', fields=( 'name','address', ) , sequence=('id','name','address',) , ), ) UserSelectPopup = Classfactory( 'UserSelectPopup', FilterTableView, model=User, table_class=ThisPopupTable, filterset_class=ThisPopupFilter, template_name='silson/select_in_popup.html', # this template handles any such view ) </code></pre> <p>Now I suddenly realized that it's not just Django <code>Meta</code> classes that can be defined inside other classes. Any class that is not needed elsewhere can be nested in the scope where it is needed. So I moved the first two classes inside the third, and then with a bit more rearranging I was able to move to a factory function with arguments ...</p> <pre><code>def SelectPopupFactory( Model, fields, sequence=None, clickme='&lt;span class="glyphicon glyphicon-unchecked"&gt;&lt;/span&gt;' , empty_text='Sorry, that search did not match anything.',): return Classfactory( 'UserSelectPopup', FilterTableView, model=Model, template_name='silson/select_in_popup.html', # this template handles any such view table_class=Classfactory( 'ThisPopupTable', tables.Table, id = SelectorColumn( clickme=clickme ), Meta = Classfactory( 'Meta', # default inherit from object model=Model, attrs={ 'class':'paleblue' }, empty_text=empty_text, fields=fields, sequence=sequence, )), filterset_class=Classfactory( 'ThisPopupFilter', django_filters.FilterSet, name = django_filters.CharFilter(lookup_expr='icontains') , address = django_filters.CharFilter(lookup_expr='icontains') , Meta = Classfactory( 'Meta', model = Model, fields = ('name','address', ), )), ) UserSelectPopup = SelectPopupFactory( User, fields=('name','address', ), sequence=('id','name','address',) , ) </code></pre> <p>Can anybody see anything fundamentally wrong with this? (I'm feeling slightly amazed that it all ran and did not crash at the first attempt, modulo typos)</p> <p>UPDATE a workday later: I think this is OK as an example / proof of concept (it is code that ran without crashing) but there are several fine points to do with the actual django_filters and django_tables2 usage that aren't right here. 
My factory function has evolved and is more capable, but less easy to relate to the original non-factory class definitions.</p>
1
2016-09-29T11:24:39Z
[ "python", "django", "python-3.x", "boilerplate" ]
Web scraping asian language sites with Python
39,765,939
<p>Not very familiar with the Python ecosystem, or with web scraping generally. So I'm trying to scrape content from a Chinese language site. </p> <pre><code>from bs4 import BeautifulSoup import requests r = requests.get("https://www.baidu.com/") r.encoding = 'utf-8' text = r.text soup = BeautifulSoup(text.encode('utf-8','ignore'), 'html.parser') print soup.prettify() </code></pre> <p>The problem is, this code works for me, but it doesn't work for everyone, and I don't know enough about character encoding or the python ecosystem to troubleshoot the issue. I'm running Python 2.7.10, but running this same block of code on another computer with Python 2.7.12 resulted in the following error: "UnicodeEncodeError: 'ascii' codec can't encode chracters in position 369-377: ordinal not in range(128)"</p> <p>So I guess my question really is the following: </p> <p>What is causing this error? And how can I fix this code to make it more portable? </p> <p>Thank you in advance for any guidance or pointers. </p>
0
2016-09-29T09:03:48Z
39,766,826
<p>I think you don't need to specify the encoding for the request, since <code>r.text</code> already does the decoding work and <code>r.content</code> holds the raw bytes.</p> <p>See the documentation:</p> <pre><code> | text | Content of the response, in unicode. | | If Response.encoding is None, encoding will be guessed using | ``chardet``. | | The encoding of the response content is determined based solely on HTTP | headers, following RFC 2616 to the letter. If you can take advantage of | non-HTTP knowledge to make a better guess at the encoding, you should | set ``r.encoding`` appropriately before accessing this property. </code></pre> <p>So you just need to configure the response's encoding, not the request's encoding.</p> <p>So the code should look like this:</p> <pre><code>print r.encoding r.encoding = "utf-8" print r.text </code></pre>
-1
2016-09-29T09:41:47Z
[ "python", "web-scraping", "beautifulsoup" ]
Astroquery VizieR UCAC4 full download
39,765,998
<p>I would like to have a local (offline) ASCII version of <strong>UCAC4</strong> star catalogue in order to have an isolated work environment (no internet).</p> <p>I am having issues trying to retrieve this specific <em>full</em> catalog. Downloading small parts is pretty straightforward using <strong>topcat</strong> <em>VO->Vizier service</em> option or even the CdS web interface, but I did not manage full catalogue retrieval .</p> <p>My best shot was using Python scripting Astroquery (<a href="http://astroquery.readthedocs.io/en/latest/vizier/vizier.html" rel="nofollow">http://astroquery.readthedocs.io/en/latest/vizier/vizier.html</a>) but the following function call for instance does not return nearly enough stars when it should download half the catalogue (Northern part of celestial sphere):</p> <p><code>ucac4 = v.query_region(coord.SkyCoord(ra=0, dec=45, unit=(u.deg, u.deg), frame='icrs'), width=90, height=360, catalog= 'I/322A')</code></p> <p><code>width</code>/<code>height</code> seem to refer to <em>declination</em>/<em>ra</em> in this order. (I am wrong with the usage of the box ? )</p> <p>I also tried to iterate on smaller parts of the sky and it improves the density, but I still seem to have missing objects and cannot figure out why.For instance, I tried to iterate on 0.2° declination steps so I could cross check with this file : <a href="ftp://cdsarc.u-strasbg.fr/pub/cats/I/322A/UCAC4/u4i/zone_stats" rel="nofollow">ftp://cdsarc.u-strasbg.fr/pub/cats/I/322A/UCAC4/u4i/zone_stats</a> but the <code>query_region</code> function does not return the expected amount of stars...</p> <p>And I also tried Astrosurf links but I cannot just use these files because I want it in an ASCII format.</p>
2
2016-09-29T09:06:22Z
39,796,905
<p>To download large data sets, you need to increase the <code>ROW_LIMIT</code>. The default is only 50 because we wanted to limit the load on the vizier servers unless users know what they're doing.</p> <pre><code>from astroquery.vizier import Vizier Vizier.ROW_LIMIT = 100000000000 </code></pre>
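<p>As a side note (based on my reading of the astroquery docs, so treat this as an assumption to verify), recent versions also accept <code>-1</code> as an explicit "no limit" value, which avoids having to pick an arbitrarily large number:</p> <pre><code>Vizier.ROW_LIMIT = -1  # documented as "unlimited" in recent astroquery releases
</code></pre>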
2
2016-09-30T17:26:09Z
[ "python", "astronomy", "astropy" ]
Astroquery VizieR UCAC4 full download
39,765,998
<p>I would like to have a local (offline) ASCII version of <strong>UCAC4</strong> star catalogue in order to have an isolated work environment (no internet).</p> <p>I am having issues trying to retrieve this specific <em>full</em> catalog. Downloading small parts is pretty straightforward using <strong>topcat</strong> <em>VO->Vizier service</em> option or even the CdS web interface, but I did not manage full catalogue retrieval .</p> <p>My best shot was using Python scripting Astroquery (<a href="http://astroquery.readthedocs.io/en/latest/vizier/vizier.html" rel="nofollow">http://astroquery.readthedocs.io/en/latest/vizier/vizier.html</a>) but the following function call for instance does not return nearly enough stars when it should download half the catalogue (Northern part of celestial sphere):</p> <p><code>ucac4 = v.query_region(coord.SkyCoord(ra=0, dec=45, unit=(u.deg, u.deg), frame='icrs'), width=90, height=360, catalog= 'I/322A')</code></p> <p><code>width</code>/<code>height</code> seem to refer to <em>declination</em>/<em>ra</em> in this order. (I am wrong with the usage of the box ? )</p> <p>I also tried to iterate on smaller parts of the sky and it improves the density, but I still seem to have missing objects and cannot figure out why.For instance, I tried to iterate on 0.2° declination steps so I could cross check with this file : <a href="ftp://cdsarc.u-strasbg.fr/pub/cats/I/322A/UCAC4/u4i/zone_stats" rel="nofollow">ftp://cdsarc.u-strasbg.fr/pub/cats/I/322A/UCAC4/u4i/zone_stats</a> but the <code>query_region</code> function does not return the expected amount of stars...</p> <p>And I also tried Astrosurf links but I cannot just use these files because I want it in an ASCII format.</p>
2
2016-09-29T09:06:22Z
39,874,130
<p>Fastest solution: get the <a href="http://vizier.u-strasbg.fr/vizier/doc/cdsclient.html" rel="nofollow">cdsclient</a> package. Run the <code>finducac4</code> program with the <code>-whole</code> option, for example: <code>finducac4 -whole -m 115000000 &gt; myUcac4.dat</code></p>
1
2016-10-05T12:35:19Z
[ "python", "astronomy", "astropy" ]
Pandas: adding new column to existing Data Frame for grouping purposes
39,766,141
<p>I have a <code>pandas</code> Data Frame consisting of 2000 rows x 8 columns. I want to be able to group the first 4 columns together, as well as the other 4, but I can't figure out how. The purpose is to create a categorical bar plot, with colors assigned according to C1=C5, C2=C6, and so forth.</p> <p>My Data Frame:</p> <pre><code>In[1]: df.head(5) Out[1]: C1 C2 C3 C4 C5 C6 C7 C8 0 15 37 17 10 8 11 19 86 1 39 84 11 5 5 13 9 11 2 10 20 30 51 74 62 56 58 3 88 2 1 3 9 6 0 17 4 17 17 32 24 91 45 63 48 </code></pre> <p>Do you suggest adding another column such as <code>df['Gr']</code> or what else?</p>
2
2016-09-29T09:12:45Z
39,766,297
<p>use <code>pd.concat</code></p> <pre><code>pd.concat([df.iloc[:, :4], df.iloc[:, 4:]], axis=1, keys=['first4', 'second4']) </code></pre> <p><a href="http://i.stack.imgur.com/8Fb5d.png" rel="nofollow"><img src="http://i.stack.imgur.com/8Fb5d.png" alt="enter image description here"></a></p>
1
2016-09-29T09:19:14Z
[ "python", "pandas", "dataframe", "grouping", "bar-chart" ]
Pandas: adding new column to existing Data Frame for grouping purposes
39,766,141
<p>I have a <code>pandas</code> Data Frame consisting of 2000 rows x 8 columns. I want to be able to group the first 4 columns together, as well as the other 4, but I can't figure out how. The purpose is to create a categorical bar plot, with colors assigned according to C1=C5, C2=C6, and so forth.</p> <p>My Data Frame:</p> <pre><code>In[1]: df.head(5) Out[1]: C1 C2 C3 C4 C5 C6 C7 C8 0 15 37 17 10 8 11 19 86 1 39 84 11 5 5 13 9 11 2 10 20 30 51 74 62 56 58 3 88 2 1 3 9 6 0 17 4 17 17 32 24 91 45 63 48 </code></pre> <p>Do you suggest adding another column such as <code>df['Gr']</code> or what else?</p>
2
2016-09-29T09:12:45Z
39,766,458
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.MultiIndex.from_arrays.html" rel="nofollow"><code>MultiIndex.from_arrays</code></a>:</p> <pre><code>df.columns = pd.MultiIndex.from_arrays([['a'] * 4 + ['b'] * 4 , df.columns]) print (df) a b C1 C2 C3 C4 C5 C6 C7 C8 0 15 37 17 10 8 11 19 86 1 39 84 11 5 5 13 9 11 2 10 20 30 51 74 62 56 58 3 88 2 1 3 9 6 0 17 4 17 17 32 24 91 45 63 48 </code></pre> <p>Then you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html" rel="nofollow"><code>xs</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.bar.html" rel="nofollow"><code>DataFrame.plot.bar</code></a>:</p> <pre><code>import matplotlib.pyplot as plt f, a = plt.subplots(2,1) df.xs('a', axis=1).plot.bar(ax=a[0]) df.xs('b', axis=1).plot.bar(ax=a[1]) plt.show() </code></pre> <p><a href="http://i.stack.imgur.com/fDoSY.png" rel="nofollow"><img src="http://i.stack.imgur.com/fDoSY.png" alt="graph"></a></p> <hr> <pre><code>import matplotlib.pyplot as plt df.columns = pd.MultiIndex.from_arrays([['a'] * 4 + ['b'] * 4 , df.columns]) df.stack(0).T.plot.bar(rot='0', legend=False) df.columns = ['a'] * 4 + ['b'] * 4 df = df.T.plot.bar(rot='0') plt.show() </code></pre>
3
2016-09-29T09:25:15Z
[ "python", "pandas", "dataframe", "grouping", "bar-chart" ]
How do I display floats as currency with negative sign before currency
39,766,218
<p>consider the <code>pd.Series</code> <code>s</code></p> <pre><code>s = pd.Series([-1.23, 4.56]) s 0 -1.23 1 4.56 dtype: float64 </code></pre> <p>I can format floats with pandas <code>display.float_format</code> option</p> <pre><code>with pd.option_context('display.float_format', '${:,.2f}'.format): print s 0 $-1.23 1 $4.56 dtype: float64 </code></pre> <p>But how do I format it in such a way that I get the <code>-</code> sign in front of the <code>$</code></p> <pre><code>0 -$1.23 1 $4.56 dtype: float64 </code></pre>
4
2016-09-29T09:15:53Z
39,766,887
<p>You can substitute the formatting function with your own. Below is just a demo of how it works, you can tune it to your own needs:</p> <pre><code>def formatfunc(*args, **kwargs): value = args[0] if value &gt;= 0: return '${:,.2f}'.format(value) else: return '-${:,.2f}'.format(abs(value)) with pd.option_context('display.float_format', formatfunc): print(s) </code></pre> <p>And you get:</p> <pre><code>0 -$1.23 1 $4.56 dtype: float64 </code></pre>
3
2016-09-29T09:44:28Z
[ "python", "pandas" ]
CNTK tutorial:"Hands-On Lab: Image recognition with Convolutional Networks, Batch Normalization, and Residual Nets" python problems
39,766,242
<p>I am trying to follow this tutorial: <a href="https://github.com/Microsoft/CNTK/wiki/Hands-On-Labs-Image-Recognition" rel="nofollow">https://github.com/Microsoft/CNTK/wiki/Hands-On-Labs-Image-Recognition</a> I am now at the point where Frank is saying:” Please execute the following two Python scripts which you will also find in the working directory:</p> <pre><code>wget -rc http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz tar xvf www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz python CifarConverter.py cifar-10-batches-py </code></pre> <p>I am using windows 10. I assume that wget is a Linux “thing”. I have downloaded the file from <a href="http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz" rel="nofollow">http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz</a> To the path of the CifarConverter.py script as i can not run wget from cmd or cygwin. Next I am trying to run the tar command but get an error” No such file or directory” I changed the command to tar xvf cifar-10-python.tar.gz and executed it from Cygwin.(I just made a fresh installation of cygwin 2.6.0) This extracts the data.</p> <p>Next I am running the python command:” python CifarConverter.py cifar-10-batches-py” (from cygwin) But I am getting an error from line 48! I have tried change the line to: print ("error") but is only getting a new error in import cPickle as cp ImportError: No module named 'cPickle'</p> <p>What shall i do to run the python script?</p>
0
2016-09-29T09:16:34Z
39,776,802
<p>You are running the script with Python 3.x, but CifarConverter.py was written for Python 2 (hence the error on the <code>print</code> line and the missing <code>cPickle</code> module). Run it with Python 2.7 and it should work.</p>
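<p>If you would rather stay on Python 3, a rough sketch of the usual workaround (you would still have to convert the remaining Python 2 <code>print</code> statements in CifarConverter.py) looks like this:</p> <pre><code>try:
    import cPickle as cp   # Python 2
except ImportError:
    import pickle as cp    # Python 3: cPickle was merged into pickle

# Under Python 3 the CIFAR batches also need an encoding when unpickled, e.g.:
# batch = cp.load(f, encoding='latin1')
</code></pre>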
1
2016-09-29T17:43:55Z
[ "python", "windows", "cntk" ]
Use output of a bash command as an input to environment variable in VSCode
39,766,323
<p>In VSCode launch configuration for python, I set environment variables using the env element, like this:</p> <pre><code>"env": { "SOME_VARIABLE" : "SOME_VALUE" } </code></pre> <p>I want to set the value of this environment variable to the result of a bash command, like it is done in command line this way:</p> <pre><code>export SOME_VARIABLE="SOME_VALUE_FROM_$(some command)" </code></pre> <p>Any idea how to do this in launch.json?</p>
0
2016-09-29T09:20:11Z
39,806,049
<p>Hisham, I'm the author of the extension. Unfortunately such a feature isn't supported at this stage.</p>
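<p>One workaround in the meantime, assuming you can touch the program being launched, is to resolve the value inside Python itself instead of in <code>launch.json</code> (a sketch using the placeholder command from the question):</p> <pre><code>import os
import subprocess

# Mirrors: export SOME_VARIABLE="SOME_VALUE_FROM_$(some command)"
if "SOME_VARIABLE" not in os.environ:
    out = subprocess.check_output("some command", shell=True).decode().strip()
    os.environ["SOME_VARIABLE"] = "SOME_VALUE_FROM_" + out
</code></pre>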
1
2016-10-01T11:27:52Z
[ "python", "vscode", "vscode-extensions" ]
Return all data from a loop (multiple returns so the page gets overwritten)
39,766,344
<p>I'm working on an API for an app, but I want to output multiple json objects from my sql query but if I use multiple returns the page gets overwritten.</p> <p>My code is :</p> <pre><code>@app.route('/api/location') @support_jsonp def get_locations(): d = {} for i, row in enumerate(locations): l = [] for col in range(0, len(row)): l.append(row[col]) d[i] = l for s in range(0, len(d)): db_questionid = d[s][0] db_title = d[s][1] db_text = d[s][2] db_long = d[s][3] db_lat = d[s][4] db_completed = d[s][5] db_image = d[s][6] return jsonify({'id': db_questionid, 'title': db_title, 'text': db_text.decode("ISO-8859-1"), 'long': db_long, 'lat': db_lat, 'completed': db_completed, 'image': db_image}) </code></pre> <p>How would you fix it? I'm really stuck.</p> <p>Thanks in advance,</p> <p>Jordy</p>
0
2016-09-29T09:21:09Z
39,767,002
<p>I had to use <code>stream_with_context</code> &amp; <code>Response</code> (both imported from <code>flask</code>) so I could yield it.</p> <p>Example:</p> <pre><code>from flask import Response, jsonify, stream_with_context @app.route('/api/location') @support_jsonp def get_locations(): def generate(): d = {} for i, row in enumerate(locations): l = [] for col in range(0, len(row)): l.append(row[col]) d[i] = l for s in range(0, len(d)): db_questionid = d[s][0] db_title = d[s][1] db_text = d[s][2] db_long = d[s][3] db_lat = d[s][4] db_completed = d[s][5] db_image = d[s][6] yield jsonify({'id': db_questionid, 'title': db_title, 'text': db_text.decode("ISO-8859-1"), 'long': db_long, 'lat': db_lat, 'completed': db_completed, 'image': db_image}).data return Response(stream_with_context(generate())) </code></pre>
0
2016-09-29T09:49:58Z
[ "python", "flask" ]
How to compare HTTP headers with time fields in Python?
39,766,497
<p>I am using Python to construct a proxy server as an exercise and I want to compare two different strings of time received from a server. For example,</p> <pre><code>Date: Sun, 24 Nov 2013 18:34:30 GMT Expires: Sat, 23 Nov 2013 18:34:30 GMT </code></pre> <p>How can I compare whether the expiry time is earlier than the current time? Do I have to parse it using the <code>strptime</code> method of the <code>datetime</code> module or is there an easier way to do so? </p>
0
2016-09-29T09:26:53Z
39,766,991
<p>Convert each of the strings to a timestamp and compare these, for example as follows:</p> <pre><code>from datetime import datetime date1 = "Date: Sun, 24 Nov 2013 18:34:30 GMT" date2 = "Expires: Sat, 23 Nov 2013 18:34:30 GMT" format = "%a, %d %b %Y %H:%M:%S %Z" if datetime.strptime(date1, "Date: " + format) &gt;= datetime.strptime(date2, "Expires: " + format): print "Expired" </code></pre>
0
2016-09-29T09:49:23Z
[ "python", "time", "http-headers", "compare", "http-proxy" ]
Running a Docker image in PyCharm causes "Invalid volume specification"
39,766,499
<p>I am trying to run a project based on a Docker Image (Tensorflow, following instructions from <a href="http://www.netinstructions.com/how-to-install-and-run-tensorflow-on-a-windows-pc/" rel="nofollow">this tutorial</a>) as described in <a href="https://blog.jetbrains.com/pycharm/2015/12/using-docker-in-pycharm/" rel="nofollow">this blog</a>. The Docker is running fine, but I'm unable to import it in PyCharm (professional, does not work in community version). I get the following error message:</p> <blockquote> <p>Error running main: Can't run remote python interpreter: {"message":"Invalid bind mount spec \"C:/Path-to-project/Project-name:/opt/project:rw\": Invalid volume specification: 'C:/Path-to-project/Project-name:/opt/project:rw'"}</p> </blockquote> <p>How to solve this?</p>
0
2016-09-29T09:26:56Z
39,766,501
<p>This is a Windows/Linux path problem. To solve it, change the project path that gets passed to Docker to <code>/c/Path-to-project/Project-name</code> (with a lower-case <code>c</code>, no colon after the drive letter, and forward slashes). Inspired by <a href="https://blog.jetbrains.com/pycharm/2016/06/pycharm-2016-2-eap-4-build-162-1120/" rel="nofollow">this link</a>.</p>
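<p>Concretely, the bind-mount spec from the error message should end up looking like the second line below (same container path and mode, only the host side rewritten):</p> <pre><code># rejected by Docker
C:/Path-to-project/Project-name:/opt/project:rw

# accepted
/c/Path-to-project/Project-name:/opt/project:rw
</code></pre>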
0
2016-09-29T09:26:56Z
[ "python", "windows", "docker", "tensorflow", "pycharm" ]
Running a Docker image in PyCharm causes "Invalid volume specification"
39,766,499
<p>I am trying to run a project based on a Docker Image (Tensorflow, following instructions from <a href="http://www.netinstructions.com/how-to-install-and-run-tensorflow-on-a-windows-pc/" rel="nofollow">this tutorial</a>) as described in <a href="https://blog.jetbrains.com/pycharm/2015/12/using-docker-in-pycharm/" rel="nofollow">this blog</a>. The Docker is running fine, but I'm unable to import it in PyCharm (professional, does not work in community version). I get the following error message:</p> <blockquote> <p>Error running main: Can't run remote python interpreter: {"message":"Invalid bind mount spec \"C:/Path-to-project/Project-name:/opt/project:rw\": Invalid volume specification: 'C:/Path-to-project/Project-name:/opt/project:rw'"}</p> </blockquote> <p>How to solve this?</p>
0
2016-09-29T09:26:56Z
39,968,174
<p>I solved this problem as follows:</p> <ol> <li>Go to: <code>File -&gt; Settings -&gt; Project -&gt; Project Interpreter -&gt; Your docker interpreter -&gt; Path mappings</code>;</li> <li>Add row: <code>{'Local path': 'C:', 'Remote path': '/c'}</code> (replace with the drive that holds your project; if you use this interpreter for several projects on different drives, add them all);</li> <li>Go to: <code>Run -&gt; Edit Configurations -&gt; Your configuration -&gt; Docker container settings -&gt; '...' -&gt; Volume bindings</code>;</li> <li>Select the row with the project's binding (example: <code>{'Container path': '/opt/project', 'Host path': 'C:\Users\_username_\my_python_project'}</code>) and press '<strong>Edit</strong>'.</li> <li>Press <code>OK -&gt; Apply -&gt; OK</code>. If the first '<strong>OK</strong>' without changes did not help, then replace <code>'Host path': 'C:\Users\_username_\my_python_project'</code> with <code>'Host path': '/c/Users/_username_/my_python_project'</code>;</li> <li>Run your Python configuration.</li> </ol> <p><strong>About</strong>:</p> <ul> <li>Windows 10 1607 [10.0.14393]</li> <li>PyCharm 2016.2.3</li> <li>Build #PY-162.1967.10, built on September 7, 2016</li> <li>JRE: 1.8.0_102-b14 amd64</li> <li>JVM: Java HotSpot(TM) 64-Bit Server VM by Oracle Corporation</li> </ul> <p>Why does PyCharm ignore <code>File -&gt; Settings -&gt; Build, Execution, Deployment -&gt; Docker -&gt; VirtualBox shared folders</code> in <code>Run -&gt; Edit Configurations</code>?</p>
0
2016-10-10T23:17:29Z
[ "python", "windows", "docker", "tensorflow", "pycharm" ]
Get element closest to cluster centroid
39,766,593
<p>After clustering a distance matrix with <code>scipy.cluster.hierarchy.linkage</code>, and assigning each sample to a cluster using <code>scipy.cluster.hierarchy.cut_tree</code>, I would like to extract one element out of each cluster, which is the closest to that cluster's centroid.</p> <ul> <li>I would be the happiest if an off-the-shelf function existed for this, but in the lack thereof:</li> <li>some suggestions were already proposed <a href="http://stackoverflow.com/questions/9362304/how-to-get-centroids-from-scipys-hierarchical-agglomerative-clustering">here</a> for extracting the centroids themselves, but not the closest-to-centroid elements.</li> <li>Note that this is not to be confused with the <code>centroid</code> linkage rule in <code>scipy.cluster.hierarchy.linkage</code>. I have already carried out the clustering itself, just want to access the closest-to-centroid elements.</li> </ul>
0
2016-09-29T09:30:58Z
39,767,308
<p>Nearest neighbours are most efficiently computed using KD-Trees. E.g.:</p> <pre><code>from scipy.spatial import cKDTree def find_k_closest(centroids, data, k=1, distance_norm=2): """ Arguments: ---------- centroids: (M, d) ndarray M - number of clusters d - number of data dimensions data: (N, d) ndarray N - number of data points k: int (default 1) nearest neighbour to get distance_norm: int (default 2) 1: Manhattan distance (x+y) 2: Euclidean distance (sqrt(x^2 + y^2)) np.inf: maximum distance in any dimension (max((x,y))) Returns: ------- indices: (M,) ndarray values: (M, d) ndarray """ kdtree = cKDTree(data) distances, indices = kdtree.query(centroids, k, p=distance_norm) if k &gt; 1: indices = indices[:,-1] values = data[indices] return indices, values indices, values = find_k_closest(centroids, data) </code></pre>
1
2016-09-29T10:04:45Z
[ "python", "numpy", "scipy", "scikit-learn" ]
Get element closest to cluster centroid
39,766,593
<p>After clustering a distance matrix with <code>scipy.cluster.hierarchy.linkage</code>, and assigning each sample to a cluster using <code>scipy.cluster.hierarchy.cut_tree</code>, I would like to extract one element out of each cluster, which is the closest to that cluster's centroid.</p> <ul> <li>I would be the happiest if an off-the-shelf function existed for this, but in the lack thereof:</li> <li>some suggestions were already proposed <a href="http://stackoverflow.com/questions/9362304/how-to-get-centroids-from-scipys-hierarchical-agglomerative-clustering">here</a> for extracting the centroids themselves, but not the closest-to-centroid elements.</li> <li>Note that this is not to be confused with the <code>centroid</code> linkage rule in <code>scipy.cluster.hierarchy.linkage</code>. I have already carried out the clustering itself, just want to access the closest-to-centroid elements.</li> </ul>
0
2016-09-29T09:30:58Z
39,870,085
<p><a href="http://stackoverflow.com/users/2912349/paul">Paul</a>'s solution above works well for multidimensional arrays. In the more specific case, where you have a distance matrix <code>dm</code>, in which the distances are calculated in a "non-trivial" way (<em>e.g.</em> each pair objects is aligned in 3D first, then RMSD is calculated), I ended up selecting from each cluster the element whose sum of distances to the other elements in the cluster is the lowest. (See discussion below <a href="http://stackoverflow.com/a/39767308/6869965">this</a> answer.) This is how I did it in possession of the distance matrix <code>dm</code> and the list of object names in identical order <code>names</code>:</p> <pre><code>import numpy as np import scipy.spatial.distance as spd import scipy.cluster.hierarchy as sch # Square form of distance matrix sq=spd.squareform(dm) # Perform clustering, capture linkage object clusters=sch.linkage(dm,method=linkage) # List of cluster assignments assignments=sch.cut_tree(clusters,height=rmsd_cutoff) # Store object names and assignments as zip object (list of tuples) nameList=list(zip(names,assignments)) ### Extract models closest to cluster centroids counter=0 while counter&lt;num_Clusters+1: # Create mask from the list of assignments for extracting submatrix of the cluster mask=np.array([1 if i==counter else 0 for i in assignments],dtype=bool) # Take the index of the column with the smallest sum of distances from the submatrix idx=np.argmin(sum(sq[:,mask][mask,:])) # Extract names of cluster elements from nameList sublist=[name for (name, cluster) in nameList if cluster==counter] # Element closest to centroid centroid=sublist[idx] </code></pre>
0
2016-10-05T09:20:51Z
[ "python", "numpy", "scipy", "scikit-learn" ]
Rotating character and sprite wall
39,766,665
<p>I have a sprite that represents my character. This sprite rotates every frame according to my mouse position which in turn makes it so my rectangle gets bigger and smaller depending on where the mouse is. </p> <p>Basically what I want is to make it so my sprite (<code>Character</code>) doesn't go into the sprite walls. Now since the rect for the walls are larger then the actual pictures seems and my rect keeps growing and shrinking depending on my mouse position it leaves me clueless as for how to make a statement that stops my sprite from moving into the walls in a convincing manner.</p> <p>I already know for sure that my ColideList is only the blocks that are supposed to be collided with. I found <a href="http://stackoverflow.com/questions/20927189/detecting-collision-of-two-sprites-that-can-rotate">Detecting collision of two sprites that can rotate</a>, but it's in Java and I don't need to check collision between two rotating sprites but one and a wall.</p> <p>My Character class looks like this:</p> <pre><code>class Character(pygame.sprite.Sprite): walking_frame = [] Max_Hp = 100 Current_HP = 100 Alive = True X_Speed = 0 Y_Speed = 0 Loc_x = 370 Loc_y = 430 size = 15 Current_Weapon = Weapon() Angle = 0 reloading = False shot = False LastFrame = 0 TimeBetweenFrames = 0.05 frame = 0 Walking = False Blocked = 0 rel_path = "Sprite Images/All.png" image_file = os.path.join(script_dir, rel_path) sprite_sheet = SpriteSheet(image_file) #temp image = sprite_sheet.get_image(0, 0, 48, 48) #Temp image = pygame.transform.scale(image, (60, 60)) orgimage = image def __init__(self): pygame.sprite.Sprite.__init__(self) self.walking_frame.append(self.image) image = self.sprite_sheet.get_image(48, 0, 48, 48) self.walking_frame.append(image) image = self.sprite_sheet.get_image(96, 0, 48, 48) self.walking_frame.append(image) image = self.sprite_sheet.get_image(144, 0, 48, 48) self.walking_frame.append(image) image = self.sprite_sheet.get_image(0, 48, 48, 48) self.walking_frame.append(image) image = self.sprite_sheet.get_image(48, 48, 48, 48) self.walking_frame.append(image) image = self.sprite_sheet.get_image(96, 48, 48, 48) self.walking_frame.append(image) image = self.sprite_sheet.get_image(144, 48, 48, 48) self.walking_frame.append(image) self.rect = self.image.get_rect() self.rect.left, self.rect.top = [self.Loc_x,self.Loc_y] print "Shabat Shalom" def Shoot(self): if self.Alive: if(self.reloading == False): if(self.Current_Weapon.Clip_Ammo &gt; 0): bullet = Bullet(My_Man) bullet_list.add(bullet) self.Current_Weapon.Clip_Ammo -= 1 def move(self): if self.Alive: self.Animation() self.Loc_x += self.X_Speed self.Loc_y += self.Y_Speed Wall_hit_List = pygame.sprite.spritecollide(My_Man, CollideList, False) self.Blocked = 0 for wall in Wall_hit_List: if self.rect.right &lt;= wall.rect.left and self.rect.right &gt;= wall.rect.right: self.Blocked = 1 #right self.X_Speed= 0 elif self.rect.left &lt;= wall.rect.right and self.rect.left &gt;= wall.rect.left: self.Blocked = 3 #Left self.X_Speed = 0 elif self.rect.top &lt;= wall.rect.bottom and self.rect.top &gt;= wall.rect.top: self.Blocked = 2 #Up self.Y_Speed = 0 elif self.rect.top &gt;= wall.rect.bottom and self.rect.top &lt;= wall.rect.top: self.Blocked = 4 #Down self.Y_Speed = 0 self.image = pygame.transform.rotate(self.orgimage, self.Angle) self.rect = self.image.get_rect() self.rect.left, self.rect.top = [self.Loc_x, self.Loc_y] def Animation(self): # #Character Walk Animation if self.X_Speed != 0 or self.Y_Speed != 0: if(self.Walking == False): 
self.LastFrame = time.clock() self.Walking = True if (self.frame &lt; len(self.walking_frame)): self.image = self.walking_frame[self.frame] self.image = pygame.transform.scale(self.image, (60, 60)) self.orgimage = self.image self.frame += 1 else: self.frame = 0 else: if self.frame != 0: self.frame = 0 self.image = self.walking_frame[self.frame] self.image = pygame.transform.scale(self.image, (60, 60)) self.orgimage = self.image if self.Walking and time.clock() - self.LastFrame &gt; self.TimeBetweenFrames: self.Walking = False def CalAngle(self,X,Y): angle = math.atan2(self.Loc_x - X, self.Loc_y - Y) self.Angle = math.degrees(angle) + 180 </code></pre> <p>My Wall class looks like this: </p> <pre><code>class Wall(pygame.sprite.Sprite): def __init__(self, PosX, PosY, image_file, ImageX,ImageY): pygame.sprite.Sprite.__init__(self) self.sprite_sheet = SpriteSheet(image_file) self.image = self.sprite_sheet.get_image(ImageX, ImageY, 64, 64) self.image = pygame.transform.scale(self.image, (32, 32)) self.image.set_colorkey(Black) self.rect = self.image.get_rect() self.rect.x = PosX self.rect.y = PosY </code></pre> <p>My BuildWall function looks like this: </p> <pre><code>def BuildWall(NumberOfBlocks,TypeBlock,Direction,X,Y,Collide): for i in range(NumberOfBlocks): if Direction == 1: wall = Wall(X + (i * 32), Y, spriteList, 0, TypeBlock) wall_list.add(wall) if Direction == 2: wall = Wall(X - (i * 32), Y, spriteList, 0, TypeBlock) wall_list.add(wall) if Direction == 3: wall = Wall(X, Y + (i * 32), spriteList, 0, TypeBlock) wall_list.add(wall) if Direction == 4: wall = Wall(X, Y - (i * 32), spriteList, 0, TypeBlock) wall_list.add(wall) if(Collide): CollideList.add(wall) </code></pre> <p>Lastly my walking events looks like this: </p> <pre><code>elif event.type == pygame.KEYDOWN: if event.key == pygame.K_ESCAPE: #Press escape also leaves game Game = False elif event.key == pygame.K_w and My_Man.Blocked != 2: My_Man.Y_Speed = -3 elif event.key == pygame.K_s and My_Man.Blocked != 4: My_Man.Y_Speed = 3 elif event.key == pygame.K_a and My_Man.Blocked != 3: My_Man.X_Speed = -3 elif event.key == pygame.K_d and My_Man.Blocked != 1: My_Man.X_Speed = 3 elif event.key == pygame.K_r and (My_Man.reloading == False): lastReloadTime = time.clock() My_Man.reloading = True if (My_Man.Current_Weapon.Name == "Pistol"): My_Man.Current_Weapon.Clip_Ammo = My_Man.Current_Weapon.Max_Clip_Ammo else: My_Man.Current_Weapon.Clip_Ammo, My_Man.Current_Weapon.Max_Ammo = Reload(My_Man.Current_Weapon.Max_Ammo,My_Man.Current_Weapon.Clip_Ammo,My_Man.Current_Weapon.Max_Clip_Ammo) elif event.type == pygame.KEYUP: if event.key == pygame.K_w: My_Man.Y_Speed = 0 elif event.key == pygame.K_s: My_Man.Y_Speed = 0 elif event.key == pygame.K_a: My_Man.X_Speed = 0 elif event.key == pygame.K_d: My_Man.X_Speed = 0 </code></pre>
1
2016-09-29T09:34:18Z
39,811,728
<p>It all depends on how your sprite looks and how you want the result to be. There are 3 different types of collision detection I believe could work in your scenario.</p> <h2>Keeping your rect from resizing</h2> <p>Since the image is getting larger when you rotate it, you could compensate by just removing the extra padding and keep the image in its original size. </p> <p>Say that the size of the original image is 32 pixels wide and 32 pixels high. After rotating, the image is 36 pixels wide and 36 pixels high. We want to take out the center of the image (since the padding is added around it). </p> <p><a href="http://i.stack.imgur.com/s0BQA.png" rel="nofollow"><img src="http://i.stack.imgur.com/s0BQA.png" alt="Ted Klein Bergman"></a></p> <p>To take out the center of the new image we simply take out a subsurface of the image the size of our previous rectangle centered inside the image.</p> <pre><code>def rotate(self, degrees): self.rotation = (self.rotation + degrees) % 360 # Keep track of the current rotation. self.image = pygame.transform.rotate(self.original_image, self.rotation) center_x = self.image.get_width() // 2 center_y = self.image.get_height() // 2 rect_surface = self.rect.copy() # Create a new rectangle. rect_surface.center = (center_x, center_y) # Move the new rectangle to the center of the new image. self.image = self.image.subsurface(rect_surface) # Take out the center of the new image. </code></pre> <p>Since the size of the rectangle doesn't change we don't need to do anything to recalculate it (in other words: <code>self.rect = self.image.get_rect()</code> will not be necessary).</p> <h2>Rectangular detection</h2> <p>From here you just use <code>pygame.sprite.spritecollide</code> (or your own function, if you have one) as usual.</p> <pre><code>def collision_rect(self, walls): last = self.rect.copy() # Keep track on where you are. self.rect.move_ip(*self.velocity) # Move based on the objects velocity. current = self.rect # Just for readability we 'rename' the objects rect attribute to 'current'. for wall in pygame.sprite.spritecollide(self, walls, dokill=False): wall = wall.rect # Just for readability we 'rename' the wall's rect attribute to just 'wall'. if last.left &gt;= wall.right &gt; current.left: # Collided left side. current.left = wall.right elif last.right &lt;= wall.left &lt; current.right: # Collided right side. current.right = wall.left elif last.top &gt;= wall.bottom &gt; current.top: # Collided from above. current.top = wall.bottom elif last.bottom &lt;= wall.top &lt; current.bottom: # Collided from below. current.bottom = wall.top </code></pre> <h2>Circular collision</h2> <p>This probably will not work the best if you're tiling your walls, because you'll be able to go between tiles depending on the size of the walls and your character. It is good for many other things so I'll keep this in. </p> <p>If you add the attribute <code>radius</code> to your player and wall you can use <code>pygame.sprite.spritecollide</code> and pass the callback function <code>pygame.sprite.collide_circle</code>. You don't need a radius attribute, it's optional.
But if you don't, pygame will calculate the radius based on the sprite's rect attribute, which is unnecessary unless the radius is constantly changing.</p> <pre><code>def collision_circular(self, walls): self.rect.move_ip(*self.velocity) current = self.rect for wall in pygame.sprite.spritecollide(self, walls, dokill=False, collided=pygame.sprite.collide_circle): distance = self.radius + wall.radius dx = current.centerx - wall.rect.centerx dy = current.centery - wall.rect.centery multiplier = ((distance ** 2) / (dx ** 2 + dy ** 2)) ** (1/2) current.centerx = wall.rect.centerx + (dx * multiplier) current.centery = wall.rect.centery + (dy * multiplier) </code></pre> <h2>Pixel perfect collision</h2> <p>This is the hardest to implement and is performance heavy, but can give you the best result. We'll still use <code>pygame.sprite.spritecollide</code>, but this time we're going to pass <code>pygame.sprite.collide_mask</code> as the callback function. This method requires that your sprites have a rect attribute and a per pixel alpha Surface or a Surface with a colorkey. You can read more on transparent Surfaces <a class='doc-link' href="http://stackoverflow.com/documentation/pygame/7079/drawing-on-the-screen/23788/transparency#t=201610011952123976046">here</a>. </p> <p>A mask attribute is optional; if there is none, the function will create one temporarily. If you use a mask attribute you'll need to update it every time your sprite image is changed.</p> <p>The hard part of this kind of collision is not to detect it but to respond correctly and make it move/stop appropriately. I made a buggy example demonstrating one way to handle it somewhat decently.</p> <pre><code>def collision_mask(self, walls): last = self.rect.copy() self.rect.move_ip(*self.velocity) current = self.rect for wall in pygame.sprite.spritecollide(self, walls, dokill=False, collided=pygame.sprite.collide_mask): if not self.rect.center == last.center: self.rect.center = last.center break wall = wall.rect x_distance = current.centerx - wall.centerx y_distance = current.centery - wall.centery if abs(x_distance) &gt; abs(y_distance): current.centerx += (x_distance/abs(x_distance)) * (self.velocity[0] + 1) else: current.centery += (y_distance/abs(y_distance)) * (self.velocity[1] + 1) </code></pre>
It's a little buggy in some places, the movement isn't top notch and isn't ideal performance wise, but it's just a simple demonstration.</p> <pre><code>import pygame pygame.init() SIZE = WIDTH, HEIGHT = (256, 256) clock = pygame.time.Clock() screen = pygame.display.set_mode(SIZE) mode = 1 modes = ["Rectangular collision", "Circular collision", "Pixel perfect collision"] class Player(pygame.sprite.Sprite): def __init__(self, pos): super(Player, self).__init__() self.original_image = pygame.Surface((32, 32)) self.original_image.set_colorkey((0, 0, 0)) self.image = self.original_image.copy() pygame.draw.ellipse(self.original_image, (255, 0, 0), pygame.Rect((0, 8), (32, 16))) self.rect = self.image.get_rect(center=pos) self.rotation = 0 self.velocity = [0, 0] self.radius = self.rect.width // 2 self.mask = pygame.mask.from_surface(self.image) def rotate_clipped(self, degrees): self.rotation = (self.rotation + degrees) % 360 # Keep track of the current rotation self.image = pygame.transform.rotate(self.original_image, self.rotation) center_x = self.image.get_width() // 2 center_y = self.image.get_height() // 2 rect_surface = self.rect.copy() # Create a new rectangle. rect_surface.center = (center_x, center_y) # Move the new rectangle to the center of the new image. self.image = self.image.subsurface(rect_surface) # Take out the center of the new image. self.mask = pygame.mask.from_surface(self.image) def collision_rect(self, walls): last = self.rect.copy() # Keep track on where you are. self.rect.move_ip(*self.velocity) # Move based on the objects velocity. current = self.rect # Just for readability we 'rename' the objects rect attribute to 'current'. for wall in pygame.sprite.spritecollide(self, walls, dokill=False): wall = wall.rect # Just for readability we 'rename' the wall's rect attribute to just 'wall'. if last.left &gt;= wall.right &gt; current.left: # Collided left side. current.left = wall.right elif last.right &lt;= wall.left &lt; current.right: # Collided right side. current.right = wall.left elif last.top &gt;= wall.bottom &gt; current.top: # Collided from above. current.top = wall.bottom elif last.bottom &lt;= wall.top &lt; current.bottom: # Collided from below. 
current.bottom = wall.top def collision_circular(self, walls): self.rect.move_ip(*self.velocity) current = self.rect for wall in pygame.sprite.spritecollide(self, walls, dokill=False, collided=pygame.sprite.collide_circle): distance = self.radius + wall.radius dx = current.centerx - wall.rect.centerx dy = current.centery - wall.rect.centery multiplier = ((distance ** 2) / (dx ** 2 + dy ** 2)) ** (1/2) current.centerx = wall.rect.centerx + (dx * multiplier) current.centery = wall.rect.centery + (dy * multiplier) def collision_mask(self, walls): last = self.rect.copy() self.rect.move_ip(*self.velocity) current = self.rect for wall in pygame.sprite.spritecollide(self, walls, dokill=False, collided=pygame.sprite.collide_mask): if not self.rect.center == last.center: self.rect.center = last.center break wall = wall.rect x_distance = current.centerx - wall.centerx y_distance = current.centery - wall.centery if abs(x_distance) &gt; abs(y_distance): current.centerx += (x_distance/abs(x_distance)) * (self.velocity[0] + 1) else: current.centery += (y_distance/abs(y_distance)) * (self.velocity[1] + 1) def update(self, walls): self.rotate_clipped(1) if mode == 1: self.collision_rect(walls) elif mode == 2: self.collision_circular(walls) else: self.collision_mask(walls) class Wall(pygame.sprite.Sprite): def __init__(self, pos): super(Wall, self).__init__() size = (32, 32) self.image = pygame.Surface(size) self.image.fill((0, 0, 255)) # Make the Surface blue. self.image.set_colorkey((0, 0, 0)) # Will not affect the image but is needed for collision with mask. self.rect = pygame.Rect(pos, size) self.radius = self.rect.width // 2 self.mask = pygame.mask.from_surface(self.image) def show_rects(player, walls): for wall in walls: pygame.draw.rect(screen, (1, 1, 1), wall.rect, 1) pygame.draw.rect(screen, (1, 1, 1), player.rect, 1) def show_circles(player, walls): for wall in walls: pygame.draw.circle(screen, (1, 1, 1), wall.rect.center, wall.radius, 1) pygame.draw.circle(screen, (1, 1, 1), player.rect.center, player.radius, 1) def show_mask(player, walls): for wall in walls: pygame.draw.rect(screen, (1, 1, 1), wall.rect, 1) for pixel in player.mask.outline(): pixel_x = player.rect.x + pixel[0] pixel_y = player.rect.y + pixel[1] screen.set_at((pixel_x, pixel_y), (1, 1, 1)) # Create walls around the border. walls = pygame.sprite.Group() walls.add(Wall(pos=(col, 0)) for col in range(0, WIDTH, 32)) walls.add(Wall(pos=(0, row)) for row in range(0, HEIGHT, 32)) walls.add(Wall(pos=(col, HEIGHT - 32)) for col in range(0, WIDTH, 32)) walls.add(Wall(pos=(WIDTH - 32, row)) for row in range(0, HEIGHT, 32)) walls.add(Wall(pos=(WIDTH//2, HEIGHT//2))) # Obstacle in the middle of the screen player = Player(pos=(64, 64)) speed = 2 # Speed of the player. 
while True: screen.fill((255, 255, 255)) clock.tick(60) for event in pygame.event.get(): if event.type == pygame.QUIT: quit() elif event.type == pygame.KEYDOWN: if event.key == pygame.K_a: player.velocity[0] = -speed elif event.key == pygame.K_d: player.velocity[0] = speed elif event.key == pygame.K_w: player.velocity[1] = -speed elif event.key == pygame.K_s: player.velocity[1] = speed elif pygame.K_1 &lt;= event.key &lt;= pygame.K_3: mode = event.key - 48 print(modes[mode - 1]) elif event.type == pygame.KEYUP: if event.key == pygame.K_a or event.key == pygame.K_d: player.velocity[0] = 0 elif event.key == pygame.K_w or event.key == pygame.K_s: player.velocity[1] = 0 player.update(walls) walls.draw(screen) screen.blit(player.image, player.rect) if mode == 1: show_rects(player, walls) # Show rectangles for circular collision detection. elif mode == 2: show_circles(player, walls) # Show circles for circular collision detection. else: show_mask(player, walls) # Show mask for pixel perfect collision detection. pygame.display.update() </code></pre> <h2>Last note</h2> <p>Before programming any further you really need to refactor your code. I tried to read some of your code but it's really hard to understand. Try follow <a href="https://www.python.org/dev/peps/pep-0008/#prescriptive-naming-conventions" rel="nofollow">Python's naming conventions</a>, it'll make it much easier for other programmers to read and understand your code, which makes it easier for them to help you with your questions.</p> <p>Just following these simple guidelines will make your code much readable:</p> <ul> <li>Variable names should contain only lowercase letters. Names with more than 1 word should be separated with an underscore. Example: <code>variable</code>, <code>variable_with_words</code>.</li> <li>Functions and attributes should follow the same naming convention as variables.</li> <li>Class names should start with an uppercase for every word and the rest should be lowercase. Example: <code>Class</code>, <code>MyClass</code>. Known as CamelCase.</li> <li>Separate methods in classes with one line, and functions and classes with two lines.</li> </ul> <p>I don't know what kind of IDE you use, but <a href="https://www.jetbrains.com/pycharm/download/#section=windows" rel="nofollow">Pycharm Community Edition</a> is a great IDE for Python. It'll show you when you're breaking Python conventions (and much more of course).</p> <p>It's important to note that these are conventions and not rules. They are meant to make code more readable and not to be followed strictly. Break them if you think it improves readability.</p>
2
2016-10-01T21:42:57Z
[ "python", "python-2.7", "pygame", "sprite" ]
Add custom http header in RedirectView
39,766,817
<p>I want to add the http header but it does not work. I'm trying to test solve by using the print, but it appears nothing</p> <p>This is my code but not works:</p> <pre><code>class MyRedirectView(RedirectView): def head(self, *args, **kwargs): response = HttpResponse() response['X-Robots-Tag'] = 'noindex' print('TEST') return response </code></pre>
1
2016-09-29T09:41:27Z
39,767,133
<p>What you have done here is to override the <code>head</code> method, which is only used when an HTTP request of type HEAD is made to your URL. You should override the <code>get</code> method or, better still, the <code>dispatch</code> method instead.</p> <pre><code>class MyRedirectView(RedirectView): def dispatch(self, *args, **kwargs): response = super(MyRedirectView,self).dispatch(*args, **kwargs) response['X-Robots-Tag'] = 'noindex' print('TEST LOL') return response </code></pre>
1
2016-09-29T09:56:18Z
[ "python", "django", "http-headers", "django-class-based-views" ]
How to use num2date/ date2num with Tkinter mainloop()
39,766,830
<p>I have this code inside a tkinter <code>mainloop()</code>:</p> <pre><code>self.raw_start_date = num2date(date2num(dt.datetime.strptime(self.end_date, "%Y-%m-%d")) - self.period) self.start_date = self.raw_start_date.strftime("%Y-%m-%d") </code></pre> <p>I get the following error:</p> <blockquote> <p>File "D:\Python35-32\lib\tkinter__init__.py", line 1949, in <strong>getattr</strong> return getattr(self.tk, attr) RecursionError: maximum recursion depth exceeded</p> </blockquote> <p>Can someone please help out with this?</p>
-1
2016-09-29T09:42:01Z
39,811,563
<p>This is an artifact of subclassing <code>tkinter.Tk</code> and overriding the <code>__init__</code> method without ever calling <code>Tk.__init__</code>:</p> <pre><code>import tkinter class Application(tkinter.Tk): def __init__(self): "do out stuff, forget to call Tk.__init__(self) !" pass app = Application() app.any_possible_attribute_name #recursion error here </code></pre> <p>This happens because: </p> <ol> <li><code>Tk.__init__</code> initializes the very important <code>.tk</code> attribute.</li> <li>Any attribute lookup (that cannot be resolved) is forwarded to the <code>.tk</code> attribute.</li> </ol> <p>So normally if you did <code>app.thing</code> and <code>.thing</code> was not already defined then it would try to return <code>app.tk.thing</code>, but when <code>app.tk</code> is not defined then it tries to look up <code>app.tk.tk</code> which requires looking up <code>app.tk</code> which causes the recursion error.</p> <hr> <h2>To fix this</h2> <p>Just remember to call <code>Tk.__init__(self)</code> in your initialize method:</p> <pre><code>import tkinter class Application(tkinter.Tk): def __init__(self): "do out stuff, just make sure to call Tk.__init__(self) !" tkinter.Tk.__init__(self) app = Application() app.any_possible_attribute_name #now we just get an AttributeError </code></pre>
0
2016-10-01T21:19:29Z
[ "python", "tkinter", "tail-recursion" ]
How To Fix "TypeError: 'NoneType' object is not callable"
39,766,852
<p>When I run my script:</p> <pre><code>from selenium import webdriver # from selenium.webdriver.firefox.firefox_binary import FirefoxBinary from selenium.webdriver.common.desired_capabilities import DesiredCapabilities import os import pytest import unittest from nose_parameterized import parameterized class multiBrowsers(unittest.TestCase): @parameterized.expand([ ("chrome"), ("firefox"), ]) def setUp(self, browser): if browser == "firefox": caps = DesiredCapabilities.FIREFOX caps["marionette"] = True caps["binary"] = "/Applications/Firefox.app/Contents/MacOS/firefox-bin" self.driver = webdriver.Firefox(capabilities=caps) elif browser == "chrome": self.driver = webdriver.Chrome() def test_loadPage(self): driver = self.driver driver.get("http://www.google.com") def tearDown(self): self.driver.quit() </code></pre> <p>I get the error:</p> <pre><code>Error TypeError: 'NoneType' object is not callable </code></pre> <p>I read that I am not passing something correctly but I don't know where to look. Thanks in advance for the help!</p>
-3
2016-09-29T09:42:53Z
39,782,866
<p>Total guess, but I think this might be your problem:</p> <pre><code>@parameterized.expand([ ("chrome"), ("firefox"), ]) </code></pre> <p>Those aren't actually tuples: parentheses without a trailing comma are just grouping, so <code>("chrome")</code> is a plain string. Try adding a comma to make them explicit tuples:</p> <pre><code>@parameterized.expand([ ("chrome", ), ("firefox", ), ]) </code></pre>
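<p>For reference, the parentheses alone don't create a tuple; the trailing comma does:</p> <pre><code>&gt;&gt;&gt; type(("chrome"))
&lt;class 'str'&gt;
&gt;&gt;&gt; type(("chrome",))
&lt;class 'tuple'&gt;
</code></pre>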
0
2016-09-30T02:44:32Z
[ "python", "selenium", "nose-parameterized" ]
how to write this querySet with Django?
39,766,855
<p>I started learning Django QuerySets but I didn't succeed to write it in this case where I must agregate 2 models related by a foreign key .</p> <p>I have 2 models <code>user</code> and <code>course</code> where course contains a foreign key of <code>user</code></p> <pre><code>class user(models.Model): first_name = models.CharField(max_length=100) middle_name = models.CharField(max_length=100, null=True, blank=True) class Course(models.Model): user= models.ForeignKey(User) course_name = models.CharField(max_length=100) </code></pre> <p>and I want to translate this <code>sql</code> statement into <code>querysets</code> :</p> <pre><code>select * from user u, course c where u.id = c.user_id and c.course_name='science' ; </code></pre> <p>could someone help me please ?</p>
1
2016-09-29T09:43:10Z
39,766,960
<p>If what you want is a set of courses, each with its related user attached, you do this:</p> <pre><code>Course.objects.select_related('user').filter(course_name='science') </code></pre> <p>Please see: <a href="https://docs.djangoproject.com/en/1.10/ref/models/querysets/#select-related" rel="nofollow">https://docs.djangoproject.com/en/1.10/ref/models/querysets/#select-related</a></p> <blockquote> <p>select_related(*fields) Returns a QuerySet that will “follow” foreign-key relationships, selecting additional related-object data when it executes its query. This is a performance booster which results in a single more complex query but means later use of foreign-key relationships won’t require database queries.</p> </blockquote> <p>If on the other hand you want a list of the users following a given course, you do</p> <pre><code>user.objects.filter(course__course_name='science') </code></pre> <p>This is the reverse traversal of a foreign key relationship: <code>course</code> follows the relation back from <code>user</code>, and <code>course_name</code> is the field on <code>Course</code>. Please note that by convention Django model names begin with an upper-case letter. Your model is 'user' but really ought to be 'User'.</p>
3
2016-09-29T09:47:45Z
[ "python", "django", "django-models" ]
Find cells in dataframe where value is between x and y
39,766,886
<p>I want all values in a pandas dataframe as True / False depending on whether the value is between the given x and y.</p> <p>Any combining of 2 dataframes using an 'AND' operator, or any 'between' functionality from pandas would be nice. I would prefer not to loop over the columns and call the pandas.Series.between(x, y) function.</p> <p><strong>Example</strong></p> <p>Given the following dataframe</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame([{1:1,2:2,3:6},{1:9,2:9,3:10}]) &gt;&gt;&gt; df 1 2 3 0 1 2 6 1 9 9 10 </code></pre> <p>I want all values between x and y. I can for example start with:</p> <pre><code>&gt;&gt;&gt; df &gt; 2 1 2 3 0 False False True 1 True True True </code></pre> <p>and then do </p> <pre><code>&gt;&gt;&gt; df &lt; 10 1 2 3 0 True True True 1 True True False </code></pre> <p>But then</p> <pre><code>&gt;&gt;&gt; df &gt; 2 and df &lt; 10 Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "C:\Users\Laurens Koppenol\Anaconda2\lib\site-packages\pandas\core\generic.py", line 731, in __nonzero__ .format(self.__class__.__name__)) ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). </code></pre>
3
2016-09-29T09:44:25Z
39,766,947
<p>use <code>&amp;</code> with parentheses (due to operator precedence), <code>and</code> doesn't understand how to treat an array of booleans hence the warning:</p> <pre><code>In [64]: df = pd.DataFrame([{1:1,2:2,3:6},{1:9,2:9,3:10}]) (df &gt; 2) &amp; (df &lt; 10) Out[64]: 1 2 3 0 False False True 1 True True False </code></pre> <p>It's possible to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.between.html" rel="nofollow"><code>between</code></a> with <code>apply</code> but this will be slower for a large df:</p> <pre><code>In [66]: df.apply(lambda x: x.between(2,10, inclusive=False)) Out[66]: 1 2 3 0 False False True 1 True True False </code></pre> <p>Note that this warning will get raised whenever you try to compare a df or series using <code>and</code>, <code>or</code>, and <code>not</code>, you should use <code>&amp;</code>, <code>|</code>, and <code>~</code> respectively as these bitwise operators understand how to treat arrays correctly</p>
4
2016-09-29T09:47:03Z
[ "python", "pandas" ]
Find cells in dataframe where value is between x and y
39,766,886
<p>I want all values in a pandas dataframe as True / False depending on whether the value is between the given x and y.</p> <p>Any combining of 2 dataframes using an 'AND' operator, or any 'between' functionality from pandas would be nice. I would prefer not to loop over the columns and call the pandas.Series.between(x, y) function.</p> <p><strong>Example</strong></p> <p>Given the following dataframe</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame([{1:1,2:2,3:6},{1:9,2:9,3:10}]) &gt;&gt;&gt; df 1 2 3 0 1 2 6 1 9 9 10 </code></pre> <p>I want all values between x and y. I can for example start with:</p> <pre><code>&gt;&gt;&gt; df &gt; 2 1 2 3 0 False False True 1 True True True </code></pre> <p>and then do </p> <pre><code>&gt;&gt;&gt; df &lt; 10 1 2 3 0 True True True 1 True True False </code></pre> <p>But then</p> <pre><code>&gt;&gt;&gt; df &gt; 2 and df &lt; 10 Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "C:\Users\Laurens Koppenol\Anaconda2\lib\site-packages\pandas\core\generic.py", line 731, in __nonzero__ .format(self.__class__.__name__)) ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). </code></pre>
3
2016-09-29T09:44:25Z
39,767,451
<p><code>between</code> is a convenient method for this. However, it is only for series objects. we can get around this by either using <code>apply</code> which operates on each row (or column) which is a series. Or, reshape the dataframe to a series with <code>stack</code></p> <p>use <code>stack</code>, <code>between</code>, <code>unstack</code></p> <pre><code>df.stack().between(2, 10, inclusive=False).unstack() </code></pre> <p><a href="http://i.stack.imgur.com/aP2fY.png" rel="nofollow"><img src="http://i.stack.imgur.com/aP2fY.png" alt="enter image description here"></a></p>
1
2016-09-29T10:12:05Z
[ "python", "pandas" ]
Why does it not find the variable?
39,766,919
<p>So below is my code, and I need to get the variable 'p' from the Entry widget, set it as a new variable name, the print it. For some reason, I get the following error 'NameError: name 'p' is not defined'. I have absolutely no idea how to fix it and this is my last resort. Please help me.</p> <p>Code:</p> <pre><code>import tkinter as tk # python3 #import Tkinter as tk # python self = tk TITLE_FONT = ("Helvetica", 18, "bold") #-------------------FUNCTIONS-------------------# def EnterP(): b1 = p.get() print (p.get()) def EnterS(*self): print (self.s.get()) def EnterB(*args): print (b.get()) def EnterN(*args): print (n.get()) #-----------------------------------------------# class SampleApp(tk.Tk): def __init__(self, *args, **kwargs): tk.Tk.__init__(self, *args, **kwargs) # the container is where we'll stack a bunch of frames # on top of each other, then the one we want visible # will be raised above the others container = tk.Frame(self) container.pack(side="top", fill="both", expand=True) container.grid_rowconfigure(0, weight=1) container.grid_columnconfigure(0, weight=1) self.frames = {} for F in (Home, Population, Quit): page_name = F.__name__ frame = F(parent=container, controller=self) self.frames[page_name] = frame # put all of the pages in the same location; # the one on the top of the stacking order # will be the one that is visible. frame.grid(row=0, column=0, sticky="nsew") self.show_frame("Home") def show_frame(self, page_name): '''Show a frame for the given page name''' frame = self.frames[page_name] frame.tkraise() class Home(tk.Frame): def __init__(self, parent, controller): tk.Frame.__init__(self, parent) self.controller = controller label = tk.Label(self, text="Home Page", font=TITLE_FONT) label.pack(side="top", fill="x", pady=10) button1 = tk.Button(self, text="Population", command=lambda: controller.show_frame("Population")) button5 = tk.Button(self, text = "Quit", command=lambda: controller.show_frame("Quit")) button1.pack() button5.pack() class Population(tk.Frame): def __init__(self, parent, controller): tk.Frame.__init__(self, parent) self.controller = controller label = tk.Label(self, text="Enter Generation 0 Values", font=TITLE_FONT) label.pack(side="top", fill="x", pady=10) #Population Number w = tk.Label(self, text="Enter the value for the Population") w.pack() p = tk.Entry(self) p.pack() pb = tk.Button(self, text="OK", command = EnterP) pb.pack() #Survival Rates w = tk.Label(self, text="Enter the value of Survival Rates") w.pack() s = tk.Entry(self) s.pack() sb = tk.Button(self, text="OK", command = EnterS) sb.pack() #Birth Rates w = tk.Label(self, text="ENter the value for the Birth Rate") w.pack() b = tk.Entry(self) b.pack() bb = tk.Button(self, text="OK", command = EnterB) bb.pack() #Number of New Generations To Model w = tk.Label(self, text="Enter the number of New Generatiions") w.pack() n = tk.Entry(self) n.pack() nb = tk.Button(self, text="OK", command = EnterN) nb.pack() button = tk.Button(self, text="&lt;&lt;&lt; BACK", command=lambda: controller.show_frame("Home")) button.pack() class Quit(tk.Frame): def __init__(self, parent, controller): tk.Frame.__init__(self, parent) self.controller = controller label = tk.Label(self, text="Are you sure you want to quit?", font=TITLE_FONT) label.pack(side="top", fill="x", pady=10) yes = tk.Button(self, text="Yes") yes.pack() no = tk.Button(self, text = "No", command = lambda: controller.show_frame("Home")) no.pack() if __name__ == "__main__": app = SampleApp() app.mainloop() </code></pre>
-3
2016-09-29T09:45:39Z
39,767,059
<p>modify your code like this:</p> <pre><code>class Population(tk.Frame): def __init__(self, parent, controller): def EnterP(): b1 = p.get() print (p.get()) </code></pre>
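<p>A slightly fuller sketch of that idea (keeping your names; the key point is that <code>EnterP</code> is defined after <code>p</code> inside the same <code>__init__</code>, so it can close over it):</p> <pre><code>class Population(tk.Frame):
    def __init__(self, parent, controller):
        tk.Frame.__init__(self, parent)
        self.controller = controller

        p = tk.Entry(self)
        p.pack()

        def EnterP():
            print(p.get())   # p is captured from the enclosing __init__

        pb = tk.Button(self, text="OK", command=EnterP)
        pb.pack()
</code></pre>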
-1
2016-09-29T09:53:00Z
[ "python", "variables", "tkinter", "nameerror" ]