BeautifulSoup, findAll after findAll?
39,478,865
<p>I'm pretty new to Python and mainly need it for getting information from websites. Here I tried to get the short headlines from the bottom of the website, but can't quite get them.</p> <pre><code>from bfs4 import BeautifulSoup import requests url = "http://some-website" r = requests.get(url) soup = BeautifulSoup(r.content, "html.parser") nachrichten = soup.findAll('ul', {'class':'list'}) </code></pre> <p>Now I would need another findAll to get all the links/a from the var "nachrichten", but how can I do this?</p>
1
2016-09-13T20:58:01Z
39,479,053
<pre><code>from bs4 import BeautifulSoup import requests url = "http://some-website" r = requests.get(url) soup = BeautifulSoup(r.content, "html.parser") for item in soup.findAll('ul', {'class':'list'}): for link in item.findAll('a'): print link.text </code></pre>
-1
2016-09-13T21:10:37Z
[ "python", "beautifulsoup", "python-requests" ]
39,479,072
<pre><code>from bs4 import BeautifulSoup import requests url = "http://www.n-tv.de/ticker/" r = requests.get(url) soup = BeautifulSoup(r.content, "html.parser") nachrichten = soup.findAll('ul', {'class':'list'}) links = [] for ul in nachrichten: links.extend(ul.findAll('a')) print len(links) </code></pre> <p>Hope this solves your problem. Also, the import should be bs4; I have never heard of bfs4.</p>
0
2016-09-13T21:13:15Z
Filter IPs coming from AWS
39,478,870
<p>I have a list of IPs and I want to filter out the IPs that come from AWS. Fortunately, AWS publishes a <a href="https://ip-ranges.amazonaws.com/ip-ranges.json" rel="nofollow">list</a> of their IPs. That list is in a JSON format, so I converted it to a <a href="https://json-csv.com/c/ru5L" rel="nofollow">CSV</a>. I then filtered my IP addresses using <code>my_ip = my_ip[~my_ip['col'].isin(amazon_ips['col'])]</code>.</p> <p>Unfortunately, I think that the list is for ranges of IPs, rather than actually containing each IP that an AWS request could have come from. How can I correct for this?</p>
-1
2016-09-13T20:58:21Z
39,479,073
<p>The list you are getting contains IP Ranges using <a href="https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing" rel="nofollow">CIDR Notation</a>.</p> <p>You could programmatically determine the individual IP addresses from a CIDR Block, if required: <a href="http://stackoverflow.com/questions/1942160/python-3-create-a-list-of-possible-ip-addresses-from-a-cidr-notation">Python 3: create a list of possible ip addresses from a CIDR notation</a></p>
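<p>Alternatively, membership in a CIDR block can be tested directly with the standard-library <code>ipaddress</code> module (Python 3), without expanding the ranges into individual addresses. The ranges and IPs below are made up for illustration; in practice the ranges would come from the AWS JSON feed:</p>

```python
# Sketch: filter out IPs that fall inside any of a set of CIDR blocks.
# The sample ranges and addresses are hypothetical, for illustration only.
import ipaddress

aws_ranges = [ipaddress.ip_network(c) for c in ("52.95.0.0/16", "54.240.0.0/12")]

def is_aws(ip):
    """Return True if `ip` falls inside any of the CIDR blocks."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in aws_ranges)

ips = ["52.95.1.10", "8.8.8.8", "54.250.0.1"]
non_aws = [ip for ip in ips if not is_aws(ip)]
print(non_aws)  # -> ['8.8.8.8']
```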
2
2016-09-13T21:13:18Z
[ "python", "amazon-web-services", "pandas", "ip" ]
Python newbie clarification about tuples and strings
39,478,939
<p>I just learned that I can check if a substring is inside a string using:</p> <blockquote> <p>substring in string</p> </blockquote> <p>It looks to me that a string is just a special kind of tuple where its elements are chars. So I wonder if there's a straightforward way to search a slice of a tuple inside a tuple. The elements in the tuple can be of any type.</p> <blockquote> <p>tupleslice in tuple</p> </blockquote> <p>Now my related second question:</p> <pre><code>&gt;&gt;&gt; tu = 12 ,23, 34,56 &gt;&gt;&gt; tu[:2] in tu False </code></pre> <p>I gather that I get False because (12, 23) is not an element of tu. But then, why does substring in string work? Is there syntactic sugar hidden behind the scenes?</p>
2
2016-09-13T21:02:23Z
39,478,993
<p>A string is not just a special kind of tuple. They have many similar properties, in particular, both are iterators, but they are distinct types and each defines the behavior of the <code>in</code> operator differently. See the docs on this here: <a href="https://docs.python.org/3/reference/expressions.html#in" rel="nofollow">https://docs.python.org/3/reference/expressions.html#in</a></p> <p>To solve your problem of finding whether one tuple is a sub-sequence of another tuple, writing an algorithm like in your answer would be the way to go. Try something like this:</p> <pre><code>def contains(inner, outer): inner_len = len(inner) for i, _ in enumerate(outer): outer_substring = outer[i:i+inner_len] if outer_substring == inner: return True return False </code></pre>
3
2016-09-13T21:06:58Z
[ "python", "string", "tuples" ]
39,479,059
<p><code>string</code> is not a type of <code>tuple</code>. In fact, they belong to different classes. How the <code>in</code> statement is evaluated depends on the <code>__contains__()</code> magic method defined within their respective classes.</p> <p>Read <a href="http://stackoverflow.com/questions/14766767/how-do-you-set-up-the-contains-method-in-python">How do you set up the contains method in python</a>; maybe you will find it useful. To learn about magic methods in Python, read: <a href="http://www.rafekettler.com/magicmethods.html">A Guide to Python's Magic Methods</a></p>
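<p>As a small illustration (a toy class, not taken from the linked articles), a wrapper can define <code>__contains__</code> so that <code>in</code> searches for contiguous subsequences, the way it does for strings:</p>

```python
# Toy illustration of how the `in` operator dispatches to __contains__.
class SubsequenceTuple:
    """Wraps a tuple so that `in` checks for contiguous subsequences,
    the way `in` works for strings."""
    def __init__(self, items):
        self.items = tuple(items)

    def __contains__(self, sub):
        sub = tuple(sub)
        n = len(sub)
        return any(self.items[i:i + n] == sub
                   for i in range(len(self.items) - n + 1))

tu = SubsequenceTuple((12, 23, 34, 56))
print((12, 23) in tu)  # True: __contains__ finds the contiguous slice
print((12, 34) in tu)  # False: 12 and 34 are not adjacent
```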
6
2016-09-13T21:11:03Z
39,479,085
<p>Try just playing around with tuples and slices. In this case it's pretty easy because your slice is essentially indexing. </p> <pre><code>&gt;&gt;&gt; tu = 12 ,23, 34,56 &gt;&gt;&gt; tu (12, 23, 34, 56) #a tuple of ints &gt;&gt;&gt; tu[:1] # a tuple with an int in it (12,) &gt;&gt;&gt; tu[:1] in tu #checks for a tuple against ints. no match. False &gt;&gt;&gt; tu[0] in tu #checks for int against ints. matched! True &gt;&gt;&gt; #you can see as we iterate through the values... &gt;&gt;&gt; for i in tu: print(""+str(tu[:1])+" == " + str(i)) (12,) == 12 (12,) == 23 (12,) == 34 (12,) == 56 </code></pre> <p>Slicing a tuple returns a tuple, so you need to index further to compare by values and not containers. Slicing a string returns a string, which is itself a value the <code>in</code> operator can compare against; but slicing a tuple returns a tuple, which is a container. </p>
0
2016-09-13T21:14:28Z
39,499,092
<p>This is how I managed to do it; however, it's neither straightforward nor Pythonic. I had to iterate the Java way, since I wasn't able to make it work using "for" loops.</p> <pre><code>def tupleInside(tupleSlice): i, j = 0, 0 while j &lt; len(tu): t = tu[j] ts = tupleSlice[i] print(t, ts, i, j) if ts == t: i += 1 if i == len(tupleSlice): return True else: j -= i i = 0 j += 1 return False tu = tuple('abcdefghaabedc') print(tupleInside(tuple(input('Tuple slice: ')))) </code></pre>
1
2016-09-14T20:25:37Z
39,507,379
<p>Just adding to Cameron Lee's answer so that it accepts <code>inner</code> containing a single integer.</p> <pre><code>def contains(inner, outer): try: inner_len = len(inner) for i, _ in enumerate(outer): outer_substring = outer[i:i+inner_len] if outer_substring == inner: return True return False except TypeError: return inner in outer contains(4, (3,1,2,4,5)) # returns True contains((4), (3,1,2,4,5)) # returns True </code></pre>
0
2016-09-15T09:21:20Z
python-docx - deleting first paragraph
39,479,005
<p>When I create a new document with python-docx and add paragraphs, it starts at the very first line. But if I use an empty document (I need it because of the user defined styles) and add paragraphes the document would always start with an empty line. Is there any workaround?</p>
0
2016-09-13T21:07:31Z
39,483,410
<p>You can call <code>document._body.clear_content()</code> before adding the first paragraph.</p> <pre><code>document = Document('my-document.docx') document._body.clear_content() # start adding new paragraphs and whatever ... </code></pre> <p>That will leave the document with no paragraphs, so when you add new ones they start at the beginning.</p> <p>It does, however, leave the document in a technically invalid state. So if you didn't add any new paragraphs and then tried to open it with Word, you might get a repair error on loading.</p> <p>But if the next thing you're doing is adding paragraphs of your own, this should work just fine.</p> <p>Also, note that this is technically an "internal" method and is not part of the documented API. So there's no guarantee this method's name won't change in a future release. But frankly I can't see any reason to change or remove it, so I expect it's safe enough :)</p>
0
2016-09-14T05:58:43Z
[ "python", "python-docx" ]
Python: make sure to send 1024 bytes at a time
39,479,036
<p>perl_server.pl </p> <pre><code>$client_socket = $socket-&gt;accept(); if (defined $client_socket) { $client_socket-&gt;autoflush(1); $client_socket-&gt;setsockopt(SOL_SOCKET, SO_RCVTIMEO, pack('l!l!', 30, 0)) or die "setsockopt: $!"; } while (1) { my $line = ""; $client_socket-&gt;recv($line, 1024); print "$line\n"; } </code></pre> <p>python_client.py </p> <pre><code>def random_line(): return "random len of line here" sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM) server_address = (brock_host, int(brock_port)) sock.connect(server_address) while True: sock.sendall(random_line()) </code></pre> <p>How do I guarantee that perl_server receives one line at a time (not mangled)?<br> Or prevent the line from being buffered, so that it prints on the server side without waiting for another line to come in?</p> <p>I believe that if I make sure the size of the item sent is 1024 bytes, then the Perl server would get one line at a time, correct?</p> <p>How do I make sure the line is 1024 bytes at a time?</p> <p>What's a good strategy if the line is longer than 1024 bytes? (Not sure if this is a case to be concerned about.)</p> <p>Normally, a line would be somewhere around 50 characters.</p>
0
2016-09-13T21:09:12Z
39,479,067
<pre><code>PACKET_SIZE = 1024 line = random_line() sock.send(line+"\x00"*max(PACKET_SIZE-len(line),0)) </code></pre> <p>is one way you could do it </p> <p>another way (if <code>'\x00'</code> is <strong>not</strong> your terminal char)</p> <pre><code>'{val:{terminal}&lt;{size}}'.format(val=line,size=PACKET_SIZE,terminal='\xff') </code></pre>
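<p>For what it's worth, the same fixed-size padding can also be written with the built-in <code>str.ljust</code>, which pads on the right up to the requested width and leaves longer strings untouched:</p>

```python
# str.ljust pads `line` with NUL bytes up to PACKET_SIZE; a line that is
# already longer than PACKET_SIZE is returned unchanged (never truncated).
PACKET_SIZE = 1024
line = "random len of line here"
packet = line.ljust(PACKET_SIZE, "\x00")
print(len(packet))  # 1024
```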
3
2016-09-13T21:12:37Z
[ "python", "python-2.7", "sockets" ]
39,482,520
<p>Maybe a combination of using stride, length, and padding. <code>max</code> isn't needed as you are sure the slice is smaller than your packet size.</p> <pre><code>M = 32 for length in (M - 1, M, M + 1): payload = '1' * length for i in range(0, len(payload), M): packet = payload[i : i + M] packet += '0' * (M - len(packet)) print(packet) print() </code></pre> <p>Output:</p> <pre><code>11111111111111111111111111111110 11111111111111111111111111111111 11111111111111111111111111111111 10000000000000000000000000000000 </code></pre>
1
2016-09-14T04:31:52Z
why isn't my image being uploaded with the django form I created?
39,479,069
<p>I am using Python 2.7, Django 1.9.</p> <p>I'm trying to get an image from the user with this model/form pair:</p> <p>models.py</p> <pre><code>from PIL import Image class UserProfile(models.Model): user = models.OneToOneField(User) website = models.URLField(blank=True, null=True) location = models.CharField(max_length=200, null=True) longitude = models.FloatField(null=True) latitude = models.FloatField(null=True) credit = models.FloatField(default=0, null=True) picture = models.ImageField(upload_to='media/images/profile_pictures', blank=True, null=True) def __unicode__(self): return self.user.username </code></pre> <p>forms.py</p> <pre><code>class UserProfileForm(forms.ModelForm): class Meta: model = UserProfile fields = [ "website", "location", "picture", ] widgets = { 'location': forms.TextInput( attrs={'id': 'location', 'class': 'geo', 'required': True, 'placeholder': 'location'} ), } </code></pre> <p>This is saved using the following view: </p> <pre><code>def register(request): registered = False if request.method == "POST": user_form = UserForm(request.POST) profile_form = UserProfileForm(request.POST, request.FILES) if user_form.is_valid() and profile_form.is_valid(): print(request.POST['location']) print(str(request.POST['location'])) user = user_form.save() user.set_password(user.password) user.save() profile = profile_form.save(commit=False) profile.user = user profile.save() else: print user_form.errors, profile_form.errors else: profile_form = UserProfileForm() user_form = UserForm() return render(request, "register.html", {'user_form' : user_form, 'profile_form' : profil </code></pre> <p>But upon execution, no pictures are saved to the folders. Here is the root of the image_urls: project/static/media/images/profile_pictures Any ideas?</p> <p>Edit: Here's the html:</p> <pre><code>{% load static from staticfiles %} {% block head %} {% endblock %} {% block content %} {% if registered %} &lt;h1&gt;Thank you for registering.&lt;/h2&gt;&lt;br&gt; &lt;a href="dashboard"&gt;Start playing!&lt;/a&gt; {% else %} &lt;form id="user_form" method="post" action="/register/" enctype="multipart/form-data"&gt; {% csrf_token %} {{ user_form.as_p }} {{ profile_form.as_p }} &lt;input type="submit" name="submit" value="register" /&gt; &lt;/form&gt; {% endif %} {% endblock %} </code></pre>
0
2016-09-13T21:12:58Z
39,479,960
<p>I upgraded to the latest version of Pillow and it works now.</p>
0
2016-09-13T22:33:49Z
[ "python", "django", "forms", "python-imaging-library", "pillow" ]
39,482,596
<p>Are you using the correct dependencies for processing images?</p>
0
2016-09-14T04:41:55Z
Finding the longest alphabetical substring in a longer string
39,479,125
<p>I need code to find the longest alphabetical substring in a string (<code>s</code>). This is the code I'm using:</p> <pre><code>letter = s[0] best = '' for n in range(1, len(s)): if len(letter) &gt; len(best): best = letter if s[n] &gt;= s[n-1]: letter += s[n] else: letter = s[n] </code></pre> <p>It works most of the time, but occasionally it gives the wrong answer, and I am confused about why it only works sometimes. For example, when:</p> <blockquote> <p>s='maezsibmhzxhpprvx'</p> </blockquote> <p>It said the answer was "hpprv" while it should have been "hpprvx".</p> <p>In another case, when </p> <blockquote> <p>s='ysdxvkctcpxidnvaepz'</p> </blockquote> <p>It gave the answer "cpx", when it should have been "aepz".</p> <p>What's happening here?</p>
0
2016-09-13T21:18:06Z
39,479,241
<p>You should move this check</p> <pre><code>if len(letter) &gt; len(best): best = letter </code></pre> <p>after the rest of the loop</p>
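<p>Rearranged that way, with the check running after <code>letter</code> is updated, the loop would look like this (using the first sample string from the question):</p>

```python
# The length check now runs after `letter` is updated, so the run that
# ends on the last character of the string is counted too.
s = 'maezsibmhzxhpprvx'
letter = s[0]
best = ''
for n in range(1, len(s)):
    if s[n] >= s[n - 1]:
        letter += s[n]
    else:
        letter = s[n]
    if len(letter) > len(best):
        best = letter
print(best)  # hpprvx
```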
3
2016-09-13T21:26:41Z
[ "python", "string", "alphabetical" ]
39,479,247
<p>The logic is almost okay except that if <code>letter</code> grows on the last loop iteration (when <code>n == len(s) - 1</code>), <code>best</code> is not changed that last time. You may insert another <code>best = letter</code> part after the loop, or re-think carefully the program structure so you won't repeat yourself.</p>
1
2016-09-13T21:26:58Z
39,479,422
<p>Your routine was almost okay; here's a little comparison between yours, the fixed one, and another possible solution to your problem:</p> <pre><code>def buggy(s): letter = s[0] best = '' for n in range(1, len(s)): if len(letter) &gt; len(best): best = letter if s[n] &gt;= s[n - 1]: letter += s[n] else: letter = s[n] return best def fixed(s): letter = s[0] best = '' for n in range(1, len(s)): if s[n] &gt;= s[n - 1]: letter += s[n] else: letter = s[n] if len(letter) &gt; len(best): best = letter return best def alternative(s): result = ['' for i in range(len(s))] index = 0 for i in range(len(s)): if i == len(s) - 1 or s[i] &lt;= s[i + 1]: result[index] += s[i] else: result[index] += s[i] index += 1 return max(result, key=len) for sample in ['maezsibmhzxhpprvx', 'ysdxvkctcpxidnvaepz']: o1, o2, o3 = buggy(sample), fixed(sample), alternative(sample) print "buggy={0:10} fixed={1:10} alternative={2:10}".format(o1, o2, o3) </code></pre> <p>As you can see, in your version the order of the conditionals inside the loop is wrong: you should move the first check to the end of the loop body.</p>
1
2016-09-13T21:40:59Z
Filling a diagonal matrix based on selectors
39,479,167
<p>I have two lists:</p> <pre><code>[(1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (3, 2)] [False, False, True, False, False, False] </code></pre> <p>The first list represent the row_number, column_number of the matrix. The second list represent the element value. How can I create an efficient loop(or other algorithm) so I end up with a 4 by 4 matrix:</p> <pre><code>0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 </code></pre>
0
2016-09-13T21:21:07Z
39,479,319
<p>This is actually pretty easy if you use <a href="https://docs.python.org/3.6/library/itertools.html#itertools.compress" rel="nofollow"><code>itertools.compress</code></a>:</p> <pre><code>from itertools import compress d = [(1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (3, 2)] sel = [False, False, True, False, False, False] res = [[0 if (j, i) not in compress(d, sel) else 1 for i in range(4)] for j in range(4)] </code></pre> <p>Yields:</p> <pre><code>[[0, 0, 0, 0], [0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0]] </code></pre> <p>Compress takes some data (<code>d</code> here) and some selectors (<code>sel</code> here) and retains the data that has the corresponding selector with a true value. </p> <p>The list comprehension then creates the matrix and fills it with zeros or ones accordingly.</p>
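<p>One refinement worth noting: the comprehension above rebuilds the <code>compress</code> iterator for every cell of the matrix. Materializing the selected coordinates into a set once makes each membership lookup O(1):</p>

```python
# Variation on the compress-based answer: build the set of selected
# coordinates once, instead of recreating the iterator per cell.
from itertools import compress

d = [(1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (3, 2)]
sel = [False, False, True, False, False, False]

selected = set(compress(d, sel))  # {(2, 1)}
res = [[1 if (j, i) in selected else 0 for i in range(4)] for j in range(4)]
print(res)  # [[0, 0, 0, 0], [0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0]]
```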
1
2016-09-13T21:31:19Z
[ "python", "python-3.x" ]
39,479,362
<p>I would suggest using the <code>sparse</code> library from the <code>scipy</code> module for efficient sparse matrix manipulation. Here is how you would create the desired matrix (note that <code>zip</code> returns an iterator in Python 3, so the row and column sequences are unpacked first):</p> <pre><code>from scipy import sparse coo = [(1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (3, 2)] data = [False, False, True, False, False, False] rows, cols = zip(*coo) m = sparse.coo_matrix((data, (rows, cols)), shape=(4, 4)) print(m) </code></pre> <p>Note that there are <a href="http://docs.scipy.org/doc/scipy/reference/sparse.html" rel="nofollow">many</a> other sparse matrix formats (including diagonal) depending on what representation you find most appropriate to create and manipulate it.</p>
1
2016-09-13T21:34:50Z
39,479,598
<p>Does this have to actually be a numpy-like matrix? Seems to me you could do something like the following (the dimensions need a +1 because the coordinates are zero-based indices):</p> <pre><code>coords = [(1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (3, 2)] values = [False, False, True, False, False, False] DEFAULT_VALUE = 0 height, width = max(coords)[0] + 1, max(coords, key=lambda x_y: x_y[1])[1] + 1 matrix = [[DEFAULT_VALUE for _ in range(width)] for _ in range(height)] for coord, value in zip(coords, values): y, x = coord matrix[y][x] = int(value) </code></pre>
0
2016-09-13T21:56:38Z
pandas - Changing the format of a data frame
39,479,185
<p>I have a data frame which is of the format:</p> <pre><code>level_0 level_1 counts 0 back not_share 1183 1 back share 1154 2 back total 2337 3 front not_share 697 4 front share 1073 5 front total 1770 6 left not_share 4819 7 left share 5097 8 left total 9916 9 other not_share 2649 10 other share 2182 11 other total 4831 12 right not_share 1449 13 right share 1744 14 right total 3193 </code></pre> <p>I want to convert this form to </p> <pre><code>level_0 share not_share total back 1154 1183 2337 front 1073 697 1770 </code></pre> <p>and so on..</p> <p>Is there any method that I can use or should I convert it into a native Python datatype and then do the manipulations?</p>
3
2016-09-13T21:22:48Z
39,479,224
<p>you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow">pivot_table()</a> method:</p> <pre><code>In [101]: df.pivot_table(index='level_0', columns='level_1', values='counts', aggfunc='sum') Out[101]: level_1 not_share share total level_0 back 1183 1154 2337 front 697 1073 1770 left 4819 5097 9916 other 2649 2182 4831 right 1449 1744 3193 </code></pre>
3
2016-09-13T21:25:21Z
[ "python", "pandas", "dataframe", "pivot" ]
39,479,250
<p>Use <code>groupby</code> and <code>sum</code></p> <pre><code>df.groupby(['level_0', 'level_1']).counts.sum().unstack() </code></pre> <p><a href="http://i.stack.imgur.com/IeXuV.png"><img src="http://i.stack.imgur.com/IeXuV.png" alt="enter image description here"></a></p>
5
2016-09-13T21:27:03Z
Can't import passlib into python3
39,479,207
<p>Trying to import passlib into python3 and it fails:</p> <pre><code>$ pip freeze | grep lib passlib==1.6.5 $ python3 Python 3.4.2 (default, Oct 8 2014, 10:45:20) &gt;&gt;&gt; import passlib Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; ImportError: No module named 'passlib' </code></pre>
1
2016-09-13T21:23:55Z
39,479,313
<p><code>pip</code> will install a library for python 2. If you want to install it for python 3 too, you must do so separately:</p> <pre><code>$ python3 -m pip install passlib </code></pre>
1
2016-09-13T21:31:04Z
[ "python", "python-3.x", "pip", "passlib" ]
Add multiple of a matrix without build a new one
39,479,228
<p>Say I have two matrices <code>B</code> and <code>M</code> and I want to execute the following statement:</p> <pre><code>B += 3*M </code></pre> <p>I execute this instruction repeatedly, so I don't want to rebuild the matrix <code>3*M</code> each time (<code>3</code> may change; it is just to make clear that I only do a scalar-matrix product). Is there a numpy function which makes this computation "in place"?</p> <p>More precisely, I have a list of scalars <code>as</code> and a list of matrices <code>Ms</code>, and I would like to perform the "dot product" (which is not really one, since the two operands are of different types) of the two, that is to say:</p> <pre><code>sum(a*M for a, M in zip(as, Ms)) </code></pre> <p>The <code>np.dot</code> function does not do what I expect...</p>
2
2016-09-13T21:25:40Z
39,479,306
<p>You can use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.tensordot.html"><code>np.tensordot</code></a> -</p> <pre><code>np.tensordot(As,Ms,axes=(0,0)) </code></pre> <p>Or <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html"><code>np.einsum</code></a> -</p> <pre><code>np.einsum('i,ijk-&gt;jk',As,Ms) </code></pre> <p>Sample run -</p> <pre><code>In [41]: As = [2,5,6] In [42]: Ms = [np.random.rand(2,3),np.random.rand(2,3),np.random.rand(2,3)] In [43]: sum(a*M for a, M in zip(As, Ms)) Out[43]: array([[ 6.79630284, 5.04212877, 10.76217631], [ 4.91927651, 1.98115548, 6.13705742]]) In [44]: np.tensordot(As,Ms,axes=(0,0)) Out[44]: array([[ 6.79630284, 5.04212877, 10.76217631], [ 4.91927651, 1.98115548, 6.13705742]]) In [45]: np.einsum('i,ijk-&gt;jk',As,Ms) Out[45]: array([[ 6.79630284, 5.04212877, 10.76217631], [ 4.91927651, 1.98115548, 6.13705742]]) </code></pre>
5
2016-09-13T21:30:56Z
[ "python", "numpy", "matrix" ]
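The two one-liners above can be checked against the explicit loop from the question (the scalar and matrix values here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
As = [2.0, 5.0, 6.0]                          # the scalars
Ms = [rng.random((2, 3)) for _ in range(3)]   # the matrices

loop = sum(a * M for a, M in zip(As, Ms))     # the question's reference version
td = np.tensordot(As, Ms, axes=(0, 0))        # contract the list axis
es = np.einsum('i,ijk->jk', As, Ms)           # same contraction, spelled out

print(np.allclose(loop, td), np.allclose(loop, es))
```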
Add multiple of a matrix without build a new one
39,479,228
<p>Say I have two matrices <code>B</code> and <code>M</code> and I want to execute the following statement:</p> <pre><code>B += 3*M </code></pre> <p>I execute this instruction repeatedly, so I don't want to rebuild the matrix <code>3*M</code> each time (<code>3</code> may change; it is just to make clear that I only do a scalar-matrix product). Is there a numpy function which makes this computation "in place"?</p> <p>More precisely, I have a list of scalars <code>as</code> and a list of matrices <code>Ms</code>, and I would like to perform the "dot product" (which is not really one, since the two operands are of different types) of the two, that is to say:</p> <pre><code>sum(a*M for a, M in zip(as, Ms)) </code></pre> <p>The <code>np.dot</code> function does not do what I expect...</p>
2
2016-09-13T21:25:40Z
39,482,794
<p>Another way you could do this, particularly if you favour readability, is to make use of <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow">broadcasting</a>.</p> <p>So you could make a 3D array from the 1D and 2D arrays and then sum over the appropriate axis:</p> <pre><code>&gt;&gt;&gt; Ms = np.random.randn(4, 2, 3) # 4 arrays of size 2x3 &gt;&gt;&gt; As = np.random.randn(4) &gt;&gt;&gt; np.sum(As[:, np.newaxis, np.newaxis] * Ms, axis=0) array([[-1.40199248, -0.40337845, -0.69986566], [ 3.52724279, 0.19547118, 2.1485559 ]]) &gt;&gt;&gt; sum(a*M for a, M in zip(As, Ms)) array([[-1.40199248, -0.40337845, -0.69986566], [ 3.52724279, 0.19547118, 2.1485559 ]]) </code></pre> <p>However, it's worth noting that <code>np.einsum</code> and <code>np.tensordot</code> are usually much more efficient:</p> <pre><code>&gt;&gt;&gt; %timeit np.sum(As[:, np.newaxis, np.newaxis] * Ms, axis=0) The slowest run took 7.38 times longer than the fastest. This could mean that an intermediate result is being cached. 100000 loops, best of 3: 8.58 µs per loop &gt;&gt;&gt; %timeit np.einsum('i,ijk-&gt;jk', As, Ms) The slowest run took 19.16 times longer than the fastest. This could mean that an intermediate result is being cached. 100000 loops, best of 3: 2.44 µs per loop </code></pre> <p>And this is also true for larger numbers:</p> <pre><code>&gt;&gt;&gt; Ms = np.random.randn(100, 200, 300) &gt;&gt;&gt; As = np.random.randn(100) &gt;&gt;&gt; %timeit np.einsum('i,ijk-&gt;jk', As, Ms) 100 loops, best of 3: 5.03 ms per loop &gt;&gt;&gt; %timeit np.sum(As[:, np.newaxis, np.newaxis] * Ms, axis=0) 100 loops, best of 3: 14.8 ms per loop &gt;&gt;&gt; %timeit np.tensordot(As,Ms,axes=(0,0)) 100 loops, best of 3: 2.79 ms per loop </code></pre> <p>So <code>np.tensordot</code> works best in this case.</p> <p>The only good reason to use <code>np.sum</code> and broadcasting is to make the code a <em>little</em> more readable (helps when you have small matrices).</p>
1
2016-09-14T05:02:38Z
[ "python", "numpy", "matrix" ]
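The broadcasting expression from the answer above (with the explicit `axis=0` that the summation needs to keep the 2x3 shape) can be checked against the question's loop on arbitrary data:

```python
import numpy as np

rng = np.random.default_rng(1)
Ms = rng.standard_normal((4, 2, 3))   # 4 matrices of size 2x3
As = rng.standard_normal(4)           # 4 scalars

# Broadcast the scalars over the stack of matrices, then sum the stack axis.
broadcast = np.sum(As[:, np.newaxis, np.newaxis] * Ms, axis=0)
loop = sum(a * M for a, M in zip(As, Ms))

print(np.allclose(broadcast, loop))
```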
Parse JSON output from API file into CSV
39,479,263
<p>I am currently trying to convert a JSON output from an API request to a CSV format so i can store the results into our database. Here is my current code for reference:</p> <pre><code>import pyodbc import csv #import urllib2 import json import collections import requests #import pprint #import functools print ("Connecting via ODBC") conn = pyodbc.connect('DSN=DSN', autocommit=True) print ("Connected!\n") cur = conn.cursor() sql = """SELECT DATA""" cur.execute(sql) #df = pandas.read_sql_query(sql, conn) #df.to_csv('TEST.csv') #print('CSV sheet is ready to go!') rows = cur.fetchall() obs_list = [] for row in rows: d = collections.OrderedDict() d['addressee'] = row.NAME d['street'] = row.ADDRESS d['city'] = row.CITY d['state'] = row.STATE d['zipcode'] = row.ZIP obs_list.append(d) obs_file = 'TEST.json' with open(obs_file, 'w') as file: json.dump(obs_list, file) print('Run through API') url = 'https://api.smartystreets.com/street-address?' headers = {'content-type': 'application/json'} with open('test1.json', 'r') as run: dict_run = run.readlines() dict_ready = (''.join(dict_run)) r = requests.post(url , data=dict_ready, headers=headers) ss_output = r.text output = 'output.json' with open(output,'w') as of: json.dump(ss_output, of) print('I think it works') f = open('output.json') data = json.load(f) data_1 = data['analysis'] data_2 = data['metadata'] data_3 = data['components'] entity_data = open('TEST.csv','w') csvwriter = csv.writer(entity_data) count = 0 count2 = 0 count3 = 0 for ent in data_1: if count == 0: header = ent.keys() csvwriter.writerow(header) count += 1 csvwriter.writerow(ent.values()) for ent_2 in data_2: if count2 == 0: header2 = ent_2.keys() csvwriter.writerow(header2) count2 += 1 csvwriter.writerow(ent_2.values()) for ent_3 in data_3: if count3 == 0: header3 = ent_3.keys() csvwriter.writerow(header3) count3 += 1 csvwriter.writerow(ent_3.values()) entity_data.close() </code></pre> <p>Sample output from API:</p> <pre><code>[ { "input_index": 0, 
"candidate_index": 0, "delivery_line_1": "1 Santa Claus Ln", "last_line": "North Pole AK 99705-9901", "delivery_point_barcode": "997059901010", "components": { "primary_number": "1", "street_name": "Santa Claus", "street_suffix": "Ln", "city_name": "North Pole", "state_abbreviation": "AK", "zipcode": "99705", "plus4_code": "9901", "delivery_point": "01", "delivery_point_check_digit": "0" }, "metadata": { "record_type": "S", "zip_type": "Standard", "county_fips": "02090", "county_name": "Fairbanks North Star", "carrier_route": "C004", "congressional_district": "AL", "rdi": "Commercial", "elot_sequence": "0001", "elot_sort": "A", "latitude": 64.75233, "longitude": -147.35297, "precision": "Zip8", "time_zone": "Alaska", "utc_offset": -9, "dst": true }, "analysis": { "dpv_match_code": "Y", "dpv_footnotes": "AABB", "dpv_cmra": "N", "dpv_vacant": "N", "active": "Y", "footnotes": "L#" } }, { "input_index": 1, "candidate_index": 0, "delivery_line_1": "Loop land 1", "last_line": "North Pole AK 99705-9901", "delivery_point_barcode": "997059901010", "components": { "primary_number": "1", "street_name": "Lala land", "street_suffix": "Ln", "city_name": "North Pole", "state_abbreviation": "AK", "zipcode": "99705", "plus4_code": "9901", "delivery_point": "01", "delivery_point_check_digit": "0" }, "metadata": { "record_type": "S", "zip_type": "Standard", "county_fips": "02090", "county_name": "Fairbanks North Star", "carrier_route": "C004", "congressional_district": "AL", "rdi": "Commercial", "elot_sequence": "0001", "elot_sort": "A", "latitude": 64.75233, "longitude": -147.35297, "precision": "Zip8", "time_zone": "Alaska", "utc_offset": -9, "dst": true }, "analysis": { "dpv_match_code": "Y", "dpv_footnotes": "AABB", "dpv_cmra": "N", "dpv_vacant": "N", "active": "Y", "footnotes": "L#" } ] </code></pre> <p>After storing the API output the trouble is trying to parse the returned output (Sample output) into a CSV format. 
The code im using to try to do this:</p> <pre><code>f = open('output.json') data = json.load(f) data_1 = data['analysis'] data_2 = data['metadata'] data_3 = data['components'] entity_data = open('TEST.csv','w') csvwriter = csv.writer(entity_data) count = 0 count2 = 0 count3 = 0 for ent in data_1: if count == 0: header = ent.keys() csvwriter.writerow(header) count += 1 csvwriter.writerow(ent.values()) for ent_2 in data_2: if count2 == 0: header2 = ent_2.keys() csvwriter.writerow(header2) count2 += 1 csvwriter.writerow(ent_2.values()) for ent_3 in data_3: if count3 == 0: header3 = ent_3.keys() csvwriter.writerow(header3) count3 += 1 csvwriter.writerow(ent_3.values()) entity_data.close() </code></pre> <p>returns the following error: TypeError: string indices must be integers. And as someone kindly commented and pointed out it appears i am iterating over keys instead of the different dictionaries, and this is where I get stuck because im not sure what to do? From my understanding it looks like the JSON is split into 3 different arrays with JSON object for each, but that does not appear to be the case according to the structure? I apologize for the length of the code, but I want some resemblance of context to what i am trying to accomplish.</p>
0
2016-09-13T21:28:02Z
39,480,315
<p>You are saving the request's <code>result.text</code> with json. <code>result.text</code> is a string, so upon rereading it through json you get the same one long string instead of a <code>list</code>. Try to write <code>result.text</code> as is:</p> <pre><code>output = 'output.json' with open(output,'w') as of: of.write(ss_output) </code></pre> <p>That's the cause of the <code>TypeError: string indices must be integers</code> you mention. The rest of your code has multiple issues. </p> <ol> <li><p>The data in the json is a list of dicts, so to get, say, <code>data_1</code> you need a list comprehension like this: <code>data_1 = [x['analysis'] for x in data]</code></p></li> <li><p>You write three types of rows into the same csv file: components, metadata and analysis. That's really odd. </p></li> </ol> <p>Probably you have to rewrite the second half of the code: open three csv writers, one per data type, then iterate over <code>data</code> items and write their fields into the corresponding writer.</p>
0
2016-09-13T23:15:11Z
[ "python", "json", "csv" ]
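The string-vs-list point in that answer can be demonstrated with a tiny round trip (the record content here is a hypothetical one-field stand-in for the API output):

```python
import json

records = [{'analysis': {'dpv_match_code': 'Y'}}]
text = json.dumps(records)          # what the response body text would look like

# json.dump-ing the *string* wraps it in quotes, so loading it back
# yields a str, not a list -- the source of the TypeError.
double = json.loads(json.dumps(text))
print(type(double).__name__)  # str

# Writing the text as-is keeps the JSON intact: loading gives the list back.
restored = json.loads(text)
print(type(restored).__name__)  # list
```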
Parse JSON output from API file into CSV
39,479,263
<p>I am currently trying to convert a JSON output from an API request to a CSV format so i can store the results into our database. Here is my current code for reference:</p> <pre><code>import pyodbc import csv #import urllib2 import json import collections import requests #import pprint #import functools print ("Connecting via ODBC") conn = pyodbc.connect('DSN=DSN', autocommit=True) print ("Connected!\n") cur = conn.cursor() sql = """SELECT DATA""" cur.execute(sql) #df = pandas.read_sql_query(sql, conn) #df.to_csv('TEST.csv') #print('CSV sheet is ready to go!') rows = cur.fetchall() obs_list = [] for row in rows: d = collections.OrderedDict() d['addressee'] = row.NAME d['street'] = row.ADDRESS d['city'] = row.CITY d['state'] = row.STATE d['zipcode'] = row.ZIP obs_list.append(d) obs_file = 'TEST.json' with open(obs_file, 'w') as file: json.dump(obs_list, file) print('Run through API') url = 'https://api.smartystreets.com/street-address?' headers = {'content-type': 'application/json'} with open('test1.json', 'r') as run: dict_run = run.readlines() dict_ready = (''.join(dict_run)) r = requests.post(url , data=dict_ready, headers=headers) ss_output = r.text output = 'output.json' with open(output,'w') as of: json.dump(ss_output, of) print('I think it works') f = open('output.json') data = json.load(f) data_1 = data['analysis'] data_2 = data['metadata'] data_3 = data['components'] entity_data = open('TEST.csv','w') csvwriter = csv.writer(entity_data) count = 0 count2 = 0 count3 = 0 for ent in data_1: if count == 0: header = ent.keys() csvwriter.writerow(header) count += 1 csvwriter.writerow(ent.values()) for ent_2 in data_2: if count2 == 0: header2 = ent_2.keys() csvwriter.writerow(header2) count2 += 1 csvwriter.writerow(ent_2.values()) for ent_3 in data_3: if count3 == 0: header3 = ent_3.keys() csvwriter.writerow(header3) count3 += 1 csvwriter.writerow(ent_3.values()) entity_data.close() </code></pre> <p>Sample output from API:</p> <pre><code>[ { "input_index": 0, 
"candidate_index": 0, "delivery_line_1": "1 Santa Claus Ln", "last_line": "North Pole AK 99705-9901", "delivery_point_barcode": "997059901010", "components": { "primary_number": "1", "street_name": "Santa Claus", "street_suffix": "Ln", "city_name": "North Pole", "state_abbreviation": "AK", "zipcode": "99705", "plus4_code": "9901", "delivery_point": "01", "delivery_point_check_digit": "0" }, "metadata": { "record_type": "S", "zip_type": "Standard", "county_fips": "02090", "county_name": "Fairbanks North Star", "carrier_route": "C004", "congressional_district": "AL", "rdi": "Commercial", "elot_sequence": "0001", "elot_sort": "A", "latitude": 64.75233, "longitude": -147.35297, "precision": "Zip8", "time_zone": "Alaska", "utc_offset": -9, "dst": true }, "analysis": { "dpv_match_code": "Y", "dpv_footnotes": "AABB", "dpv_cmra": "N", "dpv_vacant": "N", "active": "Y", "footnotes": "L#" } }, { "input_index": 1, "candidate_index": 0, "delivery_line_1": "Loop land 1", "last_line": "North Pole AK 99705-9901", "delivery_point_barcode": "997059901010", "components": { "primary_number": "1", "street_name": "Lala land", "street_suffix": "Ln", "city_name": "North Pole", "state_abbreviation": "AK", "zipcode": "99705", "plus4_code": "9901", "delivery_point": "01", "delivery_point_check_digit": "0" }, "metadata": { "record_type": "S", "zip_type": "Standard", "county_fips": "02090", "county_name": "Fairbanks North Star", "carrier_route": "C004", "congressional_district": "AL", "rdi": "Commercial", "elot_sequence": "0001", "elot_sort": "A", "latitude": 64.75233, "longitude": -147.35297, "precision": "Zip8", "time_zone": "Alaska", "utc_offset": -9, "dst": true }, "analysis": { "dpv_match_code": "Y", "dpv_footnotes": "AABB", "dpv_cmra": "N", "dpv_vacant": "N", "active": "Y", "footnotes": "L#" } ] </code></pre> <p>After storing the API output the trouble is trying to parse the returned output (Sample output) into a CSV format. 
The code im using to try to do this:</p> <pre><code>f = open('output.json') data = json.load(f) data_1 = data['analysis'] data_2 = data['metadata'] data_3 = data['components'] entity_data = open('TEST.csv','w') csvwriter = csv.writer(entity_data) count = 0 count2 = 0 count3 = 0 for ent in data_1: if count == 0: header = ent.keys() csvwriter.writerow(header) count += 1 csvwriter.writerow(ent.values()) for ent_2 in data_2: if count2 == 0: header2 = ent_2.keys() csvwriter.writerow(header2) count2 += 1 csvwriter.writerow(ent_2.values()) for ent_3 in data_3: if count3 == 0: header3 = ent_3.keys() csvwriter.writerow(header3) count3 += 1 csvwriter.writerow(ent_3.values()) entity_data.close() </code></pre> <p>returns the following error: TypeError: string indices must be integers. And as someone kindly commented and pointed out it appears i am iterating over keys instead of the different dictionaries, and this is where I get stuck because im not sure what to do? From my understanding it looks like the JSON is split into 3 different arrays with JSON object for each, but that does not appear to be the case according to the structure? I apologize for the length of the code, but I want some resemblance of context to what i am trying to accomplish.</p>
0
2016-09-13T21:28:02Z
39,481,419
<p>Consider pandas's <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.io.json.json_normalize.html" rel="nofollow"><code>json_normalize()</code></a> method to flatten nested items into tabular df structure:</p> <pre><code>import pandas as pd from pandas.io.json import json_normalize import json with open('Output.json') as f: data = json.load(f) df = json_normalize(data) df.to_csv('Output.csv') </code></pre> <p>Do note the <em>components</em>, <em>metadata</em>, and <em>analysis</em> become period-separated prefixes to corresponding values. If not needed, consider renaming columns.</p> <p><a href="http://i.stack.imgur.com/CLCkY.png" rel="nofollow"><img src="http://i.stack.imgur.com/CLCkY.png" alt="JSON to CSV Output"></a></p>
0
2016-09-14T01:59:40Z
[ "python", "json", "csv" ]
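A small sketch of the flattening that `json_normalize` performs, on a trimmed-down record shaped like the question's sample output (only a few fields kept; the modern `pd.json_normalize` spelling is used here rather than the `pandas.io.json` import from the answer):

```python
import pandas as pd

data = [{'input_index': 0,
         'components': {'city_name': 'North Pole', 'zipcode': '99705'},
         'analysis': {'dpv_match_code': 'Y'}}]

# Nested dicts become period-separated column names.
df = pd.json_normalize(data)
print(sorted(df.columns))
```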
Cost/Number Rollup In Tree-like structure - Python
39,479,335
<p>I have a database table where each row has:</p> <pre><code>name, child1, child1_quantity, child2, child2_quantity, child3, child3_quantity, price </code></pre> <p>This table will be brought into python as a list of dictionaries or dictionary of dictionaries (doesn't matter). It will look something like this:</p> <pre><code>[{name: A, child1: C1, child1_quantity:2, child2:C2, child2_quantity: 1, child3: C3, child3_quantity:3, price: null}, {name: C1, child1: C1A, child1_quantity:5, child2: C1B, child2_quantity:2, child3: C1C, child3_quantity:6, price: 3}, {name: C2, child1: C2A, child1_quantity:5, child2: C2B, child2_quantity:2, child3: C2C, child3_quantity:10, price: 4}, {name: C3, child1: C3A, child1_quantity:3, child2: C3B, child2_quantity:7, child3: C3C, child3_quantity:15, price: null}] </code></pre> <p>Problem Case: I want to be able to enter component's name and get its price. If a price is given in the table, easy, return it. If a price is not given, we have to calculate the price by adding the prices of it's children i.e </p> <pre><code>(child1 price x child1 qty) + (child2 price x child2 qty) + ..... </code></pre> <p>But each child may/may not have a price. So we'll need to go down and find the total cost of the child from its child and then bring it up ..... all until we get the total prices of the children and then sum them up to get the price of our component of interest. It is a recursive type of problem, I think but I cannot think of how to conceptualize or represent the data to make my goal possible. Can I get some clues/pointers? sql recursive query is not an option. I'm trying to do this in a python data structure or object. Thanks.</p>
0
2016-09-13T21:32:34Z
39,479,775
<pre><code>def find_price(self, name): if self.dictionary[name]["price"]: return self.dictionary[name]["price"] else: # assuming this is in a class... otherwise use a global instead of self return self.find_price(self.dictionary[name]["child1"]) * self.dictionary[name]["child1_quantity"] + self.find_price(self.dictionary[name]["child2"]) * self.dictionary[name]["child2_quantity"] # ...etc </code></pre> <p>This also assumes that the top-level object that you read the data into is a dictionary where the names also serve as keys in addition to their name fields.</p>
1
2016-09-13T22:14:33Z
[ "python", "recursion", "conditional", "rollup" ]
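A runnable sketch of the roll-up described above. The part names and prices are hypothetical, the three fixed child columns are collapsed into one `children` list so the recursion reads cleanly, and a missing price is modelled as `None`:

```python
# Hypothetical bill-of-materials: name -> children (child, quantity) and price.
parts = {
    'A':  {'children': [('C1', 2), ('C2', 1)], 'price': None},
    'C1': {'children': [], 'price': 3},
    'C2': {'children': [('C1', 5)], 'price': None},
}

def find_price(name):
    part = parts[name]
    if part['price'] is not None:        # priced directly: just return it
        return part['price']
    # Otherwise roll up: sum of child price * child quantity, recursively.
    return sum(find_price(child) * qty for child, qty in part['children'])

print(find_price('A'))  # C2 = 5*3 = 15, so A = 2*3 + 1*15 = 21
```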
Detecting how many strings are present in a given variable?
39,479,392
<p>I'm trying to make a function that when you input a table, for example 'Fish, Carrot, Beef, Fish' it detects how many times 'Fish' was inputted, in this case 2. But when I try to do this it returns 'None' instead of 2.</p> <pre><code>def word_count(x): count = 0 for item in x: if item == 'Fish': count = count + 1 return count word_count(['Fish', 'Carrot', 'Beef', 'Fish']) </code></pre> <p>Any help would be appreciated as I'm quite new to Python, thank you.</p>
0
2016-09-13T21:38:19Z
39,479,429
<p><strong>Issue with your code</strong>: Your function is <code>return</code>ing a value within the <code>if</code> statement. So, when there is no <code>'Fish'</code> in your <code>x</code> list, it will return <code>None</code> (<em>which is the default value returned by a function in Python</em>). Try the below code and it will work:</p> <pre><code>def word_count(x): count = 0 for item in x: if item == 'Fish': count = count + 1 return count </code></pre> <p><strong>Suggestion</strong>: You may simplify your function by using the <code>count()</code> method available on lists. For example:</p> <pre><code>&gt;&gt;&gt; my_list = ['Fish', 'Carrot', 'Beef', 'Fish'] &gt;&gt;&gt; my_list.count('Fish') 2 </code></pre>
1
2016-09-13T21:41:26Z
[ "python" ]
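If several different words need counting at once, the standard library's `collections.Counter` (a generalisation of `list.count`, mentioned here as an aside) tallies the whole list in one pass:

```python
from collections import Counter

menu = ['Fish', 'Carrot', 'Beef', 'Fish']
tally = Counter(menu)

print(tally['Fish'])    # 2
print(tally['Carrot'])  # 1
```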
Detecting how many strings are present in a given variable?
39,479,392
<p>I'm trying to make a function that when you input a table, for example 'Fish, Carrot, Beef, Fish' it detects how many times 'Fish' was inputted, in this case 2. But when I try to do this it returns 'None' instead of 2.</p> <pre><code>def word_count(x): count = 0 for item in x: if item == 'Fish': count = count + 1 return count word_count(['Fish', 'Carrot', 'Beef', 'Fish']) </code></pre> <p>Any help would be appreciated as I'm quite new to Python, thank you.</p>
0
2016-09-13T21:38:19Z
39,479,448
<p>It's almost correct, except:</p> <p><code>x</code>, according to your logic should be a list of strings, so you should enclose it in <code>[]</code> when calling the function. Then you can iterate over it.</p> <p>You should return from the function when you've got your answer, i.e. iterated over the list.</p> <p>So the corrected version would look like</p> <pre><code>def word_count(x): count = 0 for item in x: if item == 'Fish': count = count + 1 return count word_count(['Fish', 'Carrot', 'Beef', 'Fish']) </code></pre>
1
2016-09-13T21:42:29Z
[ "python" ]
How do I check if a function is called in a mock method?
39,479,445
<p>AuthUser is a class which contains the delete method. I want to test if the mock delete method calls a function, given the arguments for the method.</p> <pre><code>@mock.patch.object(AuthUser, 'delete') @mock.patch('oscadmin.common.oscp.deactivate_user') def test_delete(self, deactivate_user_mock, delete_mock): """Test the delete() method in AuthUser""" authUserObject = mock.Mock() authUserObject.oscp_id = 4 """If delete_from_oscp = True &amp;&amp; oscp_id isset""" delete_mock(self, True, authUserObject, mock.Mock()) self.assertTrue(authUserObject.oscp_id) </code></pre>
0
2016-09-13T21:42:20Z
39,479,530
<pre><code>delete_mock.method_expected_to_be_called.assert_called_once_with(args, kwargs) </code></pre>
0
2016-09-13T21:50:11Z
[ "python", "django", "python-2.7", "unit-testing", "django-models" ]
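A self-contained illustration of the `assert_called_once_with` pattern named in the answer above, on a generic `Mock` (the `delete` name and its arguments here are hypothetical, not taken from the question's `AuthUser`):

```python
from unittest import mock

service = mock.Mock()
service.delete(4, deactivate=True)   # exercise the mocked method once

# Passes: called exactly once with exactly these arguments.
service.delete.assert_called_once_with(4, deactivate=True)

# A wrong expectation raises AssertionError.
caught = False
try:
    service.delete.assert_called_once_with(5)
except AssertionError:
    caught = True
print(caught)  # True
```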
Generate http error from python3 requests
39,479,522
<p>I have a simple long poll thing using <code>python3</code> and the <code>requests</code> package. It currently looks something like:</p> <pre><code>def longpoll(): session = requests.Session() while True: try: fetched = session.get(MyURL) input = base64.b64decode(fetched.content) output = process(data) session.put(MyURL, data=base64.b64encode(response)) except Exception as e: print(e) time.sleep(10) </code></pre> <p>There is a case where instead of <code>process</code>ing the input and <code>put</code>ing the result, I'd like to raise an http error. Is there a simple way to do this from the high level <code>Session</code> interface? Or do I have to drill down to use the lower level objects?</p>
0
2016-09-13T21:49:03Z
39,496,369
<p>Since you have control over the server, you may want to reverse the 2nd call.</p> <p>Here is an example using bottle to receive the 2nd poll:</p> <pre><code>def longpoll(): session = requests.Session() while True: #I'm guessing that the server does not care that we call it a lot of times ... try: session.post(MyURL, {"ip_address": my_ip_address}) # request work or I'm alive #input = base64.b64decode(fetched.content) #output = process(data) #session.put(MyURL, data=base64.b64encode(response)) except Exception as e: print(e) time.sleep(10) @bottle.post("/process") def process_new_work(): data = bottle.request.json() output = process(data) #if an error is thrown an HTTP error will be returned by the framework return output </code></pre> <p>This way the server will get the output or a bad HTTP status.</p>
0
2016-09-14T17:26:24Z
[ "python", "python-3.x", "python-requests", "http-error" ]
How to replace a list of special characters in a csv in python
39,479,563
<p>I have some csv files that may or may not contain characters like “”à that are undesirable, so I want to write a simple script that will feed in a csv and feed out a csv (or its contents) with those characters replaced with more standard characters, so in the example: </p> <pre><code>bad_chars = '“”à' good_chars = '""a' </code></pre> <p>The problem so far is that my code seems to produce a csv with perhaps the wrong encoding? Any help would be appreciated in making this simpler and/or making sure my output csv doesn't force an incorrect regex encoding--maybe using pandas? </p> <p>Attempt:</p> <pre><code>import csv, string upload_path = sys.argv[1] input_file = open('{}'.format(upload_path), 'rb') upload_csv = open('{}_fixed.csv'.format(upload_path.strip('.csv')), 'wb') data = csv.reader(input_file) writer = csv.writer(upload_csv, quoting=csv.QUOTE_ALL) in_chars = '\xd2\xd3' out_chars = "''" replace_list = string.maketrans(in_chars, out_chars) for line in input_file: line = str(line) new_line = line.translate(replace_list) writer.writerow(new_line.split(',')) input_file.close() upload_csv.close() </code></pre>
0
2016-09-13T21:52:27Z
39,479,662
<p>As you stamped your question with the <code>pandas</code> tag - here is a pandas solution:</p> <pre><code>import pandas as pd (pd.read_csv('/path/to/file.csv') .replace(r'RegEx_search_for_str', r'RegEx_replace_with_str', regex=True) .to_csv('/path/to/fixed.csv', index=False) ) </code></pre>
1
2016-09-13T22:02:11Z
[ "python", "regex", "csv", "pandas" ]
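A minimal, self-contained version of that pattern, using an in-memory CSV instead of real files (the column names and the two smart quotes / accented `é` are stand-ins for the question's unwanted characters):

```python
import io
import pandas as pd

# Hypothetical input CSV containing smart quotes and an accented character.
raw = io.StringIO('name,quote\nJos\u00e9,\u201chello\u201d\n')

# Map each unwanted character (as a regex) to its plain replacement.
df = pd.read_csv(raw).replace(
    {'\u201c': '"', '\u201d': '"', '\u00e9': 'e'}, regex=True)

print(df.loc[0, 'name'])   # Jose
print(df.loc[0, 'quote'])  # "hello"
```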
Python pairwise comparison of elements in a array or list
39,479,578
<p>Let me elaborate my question using a simple example. I have a=[a1,a2,a3,a4], with all ai being a numerical value. </p> <p>What I want to get is pairwise comparisons within 'a', such as I(a1>=a2), I(a1>=a3), I(a1>=a4), ..., I(a4>=a1), I(a4>=a2), I(a4>=a3), where I is an indicator function. So I used the following code.</p> <pre><code>res=[x&gt;=y for x in a for y in a] </code></pre> <p>But it also gives comparison results like I(a1>=a1),..,I(a4>=a4), which are always one. To get rid of this nuisance, I convert res into a numpy array and find the off-diagonal elements.</p> <pre><code>res1=numpy.array(res) </code></pre> <p>This gives the result I want, but I think there should be a more efficient or simpler way to do the pairwise comparison and extract the off-diagonal elements. Do you have any idea about this? Thanks in advance.</p>
2
2016-09-13T21:54:23Z
39,479,607
<p>Perhaps you want:</p> <pre><code> [x &gt;= y for i,x in enumerate(a) for j,y in enumerate(a) if i != j] </code></pre> <p>This will not compare any item against itself, but compare each of the others against each other.</p>
4
2016-09-13T21:57:42Z
[ "python", "arrays", "numpy", "comparison", "rank" ]
Python pairwise comparison of elements in a array or list
39,479,578
<p>Let me elaborate my question using a simple example. I have a=[a1,a2,a3,a4], with all ai being a numerical value. </p> <p>What I want to get is pairwise comparisons within 'a', such as I(a1>=a2), I(a1>=a3), I(a1>=a4), ..., I(a4>=a1), I(a4>=a2), I(a4>=a3), where I is an indicator function. So I used the following code.</p> <pre><code>res=[x&gt;=y for x in a for y in a] </code></pre> <p>But it also gives comparison results like I(a1>=a1),..,I(a4>=a4), which are always one. To get rid of this nuisance, I convert res into a numpy array and find the off-diagonal elements.</p> <pre><code>res1=numpy.array(res) </code></pre> <p>This gives the result I want, but I think there should be a more efficient or simpler way to do the pairwise comparison and extract the off-diagonal elements. Do you have any idea about this? Thanks in advance.</p>
2
2016-09-13T21:54:23Z
39,479,618
<p>You may achieve that by using:</p> <pre><code>[x &gt;= y for i,x in enumerate(a) for j,y in enumerate(a) if i != j] </code></pre> <p><strong><em>Issue with your code</em></strong>:</p> <p>You are iterating over the list twice. If you convert your <code>comprehension</code> to a <code>loop</code>, it will work like:</p> <pre><code>for x in a: for y in a: x&gt;=y # which is your condition </code></pre> <p>Hence, the order of execution is: (a1, a1), (a1, a2), ... , (a2, a1), (a2, a2), ... , (a4, a4)</p>
1
2016-09-13T21:58:40Z
[ "python", "arrays", "numpy", "comparison", "rank" ]
Python pairwise comparison of elements in a array or list
39,479,578
<p>Let me elaborate my question using a simple example. I have a=[a1,a2,a3,a4], with all ai being a numerical value. </p> <p>What I want to get is pairwise comparisons within 'a', such as I(a1>=a2), I(a1>=a3), I(a1>=a4), ..., I(a4>=a1), I(a4>=a2), I(a4>=a3), where I is an indicator function. So I used the following code.</p> <pre><code>res=[x&gt;=y for x in a for y in a] </code></pre> <p>But it also gives comparison results like I(a1>=a1),..,I(a4>=a4), which are always one. To get rid of this nuisance, I convert res into a numpy array and find the off-diagonal elements.</p> <pre><code>res1=numpy.array(res) </code></pre> <p>This gives the result I want, but I think there should be a more efficient or simpler way to do the pairwise comparison and extract the off-diagonal elements. Do you have any idea about this? Thanks in advance.</p>
2
2016-09-13T21:54:23Z
39,479,623
<p>You could use <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow"><code>NumPy broadcasting</code></a> -</p> <pre><code># Get the mask of comparisons in a vectorized manner using broadcasting mask = a[:,None] &gt;= a # Select the elements other than diagonal ones out = mask[~np.eye(a.size,dtype=bool)] </code></pre> <p>If you rather prefer to set the diagonal elements as <code>False</code> in <code>mask</code> and then <code>mask</code> would be the output, like so -</p> <pre><code>mask[np.eye(a.size,dtype=bool)] = 0 </code></pre> <p>Sample run -</p> <pre><code>In [56]: a Out[56]: array([3, 7, 5, 8]) In [57]: mask = a[:,None] &gt;= a In [58]: mask Out[58]: array([[ True, False, False, False], [ True, True, True, False], [ True, False, True, False], [ True, True, True, True]], dtype=bool) In [59]: mask[~np.eye(a.size,dtype=bool)] # Selecting non-diag elems Out[59]: array([False, False, False, True, True, False, True, False, False, True, True, True], dtype=bool) In [60]: mask[np.eye(a.size,dtype=bool)] = 0 # Setting diag elems as False In [61]: mask Out[61]: array([[False, False, False, False], [ True, False, True, False], [ True, False, False, False], [ True, True, True, False]], dtype=bool) </code></pre> <hr> <p><strong>Runtime test</strong> </p> <p>Reasons to use <code>NumPy broadcasting</code>? Performance! Let's see how with a large dataset -</p> <pre><code>In [34]: def pairwise_comp(A): # Using NumPy broadcasting ...: a = np.asarray(A) # Convert to array if not already so ...: mask = a[:,None] &gt;= a ...: out = mask[~np.eye(a.size,dtype=bool)] ...: return out ...: In [35]: a = np.random.randint(0,9,(1000)).tolist() # Input list In [36]: %timeit [x &gt;= y for i,x in enumerate(a) for j,y in enumerate(a) if i != j] 1 loop, best of 3: 185 ms per loop # @Sixhobbits's loopy soln In [37]: %timeit pairwise_comp(a) 100 loops, best of 3: 5.76 ms per loop </code></pre>
2
2016-09-13T21:58:54Z
[ "python", "arrays", "numpy", "comparison", "rank" ]
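The broadcasted mask above and the enumerate-based comprehension from the other answers agree element for element, which can be checked directly (same sample array as the answer's run):

```python
import numpy as np

a = np.array([3, 7, 5, 8])

mask = a[:, None] >= a                         # full 4x4 comparison table
off_diag = mask[~np.eye(a.size, dtype=bool)]   # drop the always-True diagonal

loop = [x >= y for i, x in enumerate(a) for j, y in enumerate(a) if i != j]

print(off_diag.tolist() == loop)  # True
print(len(off_diag))              # 12
```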
Python pairwise comparison of elements in a array or list
39,479,578
<p>Let me elaborate my question using a simple example. I have a=[a1,a2,a3,a4], with all ai being numerical values. </p> <p>What I want to get is the pairwise comparisons within 'a', such as I(a1>=a2), I(a1>=a3), I(a1>=a4), ..., I(a4>=a1), I(a4>=a2), I(a4>=a3), where I is an indicator function. So I used the following code.</p> <pre><code>res=[x&gt;=y for x in a for y in a] </code></pre> <p>But it also gives comparison results like I(a1>=a1), ..., I(a4>=a4), which are always one. To get rid of this nuisance, I convert res into a numpy array and find the off-diagonal elements.</p> <pre><code>res1=numpy.array(res) </code></pre> <p>This gives the result that I want, but I think there should be a more efficient or simpler way to do the pairwise comparison and extract the off-diagonal elements. Do you have any idea about this? Thanks in advance.</p>
2
2016-09-13T21:54:23Z
39,482,510
<p>Why are you worried about the <code>a1&gt;=a1</code> comparison? It may be predictable, but skipping it might not be worth the extra work.</p> <p>Make a list of 100 numbers</p> <pre><code>In [17]: a=list(range(100)) </code></pre> <p>Compare them with the simple double loop; producing 10000 values (100*100)</p> <pre><code>In [18]: len([x&gt;=y for x in a for y in a]) Out[18]: 10000 In [19]: timeit [x&gt;=y for x in a for y in a] 1000 loops, best of 3: 1.04 ms per loop </code></pre> <p>Now use <code>@Moinuddin Quadri's</code> enumerated loop to skip the 100 <code>eye</code> values:</p> <pre><code>In [20]: len([x&gt;=y for i,x in enumerate(a) for j, y in enumerate(a) if i!=j]) Out[20]: 9900 In [21]: timeit [x&gt;=y for i,x in enumerate(a) for j, y in enumerate(a) if i!=j] 100 loops, best of 3: 2.12 ms per loop </code></pre> <p>It takes 2x longer. Half the extra time is the <code>enumerate</code> calls, and half the <code>if</code> test.</p> <p>In this case working with numpy arrays is much faster, even when including the time to create the array.</p> <pre><code>xa = np.array(x); Z = xa[:,None]&gt;=xa </code></pre> <p>But you can't get rid of the diagonal values. They will be <code>True</code>; they can be flipped to <code>False</code>, but why bother? In a boolean array there are only 2 values.</p> <p>The fastest solution is to write an indicator function that isn't bothered by these diagonal values.</p>
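<p>If numpy isn't available, the standard library can skip the diagonal for you: <code>itertools.permutations</code> yields every ordered pair of <em>distinct positions</em>, so the <code>i == j</code> comparisons never appear at all. A minimal sketch, using the same sample values as the broadcasting answer above:</p>

```python
from itertools import permutations

a = [3, 7, 5, 8]

# permutations(a, 2) yields every ordered pair drawn from two
# *distinct positions*, so i == j never occurs and no diagonal
# filtering is needed afterwards.
res = [x >= y for x, y in permutations(a, 2)]

print(res)
# [False, False, False, True, True, False, True, False, False, True, True, True]
```

<p>The result matches the off-diagonal elements produced by the broadcasting approach, in the same order.</p>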
1
2016-09-14T04:30:45Z
[ "python", "arrays", "numpy", "comparison", "rank" ]
Cplex gives two different results?
39,479,632
<p>I use the Python API of Cplex to solve a linear programming problem. When using Cplex, I got the result below:</p> <p><a href="http://i.stack.imgur.com/1BqQ2.png" rel="nofollow"><img src="http://i.stack.imgur.com/1BqQ2.png" alt="The result is solved directly by Python API"></a></p> <p>But then I saved my LP problem as an <strong>lp file</strong> and used Cplex to solve it again; the result was a little bit different from the first one:</p> <p><a href="http://i.stack.imgur.com/A7qTB.png" rel="nofollow"><img src="http://i.stack.imgur.com/A7qTB.png" alt="enter image description here"></a> Can anyone give an explanation?</p> <p>Below is my function:</p> <pre><code>def SubProblem(myobj,myrow,mysense,myrhs,mylb): c = cplex.Cplex() c.objective.set_sense(c.objective.sense.minimize) c.variables.add(obj = myobj,lb = mylb) c.linear_constraints.add(lin_expr = myrow, senses = mysense,rhs = myrhs) c.solve() lpfile = "Save_models\clem.lp" c.write(lpfile) print("\nFile '%s' was saved"%(lpfile)) </code></pre>
2
2016-09-13T21:59:47Z
39,481,267
<p>If I understand correctly, you are solving the second time using the LP file you exported in the first run. You can lose precision when writing to LP format. Try the <a href="https://www.ibm.com/support/knowledgecenter/SSSA5P_12.6.3/ilog.odms.cplex.help/CPLEX/FileFormats/topics/SAV.html" rel="nofollow">SAV</a> format instead.</p>
2
2016-09-14T01:37:12Z
[ "python", "mathematical-optimization", "cplex" ]
Cplex gives two different results?
39,479,632
<p>I use the Python API of Cplex to solve a linear programming problem. When using Cplex, I got the result below:</p> <p><a href="http://i.stack.imgur.com/1BqQ2.png" rel="nofollow"><img src="http://i.stack.imgur.com/1BqQ2.png" alt="The result is solved directly by Python API"></a></p> <p>But then I saved my LP problem as an <strong>lp file</strong> and used Cplex to solve it again; the result was a little bit different from the first one:</p> <p><a href="http://i.stack.imgur.com/A7qTB.png" rel="nofollow"><img src="http://i.stack.imgur.com/A7qTB.png" alt="enter image description here"></a> Can anyone give an explanation?</p> <p>Below is my function:</p> <pre><code>def SubProblem(myobj,myrow,mysense,myrhs,mylb): c = cplex.Cplex() c.objective.set_sense(c.objective.sense.minimize) c.variables.add(obj = myobj,lb = mylb) c.linear_constraints.add(lin_expr = myrow, senses = mysense,rhs = myrhs) c.solve() lpfile = "Save_models\clem.lp" c.write(lpfile) print("\nFile '%s' was saved"%(lpfile)) </code></pre>
2
2016-09-13T21:59:47Z
39,486,455
<p>Just to add to rkersh's comment. CPLEX when run in deterministic mode should give identical answers every time. However, if you write the model out as an LP file you will lose some precision in some of the numbers and this will perturb the problem even if only slightly, and that will often lead to different answers. The SAV format is the closest you can get to a faithful copy of the model that was inside CPLEX at the time it was saved. But even then I am not certain that the behaviour of CPLEX through the interactive solver will be identical to that through the API. If you run them on the same hardware, I would hope that they would be the same, but on a different machine you might still get different behaviour (different cpu, memory etc)</p>
0
2016-09-14T09:02:08Z
[ "python", "mathematical-optimization", "cplex" ]
What is wrong with my sorting algorithm?
39,479,850
<p>I am a beginner programmer and I've been trying to create my own sorting algorithm in Python, but I don't understand why it outputs only some of the numbers that were present in the input. I put debug prints everywhere to understand the problem but still got nothing. The code should find the biggest number of the input array and move it to the final array, and do that until the input array is empty, but it seems to stop at some point. There was a person with a <a href="http://stackoverflow.com/questions/29234706/what-is-wrong-with-my-sorting-function">similar problem</a> but the solution did not apply to my case either. This is the code:</p> <pre><code>array = [3, 6, 25, 4, 5, 24, 7, 15, 5, 2, 0, 8, 1] #just random numbers output = [] while(len(array) &gt; 0): maximum = 0 for x in array: maximum = max(maximum, x) output.append(maximum) tempArray = [] for x in array: temp = array.pop() if(temp &lt; maximum): tempArray.append(temp) array = tempArray print(output) </code></pre>
-2
2016-09-13T22:23:21Z
39,479,914
<p>Consider what happens when 5 is the maximum (and there are two 5s in the input). One 5 gets added to output; the rest of the 5s are never added to tempArray.</p>
0
2016-09-13T22:29:50Z
[ "python", "algorithm", "sorting" ]
What is wrong with my sorting algorithm?
39,479,850
<p>I am a beginner programmer and I've been trying to create my own sorting algorithm in Python, but I don't understand why it outputs only some of the numbers that were present in the input. I put debug prints everywhere to understand the problem but still got nothing. The code should find the biggest number of the input array and move it to the final array, and do that until the input array is empty, but it seems to stop at some point. There was a person with a <a href="http://stackoverflow.com/questions/29234706/what-is-wrong-with-my-sorting-function">similar problem</a> but the solution did not apply to my case either. This is the code:</p> <pre><code>array = [3, 6, 25, 4, 5, 24, 7, 15, 5, 2, 0, 8, 1] #just random numbers output = [] while(len(array) &gt; 0): maximum = 0 for x in array: maximum = max(maximum, x) output.append(maximum) tempArray = [] for x in array: temp = array.pop() if(temp &lt; maximum): tempArray.append(temp) array = tempArray print(output) </code></pre>
-2
2016-09-13T22:23:21Z
39,479,923
<p>The problem is here:</p> <pre><code>for x in array: temp = array.pop() </code></pre> <p>You're modifying the same list that you're iterating over. That's going to cause trouble.</p>
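<p>A minimal sketch of one way to fix it, keeping the question's algorithm but rebuilding the list each pass instead of popping from it while iterating (note that, like the original code, this collapses duplicate maxima into a single entry):</p>

```python
array = [3, 6, 25, 4, 5, 24, 7, 15, 5, 2, 0, 8, 1]  # just random numbers
output = []
while array:
    maximum = max(array)              # built-in max replaces the manual scan
    output.append(maximum)
    # Build a fresh list instead of popping from the list being iterated.
    array = [x for x in array if x < maximum]
print(output)  # [25, 24, 15, 8, 7, 6, 5, 4, 3, 2, 1, 0]
```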
4
2016-09-13T22:30:33Z
[ "python", "algorithm", "sorting" ]
What is wrong with my sorting algorithm?
39,479,850
<p>I am a beginner programmer and I've been trying to create my own sorting algorithm in Python, but I don't understand why it outputs only some of the numbers that were present in the input. I put debug prints everywhere to understand the problem but still got nothing. The code should find the biggest number of the input array and move it to the final array, and do that until the input array is empty, but it seems to stop at some point. There was a person with a <a href="http://stackoverflow.com/questions/29234706/what-is-wrong-with-my-sorting-function">similar problem</a> but the solution did not apply to my case either. This is the code:</p> <pre><code>array = [3, 6, 25, 4, 5, 24, 7, 15, 5, 2, 0, 8, 1] #just random numbers output = [] while(len(array) &gt; 0): maximum = 0 for x in array: maximum = max(maximum, x) output.append(maximum) tempArray = [] for x in array: temp = array.pop() if(temp &lt; maximum): tempArray.append(temp) array = tempArray print(output) </code></pre>
-2
2016-09-13T22:23:21Z
39,480,007
<p>To diagnose, put some debug prints in the loop, such as <code>print(output, array)</code> at the end of the outer loop, and maybe more in the inner loop. After seeing the problem (removing two things from <code>array</code> on each inner iteration), this works:</p> <pre><code>array = [3, 6, 25, 4, 5, 24, 7, 15, 5, 2, 0, 8, 1] #just random numbers output = [] while(array): maximum = 0 for x in array: maximum = max(maximum, x) output.append(maximum) tempArray = [] for x in array: if(x &lt; maximum): tempArray.append(x) array = tempArray print(output) </code></pre> <p>There are, of course, easier and better ways to delete the max from <code>array</code>, and to delete only one copy of the max instead of all of them.</p>
0
2016-09-13T22:37:16Z
[ "python", "algorithm", "sorting" ]
Elif statements not printing
39,479,885
<p>I'm doing a basic rock paper scissors code for school, but my elif statements aren't running.</p> <pre><code>def player1(x): while x != 'rock' and x != 'paper' and x != 'scissors': print("This is not a valid object selection") x = input("Player 1? ") def player2(x): while x != 'rock' and x != 'paper' and x != 'scissors': print("This is not a valid object selection") x = input("Player 2? ") def winner(): player1(input("Player 1? ")) player2(input("Player 2? ")) if player1 == 'rock' and player2 == 'rock': print('Tie') elif player1 == 'paper' and player2 == 'paper': print('Tie') elif player1 == 'rock' and player2 == 'paper': print('Player 2 wins') elif player1 == 'paper' and player2 == 'rock': print('Player 1 wins') elif player1 == 'rock' and player2 == 'scissors': print('Player 1 wins') elif player1 == 'scissors' and player2 == 'rock': print('Player 2 wins') elif player1 == 'paper' and player2 == 'scissors': print('Player 2 wins') elif player1 == 'scissors' and player2 == 'paper': print('Player 1 wins') elif player1 == 'scissors' and player2 == 'scissors': print('Tie') winner() </code></pre> <p>When I run this code, it asks for 'Player 1?' and won't accept anything other than rock, paper, or scissors. It then proceeds to do the same for player2. However, this is where the code ends, and it will not run my elif statements and print which player wins.</p> <p>Edit: Solved. Thanks for helping a beginner. I was completely forgetting to return the strings and assign them to variables.</p>
-1
2016-09-13T22:27:04Z
39,479,926
<p>Assigning to x inside <code>player1</code> isn't doing anything. As soon as the function returns, the value assigned to x is dropped. That means you are discarding your input! Then you are comparing the <em>function</em> <code>player1</code> to a <em>string</em> that might or might not match your input.</p> <p>Suggestion for debugging: Whenever you have a flow-of-control problem, print out the control variable. Here, if you print player1, you will see something surprising.</p>
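<p>A sketch of how the validation helper could hand its value back to the caller. The <code>read</code> parameter is a made-up hook (defaulting to the built-in <code>input</code>) so the function can be exercised without a console:</p>

```python
def get_choice(prompt, read=input):
    """Keep asking until a valid object is entered, then RETURN it."""
    choice = read(prompt)
    while choice not in ('rock', 'paper', 'scissors'):
        print("This is not a valid object selection")
        choice = read(prompt)
    return choice  # the crucial part: hand the validated string back
```

<p>The caller can then write <code>p1 = get_choice("Player 1? ")</code> and compare <code>p1 == 'rock'</code> as a string, instead of comparing a function object.</p>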
1
2016-09-13T22:30:55Z
[ "python" ]
Elif statements not printing
39,479,885
<p>I'm doing a basic rock paper scissors code for school, but my elif statements aren't running.</p> <pre><code>def player1(x): while x != 'rock' and x != 'paper' and x != 'scissors': print("This is not a valid object selection") x = input("Player 1? ") def player2(x): while x != 'rock' and x != 'paper' and x != 'scissors': print("This is not a valid object selection") x = input("Player 2? ") def winner(): player1(input("Player 1? ")) player2(input("Player 2? ")) if player1 == 'rock' and player2 == 'rock': print('Tie') elif player1 == 'paper' and player2 == 'paper': print('Tie') elif player1 == 'rock' and player2 == 'paper': print('Player 2 wins') elif player1 == 'paper' and player2 == 'rock': print('Player 1 wins') elif player1 == 'rock' and player2 == 'scissors': print('Player 1 wins') elif player1 == 'scissors' and player2 == 'rock': print('Player 2 wins') elif player1 == 'paper' and player2 == 'scissors': print('Player 2 wins') elif player1 == 'scissors' and player2 == 'paper': print('Player 1 wins') elif player1 == 'scissors' and player2 == 'scissors': print('Tie') winner() </code></pre> <p>When I run this code, it asks for 'Player 1?' and won't accept anything other than rock, paper, or scissors. It then proceeds to do the same for player2. However, this is where the code ends, and it will not run my elif statements and print which player wins.</p> <p>Edit: Solved. Thanks for helping a beginner. I was completely forgetting to return the strings and assign them to variables.</p>
-1
2016-09-13T22:27:04Z
39,479,933
<p>Your comparisons:</p> <pre><code>elif player1 == "rock" and player2 == "rock": # etc </code></pre> <p>will always fail, since both player1 and player2 are functions.</p> <p>Instead, you need to return from your functions and assign those to variables. Let's cut out the validation for a minute and reduce this a little.</p> <pre><code>def choose(prompt): return input(prompt) def winner(a, b): if a == 'rock': if b == 'rock': return None elif b == 'paper': return 2 elif b == 'scissors': return 1 elif a == 'paper': # etc def play_game(): p1_choice = choose("Player 1: ") p2_choice = choose("Player 2: ") return winner(p1_choice, p2_choice) </code></pre> <p>Note that a nicer-looking trick for these chains of elifs is to put them in a dictionary and index the dictionary instead.</p> <pre><code>RESULT_DICT = {"rock": {"rock": None, "paper": 2, "scissors": 1}, "paper": {"rock": 1, "paper": None, "scissors": 2}, "scissors": {"rock": 2, "paper": 1, "scissors": None}} def winner(a, b): return RESULT_DICT[a][b] </code></pre>
0
2016-09-13T22:31:32Z
[ "python" ]
Django Queryset: filtering by related item manager
39,479,907
<p>I have a model for pages that consist of a number of content blocks.</p> <pre><code>class Page(models.Model): ... class Block(models.Model): page = models.ForeignKey(Page) ... </code></pre> <p>The <code>Block</code> has a handful of other properties that determine whether it is considered to be "active" or not (a couple of booleans and a datetime field). I have a manager for the <code>Block</code> model so that I can get a list of active Blocks</p> <pre><code>Block.objects.active() Page.objects.first().block_set.active() </code></pre> <p>I want to be able to write a queryset that returns only <code>Page</code> objects that have active blocks. I would like to do this using the existing <code>Block</code> active manager, so that I am only defining what makes an "active" block once (DRY), i.e. something like:</p> <pre><code>Page.objects.annotate(count=Count('block__active')).filter(count__gt=0) </code></pre> <p>Obviously that does not work, since <code>active</code> is not a property of <code>Block</code>. Is there a way I can use the existing <code>Block</code> manager to achieve this?</p>
0
2016-09-13T22:28:37Z
39,480,161
<p>As far as I know, there is no way to achieve it in a single queryset while reusing the <code>active()</code> method available on your manager. But you may achieve the result by adding a <a href="http://stackoverflow.com/questions/2642613/what-is-related-name-used-for-in-django">related name</a> to the <code>Page</code> foreign key in the <code>Block</code> model as:</p> <pre><code>class Block(models.Model): page = models.ForeignKey(Page, related_name='related_block') ... </code></pre> <p>Now your code should be:</p> <pre><code>active_blocks = Block.objects.active() # Queryset with all active blocks # Queryset of Pages with Block in 'active_blocks' active_pages = Page.objects.filter(related_block__in=active_blocks) </code></pre> <p>Now on this QuerySet, you may perform <code>annotate</code> or whatever you desire which is allowed on <code>Page</code>'s QuerySet.</p>
0
2016-09-13T22:55:37Z
[ "python", "django", "django-queryset" ]
Django Queryset: filtering by related item manager
39,479,907
<p>I have a model for pages that consist of a number of content blocks.</p> <pre><code>class Page(models.Model): ... class Block(models.Model): page = models.ForeignKey(Page) ... </code></pre> <p>The <code>Block</code> has a handful of other properties that determine whether it is considered to be "active" or not (a couple of booleans and a datetime field). I have a manager for the <code>Block</code> model so that I can get a list of active Blocks</p> <pre><code>Block.objects.active() Page.objects.first().block_set.active() </code></pre> <p>I want to be able to write a queryset that returns only <code>Page</code> objects that have active blocks. I would like to do this using the existing <code>Block</code> active manager, so that I am only defining what makes an "active" block once (DRY), i.e. something like:</p> <pre><code>Page.objects.annotate(count=Count('block__active')).filter(count__gt=0) </code></pre> <p>Obviously that does not work, since <code>active</code> is not a property of <code>Block</code>. Is there a way I can use the existing <code>Block</code> manager to achieve this?</p>
0
2016-09-13T22:28:37Z
39,483,258
<p>Why not just add a boolean field on the model, something like <code>is_active</code>, and update it on <code>save()</code> or <code>set_active()</code>? Then you'll be able to query your Model by that field.</p>
0
2016-09-14T05:47:50Z
[ "python", "django", "django-queryset" ]
How do I subtract the previous row from the current row in a pandas dataframe and apply it to every row; without using a loop?
39,479,919
<p>I am using Python 3.5 and I am working with pandas. I have loaded stock data from Yahoo Finance and have saved the files to csv. My DataFrames load this data from the csv. This is a copy of the first ten rows of the csv file that I load into my DataFrame</p> <pre><code> Date Open High Low Close Volume Adj Close 1990-04-12 26.875000 26.875000 26.625 26.625 6100 250.576036 1990-04-16 26.500000 26.750000 26.375 26.750 500 251.752449 1990-04-17 26.750000 26.875000 26.750 26.875 2300 252.928863 1990-04-18 26.875000 26.875000 26.500 26.625 3500 250.576036 1990-04-19 26.500000 26.750000 26.500 26.750 700 251.752449 1990-04-20 26.750000 26.875000 26.750 26.875 2100 252.928863 1990-04-23 26.875000 26.875000 26.750 26.875 700 252.928863 1990-04-24 27.000000 27.000000 26.000 26.000 2400 244.693970 1990-04-25 25.250000 25.250000 24.875 25.125 9300 236.459076 1990-04-26 25.000000 25.250000 24.750 25.000 1200 235.282663 </code></pre> <p>I know that I can use iloc, loc and ix, but indexing this way only gives me specific rows and columns and does not perform the operation on every row. For example: row one of the data in the Open column has a value of 26.875 and the row below it has 26.50. The price dropped by 0.375. I want to be able to capture the percentage increase or decrease from the previous day, so to finish this example: 0.375 divided by 26.875 = a 1.4% decrease from one day to the next. I want to be able to run this calculation on every row so I know how much it has increased or decreased from the previous day. I have tried the indexing functions, but they are absolute, and I don't want to use a loop. Is there a way I can do this with ix, iloc, loc or another function?</p>
2
2016-09-13T22:30:16Z
39,480,011
<p>You can use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.pct_change.html" rel="nofollow">pct_change()</a> and/or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.diff.html" rel="nofollow">diff()</a> methods</p> <p>Demo:</p> <pre><code>In [138]: df.Close.pct_change() * 100 Out[138]: 0 NaN 1 0.469484 2 0.467290 3 -0.930233 4 0.469484 5 0.467290 6 0.000000 7 -3.255814 8 -3.365385 9 -0.497512 Name: Close, dtype: float64 In [139]: df.Close.diff() Out[139]: 0 NaN 1 0.125 2 0.125 3 -0.250 4 0.125 5 0.125 6 0.000 7 -0.875 8 -0.875 9 -0.125 Name: Close, dtype: float64 </code></pre>
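<p>For intuition, the arithmetic the question describes - each row's change divided by the previous row's value - can be sketched in plain Python (the numbers are the first few <code>Open</code> values from the question; <code>pct_change()</code> performs the same computation for a whole Series):</p>

```python
opens = [26.875, 26.5, 26.75, 26.875, 26.5]  # first few Open values

# Percent change from the previous row; the first row has no predecessor.
pct = [None] + [(curr - prev) / prev * 100
                for prev, curr in zip(opens, opens[1:])]

print(pct[1])  # -0.375 / 26.875 * 100, i.e. roughly a 1.4% drop
```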
2
2016-09-13T22:37:35Z
[ "python", "pandas", "indexing", "dataframe" ]
Optimize Or Pipes (|) in Regex
39,480,021
<p>I am running a multi-pipe Regex on text blocks that are about 3,000 characters long. I have 6 different patterns, and the matches are always in the same order relative to one another; to complicate things, I always want to prioritize the last one first</p> <pre><code>Pattern1|Pattern2|Pattern3|Pattern4|Pattern5|Pattern6 </code></pre> <p>Right now I am testing on a block of text that finds Pattern1. Standalone it takes 41 steps; in the pipes it takes over 30,000. I get that it might take more specific information for this specific problem, but I was wondering if there were some generic steps to take to make or-pipes more efficient. Is there an "order" that helps? Clearly this is not testing the first pattern and then quitting, since that would still be 41 steps, so I am wondering if I need to adhere to some basic or-pipe construction I am unaware of.</p>
1
2016-09-13T22:39:23Z
39,489,976
<p>The main point about optimizing alternation groups is <strong>the alternative branches should not match at the same location</strong>.</p> <p>Consider a string that has many similar substrings to those in your pattern above:</p> <pre><code>Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern Pattern2 </code></pre> <p>See the <a href="https://regex101.com/r/iU6nN2/1" rel="nofollow">regex demo</a>. The <code>Pattern1</code> is tried first, and it matches <code>Pattern</code> but since there is no digit after it, the alternative is discarded and the next branch is tried. <code>Pattern2</code> also goes up to the digit, but there is no digit. And so on. </p> <p>If you make your regex start with a common prefix and then use a group for the different endings - like <a href="https://regex101.com/r/iU6nN2/2" rel="nofollow"><code>Pattern(?:1|2|3|4|5|6)</code></a> - a lot of redundant backtracking is spared. </p>
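<p>The equivalence of the two forms is easy to check with Python's <code>re</code> module - both locate the same match; only the amount of discarded work per position differs. The pattern strings below mirror the schematic names from the question:</p>

```python
import re

text = "Pattern " * 40 + "Pattern2"

flat = re.compile(r'Pattern1|Pattern2|Pattern3|Pattern4|Pattern5|Pattern6')
grouped = re.compile(r'Pattern(?:1|2|3|4|5|6)')

# Both alternation styles find the same match in the same place; the
# grouped form matches the shared "Pattern" prefix once per position
# instead of once per branch.
assert flat.search(text).span() == grouped.search(text).span()
print(grouped.search(text).group())  # Pattern2
```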
0
2016-09-14T12:03:15Z
[ "python", "regex" ]
Python folder structure with children to JSON
39,480,031
<p>I have a script that turns any given folder structure into a JSON, JSTree compatible structure. However child folders are all grouped under one level of child. So folders within folders are marked as just one level under the root. How could I maintain the root-child-child relationship within JSON? </p> <pre><code>import os, sys, json def all_dirs_with_subdirs(path, subdirs): try: path = os.path.abspath(path) result = [] for root, dirs, files in os.walk(path): exclude = "Archive", "Test" dirs[:] = [d for d in dirs if d not in exclude] if all(subdir in dirs for subdir in subdirs): result.append(root) return result except WindowsError: pass def get_directory_listing(path): try: output = {} output["text"] = path.decode('latin1') output["type"] = "directory" output["children"] = all_dirs_with_subdirs("G:\TEST", ('Maps', 'Temp')) return output except WindowsError: pass with open(r'G:\JSONData.json', 'w+') as f: listing = get_directory_listing("G:\TEST") json.dump(listing, f) </code></pre>
0
2016-09-13T22:40:16Z
39,480,349
<p>You have only a one-level hierarchy because in <code>all_dirs_with_subdirs</code> you walk through the directory tree and append every valid directory to a flat list, <code>result</code>, which you then store under a single <code>"children"</code> key.</p> <p>What you want to do is to create a structure like</p> <pre><code>{ 'text': 'root_dir', 'type': 'directory', 'children': [ { 'text': 'subdir1 name', 'type': 'directory', 'children': [ { 'text': 'subsubdir1.1 name', 'type': 'directory', 'children': [ ... ] }, ... ] }, { 'text': 'subdir2 name', 'type': 'directory', 'children': [ ... ] }, ] } </code></pre> <p>You could do this quite elegantly with recursion</p> <pre><code>def is_valid_dir(path, subdirs): return all(os.path.isdir(os.path.join(path, subdir)) for subdir in subdirs) def all_dirs_with_subdirs(path, subdirs): children = [] for name in os.listdir(path): subpath = os.path.join(path, name) if name not in ("Archive", "Test") and os.path.isdir(subpath) and is_valid_dir(subpath, subdirs): children.append({ 'text': name, 'type': 'directory', 'children': all_dirs_with_subdirs(subpath, subdirs) }) return children </code></pre>
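<p>A self-contained sketch of the same recursive idea, exercised against a throwaway tree built with <code>tempfile</code> (the directory names and the simplified <code>list_tree</code> helper are made up for the demo; the subdirectory filtering is omitted for brevity):</p>

```python
import json
import os
import tempfile

def list_tree(path):
    """Describe every subdirectory of *path* as nested JSTree-style dicts."""
    children = []
    for name in sorted(os.listdir(path)):       # sorted for a stable order
        subpath = os.path.join(path, name)
        if os.path.isdir(subpath):
            children.append({
                'text': name,
                'type': 'directory',
                'children': list_tree(subpath),  # recurse into the subtree
            })
    return children

# Build a throwaway tree: root/A/Maps, root/A/Temp, root/B
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'A', 'Maps'))
os.makedirs(os.path.join(root, 'A', 'Temp'))
os.makedirs(os.path.join(root, 'B'))

tree = list_tree(root)
print(json.dumps(tree, indent=2))
```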
0
2016-09-13T23:19:44Z
[ "python", "json", "jstree" ]
Python folder structure with children to JSON
39,480,031
<p>I have a script that turns any given folder structure into a JSON, JSTree compatible structure. However child folders are all grouped under one level of child. So folders within folders are marked as just one level under the root. How could I maintain the root-child-child relationship within JSON? </p> <pre><code>import os, sys, json def all_dirs_with_subdirs(path, subdirs): try: path = os.path.abspath(path) result = [] for root, dirs, files in os.walk(path): exclude = "Archive", "Test" dirs[:] = [d for d in dirs if d not in exclude] if all(subdir in dirs for subdir in subdirs): result.append(root) return result except WindowsError: pass def get_directory_listing(path): try: output = {} output["text"] = path.decode('latin1') output["type"] = "directory" output["children"] = all_dirs_with_subdirs("G:\TEST", ('Maps', 'Temp')) return output except WindowsError: pass with open(r'G:\JSONData.json', 'w+') as f: listing = get_directory_listing("G:\TEST") json.dump(listing, f) </code></pre>
0
2016-09-13T22:40:16Z
39,480,395
<p>You can get the immediate children of the CWD with:</p> <pre><code>next(os.walk('.'))[1] </code></pre> <p>With that expression you could write a recursive traversal function like this:</p> <pre><code>def dir_to_json(dir): subdirs = next(os.walk(dir))[1] if not subdirs: return # Something in your base case, how are you representing this? return combine_as_json(dir_to_json(os.path.join(dir, d)) for d in subdirs) </code></pre> <p>Then you need to make a <code>combine_as_json()</code> function that aggregates the subdir results in your chosen encoding/representation.</p>
0
2016-09-13T23:26:31Z
[ "python", "json", "jstree" ]
Pandas python: Create function to merge two dataframes based on defined list of columns
39,480,038
<p>In the script that I am writing, I want to frequently repeat the same piece of code, where I create a "numerator" dataframe with one group by, and then a "denominator" dataframe with a different group by. I then merge the two together so that I have the numerator and denominator in one place. I am trying to create a function where all I have to pass to it is the list of fields I want included in the numerator and denominator.</p> <p>Here is the function:</p> <pre><code> def calcfractions(self, df, numlist, denomlist): print("test 1") numlist.append(denomlist) selectlist = numlist selectlist.append("TeamID") selectlist.append("PlayerID") print("test 2") numdf = df[selectlist].groupby(numlist).agg({"PlayerID": "count"}) denomdf = df[selectlist].groupby(denomlist).agg({"PlayerID": "count"}) print("test 3") mergeddf = pd.merge(numdf, denomdf, on=denomlist) print("test 4") return mergeddf </code></pre> <p>Here is the script I'm trying to use it in:</p> <pre><code> def team_pr(self, df1): numlist = ['PlayerLevel'] denomlist = ['TeamName', 'Year'] mergeddf = self.calcfractions(df1, numlist, denomlist) print(mergeddf.head(2)) </code></pre> <p>However, when I run this, I only get to printing "test 2" in def calcfractions, something fails after that point. I think it might have to do with trying to append denomlist to numlist. Any thoughts?</p> <p>EDIT: The script doesn't "fail", there is no error. It just ends.</p>
0
2016-09-13T22:40:58Z
39,480,095
<p>You are not capturing the return value from <code>calcfractions</code>. In <code>team_pr</code>, change to <code>merged_df = self.calcfractions(df1, numlist, denomlist)</code> and then <code>print(merged_df.head(2))</code> to see what you get.</p> <p>This is assuming these are methods of a class. If they are just functions, just do away with the whole <code>self</code> bit; it's syntax used for classes only, to pass the instance in as the first argument.</p>
0
2016-09-13T22:47:14Z
[ "python", "pandas" ]
Pandas python: Create function to merge two dataframes based on defined list of columns
39,480,038
<p>In the script that I am writing, I want to frequently repeat the same piece of code, where I create a "numerator" dataframe with one group by, and then a "denominator" dataframe with a different group by. I then merge the two together so that I have the numerator and denominator in one place. I am trying to create a function where all I have to pass to it is the list of fields I want included in the numerator and denominator.</p> <p>Here is the function:</p> <pre><code> def calcfractions(self, df, numlist, denomlist): print("test 1") numlist.append(denomlist) selectlist = numlist selectlist.append("TeamID") selectlist.append("PlayerID") print("test 2") numdf = df[selectlist].groupby(numlist).agg({"PlayerID": "count"}) denomdf = df[selectlist].groupby(denomlist).agg({"PlayerID": "count"}) print("test 3") mergeddf = pd.merge(numdf, denomdf, on=denomlist) print("test 4") return mergeddf </code></pre> <p>Here is the script I'm trying to use it in:</p> <pre><code> def team_pr(self, df1): numlist = ['PlayerLevel'] denomlist = ['TeamName', 'Year'] mergeddf = self.calcfractions(df1, numlist, denomlist) print(mergeddf.head(2)) </code></pre> <p>However, when I run this, I only get to printing "test 2" in def calcfractions, something fails after that point. I think it might have to do with trying to append denomlist to numlist. Any thoughts?</p> <p>EDIT: The script doesn't "fail", there is no error. It just ends.</p>
0
2016-09-13T22:40:58Z
39,482,056
<p>So, after concocting my own dataframe with bogus values and trying to work through this, I have found that I run into a <code>ValueError: setting an array element with a sequence</code>. This is due to the fact that you are appending a list to a list and trying to use that as a column index in your df:</p> <pre><code>numlist = ['PlayerLevel'] denomlist = ['TeamName', 'Year'] numlist.append(denomlist) # as you suspected this is problematic: print(numlist) ['PlayerLevel', ['TeamName', 'Year']] </code></pre> <p>Try this instead:</p> <pre><code>numlist += denomlist </code></pre> <p>Is this entire provided snippet wrapped up in some <code>try: except:</code> clause somewhere? In any case, if this doesn't solve your problem, please provide us with a small version of your dataframe.</p> <hr> <p><strong>Edit:</strong> From the <a href="https://docs.python.org/3/tutorial/errors.html#handling-exceptions" rel="nofollow">docs on exceptions</a>: "The last except clause may omit the exception name(s), to serve as a wildcard. Use this with extreme caution, since it is easy to mask a real programming error in this way!" </p> <p>Definitely look through the linked docs, but here is the gist of the immediate problem. It's considered poor practice to write try/except clauses like this:</p> <pre><code>try: # do stuff except: # do different/more stuff if original stuff fails </code></pre> <p>because the <code>except:</code> excepts <code>Exceptions</code> of <em>all</em> types. To parrot the quoted docs, <strong>this makes it seem like there is no error.</strong> Furthermore, as this entire question exemplifies, <strong>this makes it impossible to know what exactly is causing an error</strong> (if you even manage to detect one). 
In nearly all cases, you should have an expectation of what kind of error your code can throw, so your try/except's should look like:</p> <pre><code>try: # do stuff here except ValueError: # or whatever type of child of Exception() # do different/more stuff if original stuff fails </code></pre> <p>If you <strong>have</strong> to do a wildcard <code>except:</code> for some crazy reason, ideally refactor, so that such a thing isn't necessary, but at the very least, <code>print()</code> <em>some</em> kind of message indicating that the <code>try:</code> failed. </p> <p>Generally, to avoid this problem (as it relates to wildcard or even specific exceptions), <strong>do your best to ensure that the try/except clause wraps as little code as is necessary to accomplish your goals.</strong> </p>
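To make the difference concrete, here is a small runnable demonstration of `append` versus `+=` with the lists from this answer:

```python
numlist = ['PlayerLevel']
denomlist = ['TeamName', 'Year']

numlist.append(denomlist)   # nests the whole list as a single element
print(numlist)              # ['PlayerLevel', ['TeamName', 'Year']]

flat = ['PlayerLevel']
flat += denomlist           # concatenates element by element
print(flat)                 # ['PlayerLevel', 'TeamName', 'Year']
```

`flat.extend(denomlist)` behaves the same as `+=` here.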
1
2016-09-14T03:26:57Z
[ "python", "pandas" ]
button-press-event and Gtk.Frame
39,480,064
<p>In a window I have a <code>Gtk.Frame</code> with an image inside it. Now I need show a message when the <code>Gtk.Frame</code> is clicked. Here is my code.</p> <p>Glade XML:</p> <pre><code>&lt;?xml version="1.0" encoding="UTF-8"?&gt; &lt;!-- Generated with glade 3.18.3 --&gt; &lt;interface&gt; &lt;requires lib="gtk+" version="3.12"/&gt; &lt;object class="GtkWindow" id="window1"&gt; &lt;property name="can_focus"&gt;False&lt;/property&gt; &lt;child&gt; &lt;object class="GtkBox" id="box1"&gt; &lt;property name="visible"&gt;True&lt;/property&gt; &lt;property name="can_focus"&gt;False&lt;/property&gt; &lt;property name="orientation"&gt;vertical&lt;/property&gt; &lt;child&gt; &lt;object class="GtkFrame" id="frame1"&gt; &lt;property name="can_focus"&gt;False&lt;/property&gt; &lt;property name="tooltip_text" translatable="yes"&gt;Clique para imprimir&lt;/property&gt; &lt;property name="halign"&gt;start&lt;/property&gt; &lt;property name="valign"&gt;start&lt;/property&gt; &lt;property name="label_xalign"&gt;0&lt;/property&gt; &lt;property name="shadow_type"&gt;none&lt;/property&gt; &lt;signal name="button-press-event" handler="onFrameClicked" swapped="no"/&gt; &lt;child&gt; &lt;object class="GtkAlignment" id="alignment1"&gt; &lt;property name="visible"&gt;True&lt;/property&gt; &lt;property name="can_focus"&gt;False&lt;/property&gt; &lt;property name="left_padding"&gt;12&lt;/property&gt; &lt;signal name="button-press-event" handler="onFrameClicked" swapped="no"/&gt; &lt;child&gt; &lt;object class="GtkImage" id="image1"&gt; &lt;property name="visible"&gt;True&lt;/property&gt; &lt;property name="can_focus"&gt;False&lt;/property&gt; &lt;property name="stock"&gt;gtk-home&lt;/property&gt; &lt;signal name="button-press-event" handler="onFrameClicked" swapped="no"/&gt; &lt;/object&gt; &lt;/child&gt; &lt;/object&gt; &lt;/child&gt; &lt;child type="label"&gt; &lt;object class="GtkLabel" id="label1"&gt; &lt;property name="visible"&gt;True&lt;/property&gt; &lt;property 
name="can_focus"&gt;False&lt;/property&gt; &lt;property name="label" translatable="yes"&gt;frame1&lt;/property&gt; &lt;signal name="button-press-event" handler="onFrameClicked" swapped="no"/&gt; &lt;/object&gt; &lt;/child&gt; &lt;/object&gt; &lt;packing&gt; &lt;property name="expand"&gt;False&lt;/property&gt; &lt;property name="fill"&gt;True&lt;/property&gt; &lt;property name="position"&gt;0&lt;/property&gt; &lt;/packing&gt; &lt;/child&gt; &lt;child&gt; &lt;object class="GtkLabel" id="label2"&gt; &lt;property name="visible"&gt;True&lt;/property&gt; &lt;property name="can_focus"&gt;False&lt;/property&gt; &lt;property name="label" translatable="yes"&gt;label&lt;/property&gt; &lt;/object&gt; &lt;packing&gt; &lt;property name="expand"&gt;False&lt;/property&gt; &lt;property name="fill"&gt;True&lt;/property&gt; &lt;property name="position"&gt;1&lt;/property&gt; &lt;/packing&gt; &lt;/child&gt; &lt;child&gt; &lt;object class="GtkEntry" id="entry1"&gt; &lt;property name="visible"&gt;True&lt;/property&gt; &lt;property name="can_focus"&gt;True&lt;/property&gt; &lt;/object&gt; &lt;packing&gt; &lt;property name="expand"&gt;False&lt;/property&gt; &lt;property name="fill"&gt;True&lt;/property&gt; &lt;property name="position"&gt;2&lt;/property&gt; &lt;/packing&gt; &lt;/child&gt; &lt;/object&gt; &lt;/child&gt; &lt;/object&gt; &lt;/interface&gt; </code></pre> <p>Python code:</p> <pre><code>import gi gi.require_version('Gtk', '3.0') from gi.repository import Gtk, Gdk def on_frame_clicked(widget, event): print('Clicked!') builder = Gtk.Builder.new() builder.add_from_file('frame_event.glade') window = builder.get_object('window1') handlers = {'onFrameClicked': on_frame_clicked} builder.connect_signals(handlers) window.show_all() Gtk.main() </code></pre> <p>But, when I click on <code>Gtk.Frame</code> nothing happens. I also tried using the same handler for the <code>image</code> and the <code>label</code> in the <code>frame</code>. Still does not work. 
I need to capture the double-click event, but I know that the starting point is the <code>button-press-event</code>.</p> <p>What am I doing wrong?</p>
0
2016-09-13T22:43:59Z
39,505,344
<p>There are two requirements for button-press-event to work on a widget:</p> <ul> <li>the widget must have its own GdkWindow</li> <li>the widget must have GDK_BUTTON_PRESS_MASK in its event mask</li> </ul> <p>Widgets that are not normally used for input do not have a GdkWindow, so this won't work with GtkFrame. </p> <p>My first suggestion is to re-evaluate the need: users do not expect frames to react to clicking. Are you sure you want to surprise your users? Maybe another design would work better?</p> <p>If your situation really does require the frame to be clickable, you can solve this by adding a GtkEventBox as the parent of the frame. The GtkEventBox does have a GdkWindow, so you only need to use <code>widget.add_events()</code> to set the event mask and connect to the button-press-event of the GtkEventBox instead of the frame. You also probably want to set the EventBox invisible with <code>event_box.set_visible_window(False)</code>, but please read the documentation on that function for details.</p>
2
2016-09-15T07:30:37Z
[ "python", "gtk" ]
'WSGIRequest' object has no attribute 'session' while upgrading from django 1.3 to 1.9
39,480,179
<p>Similar to this question <a href="http://stackoverflow.com/questions/11783404/wsgirequest-object-has-no-attribute-session">&#39;WSGIRequest&#39; object has no attribute &#39;session&#39;</a></p> <p>But my MIDDLEWARE classes are in the correct order.</p> <pre class="lang-py prettyprint-override"><code>INSTALLED_APPS = [ 'django.contrib.sessions', 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.messages', 'django.contrib.staticfiles', 'membership', 'treebeard', 'haystack', 'reversion', ] MIDDLEWARE = [ 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.security.SecurityMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] </code></pre> <p>I am redirecting to login</p> <pre class="lang-py prettyprint-override"><code>url(r'^$', RedirectView.as_view(url='login/')), url(r'^login/$', 'membership.views.loginView', name='login'), </code></pre> <p>and then</p> <pre class="lang-py prettyprint-override"><code>def loginView(request): a = request.session </code></pre> <p>Throws the error</p>
1
2016-09-13T22:57:05Z
39,480,337
<p><code>MIDDLEWARE</code> is a new setting in 1.10 that will replace the old <code>MIDDLEWARE_CLASSES</code>.</p> <p>Since you're currently on 1.9, Django doesn't recognize the <code>MIDDLEWARE</code> setting. You should use the <code>MIDDLEWARE_CLASSES</code> setting instead:</p> <pre><code>MIDDLEWARE_CLASSES = [ 'django.contrib.sessions.middleware.SessionMiddleware', 'django.middleware.security.SecurityMiddleware', 'django.middleware.common.CommonMiddleware', 'django.middleware.csrf.CsrfViewMiddleware', 'django.contrib.auth.middleware.AuthenticationMiddleware', 'django.contrib.messages.middleware.MessageMiddleware', 'django.middleware.clickjacking.XFrameOptionsMiddleware', ] </code></pre>
4
2016-09-13T23:18:05Z
[ "python", "django", "django-middleware" ]
Sony QX1 error: "Not available now"
39,480,188
<p>I have recently worked on a web app with the Sony QX1. Basically, the web app should be able to change the mode of the camera, start/stop shooting, take pictures, play back, etc. For the server side, I am using Flask (a Python framework).</p> <p>Everything works fine (I can change the exposure mode, or get the exposure mode) until I switch to shooting mode and start shooting a movie. It keeps giving me the error:</p> <pre><code>{u'id': 1, u'error': [1, u'Not Available Now']} </code></pre> <p><strong>I am wondering if other functions can't be used while the camera is in shooting mode.</strong></p> <p>This has confused me for a couple of weeks, and I can't find any answer online. </p>
0
2016-09-13T22:57:48Z
39,648,793
<p>Yes, there are two camera modes: Contents Transfer and Remote Shooting. This is set using the setCameraFunction command. To change camera settings or take pictures you need to be in Remote Shooting mode. To transfer files and delete files you need to be in Contents Transfer mode.</p>
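The Camera Remote API takes commands like this as JSON-RPC calls over HTTP. The sketch below only builds the request body with the standard library; the endpoint path (`/sony/camera`) and the mode string are assumptions based on the published API docs, so verify them against your camera's documentation before use:

```python
import json

def camera_payload(method, *params):
    # Assumed JSON-RPC body shape for the Sony Camera Remote API
    return json.dumps({"method": method, "params": list(params),
                       "id": 1, "version": "1.0"})

body = camera_payload("setCameraFunction", "Remote Shooting")
print(body)
# POST this body to http://<camera-ip>:<port>/sony/camera with your HTTP client
```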
0
2016-09-22T21:00:31Z
[ "python", "flask", "sony", "sony-camera-api" ]
Spark HiveContext: Tables with multiple files on HDFS
39,480,213
<p>I have a Hive table X which has multiple files on HDFS. The table X location on HDFS is /data/hive/X. Files:</p> <pre><code>/data/hive/X/f1 /data/hive/X/f2 /data/hive/X/f3 ... </code></pre> <p>Now, I run the below commands:</p> <pre><code>df=hiveContext.sql("SELECT count(*) from X") df.show() </code></pre> <p>What happens internally? Is each file considered a separate partition, processed by a separate node, with the results then collated?</p> <p>If yes, is there a way to instruct Spark to load all the files into 1 partition and then process the data?</p> <p>Thanks in advance.</p>
0
2016-09-13T23:00:21Z
39,483,376
<p>Spark will contact the Hive metastore to find out (a) the location of the data and (b) how to read the data. At a low level, Spark will compute input splits based on the input format Hive used to store the data. Once the splits are decided, Spark will read the data one split per partition. In Spark, one physical node can run one or more executors, and each executor will have one or more partitions. Once the data is read into memory, Spark will run the count as (a) local counts in the map stage and (b) a global count after a shuffle. The result is then returned to the driver. (As for forcing a single partition: <code>df.coalesce(1)</code> or <code>df.repartition(1)</code> will do that, at the cost of all parallelism.)</p>
1
2016-09-14T05:55:30Z
[ "python", "apache-spark", "dataframe", "hdfs" ]
Django: How to get latest value from db
39,480,271
<p>So I want to get the latest result for a list of values from the database.</p> <p>Example:</p> <pre><code>Task col1 col2 1 1 2 12 3 32 4 1 5 24 1 25 2 62 3 7 2 81 1 9 -&gt; last occurrence for '1' in 'col1' 4 10 -&gt; last occurrence for '4' in 'col1' 3 121 5 12 -&gt; last occurrence for '5' in 'col1' </code></pre> <p>There's a list 'z' and I want to write the query in such a way that I get the latest result from the table 'Task' for each value in z that appears in col1. Assuming 'z' is:</p> <pre><code>z = [1, 4, 5] </code></pre> <p>I want the end result as:</p> <pre><code>col1 col2 1 9 4 10 5 12 </code></pre> <p>There are 2 solutions that I came up with. Solution1 is as follows:</p> <pre><code>all_results = Task.objects.filter(col1__in=z) results = dict() for result in all_results: results[result.col1] = result </code></pre> <p>Solution2 is as follows:</p> <pre><code>results = dict() for x in z: results[x] = Task.objects.filter(col1=x).reverse()[0] </code></pre> <p>But I'm wondering if there's a way to combine both solutions together such that I only have to make one database call and get results for each distinct 'col1'. Reason: My database is very big and currently I've implemented Solution2, which is making the entire process very slow. Solution1 is not feasible since there are multiple entries and the 'results' dictionary is going to be very big and hence it'll be very time consuming.</p>
0
2016-09-13T23:10:04Z
39,480,469
<p>You may achieve this result from within the QuerySet. Your ORM query will be:</p> <pre><code>from django.db.models import Max Task.objects.filter(col1__in=z).values('col1').annotate(latest_record=Max('col2')) # where z = [1, 4, 5] </code></pre>
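One caveat worth noting: `Max('col2')` picks the largest `col2` per group, which equals the "latest" row only when `col2` grows over time. The question's expected output is "last row wins", which is what the OP's Solution1 computes in a single pass; that idea can be sketched in plain Python against the sample table from the question:

```python
# (col1, col2) rows copied from the question's sample table
rows = [(1, 1), (2, 12), (3, 32), (4, 1), (5, 24), (1, 25), (2, 62),
        (3, 7), (2, 81), (1, 9), (4, 10), (3, 121), (5, 12)]
z = {1, 4, 5}

latest = {}
for col1, col2 in rows:
    if col1 in z:
        latest[col1] = col2  # later rows overwrite earlier ones

print(sorted(latest.items()))  # [(1, 9), (4, 10), (5, 12)]
```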
0
2016-09-13T23:36:41Z
[ "python", "django", "django-models" ]
Using integers to access a key, value pairs in a dictionary?
39,480,299
<p>Question: how can I use integers as keys for a dictionary?</p> <p>Explanation: I have a dictionary</p> <pre><code>example = {} </code></pre> <p>I am populating it with key, value pairs of</p> <pre><code>A: something B: something C: something 1: something 2: something 3: something 4: something 5: something 6: something </code></pre> <p>Now I need to iterate over the 1 - 6 entries of this dictionary</p> <pre><code>for i in range(1,6): if self.compare(image1, example[i])&gt;0.8: return True </code></pre> <p>Now, as the keys in the original dictionary are strings, and here 'i' is an integer, I am getting the error:</p> <pre><code>if self.compare(image1, example[i])&gt;0.8: KeyError: 0 </code></pre> <p>In the console window, if I try to access the dictionary key with <code>example["1"]</code>, it shows me its content, but when I try to show it as <code>example[1]</code>, it shows me:</p> <pre class="lang-none prettyprint-override"><code>print example[1] Traceback (most recent call last): File "pydevd_comm.py", line 1080, in do_it result = pydevd_vars.evaluate_expression(self.thread_id, self.frame_id, self.expression, self.doExec) File "pydevd_vars.py", line 352, in evaluate_expression Exec(expression, updated_globals, frame.f_locals) File "pydevd_exec.py", line 3, in Exec exec exp in global_vars, local_vars File "&lt;string&gt;", line 1, in &lt;module&gt; KeyError: 1 </code></pre>
0
2016-09-13T23:12:58Z
39,480,321
<p>Convert the number to a string before trying to access the dictionary:</p> <pre><code>for i in range(1,6): if self.compare(image1, example[str(i)]) &gt; 0.8: return True </code></pre> <p>Or create the dictionary using integer keys instead of strings for the keys that are numbers.</p>
2
2016-09-13T23:15:54Z
[ "python", "loops", "dictionary", "key", "key-value" ]
Using integers to access a key, value pairs in a dictionary?
39,480,299
<p>Question: how can I use integers as keys for a dictionary?</p> <p>Explanation: I have a dictionary</p> <pre><code>example = {} </code></pre> <p>I am populating it with key, value pairs of</p> <pre><code>A: something B: something C: something 1: something 2: something 3: something 4: something 5: something 6: something </code></pre> <p>Now I need to iterate over the 1 - 6 entries of this dictionary</p> <pre><code>for i in range(1,6): if self.compare(image1, example[i])&gt;0.8: return True </code></pre> <p>Now, as the keys in the original dictionary are strings, and here 'i' is an integer, I am getting the error:</p> <pre><code>if self.compare(image1, example[i])&gt;0.8: KeyError: 0 </code></pre> <p>In the console window, if I try to access the dictionary key with <code>example["1"]</code>, it shows me its content, but when I try to show it as <code>example[1]</code>, it shows me:</p> <pre class="lang-none prettyprint-override"><code>print example[1] Traceback (most recent call last): File "pydevd_comm.py", line 1080, in do_it result = pydevd_vars.evaluate_expression(self.thread_id, self.frame_id, self.expression, self.doExec) File "pydevd_vars.py", line 352, in evaluate_expression Exec(expression, updated_globals, frame.f_locals) File "pydevd_exec.py", line 3, in Exec exec exp in global_vars, local_vars File "&lt;string&gt;", line 1, in &lt;module&gt; KeyError: 1 </code></pre>
0
2016-09-13T23:12:58Z
39,480,540
<p>You just need to convert the integer to a string to match your dictionary key format.</p> <p>If you don't mind returning False if none of the comparisons exceed your threshold of 0.8, you could do this:</p> <pre><code>return any(self.compare(image1, example[str(i)]) &gt; 0.8 for i in range(1, 6)) </code></pre>
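A self-contained sketch of this pattern, using a stand-in for `self.compare` (an assumption, since the real method isn't shown). Note also that `range(1, 6)` only yields 1 through 5, so covering all six numeric keys takes `range(1, 7)`:

```python
example = {str(i): i / 10 for i in range(1, 7)}  # string keys "1" .. "6"

def compare(value):
    # stand-in for self.compare(image1, ...); just passes the value through
    return value

result = any(compare(example[str(i)]) > 0.5 for i in range(1, 7))
print(result)  # True, because example["6"] is 0.6
```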
1
2016-09-13T23:46:18Z
[ "python", "loops", "dictionary", "key", "key-value" ]
tensorflow InvalidArgumentError: You must feed a value for placeholder tensor with dtype float
39,480,314
<p>I am new to tensorflow and want to train a logistic model for classification.</p> <pre><code># Set model weights W = tf.Variable(tf.zeros([30, 16])) b = tf.Variable(tf.zeros([16])) train_X, train_Y, X, Y = input('train.csv') #construct model pred = model(X, W, b) # Minimize error using cross entropy cost = tf.reduce_mean(-tf.reduce_sum(Y*tf.log(pred), reduction_indices=1)) # Gradient Descent learning_rate = 0.1 #optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost) optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) # Initializing the variables init = tf.initialize_all_variables() get_ipython().magic(u'matplotlib inline') import collections import matplotlib.pyplot as plt training_epochs = 200 batch_size = 300 train_X, train_Y, X, Y = input('train.csv') acc = [] x = tf.placeholder(tf.float32, [None, 30]) y = tf.placeholder(tf.float32, [None, 16]) with tf.Session() as sess: sess.run(init) # Training cycle for epoch in range(training_epochs): avg_cost = 0.0 #print(type(y_train[0][0])) print(type(train_X)) print(type(train_X[0][0])) print X _, c = sess.run([optimizer, cost], feed_dict = {x: train_X, y: train_Y}) </code></pre> <p>The feed_dict method does not work, with the complaint: <strong>InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_54' with dtype float [[Node: Placeholder_54 = Placeholderdtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]] Caused by op u'Placeholder_54':</strong></p> <p>I checked the data type for the training feature data X:</p> <pre><code> train_X type: &lt;type 'numpy.ndarray'&gt; train_X[0][0]: &lt;type 'numpy.float32'&gt; train_X size: (300, 30) place_holder info : Tensor("Placeholder_56:0", shape=(?, 30), dtype=float32) </code></pre> <p>I do not know why it complains. Hope somebody can help, thanks</p>
0
2016-09-13T23:14:48Z
39,480,394
<p>Show the code for model() - I bet it defines two placeholders: X is placeholder_56, so where is placeholder_54 coming from?</p> <p>Then pass the model's X and Y into the feed_dict, delete your global x and y placeholders, and all will work :)</p>
0
2016-09-13T23:26:22Z
[ "python", "types", "tensorflow" ]
tensorflow InvalidArgumentError: You must feed a value for placeholder tensor with dtype float
39,480,314
<p>I am new to tensorflow and want to train a logistic model for classification.</p> <pre><code># Set model weights W = tf.Variable(tf.zeros([30, 16])) b = tf.Variable(tf.zeros([16])) train_X, train_Y, X, Y = input('train.csv') #construct model pred = model(X, W, b) # Minimize error using cross entropy cost = tf.reduce_mean(-tf.reduce_sum(Y*tf.log(pred), reduction_indices=1)) # Gradient Descent learning_rate = 0.1 #optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost) optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) # Initializing the variables init = tf.initialize_all_variables() get_ipython().magic(u'matplotlib inline') import collections import matplotlib.pyplot as plt training_epochs = 200 batch_size = 300 train_X, train_Y, X, Y = input('train.csv') acc = [] x = tf.placeholder(tf.float32, [None, 30]) y = tf.placeholder(tf.float32, [None, 16]) with tf.Session() as sess: sess.run(init) # Training cycle for epoch in range(training_epochs): avg_cost = 0.0 #print(type(y_train[0][0])) print(type(train_X)) print(type(train_X[0][0])) print X _, c = sess.run([optimizer, cost], feed_dict = {x: train_X, y: train_Y}) </code></pre> <p>The feed_dict method does not work, with the complaint: <strong>InvalidArgumentError: You must feed a value for placeholder tensor 'Placeholder_54' with dtype float [[Node: Placeholder_54 = Placeholderdtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]] Caused by op u'Placeholder_54':</strong></p> <p>I checked the data type for the training feature data X:</p> <pre><code> train_X type: &lt;type 'numpy.ndarray'&gt; train_X[0][0]: &lt;type 'numpy.float32'&gt; train_X size: (300, 30) place_holder info : Tensor("Placeholder_56:0", shape=(?, 30), dtype=float32) </code></pre> <p>I do not know why it complains. Hope somebody can help, thanks</p>
0
2016-09-13T23:14:48Z
39,480,419
<p>From your error message, the name of the missing placeholder&mdash;<code>'Placeholder_54'</code>&mdash;is suspicious, because that suggests that at least 54 placeholders have been created in the current interpreter session. </p> <p>There aren't enough details to say for sure, but I have some suspicions. Are you running the same code multiple times in the same interpreter session (e.g. using IPython/Jupyter or the Python shell)? Assuming that is the case, I suspect that your <code>cost</code> tensor depends on placeholders that were created in a previous execution of that code.</p> <p>Indeed, your code creates two <a href="https://www.tensorflow.org/versions/r0.10/api_docs/python/io_ops.html#placeholder" rel="nofollow"><code>tf.placeholder()</code></a> tensors <code>x</code> and <code>y</code> <strong>after</strong> building the rest of the model, so it seems likely that either:</p> <ol> <li><p>The missing placeholder was created in a previous execution of this code, or</p></li> <li><p>The <code>input()</code> function calls <code>tf.placeholder()</code> internally and it is these placeholders (perhaps the tensors <code>X</code> and <code>Y</code>?) that you should be feeding.</p></li> </ol>
0
2016-09-13T23:30:03Z
[ "python", "types", "tensorflow" ]
Chrome webdriver in selenium wont connect to proxy
39,480,393
<p>I've bound port 3003 on my local machine to a remote server</p> <p><code>ssh user@remoteserver -D 3003</code></p> <p>And in my python script</p> <pre><code>from selenium import webdriver chrome_options = webdriver.ChromeOptions() chrome_options.add_argument("--proxy-server=http://127.0.0.1:3003") driver = webdriver.Chrome(chrome_options=chrome_options) driver.get('http://google.com') </code></pre> <p>When I run the script, I get no error, chrome launches and I fail to load google.com. Shouldn't this script be making requests through 127.0.0.1:3003? </p> <p>The ssh tunnel is good. If I manually set a proxy in my browser to 127.0.0.1:3003, requests go through my remote server. Where am I going wrong in this script?</p>
0
2016-09-13T23:26:09Z
39,480,432
<p>Per @Shawn Spitz's comment on <a href="http://stackoverflow.com/questions/27730306/setting-a-proxy-for-chrome-driver-in-selenium">Setting a proxy for Chrome Driver in Selenium</a>, one needs to use the socks5:// scheme for this because it's a SOCKS proxy. I had http, so the fix is chrome_options.add_argument("--proxy-server=socks5://127.0.0.1:3003")</p>
0
2016-09-13T23:31:50Z
[ "python", "selenium", "ssh-tunnel", "chrome-web-driver" ]
Lazy loading on column_property in SQLAlchemy
39,480,514
<p>Say I have the following models:</p> <pre><code>class Department(Base): __tablename__ = 'departments' id = Column(Integer, primary_key=True) class Employee(Base): __tablename__ = 'employees' id = Column(Integer, primary_key=True) department_id = Column(None, ForeignKey(Department.id), nullable=False) department = relationship(Department, backref=backref('employees')) </code></pre> <p>Sometimes, when I query departments, I would also like to fetch the number of employees they have. I can achieve this with a <code>column_property</code>, like so:</p> <pre><code>Department.employee_count = column_property( select([func.count(Employee.id)]) .where(Employee.department_id == Department.id) .correlate_except(Employee)) Department.query.get(1).employee_count # Works </code></pre> <p>But then the count is <em>always</em> fetched via a subquery, even when I don't need it. Apparently I can't ask SQLAlchemy not to load this at query time, either:</p> <pre><code>Department.query.options(noload(Department.employee_count)).all() # Exception: can't locate strategy for &lt;class 'sqlalchemy.orm.properties.ColumnProperty'&gt; (('lazy', 'noload'),) </code></pre> <p>I've also tried implementing this with a hybrid property instead of a column property:</p> <pre><code>class Department(Base): #... @hybrid_property def employee_count(self): return len(self.employees) @employee_count.expression def employee_count(cls): return ( select([func.count(Employee.id)]) .where(Employee.department_id == cls.id) .correlate_except(Employee)) </code></pre> <p>With no luck:</p> <pre><code>Department.query.options(joinedload('employee_count')).all() # AttributeError: 'Select' object has no attribute 'property' </code></pre> <p>I know I can just query the count as a separate entity in the query, but I'd really prefer to have it as an attribute on the model. 
Is this even possible in SQLAlchemy?</p> <p><strong>Edit:</strong> To clarify, I want to avoid the N+1 problem and have the employee count get loaded in the same query as the departments, not in a separate query for each department.</p>
4
2016-09-13T23:42:07Z
39,552,066
<p>The loading strategies that you tried are for relationships. The loading of a <code>column_property</code> is altered in the same way as normal columns, see <a href="http://docs.sqlalchemy.org/en/latest/orm/loading_columns.html#deferred-column-loading" rel="nofollow">Deferred Column Loading</a>.</p> <p>You can defer the loading of <code>employee_count</code> by default by passing <code>deferred=True</code> to <code>column_property</code>. When a column is deferred, a select statement is emitted when the property is accessed.</p> <p><code>defer</code> and <code>undefer</code> from <code>sqlalchemy.orm</code> allow this to be changed when constructing a query:</p> <pre><code>from sqlalchemy.orm import undefer Department.query.options(undefer('employee_count')).all() </code></pre>
3
2016-09-17T21:49:36Z
[ "python", "sqlalchemy" ]
Redefining print function not working within a function
39,480,619
<p>I am writing a python script in python 3.x in which I need to redefine the <code>print</code> function. When I do it in my interpreter, it works fine. But when I create a function using the same code, it gives an error.</p> <p>Here is my code:</p> <pre><code>list = ["print('Wow!')\n", "print('Great!')\n", "print('Epic!')\n"] old_print = print def print(s): global catstr catstr += s catstr = "" for item in list: s = item exec(s) print = old_print catstr &gt;&gt; 'Wow!Great!Epic!' </code></pre> <p>As you can see I have got my desired result: <code>'Wow!Great!Epic!'</code></p> <p>Now I make a function using the same code:</p> <pre><code>def execute(list): old_print = print def print(s): global catstr catstr += s catstr = "" for item in list: s = item exec(s) print = old_print return catstr </code></pre> <p>Now when I run this function using the following code:</p> <pre><code>list = ["print('Wow!')\n", "print('Great!')\n", "print('Epic!')\n"] execute(list) </code></pre> <p>I get the following error:</p> <pre><code>old_print = print UnboundLocalError: local variable 'print' referenced before assignment </code></pre> <p>Does anyone know why this is not working within a function?<br> Any suggestions on how to fix it will be highly appreciated.</p>
1
2016-09-13T23:57:41Z
39,480,691
<p>The interpreter doesn't recognize <strong>print</strong> as the built-in function unless you specifically tell it so. Instead of declaring it global, just remove it (thanks to Padraic Cunningham): the local <strong>print</strong> will take on your desired definition, and the global is never affected.</p> <p>You also have a forward-reference problem with <strong>catstr</strong>. The below code elicits the desired output.</p> <pre><code>catstr = "" def execute(list): def print(s): global catstr catstr += s for item in list: s = item exec(s) return catstr list = ["print('Wow!')\n", "print('Great!')\n", "print('Epic!')\n"] print (execute(list)) </code></pre>
2
2016-09-14T00:10:25Z
[ "python", "function", "printing", "exec" ]
Redefining print function not working within a function
39,480,619
<p>I am writing a python script in python 3.x in which I need to redefine the <code>print</code> function. When I do it in my interpreter, it works fine. But when I create a function using the same code, it gives an error.</p> <p>Here is my code:</p> <pre><code>list = ["print('Wow!')\n", "print('Great!')\n", "print('Epic!')\n"] old_print = print def print(s): global catstr catstr += s catstr = "" for item in list: s = item exec(s) print = old_print catstr &gt;&gt; 'Wow!Great!Epic!' </code></pre> <p>As you can see I have got my desired result: <code>'Wow!Great!Epic!'</code></p> <p>Now I make a function using the same code:</p> <pre><code>def execute(list): old_print = print def print(s): global catstr catstr += s catstr = "" for item in list: s = item exec(s) print = old_print return catstr </code></pre> <p>Now when I run this function using the following code:</p> <pre><code>list = ["print('Wow!')\n", "print('Great!')\n", "print('Epic!')\n"] execute(list) </code></pre> <p>I get the following error:</p> <pre><code>old_print = print UnboundLocalError: local variable 'print' referenced before assignment </code></pre> <p>Does anyone know why this is not working within a function?<br> Any suggestions on how to fix it will be highly appreciated.</p>
1
2016-09-13T23:57:41Z
39,480,713
<p>All you need is <em>nonlocal</em> and to forget all the other variables you have created bar <code>catstr</code>:</p> <pre><code>def execute(lst): def print(s): nonlocal catstr catstr += s catstr = "" for item in lst: s = item exec(s) return catstr </code></pre> <p>That gives you:</p> <pre><code>In [1]: paste def execute(lst): def print(s): nonlocal catstr catstr += s catstr = "" for item in lst: s = item exec(s) return catstr ## -- End pasted text -- In [2]: lst = ["print('Wow!')\n", "print('Great!')\n", "print('Epic!')\n"] In [3]: execute(lst) Out[3]: 'Wow!Great!Epic!' </code></pre> <p>Anything that happens in the function is <em>local to the function</em> so you don't need to worry about resetting anything. If you did happen to want to set a reference to print you could use <code>old_print = __builtins__.print</code>.</p> <p>If you want to have your function print without needing a print call use <code>__builtins__.print</code> to do the printing:</p> <pre><code>def execute(lst): catstr = "" def print(s): nonlocal catstr catstr += s for s in lst: exec(s) __builtins__.print(catstr) </code></pre>
2
2016-09-14T00:14:19Z
[ "python", "function", "printing", "exec" ]
Redefining print function not working within a function
39,480,619
<p>I am writing a python script in python 3.x in which I need to redefine the <code>print</code> function. When I do it in my interpreter, it works fine. But when I create a function using the same code, it gives an error.</p> <p>Here is my code:</p> <pre><code>list = ["print('Wow!')\n", "print('Great!')\n", "print('Epic!')\n"] old_print = print def print(s): global catstr catstr += s catstr = "" for item in list: s = item exec(s) print = old_print catstr &gt;&gt; 'Wow!Great!Epic!' </code></pre> <p>As you can see I have got my desired result: <code>'Wow!Great!Epic!'</code></p> <p>Now I make a function using the same code:</p> <pre><code>def execute(list): old_print = print def print(s): global catstr catstr += s catstr = "" for item in list: s = item exec(s) print = old_print return catstr </code></pre> <p>Now when I run this function using the following code:</p> <pre><code>list = ["print('Wow!')\n", "print('Great!')\n", "print('Epic!')\n"] execute(list) </code></pre> <p>I get the following error:</p> <pre><code>old_print = print UnboundLocalError: local variable 'print' referenced before assignment </code></pre> <p>Does anyone know why this is not working within a function?<br> Any suggestions on how to fix it will be highly appreciated.</p>
1
2016-09-13T23:57:41Z
39,480,862
<p>Your issue has been already addressed by Prune and Padraic Cunningham answers, here's another alternative way to achieve (i guess) what you want:</p> <pre><code>import io from contextlib import redirect_stdout g_list = ["print('Wow!')\n", "print('Great!')\n", "print('Epic!')\n"] def execute(lst): with io.StringIO() as buf, redirect_stdout(buf): [exec(item) for item in lst] return buf.getvalue() def execute_modified(lst): result = [] for item in lst: with io.StringIO() as buf, redirect_stdout(buf): exec(item) result.append(buf.getvalue()[:-1]) return "".join(result) print(execute(g_list)) print('-' * 80) print(execute_modified(g_list)) </code></pre> <p>Output:</p> <pre><code>Wow! Great! Epic! -------------------------------------------------------------------------------- Wow!Great!Epic! </code></pre>
1
2016-09-14T00:33:42Z
[ "python", "function", "printing", "exec" ]
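The `redirect_stdout` approach in the answer above can be stripped down to an even smaller sketch that concatenates the captured lines directly, by removing the newline that each exec'd `print` appends:

```python
import io
from contextlib import redirect_stdout

def execute(lst):
    buf = io.StringIO()
    with redirect_stdout(buf):       # every print() inside the block writes to buf
        for statement in lst:
            exec(statement)
    return buf.getvalue().replace("\n", "")   # drop the newline print() adds per call

out = execute(["print('Wow!')\n", "print('Great!')\n", "print('Epic!')\n"])
print(out)  # -> Wow!Great!Epic!
```

This variant never shadows the builtin `print`, so there is nothing to restore afterwards.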
How do I return a list of tuples with each unique string and its count given a list of strings?
39,480,706
<p>Not sure where to start... item() gives a dictionary and I don't want that.</p>

<p>I would say I need to loop through the list....</p>

<p>Please someone give me some hints so I can get started!</p>

<p>EDIT:</p>

<pre><code>counts_of_names(names)
counts_of_names(['John', 'John', 'Catherine', 'John', 'Christopher', 'Catherine'])
</code></pre>

<p>output:</p>

<pre><code>[('Catherine', 2), ('Christopher', 1), ('John', 3)]
</code></pre>
0
2016-09-14T00:13:16Z
39,480,745
<p>You may use <a href="https://docs.python.org/2/library/collections.html#collections.Counter" rel="nofollow"><code>collections.Counter()</code></a> to achieve this. Example:</p>

<pre><code>&gt;&gt;&gt; x = [1,2,3,4,1,1,2,3]
&gt;&gt;&gt; my_list = Counter(x).items()
&gt;&gt;&gt; my_list
[(1, 3), (2, 2), (3, 2), (4, 1)]

# In order to sort the list based on the value of the tuple at index `1` and then index `0`
&gt;&gt;&gt; sorted(my_list, key=lambda x: (x[1], x[0]))
[(4, 1), (2, 2), (3, 2), (1, 3)]
</code></pre>
0
2016-09-14T00:18:37Z
[ "python", "python-3.x" ]
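Putting the `Counter` answer together with the exact output format the question asks for (tuples in alphabetical order), a small sketch — `counts_of_names` is the question's own function name:

```python
from collections import Counter

def counts_of_names(names):
    # Counter tallies each unique string; sorting the items gives the
    # alphabetical-by-name ordering shown in the question's expected output
    return sorted(Counter(names).items())

names = ['John', 'John', 'Catherine', 'John', 'Christopher', 'Catherine']
print(counts_of_names(names))  # -> [('Catherine', 2), ('Christopher', 1), ('John', 3)]
```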
How do I return a list of tuples with each unique string and its count given a list of strings?
39,480,706
<p>Not sure where to start... item() gives a dictionary and I don't want that.</p>

<p>I would say I need to loop through the list....</p>

<p>Please someone give me some hints so I can get started!</p>

<p>EDIT:</p>

<pre><code>counts_of_names(names)
counts_of_names(['John', 'John', 'Catherine', 'John', 'Christopher', 'Catherine'])
</code></pre>

<p>output:</p>

<pre><code>[('Catherine', 2), ('Christopher', 1), ('John', 3)]
</code></pre>
0
2016-09-14T00:13:16Z
39,480,911
<p>This is a hard way to do it:</p> <pre><code>x = [1,3,2,5,6,6,3,2] x_tuple = [] y = set(x) for i in y: x_tuple.append((i,x.count(i))) print(x_tuple) </code></pre>
0
2016-09-14T00:40:01Z
[ "python", "python-3.x" ]
How do I return a list of tuples with each unique string and its count given a list of strings?
39,480,706
<p>Not sure where to start... item() gives a dictionary and I don't want that.</p>

<p>I would say I need to loop through the list....</p>

<p>Please someone give me some hints so I can get started!</p>

<p>EDIT:</p>

<pre><code>counts_of_names(names)
counts_of_names(['John', 'John', 'Catherine', 'John', 'Christopher', 'Catherine'])
</code></pre>

<p>output:</p>

<pre><code>[('Catherine', 2), ('Christopher', 1), ('John', 3)]
</code></pre>
0
2016-09-14T00:13:16Z
39,481,108
<p>Use <code>set</code>, <code>sorted</code> and a list comprehension:</p>

<pre><code>def counts_of_names(names):
    return [(name, names.count(name)) for name in sorted(set(names))]
</code></pre>
0
2016-09-14T01:11:16Z
[ "python", "python-3.x" ]
python recursion pass by reference or by value?
39,480,782
<p>I am working the this problem on leetcode:</p> <pre><code>Given a set of distinct integers, nums, return all possible subsets. input =[1,2,3] output =[[],[3],[2],[2,3],[1],[1,3],[1,2],[1,2,3]] </code></pre> <p>I have the c++ solution, which is accepted, and then i coded exactly the same python solution.</p> <pre><code>class Solution(object): def subsets(self, nums): """ :type nums: List[int] :rtype: List[List[int]] """ solutions = [] self._get_subset(nums, 0, [], solutions) return solutions @staticmethod def _get_subset(nums, curr, path, solutions): if curr&gt;= len(nums): solutions.append(path) return path.append(nums[curr]) Solution._get_subset(nums, curr+1, path, solutions) path.pop() Solution._get_subset(nums, curr+1, path, solutions) </code></pre> <p>The output is now: [[],[],[],[],[],[],[],[]]</p> <p>It seems it is the Python pass by reference/ pass by value causing the problem, but i can't figure out how. The same c++ code works alright:</p> <pre><code>class Solution { public: vector&lt;vector&lt;int&gt;&gt; subsets(vector&lt;int&gt;&amp; nums) { vector&lt;vector&lt;int&gt;&gt; solutions; vector&lt;int&gt; path; _get_path(nums, 0, path, solutions); return solutions; } void _get_path(vector&lt;int&gt;&amp; nums, int curr, vector&lt;int&gt;&amp; path, vector&lt; vector&lt;int&gt; &gt; &amp;solutions) { if(curr &gt;= nums.size()){ solutions.push_back(path); return; } path.push_back(nums[curr]); _get_path(nums, curr+1, path, solutions); path.pop_back(); _get_path(nums, curr+1, path, solutions); } }; </code></pre>
2
2016-09-14T00:24:02Z
39,480,861
<p>The problem is here:</p> <pre><code>solutions.append(path) </code></pre> <p>in C++, <code>vector::push_back</code> makes a copy of <code>path</code> (internally). But in Python, everything is a reference. So you build up your <code>solutions</code> as a list of many references to the same <code>path</code>, which eventually gets reduced to nothing.</p> <p>You want a copy:</p> <pre><code>solutions.append(list(path)) </code></pre> <p>or:</p> <pre><code>solutions.append(path[:]) </code></pre>
4
2016-09-14T00:33:35Z
[ "python", "c++", "recursion", "subset" ]
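The fix in the answer above — copying `path` before storing it — in a compact, runnable Python version of the same backtracking algorithm:

```python
def subsets(nums):
    solutions = []

    def walk(curr, path):
        if curr == len(nums):
            solutions.append(path[:])   # copy; appending `path` itself would
            return                      # make every entry alias one shared list
        path.append(nums[curr])         # branch 1: include nums[curr]
        walk(curr + 1, path)
        path.pop()                      # branch 2: exclude nums[curr]
        walk(curr + 1, path)

    walk(0, [])
    return solutions

print(subsets([1, 2, 3]))
# -> [[1, 2, 3], [1, 2], [1, 3], [1], [2, 3], [2], [3], []]
```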
Remove a tuple from a list if the tuple contains a certain element
39,480,870
<p>I have a list of tuples (num, id):</p> <pre><code>l = [(1000, 1), (2000, 2), (5000, 3)] </code></pre> <p>The second element of each tuple contains the identifier. Say that I want to remove the tuple with the id of <code>2</code>, how do I do that?</p> <p>I.e. I want the new list to be: <code>l = [(1000,1), (5000, 3)]</code></p> <p>I have tried <code>l.remove(2)</code> but it won't work =[</p>
0
2016-09-14T00:35:10Z
39,480,891
<p>You can use a list comprehension with a filter to achieve this.</p> <pre><code>l = [(1000, 1), (2000, 2), (5000, 3)] m = [(val, key) for (val, key) in l if key != 2] </code></pre>
6
2016-09-14T00:37:46Z
[ "python" ]
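The comprehension from the answer above, wrapped as a small reusable function (the name `drop_id` is illustrative, not from the question):

```python
def drop_id(pairs, target_id):
    # keep every (num, id) tuple whose id differs from the one being removed
    return [(num, id_) for (num, id_) in pairs if id_ != target_id]

l = [(1000, 1), (2000, 2), (5000, 3)]
print(drop_id(l, 2))  # -> [(1000, 1), (5000, 3)]
```

Note that this builds a new list rather than mutating `l` in place, which also removes every tuple with the given id, not just the first.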
Remove a tuple from a list if the tuple contains a certain element
39,480,870
<p>I have a list of tuples (num, id):</p> <pre><code>l = [(1000, 1), (2000, 2), (5000, 3)] </code></pre> <p>The second element of each tuple contains the identifier. Say that I want to remove the tuple with the id of <code>2</code>, how do I do that?</p> <p>I.e. I want the new list to be: <code>l = [(1000,1), (5000, 3)]</code></p> <p>I have tried <code>l.remove(2)</code> but it won't work =[</p>
0
2016-09-14T00:35:10Z
39,480,895
<p>That's because the value <strong>2</strong> is not in the list. Instead, something like the below: form a list of the second elements in your tuples, then remove the element at that position.</p> <pre><code>del_pos = [x[1] for x in l].index(2) l.pop(del_pos) </code></pre> <p>Note that this removes only the first such element. If your instance is not unique, then use one of the other solutions. I believe that this is faster, but handles only the single-appearance case.</p>
2
2016-09-14T00:38:04Z
[ "python" ]
Remove a tuple from a list if the tuple contains a certain element
39,480,870
<p>I have a list of tuples (num, id):</p> <pre><code>l = [(1000, 1), (2000, 2), (5000, 3)] </code></pre> <p>The second element of each tuple contains the identifier. Say that I want to remove the tuple with the id of <code>2</code>, how do I do that?</p> <p>I.e. I want the new list to be: <code>l = [(1000,1), (5000, 3)]</code></p> <p>I have tried <code>l.remove(2)</code> but it won't work =[</p>
0
2016-09-14T00:35:10Z
39,480,908
<p>You may do it with:</p>

<pre><code>&gt;&gt;&gt; l = [(1000, 1), (2000, 2), (5000, 3)]
&gt;&gt;&gt; for i in list(l):
...     if i[1] == 2:
...         l.remove(i)
</code></pre>

<p>In case you want to remove only the first occurrence, add a <code>break</code> below the <code>remove</code> line.</p>
0
2016-09-14T00:39:42Z
[ "python" ]
Remove a tuple from a list if the tuple contains a certain element
39,480,870
<p>I have a list of tuples (num, id):</p> <pre><code>l = [(1000, 1), (2000, 2), (5000, 3)] </code></pre> <p>The second element of each tuple contains the identifier. Say that I want to remove the tuple with the id of <code>2</code>, how do I do that?</p> <p>I.e. I want the new list to be: <code>l = [(1000,1), (5000, 3)]</code></p> <p>I have tried <code>l.remove(2)</code> but it won't work =[</p>
0
2016-09-14T00:35:10Z
39,480,909
<p>Or using filter:</p> <pre><code>l = [(1000, 1), (2000, 2), (5000, 3)] print(list(filter(lambda x: x[1] != 2, l))) </code></pre> <p>output:</p> <pre><code>[(1000, 1), (5000, 3)] </code></pre>
2
2016-09-14T00:39:48Z
[ "python" ]
Remove a tuple from a list if the tuple contains a certain element
39,480,870
<p>I have a list of tuples (num, id):</p> <pre><code>l = [(1000, 1), (2000, 2), (5000, 3)] </code></pre> <p>The second element of each tuple contains the identifier. Say that I want to remove the tuple with the id of <code>2</code>, how do I do that?</p> <p>I.e. I want the new list to be: <code>l = [(1000,1), (5000, 3)]</code></p> <p>I have tried <code>l.remove(2)</code> but it won't work =[</p>
0
2016-09-14T00:35:10Z
39,480,930
<p>Simple list comprehension:</p> <pre><code>[x for x in l if x[1] != 2] </code></pre>
0
2016-09-14T00:43:06Z
[ "python" ]
Why db.session.remove() must be called?
39,480,914
<p>I'm following a tutorial to learn flask web developing, and here is its unit testing file:</p> <pre><code>import unittest from flask import current_app from app import create_app, db class BasicsTestCase(unittest.TestCase): def setUp(self): self.app = create_app('testing') self.app_context = self.app.app_context() self.app_context.push() db.create_all() def tearDown(self): db.session.remove() db.drop_all() self.app_context.pop() def test_foo(self): pass </code></pre> <p>Also, I've found these sentences in SQLAlchemy document:</p> <blockquote> <p>Using the above flow, the process of integrating the <code>Session</code> with the web application has exactly two requirements:</p> <ul> <li><p>......</p></li> <li><p>Ensure that <code>scoped_session.remove()</code> is called when the web request ends, usually by integrating with the web framework’s event system to establish an “on request end” event.</p></li> </ul> </blockquote> <p>My question is: Why do I need to call <code>db.session.remove()</code>?</p> <p>I think as long as <code>db.session.commit()</code> is not invoked, the database won't be modified. Also, when I comment out this line, the application will still be able to pass the unit test. (In the real code, I have more reasonable test cases, rather than just <code>test_foo</code>)</p> <p>I've consulted the documents of both Flask-SQLAlchemy and SQLAlchemy, but the former doesn't even mention <code>db.session.remove()</code>, while the latter is too abstract for me to understand.</p>
-1
2016-09-14T00:40:42Z
39,493,325
<p>I didn't understand why <code>db.session.remove()</code> was necessary until I inspected the whole project:</p>

<p>It is because, in <code>config.py</code>, <code>SQLALCHEMY_COMMIT_ON_TEARDOWN</code> is set to <code>True</code>. As a result, changes made to <code>db.session</code> would be auto-committed if <code>db.session</code> weren't destroyed.</p>
1
2016-09-14T14:41:21Z
[ "python", "session", "flask", "sqlalchemy", "flask-sqlalchemy" ]
Why db.session.remove() must be called?
39,480,914
<p>I'm following a tutorial to learn flask web developing, and here is its unit testing file:</p> <pre><code>import unittest from flask import current_app from app import create_app, db class BasicsTestCase(unittest.TestCase): def setUp(self): self.app = create_app('testing') self.app_context = self.app.app_context() self.app_context.push() db.create_all() def tearDown(self): db.session.remove() db.drop_all() self.app_context.pop() def test_foo(self): pass </code></pre> <p>Also, I've found these sentences in SQLAlchemy document:</p> <blockquote> <p>Using the above flow, the process of integrating the <code>Session</code> with the web application has exactly two requirements:</p> <ul> <li><p>......</p></li> <li><p>Ensure that <code>scoped_session.remove()</code> is called when the web request ends, usually by integrating with the web framework’s event system to establish an “on request end” event.</p></li> </ul> </blockquote> <p>My question is: Why do I need to call <code>db.session.remove()</code>?</p> <p>I think as long as <code>db.session.commit()</code> is not invoked, the database won't be modified. Also, when I comment out this line, the application will still be able to pass the unit test. (In the real code, I have more reasonable test cases, rather than just <code>test_foo</code>)</p> <p>I've consulted the documents of both Flask-SQLAlchemy and SQLAlchemy, but the former doesn't even mention <code>db.session.remove()</code>, while the latter is too abstract for me to understand.</p>
-1
2016-09-14T00:40:42Z
40,024,715
<blockquote> <p>Using the above flow, the process of integrating the <code>Session</code> with the web application has exactly two requirements:</p> <ul> <li>......</li> <li>Ensure that <code>scoped_session.remove()</code> is called when the web request ends, usually by integrating with the web framework’s event system to establish an “on request end” event.</li> </ul> </blockquote> <p>In <strong>SQLAlchemy</strong>, above action is mentioned because sessions in web application should be <strong>scoped</strong>, meaning that each request handler creates and destroys its own session.</p> <p>This is necessary because web servers can be multi-threaded, so multiple requests might be served at the same time, each working with a different database session.</p> <p>This scenario is beautifully handled by <strong>Flask-SQLAlchemy</strong>, it creates a fresh or new scoped session for each request. If you dig further it you will find out <a href="https://github.com/mitsuhiko/flask-sqlalchemy/blob/master/flask_sqlalchemy/__init__.py#L819" rel="nofollow">here</a>, it also installs a hook on <code>app.teardown_appcontext</code> (for Flask >=0.9), <code>app.teardown_request</code> (for Flask 0.7-0.8), <code>app.after_request</code> (for Flask &lt;0.7) and here is where it calls <code>db.session.remove()</code>.</p> <p>The <em>testing</em> environment does not fully replicate the environment of a real request because it does not push/pop the application context. 
Because of that the session is never removed at the end of the request.</p> <p>As a side note, keep in mind that functions registered with <code>before_request</code> and <code>after_request</code> are also not called when you call <code>client.get()</code>.</p> <p>You can force an application context automatically push and pop with a small change to your test instead of manually push in <code>setUp()</code> and pop in <code>tearDown()</code>:</p> <pre><code>def test_foo(self): with app.app_context(): client = app.test_client() # do testing here for your endpoints </code></pre> <p>with this change the test passes without manually writing <code>db.session.remove()</code>.</p> <p>The documentation for Flask-Testing seems to be wrong or more likely outdated. Maybe things worked like they describe at some point, but that isn't accurate for current Flask and Flask-SQLAlchemy versions.</p> <p>I hope this helps!</p>
3
2016-10-13T15:08:15Z
[ "python", "session", "flask", "sqlalchemy", "flask-sqlalchemy" ]
I'm using FileReader API to read multiple files and breaking into parts
39,480,960
<p>I want to load multiple large large files. to do this I pinched the code from </p> <blockquote> <p><a href="http://stackoverflow.com/questions/14438187/javascript-filereader-parsing-long-file-in-chunks">javascript FileReader - parsing long file in chunks</a></p> </blockquote> <p>Each chunk is given a part number and passed to a Python CGI program also give the total number of chunks to expect. The Python CGI concatenates the chunks when the full set of chunks are received into the desired file. This works.</p> <p>Now I want to load multiple files with each file broken into chunks as before. Each file has an id number so Python will know to store it in a different directory however I run out of memory and it crashes. In</p> <blockquote> <p><a href="http://stackoverflow.com/questions/13975031/reading-multiple-files-with-javascript-filereader-api-one-at-a-time">Reading multiple files with Javascript FileReader API one at a time</a></p> </blockquote> <p>There are suggestion how to convince Javascript to process one file at a time sequentially rather than asynchronously but my attempts to get this code to work with my current code have been unsuccessful.</p> <p>The file splitting code is below. 
The variable numparts terminates the recursive call.</p> <p>Can anyone suggest how to modify the code intended by the loop at the bottom to execute setupReader sequentially please.</p> <pre><code> function setupReader(file,filename) { var fileSize = (file.size - 1); var chunkSize = 150000000; var offset = 0; var numparts= Math.ceil(fileSize/chunkSize); var chunkReaderBlock = null; var partno=0; var readEventHandler = function(evt) { if (evt.target.error == null) { offset += chunkSize; callback(evt.target.result); // callback for handling read chunk } else { console.log("Read error: " + evt.target.error); alert("File could not be read"); return; } if (partno &gt;= numparts) { console.log("Done reading file"); return; } chunkReaderBlock(offset, chunkSize, file); } callback=function(result) { partno+=1; var bfile = result; var query="?fileName="+filename+"&amp;Part="+partno+"&amp;of="+numparts+"&amp;bfile="+bfile; loadDoc('partition0',query,0); } chunkReaderBlock = function(_offset, length, _file) { var r = new FileReader(); var blob = _file.slice(_offset, length + _offset); r.onload = readEventHandler; r.readAsDataURL(blob); } chunkReaderBlock(offset, chunkSize, file); } for (var i = 0; i &lt; numfiles; i++) { setupReader(fileList[i],fileList[i].name); </code></pre>
0
2016-09-14T00:49:16Z
39,499,427
<p>I don't know Python, but isn't there a streaming solution to handle large file uploads? Perhaps like this: <a href="http://stackoverflow.com/questions/2502596/python-http-post-a-large-file-with-streaming">Python: HTTP Post a large file with streaming</a>?</p>

<p>In that case splitting the whole file into chunks and making multiple range requests becomes pointless.</p>
0
2016-09-14T20:49:06Z
[ "javascript", "python" ]
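Server-side, the question's "concatenate the chunks when the full set is received" step can be sketched in plain Python. This is a hypothetical helper, not the asker's actual CGI code, and it assumes each chunk arrives as bare base64 — `readAsDataURL` actually prefixes a `data:...;base64,` header that would need stripping first:

```python
import base64
import os
import tempfile

def store_part(workdir, filename, part_no, total, payload):
    """Save one chunk; once all `total` chunks exist, concatenate them."""
    with open(os.path.join(workdir, "%s.part%d" % (filename, part_no)), "wb") as f:
        f.write(base64.b64decode(payload))
    parts = ["%s.part%d" % (filename, i) for i in range(1, total + 1)]
    if all(os.path.exists(os.path.join(workdir, p)) for p in parts):
        # all chunks present: stitch them together in part-number order
        with open(os.path.join(workdir, filename), "wb") as out:
            for p in parts:
                with open(os.path.join(workdir, p), "rb") as chunk:
                    out.write(chunk.read())
        return True   # file fully assembled
    return False      # still waiting for more parts

workdir = tempfile.mkdtemp()
store_part(workdir, "big.bin", 1, 2, base64.b64encode(b"hello "))
done = store_part(workdir, "big.bin", 2, 2, base64.b64encode(b"world"))
print(done)  # -> True
```

Writing each part straight to disk like this keeps memory flat no matter how many files or chunks arrive, which is the property the asker's crashing version lacked.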
WiringPi and Flask Sudo Conflict
39,480,992
<p>I am running my application in a virtualenv using Python3.4.</p> <p>WiringPi requires sudo privilege to access the hardware pins. Flask, on the other hand, resides in my virtualEnv folder, so I can't access it using <code>sudo flask</code>.</p> <p>I've tried making it run on startup by placing some commands in <code>/etc/rc.local</code> so that it can have root access automatically. It only tells me that it can't find basic Python library modules (like <code>re</code>).</p> <p>My RPI2 is running Raspbian. For the time being I am running it using <code>flask run --localhost=0.0.0.0</code>, which I know I am not supposed to do, but I'll change that later.</p>
0
2016-09-14T00:54:34Z
39,521,293
<p>Turns out I just had to make sure that "root" had the proper libraries installed too. Root and User have different directories for their Python binaries.</p>
0
2016-09-15T22:31:57Z
[ "python", "flask", "raspberry-pi", "virtualenv", "wiringpi" ]
How to use a user function to fillna() in pandas
39,480,997
<p>This is a fragment of the dataframe I have:</p> <pre><code>Title | Age ------+-------- Mr. | 30 Mr. | NaN Mr. | 32 Mrs. | 28 Mrs. | 16 Mr. | 34 Mrs. | NaN </code></pre> <p>Edit: I added the last row, to clarify the question</p> <p>I want to impute the NaNs (second and last row), for the second row, it should use the mean of the other "Mr." in the dataframe, so in this case, should be 32, in the last row it should use the mean of the other "Mrs.", so should be 22</p> <p>To calculate the mean is as easy as doing</p> <pre><code>value = df.loc[df["Title"] == "Mr."]["Age"].mean() </code></pre> <p>So I wrote a function called agefun:</p> <pre><code>def agefun(df, t): return df.loc[df["Title"] == t]["Age"].mean() </code></pre> <p>And it works, now, how can I use this function with the fillna() function? I'd like something like:</p> <pre><code>df['Age'].fillna(agefun(df, this_row_title)) </code></pre> <p>But of course it doesn't work, I don't know how to tell the function I like the value corresponding to the Title in that specific row.</p> <p>How can this be performed?</p>
0
2016-09-14T00:55:44Z
39,481,105
<p><a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#transformation" rel="nofollow">Transform</a> keeps the same shape as the original series in the dataframe.</p> <pre><code>df['Age'] = df.groupby('Title').transform(lambda group: group.fillna(group.mean())) &gt;&gt;&gt; df Title Age 0 Mr. 30 1 Mr. 32 # (30 + 32 + 34) / 3 = 32 2 Mr. 32 3 Mrs. 28 4 Mrs. 16 5 Mr. 34 </code></pre> <p>In the example above, it keeps all of the values unchanged except for the one <code>NaN</code> value on the second row which it fills by calculating the mean for the group, i.e. the mean value of all rows where the <code>Title</code> is <code>Mr.</code>.</p>
2
2016-09-14T01:10:57Z
[ "python", "pandas" ]
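The per-group mean imputation that `groupby(...).transform(...)` performs can be spelled out in plain Python, which makes the mechanics explicit (a stdlib-only sketch, not pandas; `fill_with_group_mean` is an illustrative name):

```python
def fill_with_group_mean(rows):
    """rows: list of (title, age-or-None) pairs. Returns the rows with each
    None replaced by the mean age of the rows sharing the same title."""
    sums, counts = {}, {}
    for title, age in rows:
        if age is not None:
            sums[title] = sums.get(title, 0) + age
            counts[title] = counts.get(title, 0) + 1
    means = {t: sums[t] / counts[t] for t in sums}
    return [(title, age if age is not None else means[title])
            for title, age in rows]

data = [('Mr.', 30), ('Mr.', None), ('Mr.', 32), ('Mrs.', 28),
        ('Mrs.', 16), ('Mr.', 34), ('Mrs.', None)]
print(fill_with_group_mean(data))
# the None entries become 32.0 (Mr.) and 22.0 (Mrs.), matching the question
```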
How to use a user function to fillna() in pandas
39,480,997
<p>This is a fragment of the dataframe I have:</p> <pre><code>Title | Age ------+-------- Mr. | 30 Mr. | NaN Mr. | 32 Mrs. | 28 Mrs. | 16 Mr. | 34 Mrs. | NaN </code></pre> <p>Edit: I added the last row, to clarify the question</p> <p>I want to impute the NaNs (second and last row), for the second row, it should use the mean of the other "Mr." in the dataframe, so in this case, should be 32, in the last row it should use the mean of the other "Mrs.", so should be 22</p> <p>To calculate the mean is as easy as doing</p> <pre><code>value = df.loc[df["Title"] == "Mr."]["Age"].mean() </code></pre> <p>So I wrote a function called agefun:</p> <pre><code>def agefun(df, t): return df.loc[df["Title"] == t]["Age"].mean() </code></pre> <p>And it works, now, how can I use this function with the fillna() function? I'd like something like:</p> <pre><code>df['Age'].fillna(agefun(df, this_row_title)) </code></pre> <p>But of course it doesn't work, I don't know how to tell the function I like the value corresponding to the Title in that specific row.</p> <p>How can this be performed?</p>
0
2016-09-14T00:55:44Z
39,481,179
<p>You can also do it this way:</p> <pre><code>df['Age'] = df['Age'].fillna(df.loc[df['Title'] == 'Mr.', 'Age'].mean()) </code></pre> <p><code>df</code> output:</p> <pre><code> Age Title 0 30.0 Mr. 1 32.0 Mr. 2 32.0 Mr. 3 28.0 Mrs. 4 16.0 Mrs. 5 34.0 Mr. </code></pre>
0
2016-09-14T01:23:36Z
[ "python", "pandas" ]
referencing a table which is defined after the definition of the first table in web2py
39,481,139
<pre><code> db.define_table("devices", Field('user_id','reference users'),# THIS PRODUCES AN ERROR Field('energyConsumed','integer'), Field('device_password','password'), Field('date_of_measure','date') ); db.define_table("users", Field('device_id','reference devices') ); </code></pre> <p>I am unable to use 'reference users' in the first table because it has not been defined before that tables definition . How can I reference table which is defined only later on .</p>
0
2016-09-14T01:16:00Z
39,484,198
<p>You cannot reference a table that has not been defined yet, so you have to use an alternative syntax.</p>

<p>Use the <code>IS_IN_DB</code> validator for this:</p>

<pre><code>IS_IN_DB(db, 'users.id')
</code></pre>

<p>This is already answered here:</p>

<p><a href="http://stackoverflow.com/a/38948788/4065350">http://stackoverflow.com/a/38948788/4065350</a>
<a href="https://groups.google.com/forum/#!msg/web2py/yNca8bq0HmM/DmVjCPrODQAJ" rel="nofollow">https://groups.google.com/forum/#!msg/web2py/yNca8bq0HmM/DmVjCPrODQAJ</a></p>
1
2016-09-14T06:54:25Z
[ "python", "web2py" ]
Python TypeError Math
39,481,169
<p>I used this code:</p>

<pre><code>import pywapi
import string

weather_com_result = pywapi.get_weather_from_weather_com('33020')
temp_f = 'temperature' * 9/5 + 32
print (weather_com_result['current_conditions']['text'].lower() + weather_com_result['current_conditions'][temp_f])
</code></pre>

<p>And then when I run it, I get:</p>

<pre><code>Traceback (most recent call last):
  File "weather.py", line 6, in &lt;module&gt;
    temp_f = 'temperature' * 9/5 + 32
TypeError: unsupported operand type(s) for /: 'str' and 'int'
</code></pre>

<p>Does anyone know what I'm doing wrong?</p>
-3
2016-09-14T01:21:38Z
39,481,188
<p>You are trying to do arithmetic operations between a string and a number. <code>'temperature' * 9/5</code></p> <p>You need to get your temperature as a number first. I see you are using <code>pywapi</code> lib. So, you can get the current temperature looking up like this <code>weather_com_result['current_conditions']['temperature']</code> and casting it to int. </p> <pre><code>temperature = int(weather_com_result['current_conditions']['temperature']) temp_f = temperature * 9/5 + 32 </code></pre>
1
2016-09-14T01:24:59Z
[ "python" ]
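The root cause in the fix above — cast the string temperature to a number before doing arithmetic — as a tiny standalone sketch (Python 3 division, so the results are floats; `c_to_f` is an illustrative name):

```python
def c_to_f(celsius_str):
    # the weather API hands back the temperature as a string, so cast first
    return int(celsius_str) * 9 / 5 + 32

print(c_to_f('25'))  # -> 77.0
```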
django- 404 message but no runserver error
39,481,216
<p>Hi, I'm trying to learn Django (doing the first parts of the tutorial at docs.djangoproject.com/en/1.10/intro/tutorial02/). I'm having a hard time getting my server to run in a way that I can view and edit it through a website-like interface. This is what is supposed to come up when I go to <a href="http://127.0.0.1:8000" rel="nofollow">http://127.0.0.1:8000</a>: <a href="http://i.stack.imgur.com/TFavx.png" rel="nofollow"><img src="http://i.stack.imgur.com/TFavx.png" alt="supposed"></a></p>

<p>Instead I get a 404 error message. This is what happens when I run <code>python manage.py runserver</code>: <a href="http://i.stack.imgur.com/rcArC.png" rel="nofollow"><img src="http://i.stack.imgur.com/rcArC.png" alt="error"></a></p>

<p>There aren't really any errors; it just says the port is already in use. I think that's because I ran that line of code twice.</p>

<p>I'm not entirely sure exactly what files I'll need to change to fix this.</p>

<p>My <strong>init</strong>.py seems to be empty (maybe that's the problem? It doesn't seem like it).</p>
-3
2016-09-14T01:29:37Z
39,481,234
<p>Maybe your server's Python process has hung. Close your console, open it again and try to rerun your server.</p>

<p>If after trying that you are still getting the <code>port already in use</code> message, try to kill the process yourself:</p>

<pre><code>ps aux | grep -i manage
</code></pre>

<p>You will get something like this:</p>

<pre><code>14770 8264 0.0 1.9 546948 40904 ? S Sep19 0:00 /usr/local/bin/python manage.py runserver 0.0.0.0:8000
</code></pre>

<p>Then kill it using <code>kill -9 pid</code>:</p>

<pre><code>kill -9 8264
</code></pre>
2
2016-09-14T01:32:18Z
[ "python", "django", "database", "server" ]
django- 404 message but no runserver error
39,481,216
<p>Hi, I'm trying to learn Django (doing the first parts of the tutorial at docs.djangoproject.com/en/1.10/intro/tutorial02/). I'm having a hard time getting my server to run in a way that I can view and edit it through a website-like interface. This is what is supposed to come up when I go to <a href="http://127.0.0.1:8000" rel="nofollow">http://127.0.0.1:8000</a>: <a href="http://i.stack.imgur.com/TFavx.png" rel="nofollow"><img src="http://i.stack.imgur.com/TFavx.png" alt="supposed"></a></p>

<p>Instead I get a 404 error message. This is what happens when I run <code>python manage.py runserver</code>: <a href="http://i.stack.imgur.com/rcArC.png" rel="nofollow"><img src="http://i.stack.imgur.com/rcArC.png" alt="error"></a></p>

<p>There aren't really any errors; it just says the port is already in use. I think that's because I ran that line of code twice.</p>

<p>I'm not entirely sure exactly what files I'll need to change to fix this.</p>

<p>My <strong>init</strong>.py seems to be empty (maybe that's the problem? It doesn't seem like it).</p>
-3
2016-09-14T01:29:37Z
39,490,713
<p>Just some extra info</p> <p>If 8000 port is already in use, you can use other ports to run your Python-Django server by specifying the port like <code>python manage.py runserver 9000</code> as Django uses the 8000 port by <strong>default</strong> if you don't specify any.</p> <p>Though your original problem must have been resolved by the solution @levi provided.</p>
0
2016-09-14T12:41:17Z
[ "python", "django", "database", "server" ]
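Before reaching for `ps`/`kill`, you can also check from Python whether a port is actually free; `port_is_free` is a hypothetical helper sketching the idea — a bind attempt fails with `EADDRINUSE` while another process (for example a suspended `runserver`) still holds the port:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    # try to bind; failure means some process already holds the port
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# occupy a port, then confirm the check detects it
holder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
holder.bind(("127.0.0.1", 0))          # port 0 = let the OS pick a free port
busy_port = holder.getsockname()[1]
print(port_is_free(busy_port))          # -> False while `holder` is open
holder.close()
```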
django- 404 message but no runserver error
39,481,216
<p>Hi, I'm trying to learn Django (doing the first parts of the tutorial at docs.djangoproject.com/en/1.10/intro/tutorial02/). I'm having a hard time getting my server to run in a way that I can view and edit it through a website-like interface. This is what is supposed to come up when I go to <a href="http://127.0.0.1:8000" rel="nofollow">http://127.0.0.1:8000</a>: <a href="http://i.stack.imgur.com/TFavx.png" rel="nofollow"><img src="http://i.stack.imgur.com/TFavx.png" alt="supposed"></a></p>

<p>Instead I get a 404 error message. This is what happens when I run <code>python manage.py runserver</code>: <a href="http://i.stack.imgur.com/rcArC.png" rel="nofollow"><img src="http://i.stack.imgur.com/rcArC.png" alt="error"></a></p>

<p>There aren't really any errors; it just says the port is already in use. I think that's because I ran that line of code twice.</p>

<p>I'm not entirely sure exactly what files I'll need to change to fix this.</p>

<p>My <strong>init</strong>.py seems to be empty (maybe that's the problem? It doesn't seem like it).</p>
-3
2016-09-14T01:29:37Z
39,491,398
<p>It looks like you typed CTRL + Z (probably by mistake), which suspends the foreground process. Use <code>fg</code> to bring the dev server process back to the foreground again. </p> <p>See: <a href="http://superuser.com/q/476873/596978">http://superuser.com/q/476873/596978</a>). </p>
0
2016-09-14T13:13:00Z
[ "python", "django", "database", "server" ]
Convert datetime back to Windows 64-bit FILETIME
39,481,221
<p>I would like to create network timestamps in NT format.</p> <p>I've been able to convert them to readable time with this function:</p> <pre><code>NetworkStamp = "\xc0\x65\x31\x50\xde\x09\xd2\x01" def GetTime(NetworkStamp): Ftime = int(struct.unpack('&lt;Q',NetworkStamp)[0]) Epoch = divmod(Ftime - 116444736000000000, 10000000) Actualtime = datetime.datetime.fromtimestamp(Epoch[0]) return Actualtime, Actualtime.strftime('%Y-%m-%d %H:%M:%S') print GetTime(NetworkStamp) </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>(datetime.datetime(2016, 9, 8, 11, 35, 57), '2016-09-08 11:35:57') </code></pre> <p>Now I'd like to do the opposite, converting <code>'2016/09/08 11:35:57'</code> sec to this format:</p> <pre><code> "\xc0\x65\x31\x50\xde\x09\xd2\x01" </code></pre>
0
2016-09-14T01:30:32Z
39,492,696
<p>If you understand how to perform the conversion in one direction, doing it in reverse is basically using the inverse of each method in reverse order. Just look at the documentation for the modules/classes you're using:</p> <ol> <li><code>strftime</code> has <a href="https://docs.python.org/3/library/datetime.html#datetime.datetime.strptime" rel="nofollow">a <code>strptime</code> counterpart</a></li> <li><code>fromtimestamp</code> <a href="https://docs.python.org/3/library/datetime.html#datetime.datetime.timestamp" rel="nofollow">is matched by <code>timestamp</code></a> (if you're on pre-3.3 Python, <code>timestamp</code> doesn't exist, but you could define <code>FILETIME_epoch = datetime.datetime(1601, 1, 1) - datetime.timedelta(seconds=time.altzone if time.daylight else time.timezone)</code> outside the function to precompute a <code>datetime</code> that represents the <code>FILETIME</code> epoch for your timezone, then use <code>int((mydatetime - FILETIME_epoch).total_seconds())</code> to get <code>int</code> seconds since <code>FILETIME</code> epoch directly, without manually adjusting for difference between <code>FILETIME</code> and Unix epoches separately)</li> <li><code>divmod</code> (which you don't really need, since you only use the quotient, not the remainder, you could just do <code>Epoch = (Ftime - 116444736000000000) // 10000000</code> and avoid indexing later) is trivially reversible (just multiply and add, with the add being unnecessary if you use my trick to convert to <code>FILETIME</code> epoch seconds directly from #2)</li> <li><code>struct.unpack</code> <a href="https://docs.python.org/3/library/struct.html#struct.pack" rel="nofollow">is matched by <code>struct.pack</code></a></li> </ol> <p>I'm not providing the exact code because you really should learn to use these things yourself (and read the docs when necessary); I'm guessing your forward code was written without understanding what it is doing, because if you understood it, the reverse 
should have been obvious; every step has an inverse documented on the same page.</p>
1
2016-09-14T14:14:40Z
[ "python", "python-2.x", "filetime" ]
Convert datetime back to Windows 64-bit FILETIME
39,481,221
<p>I would like to create network timestamps in NT format.</p> <p>I've been able to convert them to readable time with this function:</p> <pre><code>NetworkStamp = "\xc0\x65\x31\x50\xde\x09\xd2\x01" def GetTime(NetworkStamp): Ftime = int(struct.unpack('&lt;Q',NetworkStamp)[0]) Epoch = divmod(Ftime - 116444736000000000, 10000000) Actualtime = datetime.datetime.fromtimestamp(Epoch[0]) return Actualtime, Actualtime.strftime('%Y-%m-%d %H:%M:%S') print GetTime(NetworkStamp) </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>(datetime.datetime(2016, 9, 8, 11, 35, 57), '2016-09-08 11:35:57') </code></pre> <p>Now I'd like to do the opposite, converting <code>'2016/09/08 11:35:57'</code> sec to this format:</p> <pre><code> "\xc0\x65\x31\x50\xde\x09\xd2\x01" </code></pre>
0
2016-09-14T01:30:32Z
39,493,333
<p>&nbsp;&nbsp;&nbsp;&nbsp;Your code that converts the Window's <code>FILETIME</code> value into a <code>datetime.datetime</code> isn't as accurate as it could be—it's truncating any fractional seconds there might have been (because it ignores the remainder of the <code>divmod()</code> result). This isn't noticeable in the readable string your code creates since it only shows whole seconds.<br> &nbsp;&nbsp;&nbsp;&nbsp;Even if the fractional seconds are included, you can't do exactly what you want because the Windows <code>FILETIME</code> structure has values in intervals of 100-nanosecond (.1 microsecond), but Python's <code>datetime</code> only supports accuracy to whole microseconds. So best that's possible to do is approximate the original value due to this loss of information involved even doing the most accurate possible conversion.</p> <p>Here's code, for both Python 2 and 3, demonstrating this using the <code>NetworkStamp</code> test value in your question:</p> <pre><code>import datetime import struct import time WINDOWS_TICKS = int(1/10**-7) # 10,000,000 (100 nanoseconds or .1 microseconds) WINDOWS_EPOCH = datetime.datetime.strptime('1601-01-01 00:00:00', '%Y-%m-%d %H:%M:%S') POSIX_EPOCH = datetime.datetime.strptime('1970-01-01 00:00:00', '%Y-%m-%d %H:%M:%S') EPOCH_DIFF = (POSIX_EPOCH - WINDOWS_EPOCH).total_seconds() # 11644473600.0 WINDOWS_TICKS_TO_POSIX_EPOCH = EPOCH_DIFF * WINDOWS_TICKS # 116444736000000000.0 def get_time(filetime): """Convert windows filetime winticks to python datetime.datetime.""" winticks = struct.unpack('&lt;Q', filetime)[0] microsecs = (winticks - WINDOWS_TICKS_TO_POSIX_EPOCH) / WINDOWS_TICKS return datetime.datetime.fromtimestamp(microsecs) def convert_back(timestamp_string): """Convert a timestamp in Y=M=D H:M:S.f format into a windows filetime.""" dt = datetime.datetime.strptime(timestamp_string, '%Y-%m-%d %H:%M:%S.%f') posix_secs = int(time.mktime(dt.timetuple())) winticks = (posix_secs + int(EPOCH_DIFF)) * WINDOWS_TICKS return 
winticks def int_to_bytes(n, minlen=0): # helper function """ int/long to bytes (little-endian byte order). Note: built-in int.to_bytes() method could be used in Python 3. """ nbits = n.bit_length() + (1 if n &lt; 0 else 0) # plus one for any sign bit nbytes = (nbits+7) // 8 # number of whole bytes ba = bytearray() for _ in range(nbytes): ba.append(n &amp; 0xff) n &gt;&gt;= 8 if minlen &gt; 0 and len(ba) &lt; minlen: # zero pad? ba.extend([0] * (minlen-len(ba))) return ba # with low bytes first def hexbytes(s): # formatting helper function """Convert string to string of hex character values.""" ba = bytearray(s) return ''.join('\\x{:02x}'.format(b) for b in ba) win_timestamp = b'\xc0\x65\x31\x50\xde\x09\xd2\x01' print('win timestamp: b"{}"'.format(hexbytes(win_timestamp))) dtime = get_time(win_timestamp) readable = dtime.strftime('%Y-%m-%d %H:%M:%S.%f') # includes fractional secs print('datetime repr: "{}"'.format(readable)) winticks = convert_back(readable) # includes fractional secs new_timestamp = int_to_bytes(winticks) print('new timestamp: b"{}"'.format(hexbytes(new_timestamp))) </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code>win timestamp: b"\xc0\x65\x31\x50\xde\x09\xd2\x01" datetime repr: "2016-09-08 07:35:57.996998" new timestamp: b"\x80\x44\x99\x4f\xde\x09\xd2\x01" </code></pre>
0
2016-09-14T14:41:47Z
[ "python", "python-2.x", "filetime" ]
following the celery tutorial and I get this error No module named 'src'
39,481,222
<p>I am following the celery tutorial and I get this error No module named 'src'. I dont understand what the issue is.</p> <p>this is my directory structure</p> <pre><code>venv/ src/ __init__.py celery.py manage.py tasks.py </code></pre> <p>my celery.py </p> <pre><code> from __future__ import absolute_import from .gettingstarted.settings.local import BROKER_URL from celery import Celery app = Celery('src', broker=BROKER_URL, backend=BROKER_URL, include=['src.tasks']) # Optional configuration, see the application user guide. app.conf.update( CELERY_TASK_RESULT_EXPIRES=3600, ) if __name__ == '__main__': app.start() </code></pre> <p>my tasks.py file</p> <pre><code> from __future__ import absolute_import from proj.celery import app @app.task def add(x, y): return x + y @app.task def mul(x, y): return x * y @app.task def xsum(numbers): return sum(numbers) </code></pre> <p>I also tried </p> <pre><code>from __future__ import absolute_import import sys sys.path.insert(0, "/project/src/") from proj.celery import app </code></pre> <p>that didnt work either.</p> <p>when I go into src directory and run python shell Iget the follwoing error</p> <pre><code>No module named 'src' </code></pre> <p>what's my malfunction?</p> <h1>EDIT</h1> <p>My traceback</p> <pre><code>(practice) apples-MBP:src ray$ celery worker -A tasks -l info Traceback (most recent call last): File "/Users/ray/Desktop/myheroku/practice/bin/celery", line 11, in &lt;module&gt; sys.exit(main()) File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/celery/__main__.py", line 30, in main main() File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/celery/bin/celery.py", line 81, in main cmd.execute_from_commandline(argv) File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/celery/bin/celery.py", line 793, in execute_from_commandline super(CeleryCommand, self).execute_from_commandline(argv))) File 
"/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/celery/bin/base.py", line 309, in execute_from_commandline argv = self.setup_app_from_commandline(argv) File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/celery/bin/base.py", line 469, in setup_app_from_commandline self.app = self.find_app(app) File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/celery/bin/base.py", line 489, in find_app return find_app(app, symbol_by_name=self.symbol_by_name) File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/celery/app/utils.py", line 235, in find_app sym = symbol_by_name(app, imp=imp) File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/celery/bin/base.py", line 492, in symbol_by_name return symbol_by_name(name, imp=imp) File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/kombu/utils/__init__.py", line 96, in symbol_by_name module = imp(module_name, package=package, **kwargs) File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/site-packages/celery/utils/imports.py", line 101, in import_from_cwd return imp(module, package=package) File "/Users/ray/Desktop/myheroku/practice/lib/python3.5/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "&lt;frozen importlib._bootstrap&gt;", line 986, in _gcd_import File "&lt;frozen importlib._bootstrap&gt;", line 969, in _find_and_load File "&lt;frozen importlib._bootstrap&gt;", line 958, in _find_and_load_unlocked File "&lt;frozen importlib._bootstrap&gt;", line 673, in _load_unlocked File "&lt;frozen importlib._bootstrap_external&gt;", line 662, in exec_module File "&lt;frozen importlib._bootstrap&gt;", line 222, in _call_with_frames_removed File "/Users/ray/Desktop/myheroku/practice/src/tasks.py", line 4, in &lt;module&gt; from src.celery import app ImportError: No module named 'src' </code></pre>
0
2016-09-14T01:30:39Z
39,481,938

<p>Your PYTHONPATH has to contain the directory that holds <code>src</code>. If you do <code>cd src</code>, Python sees your module as <code>celery</code> instead of <code>src.celery</code>.</p> <p>To fix it you can...</p> <p>A. run your code one directory up</p> <pre><code>cd .. &amp;&amp; celery worker -A src.tasks -l info </code></pre> <p>B. Add the parent directory to your PYTHONPATH</p> <pre><code>PYTHONPATH=.. celery worker -A src.tasks -l info </code></pre>
0
2016-09-14T03:12:23Z
[ "python", "django", "redis", "task", "celery" ]
Writing to a Shapefile
39,481,237
<p>I'm having trouble writing/reading a Shapefile in python. I have an array of points that I would like to write to a polygon using pyshp. The relevant parts of the code are:</p> <pre><code>dividedRects = [(7598325.0, 731579.0, 7698325.0, 631579.0), (7598325.0, 631579.0, 7698325.0, 611641.0), (7698325.0, 731579.0, 7728636.0, 631579.0), (7698325.0, 631579.0, 7728636.0, 611641.0)] def createPolys(dividedRects): w = shapefile.Writer(shapefile.POLYGON) for i in range(0, len(dividedRects)): print i topLeft = [dividedRects[i][0],dividedRects[i][1]] topRight = [dividedRects[i][2], dividedRects[i][1]] bottomRight = [dividedRects[i][2], dividedRects[i][3]] bottomLeft = [dividedRects[i][0], dividedRects[i][3]] w.poly(parts=[[topLeft,topRight,bottomRight,bottomLeft]]) w.field("ID", "C", "40") w.field("Events", "C", "40") w.record(str(i), str(0)) w.save('cellFile') createPolys(dividedRects) </code></pre> <p>This causes an error:</p> <pre><code>IndexError Traceback (most recent call last) &lt;ipython-input-36-503affbe838b&gt; in &lt;module&gt;() ----&gt; 1 createPolys(dividedRects) &lt;ipython-input-35-4c552ae29bc7&gt; in createPolys(dividedRects) 10 w.field("ID", "C", "40") 11 w.field("Events", "C", "40") ---&gt; 12 w.record(str(i), str(0)) 13 w.save('cellFile') 14 # topLeft = [dividedRects[1][0],dividedRects[1][1]] C:\Users\Me\Anaconda2\lib\site-packages\shapefile.pyc in record(self, *recordList, **recordDict) 967 if self.fields[0][0].startswith("Deletion"): fieldCount -= 1 968 if recordList: --&gt; 969 [record.append(recordList[i]) for i in range(fieldCount)] 970 elif recordDict: 971 for field in self.fields: IndexError: tuple index out of range </code></pre> <p>If I remove the <code>field</code> and <code>records</code> lines from <code>createPolys</code>:</p> <pre><code>def createPolys(dividedRects): w = shapefile.Writer(shapefile.POLYGON) for i in range(0, len(dividedRects)): print i topLeft = [dividedRects[i][0],dividedRects[i][1]] topRight = [dividedRects[i][2], 
dividedRects[i][1]] bottomRight = [dividedRects[i][2], dividedRects[i][3]] bottomLeft = [dividedRects[i][0], dividedRects[i][3]] w.poly(parts=[[topLeft,topRight,bottomRight,bottomLeft]]) # w.field("ID", "C", "40") # w.field("Events", "C", "40") # w.record(str(i), str(0)) w.save('cellFile') </code></pre> <p>Then I get an assertionerror when reading the records from the file:</p> <pre><code>createPolys(dividedRects) sf2 = shapefile.Reader("cellFile") print sf2.records() shapes = sf2.shapes() bbox = shapes[1].bbox #['%.3f' % coord for coord in bbox] print bbox points = shapes[1].points print points AssertionError Traceback (most recent call last) &lt;ipython-input-37-597af0b882ba&gt; in &lt;module&gt;() 1 sf2 = shapefile.Reader("cellFile") ----&gt; 2 print sf2.records() 3 shapes = sf2.shapes() 4 bbox = shapes[1].bbox 5 #['%.3f' % coord for coord in bbox] C:\Users\Me\Anaconda2\lib\site-packages\shapefile.pyc in records(self) 528 """Returns all records in a dbf file.""" 529 if not self.numRecords: --&gt; 530 self.__dbfHeader() 531 records = [] 532 f = self.__getFileObj(self.dbf) C:\Users\Me\Anaconda2\lib\site-packages\shapefile.pyc in __dbfHeader(self) 464 self.fields.append(fieldDesc) 465 terminator = dbf.read(1) --&gt; 466 assert terminator == b("\r") 467 self.fields.insert(0, ('DeletionFlag', 'C', 1, 0)) 468 AssertionError: </code></pre> <p>When I removed the loop and wrote one record it seemed to work ok. What's going on?</p>
0
2016-09-14T01:32:36Z
39,481,494
<p>I do not know about <code>pyshp</code> library, but I'll try to help anyways. </p> <p>The two <code>w.field()</code> commands occur in the <code>for</code> loop. This may cause the two columns "ID" and "Events" to be defined multiple times. When you write only one record (polygon), it works fine (i.e. the <code>w.record()</code> command contains two values). After the first iteration, there would be 4, 6, etc columns. That would explain the behavior you describe.</p> <p>Try moving the two <code>w.field()</code> lines before the <code>for loop</code>.</p> <p>When you comment the <code>w.record()</code>, you obtain a <code>shp</code> (and <code>shx</code>) file with a number of records different from the corresponding <code>dbf</code>file. That explains the assertion error when reading. </p> <p>Unrelated to your problem, you could also simplify your code with <code>enumerate</code> (built-in function).</p> <pre><code>w = shapefile.Writer(shapefile.POLYGON) w.field("ID", "C", "40") w.field("Events", "C", "40") for i,rect1 in enumerate(dividedRects): print i topLeft = [rect1[0],rect1[1]] topRight = [rect1[2], rect1[1]] bottomRight = [rect1[2], rect1[3]] bottomLeft = [rect1[0], rect1[3]] .... </code></pre> <p>(I cannot test since I do not have <code>pyshp</code>) Good luck!</p>
1
2016-09-14T02:09:55Z
[ "python", "shapefile" ]
Scrapy - NameError: global name 'base_search_url' is not defined
39,481,437
<p>I am trying to call a local variable from inside a Scrapy spider class but then I got <code>NameError: global name 'base_search_url' is not defined</code>.</p> <pre><code>class MySpider(scrapy.Spider): name = "mine" allowed_domains = ["www.example.com"] base_url = "https://www.example.com" start_date = "2011-01-01" today = datetime.date.today().strftime("%Y-%m-%d") base_search_url = 'https://www.example.com/?city={}&amp;startDate={}&amp;endDate={}&amp;page=1', city_codes = ['on', 'bc', 'ab'] start_urls = (base_search_url.format(city_code, start_date, today) for city_code in city_codes) </code></pre> <p>I tried to use <code>self.base_search_url</code> instead but there is no use. Does anyone know how to solve it?</p> <p>FYI, I use Python 2.7</p>
0
2016-09-14T02:02:32Z
39,481,570
<p>Solved! I ended up solving it by using the <code>__init__()</code> method:</p> <pre><code>def __init__(self): self.start_urls = (self.base_search_url.format(city_code, self.start_date, self.today) for city_code in self.city_codes) </code></pre>
0
2016-09-14T02:20:39Z
[ "python", "python-2.7", "scrapy", "scrapy-spider", "local-variables" ]
Scrapy - NameError: global name 'base_search_url' is not defined
39,481,437
<p>I am trying to call a local variable from inside a Scrapy spider class but then I got <code>NameError: global name 'base_search_url' is not defined</code>.</p> <pre><code>class MySpider(scrapy.Spider): name = "mine" allowed_domains = ["www.example.com"] base_url = "https://www.example.com" start_date = "2011-01-01" today = datetime.date.today().strftime("%Y-%m-%d") base_search_url = 'https://www.example.com/?city={}&amp;startDate={}&amp;endDate={}&amp;page=1', city_codes = ['on', 'bc', 'ab'] start_urls = (base_search_url.format(city_code, start_date, today) for city_code in city_codes) </code></pre> <p>I tried to use <code>self.base_search_url</code> instead but there is no use. Does anyone know how to solve it?</p> <p>FYI, I use Python 2.7</p>
0
2016-09-14T02:02:32Z
39,509,053
<p>From the <a href="http://doc.scrapy.org/en/latest/topics/spiders.html#scrapy.spiders.Spider.start_urls" rel="nofollow">docs</a>: </p> <blockquote> <p>start_urls: a list of URLs where the Spider will begin to crawl from. The first pages downloaded will be those listed here. The subsequent URLs will be generated successively from data contained in the start URLs.</p> </blockquote> <p>Start urls is a <strong>list</strong></p> <p>Solve it by set in <strong>init</strong> method:</p> <pre><code>def __init__(self): self.start_urls=[] self.start_urls.append( (base_search_url.format(city_code, start_date, today) for city_code in city_codes) ) </code></pre> <p>Or in the class declaration (as you show in your question):</p> <pre><code>start_urls=[] start_urls.append( (base_search_url.format(city_code, start_date, today) for city_code in city_codes) ) </code></pre> <p><strong>Note</strong> </p> <p>Make sure you add correct urls starting by <code>http://</code> or <code>https://</code>.</p>
0
2016-09-15T10:41:30Z
[ "python", "python-2.7", "scrapy", "scrapy-spider", "local-variables" ]
Acquire the data from a row in a Pandas
39,481,516
<p>Instructions given by Professor: 1. Using the list of countries by continent from World Atlas data, load in the countries.csv file into a pandas DataFrame and name this data set as countries. 2. Using the data available on Gapminder, load in the Income per person (GDP/capita, PPP$ inflation-adjusted) as a pandas DataFrame and name this data set as income. 3. Transform the data set to have years as the rows and countries as the columns. Show the head of this data set when it is loaded. 4. Graphically display the distribution of income per person across all countries in the world for any given year (e.g. 2000). What kind of plot would be best?</p> <p>In the code below, I have some of these tasks completed, but I'm having a hard time understanding how to acquire data from a DataFrame row. I want to be able to acquire data from a row and then plot it. It may seem like a trivial concept, but I've been at it for a while and need assistance please.</p> <pre><code>%matplotlib inline import numpy as np import pandas as pd import matplotlib.pyplot as plt countries = pd.read_csv('2014_data/countries.csv') countries.head(n=3) income = pd.read_excel('indicator gapminder gdp_per_capita_ppp.xlsx') income = income.T def graph_per_year(year): stryear = str(year) dfList = income[stryear].tolist() graph_per_year(1801) </code></pre>
0
2016-09-14T02:13:48Z
39,482,049
<p>Pandas uses three types of indexing.</p> <p>If you are looking to use integer indexing, you will need to use <code>.iloc</code></p> <pre><code>df_1 Out[5]: consId fan-cnt 0 1155696024483 34.0 1 1155699007557 34.0 2 1155694005571 34.0 3 1155691016680 12.0 4 1155697016945 34.0 df_1.iloc[1,:] #go to the row with index 1 and select all the columns Out[8]: consId 1.155699e+12 fan-cnt 3.400000e+01 Name: 1, dtype: float64 </code></pre> <p>And to go to a particular cell, you can use something along the following lines,</p> <pre><code>df_1.iloc[1][1] Out[9]: 34.0 </code></pre> <p>You need to go through the <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html" rel="nofollow">documentation</a> for other types of indexing namely <code>.ix</code> and <code>.loc</code> as suggested by <a href="http://stackoverflow.com/users/4893008/sohier-dane">sohier-dane</a>.</p>
1
2016-09-14T03:26:25Z
[ "python", "pandas", "matplotlib", "dataframe", "data-science" ]
Acquire the data from a row in a Pandas
39,481,516
<p>Instructions given by Professor: 1. Using the list of countries by continent from World Atlas data, load in the countries.csv file into a pandas DataFrame and name this data set as countries. 2. Using the data available on Gapminder, load in the Income per person (GDP/capita, PPP$ inflation-adjusted) as a pandas DataFrame and name this data set as income. 3. Transform the data set to have years as the rows and countries as the columns. Show the head of this data set when it is loaded. 4. Graphically display the distribution of income per person across all countries in the world for any given year (e.g. 2000). What kind of plot would be best?</p> <p>In the code below, I have some of these tasks completed, but I'm having a hard time understanding how to acquire data from a DataFrame row. I want to be able to acquire data from a row and then plot it. It may seem like a trivial concept, but I've been at it for a while and need assistance please.</p> <pre><code>%matplotlib inline import numpy as np import pandas as pd import matplotlib.pyplot as plt countries = pd.read_csv('2014_data/countries.csv') countries.head(n=3) income = pd.read_excel('indicator gapminder gdp_per_capita_ppp.xlsx') income = income.T def graph_per_year(year): stryear = str(year) dfList = income[stryear].tolist() graph_per_year(1801) </code></pre>
0
2016-09-14T02:13:48Z
39,543,763
<p>To answer your first question, a bar graph with a year selector would be best. You'll have to put countries on the x axis and per capita income on the y axis, and perhaps add a dropdown to select the particular year for which the graph changes.</p>
0
2016-09-17T06:53:28Z
[ "python", "pandas", "matplotlib", "dataframe", "data-science" ]
Number duplicates sequentially in Pandas DataFrame
39,481,609
<p>I have a Pandas DataFrame that has a column that is basically a foreign key, as below:</p> <pre><code>Index | f_key | values 0 | 1 | red 1 | 2 | blue 2 | 1 | green 3 | 2 | yellow 4 | 3 | orange 5 | 1 | violet </code></pre> <p>What I would like is to add a column that labels the repeated foreign keys sequentially, as in "dup_number" below:</p> <pre><code>Index | dup_number | f_key | values 0 | 1 | 1 | red 1 | 1 | 2 | blue 2 | 2 | 1 | green 3 | 2 | 2 | yellow 4 | 1 | 3 | orange 5 | 3 | 1 | violet </code></pre> <p>The rows can be reordered if needed, I just need to get the "dup_number" keys in there. I wrote following code, which works fine, it gives me a Series which I can then add into the DataFrame, but it is very slow (that for loop is what kills the time), and I feel like it's way more complicated than is needed:</p> <pre><code>df = pd.DataFrame({'f_key': [1,2,1,2,3,1], 'values': ['red', 'blue', 'green', 'yellow', 'orange', 'violet']}) df_unique = df['f_key'].drop_duplicates().reset_index(drop=True) dup_number = pd.DataFrame(columns = ['dup_number', 'temp_index']) for n in np.arange(len(df_unique)): sub_df = df.loc[df['f_key'] == df_unique[n]].reset_index() dup_index = pd.DataFrame({'dup_number': sub_df.index.values[:]+1, 'temp_index': sub_df['index']}) dup_number = dup_number.append(dup_index) dup_number = dup_number.set_index(dup_number['temp_index'].astype(int)) dup_number = dup_number['dup_number'].sort_index() </code></pre> <p>Any suggestions on faster/simpler ways to do this are appreciated!</p>
3
2016-09-14T02:25:38Z
39,481,775
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/version/0.15.0/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow">cumcount()</a></p> <pre><code>df['dup_number'] = df.groupby(['f_key']).cumcount()+1 f_key values dup_number 0 1 red 1 1 2 blue 1 2 1 green 2 3 2 yellow 2 4 3 orange 1 5 1 violet 3 </code></pre>
1
2016-09-14T02:51:22Z
[ "python", "pandas", "dataframe", "duplicates", "foreign-keys" ]
Number duplicates sequentially in Pandas DataFrame
39,481,609
<p>I have a Pandas DataFrame that has a column that is basically a foreign key, as below:</p> <pre><code>Index | f_key | values 0 | 1 | red 1 | 2 | blue 2 | 1 | green 3 | 2 | yellow 4 | 3 | orange 5 | 1 | violet </code></pre> <p>What I would like is to add a column that labels the repeated foreign keys sequentially, as in "dup_number" below:</p> <pre><code>Index | dup_number | f_key | values 0 | 1 | 1 | red 1 | 1 | 2 | blue 2 | 2 | 1 | green 3 | 2 | 2 | yellow 4 | 1 | 3 | orange 5 | 3 | 1 | violet </code></pre> <p>The rows can be reordered if needed, I just need to get the "dup_number" keys in there. I wrote following code, which works fine, it gives me a Series which I can then add into the DataFrame, but it is very slow (that for loop is what kills the time), and I feel like it's way more complicated than is needed:</p> <pre><code>df = pd.DataFrame({'f_key': [1,2,1,2,3,1], 'values': ['red', 'blue', 'green', 'yellow', 'orange', 'violet']}) df_unique = df['f_key'].drop_duplicates().reset_index(drop=True) dup_number = pd.DataFrame(columns = ['dup_number', 'temp_index']) for n in np.arange(len(df_unique)): sub_df = df.loc[df['f_key'] == df_unique[n]].reset_index() dup_index = pd.DataFrame({'dup_number': sub_df.index.values[:]+1, 'temp_index': sub_df['index']}) dup_number = dup_number.append(dup_index) dup_number = dup_number.set_index(dup_number['temp_index'].astype(int)) dup_number = dup_number['dup_number'].sort_index() </code></pre> <p>Any suggestions on faster/simpler ways to do this are appreciated!</p>
3
2016-09-14T02:25:38Z
39,481,879
<p>Below is a similar solution as one listed in <a href="http://stackoverflow.com/questions/17775935/sql-like-window-functions-in-pandas-row-numbering-in-python-pandas-dataframe">this question</a>. Here is a modified version of one of the answers that would apply here:</p> <pre><code>import pandas as pd df = pd.DataFrame({'index':[0,1,2,3,4,5],'f_key':[1,2,1,2,3,1] ,'values':['red','blue','green','yellow','orange','violet']}) df['duplicate_num']=df.sort_values('index') \ .groupby('f_key') \ .cumcount() + 1 </code></pre> <p>In essence, we're applying a window function (conceptually) to the dataframe and generating a row number for each instance (ordered by the index) of a repeating foreign key value.</p>
2
2016-09-14T03:03:14Z
[ "python", "pandas", "dataframe", "duplicates", "foreign-keys" ]
click on button to send adb commmand python
39,481,620
<p>I would like to build a program that sends an adb command to a mobile device when I click the button. I tried the following code, but the command is not sent to the device. I'm new to Python; please can someone help me solve this problem?</p> <pre><code>from Tkinter import * import os import subprocess root = Tk() root.title("MUT Tester") root.geometry("500x500") def button(): cmd= os.system("adb devices") b = Button(root, text="Enter", width=30, height=2, command = lambda:(button)) b.pack() root.mainloop() </code></pre>
0
2016-09-14T02:27:29Z
39,505,903
<p>In this line:</p> <pre><code>b = Button(root, text="Enter", width=30, height=2, command = lambda:(button)) </code></pre> <p>the <code>button</code> function is never actually called when you click: <code>lambda: (button)</code> just evaluates the name and returns the function object without invoking it (replace it with a print statement to test). Remove the lambda and pass the function itself, <code>command = button</code> (no parentheses), or use <code>command = lambda: button()</code>.</p>
0
2016-09-15T08:00:27Z
[ "android", "python" ]
when inserting data to sqlite using button argument 'command'
39,481,651
<p>hello when ever i try to insert data to sqlite there is an error shows up and the button is Disappears from the frame the issue are in this two piece of function <code>def fitcher(insertFun):</code> and <code>def insertFun(self)</code> </p> <pre><code>from tkinter import * from tkinter import ttk import sqlite3 class mainGui(ttk.Frame): def __init__(self): Frame.__init__(self,background="lightblue") self.master.title("Family book library") self.pack(expand=1, fill=BOTH) toolBar = Frame(self) self.buttonPicInsert = PhotoImage(file="insert.png") addbook = ttk.Button(toolBar,image = self.buttonPicInsert,command = self.insertFun) addbook.pack(side=LEFT) self.buttonPicAnalyse = PhotoImage(file="analys.png") analysis = ttk.Button(toolBar, image = self.buttonPicAnalyse,command = self.analysFun) analysis.pack(side=LEFT) self.buttonPicSearch = PhotoImage(file="search.png") search = ttk.Button(toolBar, image = self.buttonPicSearch) search.pack(side=LEFT) toolBar.pack(side=LEFT) menu = Menu(self.master) self.master.config(menu=menu) file = Menu(menu) file.add_command(label="Exit", command=quit) menu.add_cascade(label="File", menu=file) tree = ttk.Treeview(show="headings", height=32) tree["columns"] = ("1", "2", "3", "4", "5", "6", "7" ,"8") tree.column("1", width=132) tree.column("2", width=302) tree.column("3", width=302) tree.column("4", width=142) tree.column("5", width=123) tree.column("6", width=120) tree.column("7", width=120) tree.column("8", width=120) tree.heading("1", text="BookID") tree.heading("2", text="BookTitle") tree.heading("3", text="BookAuthor") tree.heading("4", text="BookPublisher") tree.heading("5", text="edition") tree.heading("6", text="PublisherDate") tree.heading("7", text="familyOwner") tree.heading("8", text="Location") tree.insert("", 0, text="BookID", values=("1A", "1b")) tree.pack(side=TOP) def insertFun(self): new = Toplevel(pady=20) new.title("insert new book") new.transient() new.geometry("570x370") new.resizable(width=False, height=False) 
# varible text filed #BookTitle label new.BookTitle = ttk.Label(new, text="BookTitle ", justify = CENTER) new.BookTitle.pack() # 1 new.BookTitleFiled = ttk.Entry(new, width=90, justify = CENTER) new.BookTitleFiled.pack() # varible text filed #BookAuthor label new.BookAuthor = ttk.Label(new, text="BookAuthor ", justify = CENTER) new.BookAuthor.pack() # 2 new.BookAuthorFiled = ttk.Entry(new, width=90, justify = CENTER) new.BookAuthorFiled.pack() # varible text filed #BookPublisher label new.BookPub = ttk.Label(new, text="BookPublisher ", justify = CENTER) new.BookPub.pack() # 3 new.BookPubFiled = ttk.Entry(new, width=90, justify = CENTER) new.BookPubFiled.pack() # varible text filed #Edition label new.Edition = ttk.Label(new, text="Edition ", justify = CENTER) new.Edition.pack() # 4 new.EditionFiled = ttk.Entry(new, width=90, justify = CENTER) new.EditionFiled.pack() # varible text filed #PublisherDate label new.PublisherDate = ttk.Label(new, text="PublisherDate ", justify = CENTER) new.PublisherDate.pack() # 5 new.PublisherDateFiled = ttk.Entry(new, width=90, justify = CENTER) new.PublisherDateFiled.pack() # varible text filed #Owner label new.Owner = ttk.Label(new, text="Family Owner ", justify = CENTER) new.Owner.pack() # 6 new.OwnerFiled = ttk.Entry(new, width=90, justify = CENTER) new.OwnerFiled.pack() # varible text filed #location label new.location = ttk.Label(new, text="location ", justify = CENTER) new.location.pack() # 7 new.locationFiled = ttk.Entry(new, width=90, justify = CENTER) new.locationFiled.pack() new.submit = ttk.Button(new, text="insert data", command = new.fitcher) new.submit.pack(side=BOTTOM,fill=X,pady=1,padx=10) def analysFun(self): anal = Toplevel() anal.title("Wall of fam") anal.transient() anal.geometry("320x90+450+300") anal.resizable(width=False, height=False) anal.nsum = Label(anal, text="NumberOfBook") anal.nsum.grid(sticky='nw',row=1, column=0) anal.nowner = Label(anal, text="NumberOfOwner") anal.nowner.grid(sticky='nw',row=2, 
column=0) anal.nowner = Label(anal, text="OwnerName") anal.nowner.grid(sticky='nw', row=3, column=0) anal.locations = Label(anal, text="LocationDistribution") anal.locations.grid(sticky='nw',row=4, column=0) def fitcher(insertFun): conn = sqlite3.connect('books.sqlite') c = conn.cursor() BookTitleValue = insertFun.BookTitleFiled.get() BookAuthorValue = insertFun.BookAuthorFiled.get() BookPublisherValue = insertFun.BookPubFiled.get() editionValue = insertFun.EditionFiled.get() PublisherDateValue = insertFun.PublisherDateFiled.get() familyOwnerValue = insertFun.OwnerFiled.get() LocationValue = insertFun.locationFiled.get() c.execute("INSERT INTO library (BookTitle, BookAuthor, BookPublisher, edition, PublisherDate,familyOwner,Location) values (? , ?, ?, ?, ?, ?, ?)", (BookTitleValue, BookAuthorValue, BookPublisherValue, editionValue, PublisherDateValue, familyOwnerValue, LocationValue)) if __name__ == "__main__": mainGui().mainloop() </code></pre> <p>the error message are down below </p> <pre><code> C:\Users\issba\AppData\Local\Programs\Python\Python35-32\python.exe C:/Users/issba/Desktop/workstation/familyLibrary/mainGui.py Exception in Tkinter callback Traceback (most recent call last): File "C:\Users\issba\AppData\Local\Programs\Python\Python35-32\lib\tkinter\__init__.py", line 1550, in __call__ return self.func(*args) File "C:/Users/issba/Desktop/workstation/familyLibrary/mainGui.py", line 121, in insertFun new.submit = ttk.Button(new, text="insert data", command = new.fitcher) AttributeError: 'Toplevel' object has no attribute 'fitcher' Process finished with exit code 0 </code></pre>
-1
2016-09-14T02:32:12Z
39,522,867
<p>You are missing a few things here, as far as I can see. First, you need to wait for the window and return its values:</p>

<pre><code>new.wait_window()
return (new.varb..etc)
</code></pre>

<p>And you need to commit and close the database connection:</p>

<pre><code>new.conn.commit()
new.conn.close()
</code></pre>

<p>You also have to change the indentation of this function and move it out of <code>__init__</code>:</p>

<pre><code>def insertFun():
</code></pre>

<p>To take the data from an entry, use a <code>textvariable</code>:</p>

<pre><code>textvariable = new.locationGet
</code></pre>

<p>For example:</p>

<pre><code>new.locationGet = tk.StringVar()
new.locationFiled = ttk.Entry(new, width=90, justify = CENTER, textvariable = new.locationGet)
new.locationFiled.pack()
</code></pre>

<p>Also change the button's <code>command</code>. Just read this page; it is very close to your problem: <a href="http://ebanshi.cc/questions/4479874/how-to-get-the-value-from-tkinter-toplevel-in-side-a-0-attribute-function" rel="nofollow">here</a>.</p>
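<p>To illustrate the commit point concretely, here is a minimal, self-contained sketch of the database side. It uses an in-memory database and made-up row values rather than the question's <code>books.sqlite</code> file and Entry widgets; a parameterized <code>INSERT</code> is only persisted once <code>conn.commit()</code> runs.</p>

```python
import sqlite3

# In-memory database for illustration; the question connects to 'books.sqlite'.
conn = sqlite3.connect(":memory:")
c = conn.cursor()
c.execute("""CREATE TABLE IF NOT EXISTS library (
                 BookTitle TEXT, BookAuthor TEXT, BookPublisher TEXT,
                 edition TEXT, PublisherDate TEXT, familyOwner TEXT, Location TEXT)""")

# In the real GUI these values would come from the Entry widgets,
# e.g. via a StringVar's .get().
row = ("Some Title", "Some Author", "Some Publisher", "1st", "2016", "Some Owner", "Shelf A")
c.execute("INSERT INTO library VALUES (?, ?, ?, ?, ?, ?, ?)", row)

conn.commit()  # without this, the insert is discarded when the connection closes
count = c.execute("SELECT COUNT(*) FROM library").fetchone()[0]
print(count)  # -> 1
conn.close()
```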
1
2016-09-16T02:17:18Z
[ "python", "sqlite", "tkinter" ]
PyCharm throwing an error when I try to use macports python
39,481,676
<p>I'm trying to get Theano installed so I can start playing with some cool ML stuff, but I'm running into a problem with PyCharm. I'm following the instructions <a href="http://deeplearning.net/software/theano/install.html" rel="nofollow">here</a> to get all the prereqs installed so I can run Theano smoothly, so I've used macports to download numpy and scipy, as well as python 2.7.12. PyCharm was initially set to use the interpreter at /usr/bin/python (a different python version I have on my computer from the last time I was working in python), but I switched it to /opt/local/bin/python so that everything would be running from the same macports version.</p>

<p>The problem is, booting up my PyCharm with the macports python gives me this error in the console:</p>

<pre><code>/opt/local/bin/python -u /Applications/PyCharm CE.app/Contents/helpers/pydev/pydevconsole.py 60364 60365
/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/utils/traitlets.py:5: UserWarning: IPython.utils.traitlets has moved to a top-level traitlets package.
  warn("IPython.utils.traitlets has moved to a top-level traitlets package.")
Traceback (most recent call last):
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevconsole.py", line 491, in &lt;module&gt;
    pydevconsole.StartServer(pydev_localhost.get_localhost(), int(port), int(client_port))
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydevconsole.py", line 328, in StartServer
    interpreter = InterpreterInterface(host, client_port, threading.currentThread())
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydev_ipython_console.py", line 26, in __init__
    self.interpreter = get_pydev_frontend(host, client_port)
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydev_ipython_console_011.py", line 469, in get_pydev_frontend
    _PyDevFrontEndContainer._instance = _PyDevFrontEnd()
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydev_ipython_console_011.py", line 300, in __init__
    self.ipython = PyDevTerminalInteractiveShell.instance()
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/traitlets/config/configurable.py", line 412, in instance
    inst = cls(*args, **kwargs)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/terminal/interactiveshell.py", line 396, in __init__
    super(TerminalInteractiveShell, self).__init__(*args, **kwargs)
  File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/IPython/core/interactiveshell.py", line 495, in __init__
    self.init_completer()
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydev_ipython_console_011.py", line 219, in init_completer
    self.Completer = self._new_completer_200()
  File "/Applications/PyCharm CE.app/Contents/helpers/pydev/pydev_ipython_console_011.py", line 191, in _new_completer_200
    use_readline=self.has_readline,
AttributeError: 'PyDevTerminalInteractiveShell' object has no attribute 'has_readline'

Process finished with exit code 1
Couldn't connect to console process.
</code></pre>

<p>I've done some googling, but all I can find is people who had this resolved by an update (I just downloaded this python ver so I'm pretty sure it's up to date), and people who haven't had it resolved at all. Any ideas? Help is much appreciated. :)</p>

<p>EDIT: Found <a href="https://youtrack.jetbrains.com/issue/PY-20013" rel="nofollow">this</a> page, where people have been getting similar errors. There's a patch posted, but I can't find the pydev files it references anywhere on my computer. I also downloaded the readline library, and ipython2.7. Still no luck :(</p>

<p>EDIT 2: Fixed it! The problem was with my IPython (5.1), which wasn't compatible with PyCharm. I uninstalled it and reverted back to 4.2, which cleared up the errors when using the console, but left the one about the <code>IPython.utils.traitlets</code> package. To fix this, I just disabled PyCharm's use of IPython. Everything seems to be working smoothly now!</p>
0
2016-09-14T02:35:56Z
39,949,718
<p>Fixed it! The problem was with my IPython (5.1), which wasn't compatible with PyCharm. I uninstalled it and reverted back to 4.2, which cleared up the errors when using the console, but left the one about the <code>IPython.utils.traitlets</code> package. To fix this, I just disabled PyCharm's use of IPython. Everything seems to be working smoothly now!</p>
0
2016-10-09T23:55:52Z
[ "python", "python-2.7", "pycharm", "theano", "macports" ]
Counting inversion in python: int object not iterable error
39,481,682
<p>Hi, I am new to python and I am having trouble with the counting-inversions problem using mergesort. The error said "'int' object is not iterable". However, I don't think I am iterating over any number at this stage. Since I am stuck here, I am not sure if there are more bugs in this code. Can anyone help me figure out what is going on here? Thank you very much.</p>

<pre><code>import sys

def merge_and_count_inversions(x, y):
    sorted_array = []
    count = 0
    i, j = 0, 0
    #print ("inside merge", x, y)
    while i &lt; len(x) and j &lt; len(y):
        if x[i] &gt; y[j]:
            count += len(x) - i
            sorted_array.append(y[j])
            j += 1
        else:
            sorted_array.append(x[i])
            i += 1
    while i &lt; len(x):
        sorted_array.append(x[i])
        i += 1
    while j &lt; len(y):
        sorted_array.append(y[j])
        j += 1
    #print ("overall count = ", count )
    #print ("sorted_array", sorted_array)
    return count, sorted_array

def get_number_of_inversions(a, b, left, right):
    number_of_inversions = 0
    if right - left &lt;= 1:
        return number_of_inversions
    ave = (left + right) // 2
    number_of_inversions_A, a[left:ave] = get_number_of_inversions(a, b, left, ave)
    #print ("left list", a[left : ave])
    #print ("number_of_inversions left half = ", number_of_inversions )
    number_of_inversions_B, a[ave:right] = get_number_of_inversions(a, b, ave, right)
    #print ("right list", a[ave : right])
    #print ("number_of_inversions left + right half = ", number_of_inversions )
    number_of_inversions_C, sorted_list = merge_and_count_inversions(a[left:ave], a[ave:right])
    tot_inversions = number_of_inversions_A + number_of_inversions_B + number_of_inversions_C
    #print ("number_of_inversions overall = ", number_of_inversions )
    return tot_inversions, sorted_list

input_ = input()
n, *a = list(map(int, input_.split()))  #n is the length of a
b = n * [0]
get_number_of_inversions(a, b, 0, len(a))
print(get_number_of_inversions(a, b, 0, len(a)))
</code></pre>

<p>And the error said:</p>

<pre><code>&lt;ipython-input-60-e1f94361f38a&gt; in get_number_of_inversions(a, b, left, right)
     33     ave = (left + right) // 2
     34     print ("average = ", ave)
---&gt; 35     number_of_inversions_A, a[left:ave] = get_number_of_inversions(a, b, left, ave)
     36     print ("left list", a[left : ave])
     37     print ("number_of_inversions left half = ", number_of_inversions )

&lt;ipython-input-60-e1f94361f38a&gt; in get_number_of_inversions(a, b, left, right)
     33     ave = (left + right) // 2
     34     print ("average = ", ave)
---&gt; 35     number_of_inversions_A, a[left:ave] = get_number_of_inversions(a, b, left, ave)
     36     print ("left list", a[left : ave])
     37     print ("number_of_inversions left half = ", number_of_inversions )

TypeError: 'int' object is not iterable
</code></pre>
0
2016-09-14T02:36:58Z
39,482,125
<p>Well, I'm not versed in the Merge Sort algorithm but in <code>get_number_of_inversions()</code> you have two exits:</p>

<p>First:</p>

<pre><code>number_of_inversions = 0
if right - left &lt;= 1:
    return number_of_inversions
</code></pre>

<p>And:</p>

<pre><code>return tot_inversions, sorted_list
</code></pre>

<p>You use the return values in an expression like:</p>

<pre><code>number_of_inversions_A, a[left:ave] = get_number_of_inversions(a, b, left, ave)
</code></pre>

<p>So sometimes you return an integer and sometimes a tuple. My guess is that when you return an integer, you get this error.</p>
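<p>A minimal way to make the two exits consistent is to return a <code>(count, sorted_list)</code> pair from both, so callers can always unpack. The sketch below is a rewritten version for illustration, not the asker's exact code; the names and the slicing-based recursion are simplified.</p>

```python
def merge_and_count(x, y):
    # Standard merge step that also counts "split" inversions.
    merged, count = [], 0
    i, j = 0, 0
    while i < len(x) and j < len(y):
        if x[i] > y[j]:
            count += len(x) - i   # every element left in x is inverted with y[j]
            merged.append(y[j])
            j += 1
        else:
            merged.append(x[i])
            i += 1
    merged.extend(x[i:])
    merged.extend(y[j:])
    return count, merged

def count_inversions(a):
    # Both exits return a (count, sorted_list) pair, so callers can always unpack.
    if len(a) <= 1:
        return 0, a
    mid = len(a) // 2
    left_count, left = count_inversions(a[:mid])
    right_count, right = count_inversions(a[mid:])
    split_count, merged = merge_and_count(left, right)
    return left_count + right_count + split_count, merged

count, ordered = count_inversions([2, 3, 9, 2, 9])
print(count, ordered)  # -> 2 [2, 2, 3, 9, 9]
```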
1
2016-09-14T03:37:08Z
[ "python", "mergesort", "inversion" ]
boto error to launch ec2 instance
39,481,770
<p>I am using Ansible to create an EC2 instance. During the playbook run, I am getting this error:</p>

<pre><code>TASK [create a single ec2 instance] ********************************************
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: NameError: name 'botocore' is not defined
fatal: [localhost]: FAILED! =&gt; {"changed": false, "failed": true, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_eVkiMV/ansible_module_ec2.py\", line 1554, in &lt;module&gt;\n from ansible.module_utils.ec2 import *\n File \"/tmp/ansible_eVkiMV/ansible_modlib.zip/ansible/module_utils/ec2.py\", line 61, in &lt;module&gt;\n File \"/tmp/ansible_eVkiMV/ansible_modlib.zip/ansible/module_utils/ec2.py\", line 62, in AWSRetry\nNameError: name 'botocore' is not defined\n", "module_stdout": "", "msg": "MODULE FAILURE"}
        to retry, use: --limit @apache_aws.retry

PLAY RECAP *********************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1
</code></pre>

<p>It says that <code>botocore</code> is not defined, but I installed boto using <code>pip install boto</code>. Can anyone please figure out what the problem is?</p>
0
2016-09-14T02:50:55Z
39,482,023
<p>You need to install boto3 (<code>pip install boto3</code>). The traceback comes from Ansible's shared <code>module_utils/ec2.py</code>, which references <code>botocore</code>; <code>botocore</code> is installed as a dependency of <code>boto3</code>, while <code>pip install boto</code> only gives you the older, separate boto package.</p>
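<p>A quick way to confirm which AWS SDK packages the interpreter Ansible runs under can actually import is a small diagnostic script like the following. This is just a hedged sketch run by hand, not part of any playbook:</p>

```python
import importlib

def available(names):
    """Return {package_name: importable?} for the current interpreter."""
    found = {}
    for name in names:
        try:
            importlib.import_module(name)
            found[name] = True
        except ImportError:
            found[name] = False
    return found

# 'boto' is the legacy SDK the asker installed; newer Ansible AWS code
# expects 'boto3' and its dependency 'botocore'.
status = available(["boto", "boto3", "botocore"])
print(status)
```

<p>If <code>boto3</code> or <code>botocore</code> shows up as <code>False</code> here, that interpreter is the one that needs <code>pip install boto3</code>.</p>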
0
2016-09-14T03:22:56Z
[ "python", "python-2.7", "python-3.x", "ansible", "ansible-playbook" ]